Exploring Topics in the History and Philosophy of Logic 9783110435047, 9783110442236


English Pages 196 [198] Year 2015



George Englebretsen Exploring Topics in the History and Philosophy of Logic

Philosophische Analyse/ Philosophical Analysis

Herausgegeben von/Edited by Herbert Hochberg, Rafael Hüntelmann, Christian Kanzian, Richard Schantz, Erwin Tegtmeier

Band/Volume 67

George Englebretsen

Exploring Topics in the History and Philosophy of Logic

ISBN 978-3-11-044223-6
e-ISBN (PDF) 978-3-11-043504-7
e-ISBN (EPUB) 978-3-11-043381-4
ISSN 2198-2066

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2015 Walter de Gruyter GmbH, Berlin/Boston
Printing: CPI books GmbH, Leck
♾ Printed on acid-free paper
Printed in Germany
www.degruyter.com

I want to dedicate this work, in the first instance, to Fred Sommers, who passed away soon after it was completed. Fred contributed two chapters here. More importantly, he enriched both my career and life in many ways over half a century as both a friend and mentor. My sincere hope and expectation is that his work in logic, ontology, and the philosophy of language will continue to provoke, engage, and inspire many new generations of scholars. Shakespeare wrote, in A Midsummer Night’s Dream, that “to say the truth, reason and love keep little company nowadays.” But he put those words in the mouth of foolish Bottom. For it is not the truth. Even a logician can love, as I love Libbey, to whom this work is also dedicated.

Getting Oriented

The last thing that we find in making a book is to know what we must put first. – Pascal

Whatever Logic is good enough to tell you is worth writing down. – Lewis Carroll

There has long been a question about just how logic relates to philosophy, a question that can only be approached after a period of exploration in the area of logic itself. Is logic a part of philosophy, or is it a tool for doing philosophy, or is it the essence of philosophy? Is logic the foundation of mathematics, or is it the regimentation of rational thought, or is it merely a game of symbols? When it comes to philosophy, is logic the transcendent articulation of what there is, or is logic simply beside the point? A day or two of wandering through the country of logic would seem a good way to make a start on such questions. I want to take you on a walking tour along a very long, very old trail through that abstract country of language and thought, of terms and concepts, of sentences and propositions. Boots and hiking gear are not required, but close scrutiny of what we say and think is absolutely necessary. We are setting off on the logic trail that wanders through that countryside. As I say, it’s very old. It was first blazed by Aristotle over two millennia ago. And it’s a long trail since over the centuries many explorers and hikers, logicians as well as philosophers, mathematicians and others, have pushed its reach not only farther along but also into various side-trails as well. Moreover, many of these explorers have often set up permanent or temporary campsites at the most prominent places along the way. A century or so ago some trailblazers took off into a radically different part of this country, one that had already been glimpsed at least as far back as the 17th century. In the process of establishing this new trail these modern explorers have constructed what is now the most heavily travelled trail in the country. It is paved and fast and rich in tributaries. It’s where you find most logicians now, though hiking no longer accurately describes their speedy, motor-driven progress. That new highway is a wonderful thing. 
Many awe-inspiring sites and wonders are accessible along it and its side-roads. In spite of these attractions, our journey will keep mostly to the old, 2400-year-old, walking trail. It too offers uncounted wonders, delights, dangers, puzzles and challenges. My plan is to take you to various places here and there along the old trail (not ignoring many of the places where it joins with the new highway). There are several places of interest that would demand our attention and offer delight, but, because I am the primary trail guide, I get to choose where we stop for sightseeing. Since our
journey is an intellectual one, through an abstract country, we will be free to go from one site to any other instantaneously. My intention is to provide us with opportunities to get a view of the lay of the land, stopping at various sites of interest or wonder. I stumbled into this country a long time ago, wandering about, getting lost, tiring. Eventually, I began to make discoveries, finding scenes and objects both delightful and intriguing. Of course, I was young and naive. I thought I was a new Columbus. Sure, there were others here before I came along, but I was seeing things with fresh eyes. It didn’t take me long, however, to begin to notice that the discoveries I was making, the puzzles I was encountering, the barriers I was trying to overcome, the areas I was mapping were already showing signs of major exploration and study (even exploitation) well before I stumbled down the trail. And even when I took a turn into the more extreme parts of the wilderness, they turned out to have already been surveyed and civilized. I needed guidance. A modest understanding of parts of mathematics led me onto Frege’s highway. It was a heady ride. Still, I wasn’t satisfied. Fortunately, Fred Sommers soon came along, took me by the hand, and, with much patience, showed me the delights of getting off the highway and avoiding the beaten track. I did eventually make a few new minor discoveries, accounting for a few bits of perhaps overlooked flora or fauna, but not much. This logical country’s new, modern, multi-lane highway is efficient for getting you where you want to go – fast. I know. I’ve spent the past half-century or so driving new visitors (sometimes even settlers) speedily through the more popular parts of the territory. Yet a slower pace usually affords the traveller a better view. That’s a lesson I often found hard to learn.
I learned in part from my artist wife, who, as a young woman, spent several years living alone in a small, uninsulated, unwired cabin at the edge of a small rural village in southern Québec. There she painted – when she wasn’t splitting wood for her little stove, hauling snow in buckets to be melted and heated on that stove, walking or skiing several miles to the next village for her meagre supplies. On occasion, she accepted rides from friends. I stopped once or twice, not often, to give her a ride to town. I was busy and tended to be in a hurry. It has taken me all this time to adjust my pace to hers – so I’m seeing more things as I go. In the country of logic there still runs the old trail. It’s long and winding. It’s unpaved and the rest stops that it provides have neither internet access nor five-star accommodations. But it’s a good trail. You can get where you want to go. Before we set off, some “official” thanks: to David Oderberg, editor of Ratio, and John Wiley & Sons, for permission to include here Fred Sommers’ “Ratiocination: An Empirical Account,” Ratio, 21 (2008). Fred, generous as always, gave
me permission to include both that essay and his unpublished “Ryle’s Way With the Liar.” Nearly half a century ago, Gilbert Ryle encouraged me to stay in the country of logic. I should have thanked him then. I do so now. More recently, Adam Kearney offered me valuable help with computer things like formatting and diagraming and Holly McMillan patiently guided me through word processing forests. Finally, I want to give special thanks to two people who have tried hard to keep me on the straight and narrow – both in logic and in life: Fred Sommers, who first set me On the Logic Trail, and my dear wife, Libbey Griffith, to both of whom this book is dedicated.

Contents

Getting Oriented | VII

1 Liar’s Lookout | 1
1.1 Preliminaries: Our First Stop | 1
1.2 A Scenic (but “Risky”) View | 2
1.3 Disarming the Bandit | 6
1.4 Cassationism: Where We Meet Some Fellow Travelers | 11

2 Ryle’s Way With the Liar by Fred Sommers | 15
2.1 Anaphoric Conditions for Successful Comment | 15
2.2 The Ryle Approach | 17
2.3 The “Official” View | 18
2.4 Risky Business | 19
2.5 Where the Risk Really Lies | 21
2.6 Nesting Propositions | 22
2.7 Propositional Depth | 23
2.8 The Structure of Comments | 23
2.9 The Requirement of Determinate Depth | 25
2.10 Depth and the Anaphoric Background | 26
2.11 Comments on Sentences | 27
2.12 Summarizing Conclusion | 30

3 The Logic Mountain Range | 33
3.1 The Negativity Scene | 33
3.2 Aristotle’s Peak | 37
3.3 Predication Without Copulation? | 49

4 On the Term Functor Trail | 55
4.1 Charging Up the Hill | 55
4.2 A Better View | 70
4.3 Deriver’s License Required | 74
4.4 Down the Hill Without a Variable | 78

5 The Four Corners | 81
5.1 Around the Square | 81
5.2 Is Something There? | 87
5.3 We Could be Lost | 89
5.4 Nonsense! | 89
5.5 The Hexagonal Square | 90
5.6 Squaring the Square | 91
5.7 The Punch Line | 94
5.8 Truth or Dare | 95
5.9 Proposition Opposition | 97

6 Referential Falls | 99
6.1 Into the Semantic Forest | 99
6.2 The Curious Case of the Singular Team | 111
6.3 Not, Else, But, Other Than, Except | 119

7 Strawberry Fields | 121
7.1 Naked Denotation | 121
7.2 Peter, Paul and Mary, the Beatles, and Other Roadside Attractions | 125
7.3 Where We Get Together | 127

8 Into the Metaphysical Bogs | 133
8.1 Ontology In … Ontology Out | 133
8.2 What Were We Thinking? | 139

9 Ratiocination: An Empirical Account by Fred Sommers | 147
9.1 Introduction | 147
9.2 Unnatural Argumentation | 148
9.3 Why the General Inattention to Natural Argumentation | 151
9.4 How We Do It | 154
9.4.1 The charged character of the natural formatives | 155
9.4.2 What This Suggests About Our Cognitive Development | 157
9.5 Pace Frege and Pace Chomsky | 158
9.6 The Virtues of Term Functor Logic | 159
9.7 An Anticipated Confirmation | 160
9.7.1 Regimentation in TFL | 160
9.7.2 Inferences from two or more premises | 161
9.7.3 The colt/horse inference | 162
9.8 Conclusion: Learning about Logic by Attending to Ratiocination | 163

10 Back to Logic Lodge, Base Camp | 165

References | 167
Index | 175

1 Liar’s Lookout

In the country of concepts only a series of successful and unsuccessful prosecutions for trespass suffices to determine the boundaries and rights of way. – Ryle

How wonderful that we have met with paradox. Now we have some hope of making progress. – Niels Bohr

These are old fond paradoxes to make fools laugh i’ the alehouses. – Shakespeare

1.1 Preliminaries: Our First Stop

A philosophical problem has the form: I don’t know my way about. – Wittgenstein

It is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible. – Russell

On his first expedition to Antarctica, Roald Amundsen and his crew were forced to spend the winter of 1898 in an ice field. Months of cold and darkness, coupled with inactivity, produced extreme physical and psychological disintegration in the crew. More food, less food, more exercise, less exercise, more sleep, less sleep – none of this had more than limited effect. The men had vitamin C deficiency, scurvy. As the British navy had learned a century earlier, citrus juices could alleviate the problem, but until the 20th century no one could explain why. Amundsen and his men had no citrus fruit. Eventually the ship’s doctor noted that while well-cooked meat had no curative effect, lightly cooked, nearly raw meat (in this case penguin and seal) did. As it turns out, heat destroys vitamin C. (For much more on the expedition see Brown 2012, 28-32.) Logicians and other philosophers so love the kinds of anomalies offered by paradoxes. The number of paradoxes, puzzles, and dilemmas is legion. In the country we are exploring, a good way to get name recognition is to dream up an example of a logical, mathematical, epistemic, moral, etc. paradox and attach one’s name to it. There are, for example, Moore’s, Russell’s, Curry’s, Yablo’s, Grelling’s, Richard’s, Cantor’s, and Berry’s. Other paradoxes have more colorful names. Unlike Eliot’s take on the naming of cats, the naming of paradoxes is
not “a difficult matter” but more like “just one of your holiday games.” There are, for example, the Raven, Grue, No-No, the Ship of Theseus, the Hooded Man, the Truth-teller, the Arrow, the Racetrack, the ‘Heterological’, the Hangman, the Heap, the Prisoner’s Dilemma, Hilbert’s Hotel, not to mention what the Tortoise said to Achilles. Most famous of all is the ancient, but ever-popular Liar paradox. Anomalies, of any kind, and wherever found, as the doctor on the Amundsen expedition surely knew, are instructive. They have causes. Until these are found, examined and understood, attempts to alleviate them or mitigate their unwelcome consequences will tend to multiply and yet be limited in their results. Logical anomalies, such as the Liar paradox (and its many cousins), have been subject to many attempts to alleviate their symptoms. Moreover, many of these have involved theoretical explanations of the causes of such anomalies. Often these programs of logical preventative medicine are clever and sophisticated. Yet just as often they either address only a small range of the symptoms or require cures that may be worse than the disease. As with matters of health, even the seriousness of logical pathologies has been subject to dispute. Ancient Stoic logicians, especially Chrysippus, took Liar-type paradoxes to be lethal, threatening not only their logic but their epistemology and ethics as well (see Papazian 2012). By contrast, later medieval scholastic logicians saw such paradoxes as little more than curiosities, taking the Liar (the star among other paradoxical sentences – the so-called insolubilia) to be interesting but hardly dangerous (see Dutilh Novaes 2008). Contemporary mathematical logicians, especially after Tarski (Tarski 1935), look upon these paradoxes with something verging on horror. They are often seen as a frightening symptom of a fatal disease at the core of natural language, one that precludes the very possibility of any satisfactory definition or account of truth.
Consequently, they have amassed a formidable arsenal to combat the ever-present threat of the Liar and its allies.

1.2 A Scenic (but “Risky”) View

Only paths that can be kept can be strayed from. – Ryle

First, a general statement of the paradox:

The Liar: A speaker says that she is lying (that what she is saying is false).


Needless to say, the speaker’s statement can be formulated in a number of ways (‘What I am now saying is false’, ‘I am now lying’, ‘This very statement is false’, etc.). The obvious question that confronts us is this: Is the Liar statement true or false? Neither answer will quite do. If it is true then it has the property ascribed to it – falsity; if it is false then it doesn’t have that property but rather the contrary – truth. It seems, per impossibile, to be both true and false. Or else it seems, again per impossibile, to be neither true nor false. As you would guess, some logicians are eager to accept the first choice (Liar-type statements are both true and false); others are happy to allow that such statements are neither true nor false. A very old, venerable principle of traditional logic is the Law of Bivalence, which holds that every statement is either true or false but not both. Both solutions above challenge the law. The first admits at least one truth value other than the standard truth values (true, false), namely both-true-and-false. This is often referred to as a truth value glut. The second allows some statements (viz., paradoxical ones) to be neither true nor false, lacking a truth value, truth valueless. This is often referred to as a truth value gap. It goes without saying that gluts and gaps hardly exhaust the ways of dealing with Liars. Moreover, solutions for the Liar generate their own, further Liar-type paradox – the Strengthened Liar:

The Strengthened Liar: A speaker says (in effect) that what she is saying is false or both-true-and-false or neither-true-nor-false.

This is known as the Liar’s Revenge – if the Liar is defeated it will have its revenge via the Strengthened Liar (see Beall 2007). We will soon see that this revenge poses little more real danger than the Liar itself. For our present purposes we need only consider the classical Liar. An immediately obvious and striking thing about the Liar is that it involves what some speaker says. Just what is it that a speaker says? Suppose someone, in an appropriate context, with an appropriate tone of voice, etc., utters the declarative sentence, “Je demeure à deux pas d’ici.” Now you ask me what the speaker said. I could accurately answer in several ways. Among them, I could

1. quote the speaker directly. She said, ‘Je demeure à deux pas d’ici’.
2. translate the sentence word for word. She said, ‘I live at two steps from here’.
3. translate the sentence into idiomatic English. She said, ‘I live nearby’.
4. quote her indirectly, using English. She said that she lives nearby.
5. quote her indirectly, using French. Elle a dit qu’elle demeure près d’ici.


This last answers your question by offering a reference to the proposition (that she lives nearby) which is expressed by any number of sentences (including the speaker’s original French sentence and its translations). The salient distinction here is between sentences and propositions. Yet not all philosophers and logicians are sanguine about the prospects either of drawing this distinction or even countenancing the idea of propositions at all. For example, Saul Kripke, a formidable logician to say the least, has cast doubt on the utility of propositions (being “unsure that the apparatus of ‘propositions’ does not break down”) (Kripke 1972, 21). For him and many others, sentences are all the logician and philosopher of language needs. Propositions are unnecessary abstractions since sentences are the proper bearers of truth values. As Kripke says, “Sentences are the official truth vehicles” (Kripke 1975, 691, n.1). This may well be the “official” doctrine, but a case can be made that propositions are the proper bearers of truth and falsity. Concepts and thoughts are prior (in every way, as Aristotle would say) to terms and sentences. The former are creatures of the intellect. They can be kept private, but often they are made public by expression. We humans use our language to express our concepts and thoughts (and we use it for much else besides). Whatever the ontological status of these two pairs might be, it is certain that the abstract products of conception and thought are profoundly different sorts of things from the perceptible bits of language used to express them. They are categorially distinct in the same way that prime numbers and prime ministers are categorially distinct. “There is a categorical difference between sentence and statement or proposition” (Goldstein 2006, 20).
There are things we can sensibly (truly or falsely) say about the one kind of thing that we cannot say in the same sense about the other (unless we are speaking nonsense or using words nonliterally). Prime numbers are even or odd; prime ministers are competent or incompetent. Numbers aren’t the sort of things that can sensibly be said to be either competent or incompetent. Prime ministers are not the sort of things that can sensibly be said to be even, and though some are odd, they are not at all odd in the same way that 7 and 15 are odd. Sentences are the sort of things that can sensibly be said to be French or English or Greek; propositions are not. When I told you earlier what the French speaker said by quoting her indirectly, it didn’t matter which language her sentence was in; I could use any language to quote it indirectly. Sentences must be in some specified natural language, but the propositions they are used to express are not (though I must use some language to specify them). It makes no sense to say of a proposition that it is French or English or Greek.


Although they are originally products of language, propositions are not part of any language. Unlike the sentential utterance that brings a proposition into existence, the proposition itself is neither grammatical nor ungrammatical, neither English nor French and so forth. Strictly speaking, it is the proposition, not the sentence, that is true or false; when we call something that was said boring, original, false or plausible, we are characterizing a proposition and not the sentence that was used for saying it. (Sommers 1994)

Sentences are often used to express thoughts (propositions) with the implicit understanding that these thoughts are meant to be taken as true. Such sentences, when used this way, are statements (other kinds and uses of sentences are questions, commands, etc.). Since what is expressed (private, abstract concepts and thoughts) are categorially distinct from the vehicles of expression (publicly available terms and sentences), statements and propositions are categorially distinct. To understand an expression is to grasp the concept or proposition it is being used to express. If you understand French you know what our French speaker has said; otherwise you only hear her statement and are still in the dark about where she lives. I can express the same proposition using either an English or French expression, but the proposition is neither. Statements (again, sentences used to make a truth-claim) are the sort of things that can be written in ink, heard on a radio, typed in an email, etc. Propositions cannot sensibly be said to be written in ink, heard on a radio, or typed in an email. They can be said to be true, false, surprising, believed, doubted, well-understood, known for more than 2000 years, etc. Statements are said to have such attributes only secondarily, by being used to express propositions that have them primarily (again, they are prior in every way to statements). The proposition that the moon causes the tides can be sensibly said to have been well-understood 2000 years ago. Indeed, we could say that that proposition was true 2000 years ago. And this in spite of the obvious fact that the sentence ‘The moon causes the tides’ could hardly be said to be well-understood or known 2000 years ago, there being no English sentences at that time. Including ships and shoes and sealing wax, cabbages and kings, we can talk of many things. Sentences are certainly among the things we talk about, and so are propositions.
For example, we could say something about the sentence our French speaker used, saying of it that it is short, grammatical, and French. We could likewise say something about the proposition she expressed with her sentence, saying of it that it is true, profound, well-known. Given the fact that a speaker can talk just as well about sentences and propositions, a speaker could use a sentence to say something about any sentence, including that sentence itself. In that case we would say that the sentence is selfreferential. Examples of self-referential sentences are ‘This sentence has five
words’, ‘This sentence has 42 words’, ‘This sentence has the word “has” in it’, ‘This sentence is English’, ‘This sentence is French’. As well, a speaker could use a sentence to say something about any proposition, including the one that sentence is being used to express. For example: ‘The proposition expressed by this sentence is true’, ‘What I’m now saying is profound’, ‘I’m telling the truth now’, ‘I am lying’, ‘What I am now saying is false’. Notice that, since a statement (sentence used to make a truth-claim) is true or false only insofar as the proposition it is being used to express is true or false, when a speaker says, ‘This sentence is x’ (where ‘x’ is any semantic predicate), she produces a sentence that entails a sentence of the form ‘The proposition expressed by this sentence is x’.

1.3 Disarming the Bandit

Logic has made me hated in the world. – Abelard

The Liar says that he is lying. How? By uttering a sentence like ‘I am lying’ or ‘What I am now saying is false’ or (in effect) ‘The proposition I am now expressing is false’. And that is paradoxical because the question, ‘Is what he says true or false?’ seems unanswerable. If his proposition is true, then it must have the semantic property that he attributes to it – falsity; if his proposition is false, then it must not have that property, i.e., it must be true. Self-referential sentences can be false (e.g., ‘This sentence has 42 words’) or even contradictory (‘This sentence has 42 words and an odd number of words’), but they do not lead to paradox. By contrast, a sentence expressing a proposition about the very proposition it is being used to express always leads to paradox. As we have seen, normally nothing is amiss when a speaker says something about a proposition. A speaker can use a sentence to express a proposition about a different proposition expressed by the use of a different sentence, saying of it that it is true, false, profound, well-understood, etc. To say something about, to make a statement about, a proposition is to make a comment (Srzednicki 1966, Sommers 1969, 280 and Sommers 1994). A comment about itself is a self-comment. Unlike self-referential sentences, self-comments give rise to paradox. The Liar’s comment is a self-comment, and “no statement can state of itself that it is not true” (Goldstein 2006, 11). Most attempts to solve the Liar (and its kin) concentrate on the feature of falsity. Falsity gets most of the blame for causing trouble (and truth gets hauled in from time to time as an accomplice). But falsity is only one of a very large number of semantic properties that can apply to propositions. Blaming falsity
for the Liar is like blaming hunger for the scurvy found in the Amundsen crew. The blame should be placed elsewhere – on the self-commenting nature of the Liar’s statement (Englebretsen 2005, 45-47; Englebretsen 2006, ch. 6; Englebretsen 2008). Self-comments have the following general form (where ‘S’ is some semantic predicate, such as ‘true’, ‘false’, ‘senseless’, ‘well-known’, etc.):

A: The proposition expressed by A is S

We will indicate the proposition being expressed by a sentence A by [A] (see Sommers 1969, 268). We can then render our general form for self-comments as:

A: [A] is S

It goes without saying that different sentences can be synonymous, used to express the same proposition. And, of course, different propositions can be expressed by the same sentence. Such facts in no way strain our account of commenting here. We are using ‘statement’ for any sentence used to express a proposition with the intention that that proposition is to be taken by the audience as true (in brief, a statement is a sentence used to make a truth-claim). The key to disarming paradoxes of self-commenting, such as the Liar, is an appreciation of a universal and deep constraint on any statement (Sommers 1969, 269 and Sommers 1994). We can formulate this as a necessary condition for any statement (Englebretsen 2006, 158).

PROPOSITIONAL DEPTH REQUIREMENT: Every meaningful statement must be assumed to have a determinate propositional depth.

To see what the notion of propositional depth amounts to, consider a statement that is not a comment, e.g., Plato’s statement, ‘Socrates is wise’. Let that sentence be S. It is a statement about Socrates, it refers to Socrates, it expresses a proposition (viz., that Socrates is wise, i.e., [S]), but it does not refer to any proposition – in particular, not to the one it expresses, [S]. We will say that such non-commenting statements have a propositional depth of 0. Next, consider a statement such as Aristotle’s statement ‘What Plato said is well-known’. In this case, Aristotle is commenting on what Plato said; he is referring to Plato’s proposition and saying something about that proposition (that it is well-known). Of course, Aristotle’s own statement also expresses a proposition, but it is not the one Plato expressed. Let A be Aristotle’s statement. That statement expresses
[A], but what is [A]? It is the proposition that [S] is well-known, i.e., [[S] is well-known]. Here’s what we have thus far:

S = ‘Socrates is wise’
S expresses [S]
[S] = [Socrates is wise]
A = ‘[S] is well-known’
A expresses [A]
[A] = [[S] is well-known]

Whatever the propositional depth of Plato’s statement (in this particular case, 0), the depth of Aristotle’s comment must be greater – for it embeds [S] (Sommers 1969, 272). The propositional depth of S is 0; the propositional depth of A, i.e., ‘[S] is well-known’, is 1. Since most of the statements we make are not comments, they have a propositional depth of 0. Comments have a propositional depth determined by how many levels of further propositional depth they embed. Suppose a third speaker, X, comments on Aristotle’s comment, saying of it that it is true. Then:

X = ‘[A] is true’
X expresses [[A] is true]
[A] = [[S] is well-known]
X expresses [[[S] is well-known] is true]

Here the propositional depth of X’s comment is 2. There is no (theoretical) limit to the propositional depths that speakers can go to in their commenting. What is important is that in each of these cases we can, in principle at least, determine just what the depth of each of these statements is. If it is not a comment and makes no reference to any proposition, its depth is 0; if it is a comment, one only needs to count the levels of commenting to determine its propositional depth. These are the normal cases. Self-comments are not normal: their propositional depths cannot be determined; their propositional depths are not determinate; they fail to meet the propositional depth requirement (Goldstein refers to this lack of determination as “underspecification” in Goldstein 2006, 11). Suppose B says that what C says is insulting, and that C says that what D says is foolish. So far, no problem.

B = ‘[C] is insulting’
C = ‘[D] is foolish’

Thus:

B expresses [[[D] is foolish] is insulting]

But now suppose that what D says is that what B says is profound. Then:

D expresses [[[[D] is foolish] is insulting] is profound]

The comments of B and C are innocent, but, in their context, D’s comment has no determinate propositional depth. Let the propositional depth of C’s comment be n; then the depth of D’s comment must be n−1. But the depth of B’s comment is one greater than C’s, so it is n+1. The depth of D’s comment must also be one greater than B’s (since D is commenting on what B said), i.e., n+2. It is impossible to assign a determinate propositional depth to D’s comment (and likewise to the comments of B and C, which embed [D]). The entire conversation has only the appearance of sense. This is the very point Kripke had in mind in saying that our statements can sometimes be “risky” (Kripke 1975, 54-55). The lesson he draws from this is that the problem lies with the semantic properties of truth and falsity, which, as I’ve said, is the wrong place to lay blame.

In general, if a sentence ... asserts that (all, some, most, etc.) of the sentences of a certain class C are true, its truth value can be ascertained if the truth values of the sentences in the class C are ascertained. If some of these sentences themselves involve the notion of truth, their truth value in turn must be ascertained by looking at other sentences, and so on. If ultimately this process terminates in sentences not mentioning the concept of truth, so that the truth value of the original statement can be ascertained, we call the original sentence grounded; otherwise, ungrounded. (Kripke 1975, 57; for the original account of groundedness, see Herzberger 1970)

Kripke is right to look for “groundedness” if this is understood as nothing more than the condition a comment enjoys by having a determinate propositional depth (where the most deeply embedded proposition has a depth of 0), but mistaken to link groundedness to truth and falsity. As the conversation among B, C and D shows, ungroundedness (in the sense of lacking determinate propositional depth) is a risk that need not involve truth or falsity – any semantic predicate might do.
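The determinate-depth requirement lends itself to a mechanical picture. The sketch below is my own illustration, not anything in the text: statements are nodes in a “comments on” graph, and a statement has determinate propositional depth just in case every chain of comments bottoms out in a depth-0 statement, i.e., just in case no loop is reachable.

```python
# Model each statement as a node; an edge X -> Y means "X comments on
# (the proposition expressed by) Y".  Depth is determinate iff the
# chain of comments bottoms out; any reachable loop makes it
# indeterminate.

def depth(stmt, comments_on, _active=None):
    """Return the propositional depth of stmt, or None if indeterminate."""
    _active = _active or set()
    if stmt in _active:              # a loop: the chain never bottoms out
        return None
    target = comments_on.get(stmt)
    if target is None:               # not a comment: depth 0
        return 0
    sub = depth(target, comments_on, _active | {stmt})
    return None if sub is None else sub + 1

chain = {"A": "S", "X": "A"}          # Aristotle on Plato, X on Aristotle
assert [depth(s, chain) for s in ("S", "A", "X")] == [0, 1, 2]

circle = {"B": "C", "C": "D", "D": "B"}   # the mutually commenting circle
assert depth("D", circle) is None         # no determinate depth

assert depth("L", {"L": "L"}) is None     # the Liar: a self-loop
```

One simplification here: each statement comments on at most one other. Kripke’s quantified cases (“most of Nixon’s assertions ...”) would need edges to sets of statements, but the diagnosis is the same – indeterminacy is a structural property of the subject’s reference, not of the predicate ‘true’.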

Self-comments, statements about the very propositions they express, are simply limiting cases of indeterminate propositional depth. B, C and D got into danger without any one of their statements being a self-comment, but by producing a context of comments (not self-comments) that rendered what they said senseless, since it yielded statements of no determinate propositional depth. Self-comments are risky, indeed senseless, all on their own. Let L be the Liar’s statement.

L expresses [L]

It is clear that L has no determinate propositional depth, for [L] just is [[L] is false]. Whatever the depth of [L], [[L] is false] must have a greater one – but also a lesser one! The Liar’s statement fails to meet the propositional depth requirement; it is meaningless. And so are:

L.1: L.1 is not true
L.2: L.2 is meaningless
L.3: L.3 is false or paradoxical
L.4: L.4 has no truth value
L.5: Every statement I make is S (where S is any semantic predicate)
L.6: Every statement is S
etc.

We may talk about a proposition using such referring expressions as ‘what Tom will say’ or ‘Penrod’s last remark’, but there must be the theoretical possibility that these phrases could be supplemented by sentential “namely riders” that explicate the reference to [S]. The process of forming sentences about propositions has a Chinese box structure. In commenting on a proposition, I box a sentence that could be used to express it, and my comment then expresses a proposition subject to further comments that more deeply box the original sentence. No box can contain itself, and the proposition a comment is about cannot be the proposition the comment is expressing (Sommers 1994). The members of the Liar gang are far from dangerous once they are unmasked as nothing but nonsense, uttering threats but saying nothing. They owe their paradoxical nature to their failure to achieve a determinate propositional depth. Indeed, we can say generally that such paradoxes as the Liar are like grains of sand in an oyster, not something to be feared or removed but studied and cultivated in hopes of a pearl.
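The Chinese-box metaphor can be made concrete. In the sketch below (my own construction; the pairing representation is an illustrative assumption, not the book’s notation), a proposition is built as an immutable nested pair, and construction is well-founded: a box can only be assembled from boxes that already exist, so no box contains itself and every constructible proposition has a finite, determinate depth.

```python
# "Chinese box" propositions as nested pairs: box(content, predicate)
# builds the proposition [content is predicate].  Construction is
# well-founded: a box can only be built from already-existing boxes.

def box(content, predicate):
    """Form the proposition [content is predicate]."""
    return (content, predicate)

def prop_depth(p):
    """Count how many boxes deep the innermost (non-boxed) subject sits."""
    inner, _ = p
    return 0 if not isinstance(inner, tuple) else 1 + prop_depth(inner)

s = box("Socrates", "is wise")        # [S]: depth 0
a = box(s, "is well-known")           # [[S] is well-known]: depth 1
x = box(a, "is true")                 # [[[S] is well-known] is true]: depth 2
assert [prop_depth(p) for p in (s, a, x)] == [0, 1, 2]

# The Liar's proposition [[L] is false] would have to contain itself,
# but no box can be built before it exists:
try:
    liar = box(liar, "is false")      # NameError: 'liar' is not yet bound
except NameError:
    liar = None                       # no such proposition gets constructed
assert liar is None
```

In such a well-founded structure, determinate depth follows by induction on construction; the Liar’s “box that contains itself” is not merely odd but unbuildable.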

Before continuing, it is important to remember the difference between self-referential sentences and self-comments. Consider the statement ‘This sentence is English’:

E: ‘E’ is English

Compare E with D:

D: The proposition expressed by D is English, i.e., [D] is English

D fails on two grounds. (1) Even if it succeeds in expressing a proposition, since ‘English’ (unlike genuine semantic predicates such as ‘true’, ‘false’, ‘well-known’, etc.) doesn’t sensibly apply to propositions (any more than ‘colored’ applies to numbers), D would be category mistaken. (2) D is a self-comment, attempting to say something (semantic or otherwise) about the very proposition it purports to express. Any attempt to find that proposition leads inevitably to an infinite regress; D has no determinate propositional depth.

To sum up: Every statement (a sentence used to make a truth claim) either has a determinate propositional depth or it does not. If it has a determinate propositional depth, then it expresses a proposition that is either true or false, and not both true and false. So the Law of Bivalence applies to all propositions; but it does not apply to all sentences, since the semantic features of sentences depend on the semantic features of the propositions they express. A sentence that fails to express any proposition has no truth value. Self-comments have no determinate propositional depth, nor do sentences that occur in mutually commenting chains. Commenting is always “risky.”

1.4 Cassationism: Where We Meet Some Fellow Travelers

Logic takes care of itself; all we have to do is to look and see how it does it.
– Wittgenstein

So far, so good. We met some interesting characters and found that, though their band is much larger than expected, they pose no real threat. We are about to meet some fellow travelers who have treated those sheep in wolves’ clothing in a way very similar to ours. It’s always comforting to know that we are not completely alone on this stretch of the trail. As we noted earlier, the ancient Stoic logician Chrysippus, like most modern logicians, took the Liar to constitute a mortal threat. Chrysippus had his way
of dealing with the Liar, arguing that such statements fail to express propositions. This view is called cassationism (a medieval Latin term that stems from the root casso, to annul, bring to nothing, make void). The best account of cassationism in Chrysippus’s attack on the Liar is found in Michael Papazian’s “Chrysippus Confronts the Liar: The Case for Stoic Cassationism” (Papazian 2012). So let’s take a moment to consider this Stoic cassationism. The Stoics were committed to a kind of fatalism. Suppose I wake up one morning and think that either I will die today or I will not die today. If the first proposition I entertain (that I will die today) is true, then there is nothing I can do to change matters, for truth depends on the way things in fact are. If the second proposition (that I will not die today) is true, then, again, there is nothing I can do to change matters. So, in either case, there is nothing that I can do – fatalism. Notice that this Stoic argument depends upon the acceptance of at least two important principles. First, it takes propositions (what can be thought and, sometimes, expressed by the use of sentences, but are language-independent) as the proper bearers of truth and falsity. Second, it takes the Law of Bivalence very seriously.

Law of Bivalence: Every proposition is either true or false, and no proposition is both true and false.

Chrysippus took this to apply only to propositions – not sentences. The Liar seems to be either both true and false or neither true nor false. For Chrysippus, at least, this shows that the Liar expresses no proposition at all (i.e., the purported proposition is annulled, brought to nothing, made void – the cassationist response). The choice was clear: either (i) abandon the Law of Bivalence and admit truth value gaps or gluts, or (ii) retain the Law and take propositions to be the proper bearers of truth values, subject to the Law. It is the Liar’s failure to conform to the Law that is evidence that it expresses no proposition. Of course, the Liar and his gang are not the only outlaws here. Category mistakes are sentences that affirm or deny a property (expressed by a predicate term) of a thing (expressed by a subject term) that does not belong to the appropriate category for that property. For example, the number 2 is not the sort of thing that can sensibly be said to be either colored or uncolored (there’s no third choice here). So the statements ‘2 is colored’ and ‘2 is uncolored’ are category mistakes. Such statements appear to be either true or false (or neither), but, in fact, they express no proposition at all. If the sense of a sentence is the proposition it is being used to express, then category mistakes are senseless (see Papazian 2012, 208). We’ll soon have another view of the territory of category mistakes, from a different prospect.

The Liar, even when strengthened (Papazian 2012, 211-212), fails to express any proposition, because any attempt to specify what that proposition might be necessarily leads to an infinite regress. Such sentences “are ungrounded because they do not single out a specific proposition” (Papazian 2012, 209). Sentential self-reference, as we have seen, is innocuous and leads to no paradox. It is the self-referential (in fact, self-commenting) nature of the Liar that leads to paradox, for “the latter are really making claims about propositions, the principal bearers of truth values, and yet when asked to produce the proposition in question, one ends up with an infinite regress and no proposition” (Papazian 2012, 210). In other words, self-comments like the Liar (as well as comment circles like ‘A said that what B said is x’, ‘B said that what C said is y’, ‘C said that what A said is z’, etc.) have no determinate propositional depth. Needless to say, not all (or even most) ancient logicians, not even other Stoics, agreed with Chrysippus’s cassationist way of dissolving the Liar. In the period from the 12th to the 15th centuries, medieval scholastic logicians were still debating the proper approach to take when dealing with the Liar and other such anomalies (insolubilia). The difference now was that the medieval logicians tended to look on the Liar as posing no great threat (not to logic, language, knowledge, metaphysics, or morality). It was merely an intriguing puzzle, one which might, if carefully analyzed, shed light on foundational principles of language and logic – but nothing more. A wide variety of analyses were on offer, cassationism being just one (see Dutilh Novaes 2008). Cassationism was initially widely held during this period, but by the mid-13th century it had fallen out of favor in the competition among a variety of projects for dealing with the Liar and other insolubilia.
In the following century we find Bradwardine rejecting the cassationists with a sneer: “because these nullifiers ... appear so asinine, we do not need to argue with them any further” (quoted in Dutilh Novaes 2008, 232). Nonetheless, the idea that sentences like the Liar do not succeed in expressing any proposition, and so make no statement, has survived. In modern times a number of logicians and philosophers of language have recognized the virtues of such an idea. In her broad assessment of cassationism, Dutilh Novaes writes:

In sum, while it is a widely known position in the medieval period, the cassantes position was not really given the attention it deserved after c. 1225. It is based on the sound idea that grammaticality and even meaningfulness are not sufficient conditions for making statements; one of its strengths is that it gives rise to discussions on the conditions for making statements (a topic very dear to Austin, for example) in order to demonstrate how the paradoxical sentences of the Liar family fail these conditions. (Dutilh Novaes 2008, 233)

As we have already seen, Laurence Goldstein and Fred Sommers have been among the more prominent advocates of cassationism in the 21st century. “[W]e typically use token sentences to make statements, and ... It is the statements so made that have content and truth value. But there are occasions when, though we go through the motions of making a statement, no statement results” (Goldstein 2006, 16). “According to the cassantes, attempted uses of paradoxical sentences to make statements are nullified, so that, although such sentences have meaning, no proposition or content gets to be expressed by them” (Goldstein 2006, 19). “A sentential utterance to which no depth can in principle be assigned is expressively vacuous. Such a sentence may have a conventional meaning, ... But it ‘makes no statement’” (Sommers 1994). Since a statement is a sentence being used for saying something, no statement can fail to express a proposition.

Some comments that give the appearance of being legitimate statements seem to generate contradictions. I say they seem to because only utterances that express propositions can be actually involved in contradiction, and [I] now strongly suspect that the standard Liar type utterances do not express propositions. Indeed any adequate theory of propositions must explain the inexpressiveness of paradoxical propositional utterances and expose them as nonstatements. (Sommers 1994)

Back in the middle of the 20th century, Gilbert Ryle was using a version of cassationism to defeat the Liar gang. Contemporary cassationists recognize this: “The resolution of the Liar paradox is in the spirit of Gilbert Ryle’s informal observation that the Liar suffers from a regress of ‘namely riders’” (Sommers 1994). Before going on to our next stop, I want to let Fred Sommers, the philosopher and logician who first led me on this trail, take the lead. He has something instructive to say about most of the places we will visit. Here he will tell us in detail just how he followed Ryle’s path to Liar’s Lookout and then went further in dealing with the Liars he found there.

2 Ryle’s Way With the Liar∗ by Fred Sommers

Arguments of the following kind all have this end in view: ‘If it makes no difference whether one uses the term or the definition of it, and “double” and “double of half” are the same thing, then if “double” is “double of half” it will be “double of half of half”; and if “double of half” be substituted again for “double,” there will be a triple repetition, “double of half of half of half.”’
– Aristotle

∗ This previously unpublished essay appears here with permission from Fred Sommers.

2.1 Anaphoric Conditions for Successful Comment

P.F. Strawson and Gilbert Ryle, the leading figures of Oxford philosophy in the 20th century, shared a propensity to regard certain anomalous utterances as “non-starters” and “neither true nor false”. Where Ryle spoke of ‘Saturday is in bed’ as category nonsense, Quine, who represented the more conservative American style, complained that Ryle’s category mistakes are best regarded as simply false (“and false by meaning if one likes”). And when Strawson argued that, for lack of a proper anaphoric background, utterances with definite referring subjects like ‘The present king of France’ “make no statements” and have no truth value, Quine stood pat with Russell, who read

(1) The present king of France is bald.

as a complex general statement that makes a false existential claim. In my opinion, history will judge Strawson’s interpretation of (1) as a singular sentence the more plausible reading (see Sommers 1982, 207ff). On the other hand, Strawson himself was not adamant about denying truth value to sentences whose subjects are vacuous definite descriptions (see Strawson 1974, 61-62 and 65-66). And that too seems right. When someone who believes that France has a king asserts (1), both his tacit belief that France is a monarchy and his assertion that its monarch is bald are false, not “neither true nor false.”

Strawson’s anaphoric requirements for the successful referring use of definite subjects come fully into their own when applied to comments. A comment (in the narrow sense we here use this term) is a sentence that refers to and characterizes something said. The subject of a comment may be a referring expression such as ‘What Lincoln said’ or ‘That Bush was declared the winner’ as in
‘What Lincoln said at Gettysburg remains memorable’ or ‘That Bush was declared the winner distressed Gore’. For a comment on what was said to be successful, it must be possible to track a sentence that expresses the proposition that the comment purports to characterize. Some comment-like utterances give the appearance of satisfying this condition, but the sentences to which we trace them turn out not to be saying anything that could be evaluated as true or false. A standard way in English of commenting on what was said by someone who has used a sentence, S, to say it, is to prefix the saying sentence with the word ‘that’, thereby forming the referring expression ‘That S’ for identifying and denoting what was said. Less explicit reference to what was said is made by comments with referring expressions of the form ‘What X said’, as in ‘What X said is P’. In such comments, what X said is not specified, but the reference to it and the comment on it succeed because we can identify what X said by a “namely rider” – ‘What Lincoln said, namely that ...’.

According to Strawson, the reference of a definite singular subject is very like the reference of a pronoun. Thus, just as the pronominal subject of

(1*) He is bald

may hark back to the antecedent

(2) A king rules France

so (2) is the presupposed antecedent of

(1) The present king of France is bald.

Strawson’s anaphoric condition for definite reference also applies to comments on what is said. If someone asserts (2), one may comment on it by

(3) What you just said (namely, that a king rules France) is preposterous; France is a Republic!

or by

(3*) That’s preposterous; France is a Republic!

In (3*) the subject ‘That’ is a “protosentence” that has back reference to (2) as the antecedent sentence used to express the proposition that (3*) is commenting on. Strawson’s claim that an utterance whose definite subject lacks a proper anaphoric background “makes no statement” applies unreservedly to comments. We will presently educe entirely natural formal requirements for successful commenting on what is said, and show that failing to satisfy them means that a purported reference to something said is impossible in principle. A comment that cannot possibly be about anything that has been said says nothing that can be evaluated as true or false. Ryle singled out one important subclass of comments – those of the form ‘What I am now saying is P’ – and argued that utterances of this form cannot possibly have an anaphoric antecedent that determines a reference to anything that is being said. One such utterance – ‘What I am now saying is false’ – is usually regarded as an antinomy. But if Ryle’s argument is right, ‘What I am now saying is false’ is an inexpressive nonstarter that makes no statement at all, not even an inconsistent one.

2.2 The Ryle Approach

Ryle’s way with the Liar (Ryle 1951-52) has not attracted many followers, but his diagnosis of what is wrong with “What I am now saying is false” is more on the mark than the standard analysis that leads us to regard it as an antinomy and to formal Tarski- or Kripke-type resolutions. Paying no attention at all to the Liar’s “truth predicate,” Ryle pointed to the unending regress of “namely riders” generated by the subject: ‘what I am now saying, namely, that what I am now saying is false, namely, that what I am now saying is false is false, namely that ...’ (Ryle 1954). In Ryle’s diagnosis, the fault in the Liar utterance is located not in the predicate but in the subject, which Ryle exposed as necessarily vacuous. Lying has no more to do with it than boasting. Clearly, any utterance of the form ‘What I am now saying is P’ is defective in the same way. The Boaster who says, “What I am now saying is profound,” has similarly failed to say anything that can be evaluated as true or false. Utterances of the form ‘what I am saying is P’ automatically lack the kind of anaphoric background Strawson requires for successful reference by descriptive definite subjects; such utterances “make no statement.” If we take Ryle’s approach, the Liar’s utterance is “inexpressive.” The Liar never rises to the level of antinomy. It gives the appearance of having a wayward truth value, but in fact it has no truth value of any kind (not even a “third” value). I shall follow the Ryle/Strawson way with utterances that purport to comment on something said, seeking to put their approach to the Liar on
a firm footing. But first we need to say more about the conventional view of the Liar as an “antinomy.”

2.3 The “Official” View

Whatever is said is a proposition expressed by some sentence used in saying it. Ryle reads the Liar sentence as a purported comment on something said, pointing out that no such proposition can be tracked. It has, however, become customary to construe the Liar not as a comment on what the Liar sentence is saying but as a comment on the sentence itself being used to say it. “Sentences,” says Kripke, “are the official truth vehicles” (Kripke 1975, 53). This view bars the way to Ryle’s resolution of the Liar as a non-paradoxical utterance that fails to comment on anything said. For if one “officially” regiments ‘What I am now saying is false’ by paraphrasing it as ‘This sentence is false’, it does refer to a sentence that is true if and only if it is false. But how reasonable is the official view? It has given rise to the common practice of rewriting sentences of the form ‘that p is true (false)’ as ‘“p” is true (false)’. It accords to comments that predicate ‘true’ and ‘false’ a special status that comments like ‘that p is libellous’ or ‘that p is not original’ do not have. For it is clear that we cannot rewrite ‘that p is K’ as ‘“p” is K’ when ‘is K’ is not a truth predicate. ‘That Aristotle was chosen to be Alexander’s tutor is evidence of Philip’s sagacity’ cannot be legitimately paraphrased as ‘“Aristotle was chosen to be Alexander’s tutor” is evidence of Philip’s sagacity’. Nor can we construe ‘That Aristotle was Alexander’s tutor is both fascinating and true’ as ‘“Aristotle was Alexander’s tutor” is both fascinating and true’. This must cast grave doubt on the view that sentences are the official vehicles of truth. Just as the comment ‘That snow is white is widely known’ is not about the sentence ‘Snow is white’ but about the proposition expressed by the sentence, so ‘That snow is white is true’ is not about ‘Snow is white’ but about the proposition that snow is white. The official view does appear to be untenable.
In any case, pace Quine, there seems to be no convincing reason to abandon the traditional view that propositions, not sentences, are the primary bearers of truth values. And then the shoe must be firmly fitted to the other foot: even ‘Yields truth when appended to “Snow is white”’ must then be interpreted as just a cumbersome variant of ‘That snow is white is true’ or ‘It is true that snow is white’. An utterance like ‘The sentence you are now reading is false’, which appears to be proof against a regress of namely riders, then turns out to be as vacuous as ‘What I am now saying is false’. Reading it as a comment on a proposition – ‘What the sentence you are now reading SAYS is false’ or ‘The proposition expressed by the sentence you are now reading is false’ – immediately puts us on the way to a regress of riders that renders the proposition to which the subject purports to refer forever unidentifiable.

As Ryle showed, we cannot fix on any sentence that expresses the proposition to which the Liar purports to refer. But what accounts for this? To explain this and other cases of failure in self-referential commenting, one needs to take a close look at the general conditions for referring to what has been said and commenting on it. The next section informally suggests that certain natural conditions for making a propositional reference can be systematically breached by an utterance in a way that renders the utterance “inexpressive.” Later sections will show this in a more formal way and apply the principle to self-referential comments.
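The regress of namely riders can itself be simulated. The sketch below is mine (the function name, dictionary representation, and step budget are illustrative devices, not anything in the essay): to explicate a subject such as ‘what Plato said’, we splice in the sentence used to say it, first explicating any propositional subject that sentence itself contains. A grounded comment terminates; the Liar’s subject reappears at every step and exhausts any finite budget.

```python
# Explicating "namely riders": to fix the reference of a subject like
# 'what Plato said', splice in the sentence Plato used.  If that
# sentence itself contains a propositional subject, explicate it first.
# A step budget stands in for "in principle": the Liar exhausts any
# finite budget, since its saying sentence contains its own subject.

def explicate(subject, saying_of, budget=8):
    """Return a fully explicated rider for subject, or None on regress."""
    if budget == 0:
        return None                      # the riders never bottom out
    sentence = saying_of[subject]
    for inner in saying_of:              # subjects mentioned in the sentence
        if inner in sentence:
            rider = explicate(inner, saying_of, budget - 1)
            if rider is None:
                return None
            sentence = sentence.replace(
                inner, f"{inner}, namely, that {rider},", 1)
    return sentence

saying = {
    "what Plato said": "Socrates is wise",
    "what Aristotle said": "what Plato said is well-known",
}
assert explicate("what Aristotle said", saying) == \
    "what Plato said, namely, that Socrates is wise, is well-known"

# The Liar's saying sentence contains its own subject:
liar = {"what I am now saying": "what I am now saying is false"}
assert explicate("what I am now saying", liar) is None
```

The budget is what makes the sketch decidable; the philosophical point is the structural one that no amount of expansion could ever pin the Liar’s reference down.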

2.4 Risky Business

To comment on something said by a sentence, S, one must use a referring expression formed by nominalizing S. The most common nominalizing device for forming a subject expression that can be used to refer to what is said by S is to prefix S with the word ‘that’. We can then use ‘that S’ as the referring expression to denote the proposition expressed by S in a comment on this proposition. For example, we can comment on what a speaker said when he assertively uttered ‘Mars is smaller than Venus’ by using the expression ‘that Mars is smaller than Venus’ as the subject of the propositional comment ‘That Mars is smaller than Venus has been known for more than five hundred years’. We seek the natural conditions for referring to and commenting on propositions (in the next three sections), and we shall find that the indispensable process of nominalizing sentences to form expressions for use in referring to what these sentences say is subject to formal constraints that rule out certain referring expressions as necessarily vacuous.

The views of Ryle and Strawson on the conditions that the subject of a comment must satisfy in order to refer successfully to the proposition being commented on apply widely to all kinds of comments. By contrast, Kripke focuses narrowly on comments that predicate truth or falsity. Such comments, Kripke believes, court a special risk of being inconsistent, and he is concerned to formulate constraints on the use of the truth predicate that will avoid paradox. Sentences Kripke characterizes as “paradoxical” Ryle and Strawson would simply find inexpressive because they lack a proper anaphoric background. In one example of how the truth predicate can paradoxically embarrass us, Kripke
imagines Jones and Nixon commenting on one another’s assertions in the following way:

Jones: Most of Nixon’s assertions about Watergate are false.
Nixon: Everything Jones says about Watergate is true.

If it happens that Jones’s remark is the only one he made about Watergate, and that Nixon’s comments on the subject are evenly divided between those that are true and those that are false, then Nixon’s comment is true if and only if it is false, and likewise Jones’s is true if and only if it is false. According to Kripke, the moral of this and many similar exchanges is: “Our statements involving our notion of truth are risky: they risk being paradoxical if the empirical facts are extremely unfavorable” (Kripke 1975, 55). These sorts of situations, which are not uncommon, had led Tarski to suggest that “in [everyday, non-formalized] language the notion of truth cannot be used in a consistent manner and in agreement with the laws of logic” (Tarski 1956, 164). Kripke (Kripke 1975) proposes a conception of truth that can consistently be expressed within the language to which it applies. But his proposal, which Quine calls “discouragingly complex” (Quine 1987, 216), has not been widely accepted. The idea that the natural languages may be semantically inconsistent has its roots in the official view, which rejects the propositional interpretation of the anomalous sentences that Kripke and Tarski regard as antinomies. “Propositionally” interpreted, the Liar turns out to be nothing more than an interesting specimen of vacuous, inexpressive utterance that fails to make any kind of statement, consistent or inconsistent. And the same applies to the “risky” remarks made by Jones and Nixon. Neither utterance is a truth paradox. Because of their interdependent content, neither can serve as a proper antecedent to determine the referent of the other. Both therefore end up necessarily vacuous and fail to comment on what the other is saying.
Neither comment is paradoxically inconsistent; both are faulty, since neither succeeds in saying anything at all. Kripke, who is committed to the official view, finds both sentences paradoxical and says:

It would be fruitless to look for an intrinsic criterion that will enable us to sieve out as meaningless or ill-formed those sentences which lead to paradox. There can be no syntactic or semantic “sieve” that will winnow out the “bad” cases while preserving the “good” ones. (Kripke 1975, 55)

Kripke gives no reasons for saying so flatly that a search for a winnowing criterion would be fruitless. Perhaps he shares with Quine an aversion to truth-value gaps. In any case, we shall in fact find that his pessimism is unwarranted. The natural languages embody quite definite rules governing the way we refer to and comment on propositions. We will see that these provide sieves for winnowing out just the utterances that fail to make it as comments (including those Kripke regards as paradoxical) while preserving those that succeed.

2.5 Where the Risk Really Lies

Kripke’s story about Jones and Nixon is meant to illustrate the risk one takes with comments that predicate truth or falsity. But the risk of failure lies in the subject, not in the predicate. And when the subject is not well-grounded in a proper antecedent background, the predicate plays no role. Consider a closely similar exchange between Rhett Butler and Scarlett O’Hara:

Rhett: What you’re about to say, my dear, is unladylike.
Scarlett: Sir, no gentleman would say what you just said.

Nothing here about truth or falsity. All the same, Rhett’s remark about what Scarlett was about to say was “risky”; it could easily be rendered inexpressive, and Scarlett has taken full advantage of that by commenting on what Rhett had said in a way that makes her own comment anaphorically useless for purposes of determining what Rhett is talking about. And again, the failure turns out to be one of illegitimate self-reference. Rhett says that what Scarlett says is unladylike, but when we turn to what she says we see that her comment turns Rhett’s into a self-comment. For we now have ‘What Scarlett is about to say – namely that what I am saying ..., namely, that what Scarlett is about to say ..., namely, that what I’m saying, etc., etc. – is unladylike’. Scarlett’s retort is made at the same cost; hers too is a vacuous self-comment. For she is characterizing what Rhett says as ungentlemanly. But when we try to identify what Scarlett is referring to we get into another regressive loop: ‘... what you just said, namely that what I was about to say, namely that what you just said, namely that what I am about to say ...’. (When one points out to Scarlett that she has failed to say anything that could be said to be true, she dismisses that with a “fiddle-dee-dee”; Scarlett is perfectly willing to subvert Rhett’s remark at the price of rendering her own response inexpressive.)
The exchange is heated; clearly they are seriously at odds, but we cannot identify a proposition that either of them has expressed. As in the case of many a marital spat, despite all the sharp talk about

22 | Ryle’s Way With the Liar by Fred Sommers

what she is saying and what he is saying, neither party succeeds in saying anything at all. The Jones/Nixon exchange is similarly defective. Their utterances are inexpressive and would be so even if Jones had said that what Nixon is about to say is untrustworthy, with Nixon responding that what Jones has just said offended him. For while Jones’s utterance might well have offended Nixon, Jones did not in fact impugn as untrustworthy anything Nixon said, since neither Jones nor Nixon succeeds in saying anything at all. No paradoxes, just garden-variety utterances that lack the proper anaphoric background to be comments on something the other has actually said. A successful comment about what is said is explicitly or implicitly related to a “saying sentence” that provides the anaphoric background of the comment. Mutual comments, like those of Scarlett and Rhett or Jones and Nixon, turn out to be unsuccessful comments on what they themselves are saying. All such would-be comments fail in a radical way: they make no statements. In what follows we show how the natural constraints imposed on propositional comments constitute a “semantic sieve” that effectively winnows out any comment, C, purporting to be about the proposition that C.

2.6 Nesting Propositions

Subject expressions that refer to things said are our particular concern. On hearing me assert ‘There are no elves’, you might comment ‘THAT THERE ARE NO ELVES is not easy to prove’. Your comment now contains a “propositional term” – the referring expression ‘that there are no elves’. We shall use a bracket notation as a nominalization operator that transforms a sentence into a propositional term for use in referring to what that sentence says. If S is a statement-sentence, [s] will be the term that results from a propositional nominalization of S. For example, [There are no elves] is the propositional nominalization, THAT THERE ARE NO ELVES, of the sentence ‘There are no elves’. Another form of nominalization is gerundial. Given the sentence ‘There are no elves’, I can form the propositional term, ‘THERE BEING NO ELVES’ and say ‘There being no elves saddens Marian’. We say that ‘S’ is propositionally nested in the term [s] and propositionally embedded in the sentence that contains ‘[s]’. For example, ‘[There are no elves] is not easy to prove’ propositionally embeds ‘There are no elves’. Closely related to propositional terms are terms used in talking about facts or states of affairs. I use angle brackets for expressions that purportedly denote


facts. For example, ‘there being elks’, ‘that there are elks’, ‘the existence of elks’ are three varieties of nominalization we often use in speaking of the fact ⟨there are elks⟩. The first two of these nominalizations are also commonly used for referring to the proposition [There are elks]. Our discussion has been focusing on comments on propositions. But everything we say about the constraints on comments on propositions applies mutatis mutandis to comments on facts (see Sommers 1994a and 1997).

2.7 Propositional Depth

We say things with sentences and may then nominalize the saying sentence for talk about what it says. Suppose that Russell assertively utters the sentence:

(e) There are elks.

thereby expressing the proposition [e]. We comment on [e] by using various propositional terms, among them:

THAT THERE ARE ELKS
THERE BEING ELKS
WHAT RUSSELL SAID (when he assertively uttered ‘There are elks’)

In the comment ‘WHAT RUSSELL SAID is no news’, the subject ‘WHAT RUSSELL SAID’ refers to [e] elliptically. To explicate an elliptical reference to what was said we may supply a namely rider: ‘WHAT RUSSELL SAID, namely, THAT THERE ARE ELKS, is no news’.

2.8 The Structure of Comments

Let s be a sentence and [s] be the proposition expressed by s. The bracketed term, [s], is a nominalization of s that propositionally “nests” the sentence, s. For example, let s be ‘Some Australians are billionaires’:

(s) Some Australians are billionaires.

Since [s] is a definite referring expression, we may form the sentence,

(t) [s] is easy to prove. (colloquially: ‘It is easy to prove that s’)


Since (t) is a sentence, we may form

(u) [t] is true. (‘It’s true that it’s easy to prove that s’)

Australians are present in the world and among the things we say is that some are billionaires, etc. Propositions that have been expressed by sentences are also present in the world; in talking about them we may characterize them as true, obvious, original, entailed by other propositions, and so forth. (s) is about Australians; (t) – a comment on [s] – is about the proposition expressed by (s), while (u) – a comment on [t] – is about the proposition expressed by (t). The subject term of (u) is doubly bracketed; (u) can be represented as ‘[[s] is E] is T’ (It is true that it is easy to prove that some Australians are billionaires). Compare the three sentences:

(s) Some Australians are billionaires.
(t) [s] is easy to prove.
(u) [[s] is easy to prove] is true.

The first sentence, (s), contains no bracketed terms; it is not a comment on any proposition. We will say that its “propositional depth” is zero. The second, (t), is a comment that contains a singly bracketed term; its depth is 1. The third, (u), contains a doubly bracketed term; its propositional depth is 2. We can go on to form more sentences in which the original sentence (of zero depth) is ever more deeply nested within the propositional subject term. The propositional depth of each sentence is perspicuously given by the number of propositional nominalizations, as shown by the number of bracket pairs. The notion of propositional depth was first introduced by me in Sommers 1969. The idea of a progression beginning with basic (non-propositional) sentences – sentences whose propositional depth is zero – has some affinities with the idea of “groundedness” that Kripke makes use of in constructing his theory of truth. Of groundedness, Kripke (1975, 57) says: “[It] seems first to have been introduced into the literature [by] Hans Herzberger” (Herzberger 1970).
Unlike groundedness, which applies to sentences predicating truth and falsity, depth is propositional and applies to all kinds of statements. The notion of propositional depth remains valuable, but not much else in that 1969 paper has stood the test of time. A more durable conception of truth is presented in several later papers. (See for example, Sommers 1993, Sommers 1994, and Sommers 1997).


2.9 The Requirement of Determinate Depth

Every meaningful assertion has a finite determinate propositional depth. This semantic feature is built into the rules for forming terms to be used in referring to what is said and commenting on it. To begin with, we have a stock of non-propositional terms like ‘Australian’ and ‘billionaires’ that we use to form basic sentences (of zero propositional depth) by the regular rules of sentence formation in the natural language (e.g., English or French). We may then avail ourselves of various nominalizing devices for forming propositional terms to be used in commenting on what is said. For example, we may propositionally nominalize the sentence ‘Some Australians are billionaires’ as ‘SOME AUSTRALIANS BEING BILLIONAIRES’ or ‘THAT SOME AUSTRALIANS ARE BILLIONAIRES’ and then use this in a sentence that embeds the original sentence: ‘THAT SOME AUSTRALIANS ARE BILLIONAIRES is no secret’. In the case where I do not identify the proposition I’m commenting on, it must be possible in principle to identify it by providing an explicit nominalization that brackets the sentence that was used to say what was said. Thus I could explicate ‘What Al said is libellous’ by using a namely rider that refers to what Al said. The explication might be: ‘What Al said, namely, THAT SAL IS UNFAITHFUL ...’. This now explicitly contains a term that propositionally “nests” the original sentence used by Al to say what he said. Any namely rider explication will have an overt embedding structure revealing the depth of the explicated sentence. For example, ‘What Tom said, viz., THAT SOME AUSTRALIANS ARE BILLIONAIRES, is widely known’ shows this comment to have the embedding structure ‘[some A are B] is W’, a sentence whose propositional depth is 1. Any meaningful statement has a determinate propositional depth. A statement about what is said has a propositional depth greater than zero. Sometimes the propositional depth of a statement is not known.
For example, if Sam says ‘What Al says is hardly news’, we know that Sam’s statement is deeper by 1 than Al’s, but we do not know the depth of Sam’s statement until we learn the depth of Al’s. In principle, however, any comment that refers to what is said must have a determinate propositional depth greater than zero. Where, as in Sam’s comment, the reference to what is said is elliptical, explication in the form of namely riders must be possible. This is so because the basic procedure for forming a referring expression for referring to what was said requires a propositional “nesting” of the saying sentence (e.g., a nesting of S in ‘that S’). In some cases, an explication may not be practically possible. For example, being convinced that certain Cretans don’t tell the truth, I might impugn the remarks of a given Cretan woman by telling you to take everything she has told


you with a grain of salt. Here I obliquely refer to statements of whose depth I am ignorant, though my own statement has a propositional depth that is deeper by one than that of the Cretan woman’s deepest statement. In this case, my comment has a determinate depth, but I’m not able to say what depth it has. Generally then: If a statement, S, is not about a proposition, its propositional depth, D(S), is 0. If S is a propositional statement it will have an implicit or explicit embedding structure that may be represented by “bracketing” some basic statement a finite number of times. Consider the sequence of statement-sentences, e, f, g, and h:

e: There are no elves.
f: It is well known that there are no elves.
g: It is obvious that it is well known that there are no elves.
h: It is doubtful that it is obvious that it is well known that there are no elves.

The propositional depths of these statements are as follows:

e: D(e) = 0
f: [e] is W; D(f) = 1
g: [[e] is W] is O; D(g) = 2
h: [[[e] is W] is O] is D; D(h) = 3

By counting inward three brackets to the nested sentence, e, we find that it is triply nested in the subject term of (h).
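The bracket-counting idea lends itself to a mechanical check. The following sketch (my own toy illustration, not part of the text) reads a comment’s propositional depth off its bracket notation by tracking the maximum nesting of ‘[’ ... ‘]’ pairs:

```python
# Toy illustration (not the author's): propositional depth read off a
# bracketed comment, counting how deeply the basic sentence is nested.
def propositional_depth(comment: str) -> int:
    """Depth = maximum nesting level of '[' ... ']' pairs."""
    depth = max_depth = 0
    for ch in comment:
        if ch == '[':
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ']':
            depth -= 1
    return max_depth

assert propositional_depth("Some Australians are billionaires") == 0  # (s)
assert propositional_depth("[s] is easy to prove") == 1               # (t)
assert propositional_depth("[[s] is easy to prove] is true") == 2     # (u)
assert propositional_depth("[[[e] is W] is O] is D") == 3             # (h)
```

The four assertions mirror the examples (s), (t), (u), and (h) above: each added nominalization adds exactly one bracket pair, and the depth is recovered by counting inward.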

2.10 Depth and the Anaphoric Background

When applied to a comment, Strawson’s anaphoric condition for successful reference will have back reference to a sentence of lower depth than the comment. Consider again a normal sequence consisting of explicit background and comment:

James: A present King of France exists.
George: What James has just said is preposterous.

The anaphoric background statement by James is of zero depth. The propositional depth of George’s comment is 1.


Any comment of form ‘What X said is P’ has a determinate depth greater than zero. The depth may not be known. But explication must in principle be possible. For where explication is impossible in principle, the definite subject (‘What X said’) will lack anaphoric background and be necessarily vacuous (Sommers 1982, 331f). It follows that one way to show that a propositional utterance is inexpressive (“makes no statement”) is to show that it is impossible in principle to consistently assume that it has a fixed propositional depth. Consider once more the exchange between Jones and Nixon that Kripke discusses:

Jones: Most of Nixon’s assertions about Watergate are false.
Nixon: Everything Jones says about Watergate is true.

Kripke contrives empirical circumstances under which “Nixon’s remark is true if and only if it is false and likewise, Jones’s is true if and only if it is false.” And he says, without explaining why he thinks so, that it would be “fruitless” to try to avoid these paradoxical conclusions by looking for ways to disable both remarks as ill-formed or meaningless. In fact, we have such ways, and they show that under any circumstances the remarks of Jones and Nixon are not paradoxical but are would-be comments that fail to refer; neither satisfies the requirement for being a meaningful comment on a proposition because neither has a determinate propositional depth. Let D(J) be the depth of Jones’s statement and D(N) the depth of Nixon’s statement. Being a comment on Nixon’s statement, Jones’s statement is deeper than Nixon’s; so D(J) > D(N). Being a comment on Jones’s statement, Nixon’s statement is deeper than Jones’s; so D(N) > D(J). Neither utterance can consistently be supposed to have a determinate depth. Pace Kripke, both are winnowed out as “inexpressive” and non-paradoxical. We earlier applied the same analysis to the Rhett-Scarlett exchange, where no truth predicates were involved.
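The clash between D(J) > D(N) and D(N) > D(J) can be pictured as a cycle in a “comments on” graph. Here is a hedged sketch (my own construction; the helper name and the speaker labels are illustrative): a consistent depth assignment exists exactly when no utterance, followed through the chain of its targets, comes back to itself:

```python
# Sketch: each key comments on its value; depths satisfying
# D(commenter) > D(target) exist iff the graph has no cycle.
def has_determinate_depths(comments_on: dict[str, str]) -> bool:
    def reaches(start: str, goal: str, seen=None) -> bool:
        seen = seen or set()
        nxt = comments_on.get(start)
        if nxt is None or nxt in seen:
            return False
        return nxt == goal or reaches(nxt, goal, seen | {nxt})
    # A cycle (including a self-comment) would force D(X) > D(X).
    return all(not reaches(x, x) for x in comments_on)

assert has_determinate_depths({"George": "James"}) is True
assert has_determinate_depths({"Jones": "Nixon", "Nixon": "Jones"}) is False
assert has_determinate_depths({"Liar": "Liar"}) is False
```

The George/James case is well-grounded (George’s comment sits one level above James’s zero-depth statement), while the Jones/Nixon loop and the Liar’s self-loop admit no depth assignment at all.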

2.11 Comments on Sentences

Talk about propositions can run afoul of the determinate depth requirement. But talk about sentences is not talk about what is said and has no effect on depth. Consider:

Simon: Alvin’s sentence is grammatical.
Alvin: Simon’s sentence is English.


Both sentences are of zero depth. Both are nonvacuous and patently true. Sentential self-reference is permissible; a statement may be about the sentence being used for making it. Consider also:

(J) The sentence you are now reading has less than twelve words.

Sentence (J) is expressive and true. By contrast, propositional self-reference is ruled out. Consider Catherine’s utterance:

(K) What I am saying now has never been said before.

We assume the intended referent to be [K], the proposition purportedly expressed by (K). The depth of (K) is not known. But suppose it to be n. Since Catherine intends to refer to the proposition expressed by (K), we can represent (K) thus:

[K] has never been expressed before.

according to which – and contrary to our assumption that D(K)=n – D(K)=n+1. It is thus not possible in principle to assign a determinate depth to (K). But that exposes (K) as an inexpressive, vacuous utterance that says nothing. More generally: No statement can be about a proposition it expresses. The Liar’s utterance transgresses in the same way. Take it in its so-called “strengthened” form (predicating ‘is not true’ instead of ‘false’):

L*: What I am now saying is not true.

L* purports to be about the proposition it expresses. We may therefore explicate it as ‘What I am now saying, namely, THAT L*, is not true’, which reveals its propositional structure to be:

L*: [L*] is not true.

Since L* is a comment on a proposition, it must be deeper by 1 than the proposition it is commenting on. But no determinate depth can be consistently assigned. Assume that D(L*)=n. Since L* is a comment on [L*], D(L*)=n+1. We conclude that, despite appearances, L* is vacuous, inexpressive and without truth value.
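The failed attempt to fix D(L*) can be watched mechanically. This toy generator (my own illustration, echoing the bracket notation) expands the self-referring subject of L* step by step; each expansion adds another bracket pair, so the regress never bottoms out in a basic, zero-depth sentence:

```python
# Hypothetical illustration: expanding the strengthened Liar's
# self-reference using the bracket notation for things said.
def liar_expansion(k: int) -> str:
    """Replace the self-referring subject k times."""
    s = "L*"
    for _ in range(k):
        s = f"[{s}] is not true"
    return s

assert liar_expansion(1) == "[L*] is not true"
assert liar_expansion(2) == "[[L*] is not true] is not true"
# Each step adds one bracket pair: the apparent depth grows without
# bound, so no finite n can satisfy D(L*) = n and D(L*) = n + 1.
```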


That the Liar is vacuous is confirmed by Ryle’s observation that it generates a regress of namely riders: what I am now saying, namely what I am now saying is false, namely, etc., ... is false. Using the bracket notation for things said, this is the regress:

L = [L] is false = [[L] is false] is false = [[[L] is false] is false] is false = ...

which shows that any depth number, n, that one might assign to L is greater by 1 than n. Note that the impossibility of satisfying the requirement of determinate propositional depth for self-comments is unaffected by negation. Any utterance of form ‘What I am now saying is P’ is inexpressive and so is one of form ‘What I am now saying is not P’. The point is that comments on propositions that violate Strawson’s anaphoric requirement “make no statement” and are not candidates for having any kind of truth value (including “third” values). Both Ryle’s argument and our attempts to fix a determinate depth to propositional self-comments show that no self-comment can consistently be assumed to have a determinate depth. The converse is also true: every utterance that cannot satisfy the requirement of determinate depth (RDD) turns out to be self-referential. We earlier noted this informally when discussing the Jones-Nixon and Rhett-Scarlett exchanges and later showed it more formally when we applied the requirement of determinate depth to the comments in these exchanges and found them to be self-comments. Finally we note that inexpressive exchanges may involve more than two utterances. Consider the sequence (R), (M), and (S) spoken by Rhett, Melanie and Scarlett:

(R) What Scarlett is saying is impertinent.
(M) What Rhett is saying is inconsiderate.
(S) What Melanie is saying is glaringly obvious.

By showing that each of these comments has no determinate depth we show it lacks a proper anaphoric background for commenting on what is said. Take Rhett’s comment.
Rhett is commenting on Scarlett’s comment on what Melanie is saying about Rhett’s comment. Since Rhett talks about what Scarlett says, D(R) > D(S). Scarlett comments on what Melanie says, so D(S) > D(M). It follows that D(R) > D(M), i.e., Rhett’s remark commenting on Melanie’s remark is propositionally deeper than Melanie’s. But Melanie is commenting on what Rhett is saying, so her remark is deeper than Rhett’s. Neither Rhett’s nor Melanie’s would-be comments can consistently be assumed to have a determinate depth.


Each remark turns out to be an illicit commenting on itself. A similar argument disables Scarlett’s comment on what Melanie is saying. The semantic sieve that “winnows out the ‘bad’ cases” is the requirement of determinate propositional depth, which exposes the improper and inadequate backgrounds of all three would-be comments.
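The unsatisfiability of the three-way constraints D(R) > D(S), D(S) > D(M), D(M) > D(R) can be checked by brute force. In this sketch (my own; the search bound and the labels are arbitrary), we look for any assignment of depths meeting the strict “deeper than” constraints; the well-grounded James/George pattern succeeds, while the three-way cycle cannot, since a cycle of strict inequalities is unsatisfiable for any values whatever:

```python
from itertools import product

def satisfiable(constraints, names, bound=6):
    """Try every depth assignment with values in range(bound);
    constraints are pairs (a, b) demanding D(a) > D(b)."""
    for values in product(range(bound), repeat=len(names)):
        d = dict(zip(names, values))
        if all(d[a] > d[b] for a, b in constraints):
            return True
    return False

# James/George: George comments on James's zero-depth statement.
assert satisfiable([("George", "James")], ["George", "James"]) is True
# Rhett/Melanie/Scarlett: a closed cycle of comments.
cycle = [("R", "S"), ("S", "M"), ("M", "R")]
assert satisfiable(cycle, ["R", "S", "M"]) is False
```

The bounded search is only illustrative, but the cyclic case fails for every bound: chaining the inequalities would give D(R) > D(R).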

2.12 Summarizing Conclusion

We say things. And often we comment on what is said. For instance, A’s statement ‘Mars is smaller than Venus’ provides the anaphoric background for B’s comment ‘That Mars is smaller than Venus was already noted by Galileo’. To comment on what is said by a sentence S, we need special referring expressions. Every natural language has nominalizing transformations that transform a sentence into a nominal expression that can be used to refer to what that sentence says (e.g., by transforming the sentence ‘Mars is smaller than Venus’ into the noun phrase ‘that Mars is smaller than Venus’). These provide us with referring expressions that can serve as subjects of comment on the propositions we wish to talk about. Propositions, not sentences, are the proper entities to which our comments about what is said refer, and propositions are what such comments characterize in various ways (e.g., as known to Galileo, as interesting, false, true, profound, shallow, tragic, etc.). In the case of a comment like ‘That Joseph had been killed by a wild beast profoundly grieved Jacob’, the proposition commented on is explicitly identified by the subject that nests the sentence expressing the proposition. However, in the case of many comments, the proposition referred to is not explicitly identified. For example, ‘What Reuben told Jacob about Joseph’s death profoundly grieved Jacob’ is a comment whose subject definitely describes but does not identify the proposition being characterized as causing grief to Jacob. In that case, the proposition in question can be identified by namely riders of form ‘that S’. Strawson’s anaphoric requirement for successful reference comes into its own in the case of comments, where, surprisingly, we find it provides the best way to resolve the puzzles posed by the self-comments we call antinomies. Here Ryle pointed the way by arguing that the referring subject of ‘What I am now saying is false’ cannot be anaphorically grounded.
No namely rider can possibly identify a sentence that expresses the proposition to which the Liar utterance purports to refer and comment on. The Liar sentence “makes no statement”; being inexpressive, it is not a symptom that natural language is inconsistent in its use of truth predicates.


In any natural language, the process of forming a sentence for commenting on a proposition has a Chinese box structure that confers on any legitimate comment a finite and fixed “propositional depth.” NN uses a sentence, S, of zero depth to say that S. I comment on what NN says and my comment, which may contain a subject of form ‘that S’, expresses a proposition that is subject to further comments that more deeply embed S. When the subject refers to the proposition by description but without explicitly specifying it, it must in principle be possible to identify the proposition by one or more “namely riders.” Where it can be shown that no identification is possible, for example, by showing that we cannot consistently assign to the comment any fixed propositional depth, the comment is exposed as illegitimate. No box can contain itself. No utterance can be a comment on what it itself is saying. The rules of natural language for forming referring expressions that refer to propositions rule out a determinate propositional depth for the Liar or any other utterance that purports to comment on the proposition it is expressing. For there can be no such proposition.

3 The Logic Mountain Range

Logic can be patient for it is eternal.
Oliver Heaviside

3.1 The Negativity Scene

I wish first of all to state briefly what I think positively about negation.
Ryle

You’ve got to accentuate the positive
Eliminate the negative
J. Mercer

Even a casual inspection of our native language reveals that some of our terms are explicitly negative. Obvious examples are ‘unhappy’, ‘featherless’, ‘unformed’, ‘nonexistent’. Without further reflection, one might conclude from this that all non-formative expressions in our language are inherently positive unless they are the objects of one of many kinds of formative operators (e.g., ‘un’, ‘less’, ‘non’, etc.) that turn them into explicitly negative terms. As it happens, very many logicians and philosophers of language have given the matter further reflection, with the result that they, along with those who have not reflected so deeply, have come to the conclusion just mentioned: that the default nature of any term is positive unless made overtly negative by some formative device. Indeed, many have held that the negative of a term or a sentence is somehow less noble than the positive. In 1654 Z. Coke wrote, “The affirmation is before, and more worthy then (sic) the negation” (quoted in Speranza and Horn 2012, 135-136). Of course there are many terms that are quite explicitly positive, having become the objects of one of many kinds of formative operators (e.g., ‘ful(l)’, ‘ive’, etc.), that turn them into explicitly positive terms (e.g., ‘hopeful’, ‘massive’). The consensus among most contemporary logicians and philosophers of language is that, generally speaking, all non-formative expressions, whether simple terms or complex terms or even entire sentences, are inherently positive unless operated upon by an appropriate formative operator (a negator) that makes them explicitly negative. There is an alternative, older view of this (for much more on the history of logical negation see Horn 1989 and Speranza and Horn 2012). Traditional logic, initiated by Aristotle, and still favored today in some quarters, takes all terms to


be innately neither positive nor negative (uncharged, neutral) until they are used. Any used term (simple, complex, sentential) is either logically positive or logically negative. In practice, terms used positively tend to have their logical charge (positive) suppressed. Rarely noting that such terms are implicitly charged, modern logicians unreflectively take them to be logically uncharged (unlike negative terms, whose logical charge is usually explicit). But, of course, there is an important difference between having something hidden and not having it at all – even though this is often not obvious. A one-armed man and a stage actor pretending to be one-armed might, on casual inspection, both appear to lack an appendage. But what the first does indeed lack the second merely suppresses, hides. Terms used positively don’t lack logical charge – they merely tend to hide it. It’s still there, and it cannot be ignored by the logician aiming to reveal the logical character of such terms and more complex expressions containing them (see Horn 1989, especially chapter 3, and Englebretsen 1987a). When it comes to mathematics, we all find the idea of positive and negative charge on numerical expressions quite familiar. We know that 4 has both a positive square root (+2) and a negative square root (−2). And if we think a bit more deeply about the positive and negative in mathematics we realize that in addition to the positive and negative mathematical charges that operate on numerical expressions one at a time, there are positive and negative operators that operate on numerical expressions two at a time. These binary devices, addition and subtraction, make use of the same symbols (+ and −), but the resulting ambiguity is innocuous. Consider the expression ‘−2+7’. Here the first operator is unary, applying only to the first number; the second is binary, applying to both.
The plus sign prefixed to ‘7’ is the sign of addition, and while the charge on ‘2’ is explicitly negative, the positive charge on ‘7’ is implicit. Next consider the expression ‘8−5’. As is customary with positively charged expressions, the sign of positive charge on ‘8’ is suppressed. What now of the minus sign? This could be read in one of two ways: as a binary sign of subtraction, with ‘5’ taken as implicitly positive (i.e., ‘+8−(+5)’) or as a sign of negative charge on ‘5’ with the binary sign of addition suppressed (i.e., ‘+8+(−5)’). The first takes the expression to be the result of subtracting one positive number from another positive number; the second takes it to be the result of adding a negative number to another positive number. The resulting numerical value is the same in either case, so it makes no difference how we read it (though the first reading is probably the most natural). But the fact that it makes no difference here whether we read the expression as an addition or a subtraction is the very reason why we tend to forget that there is this fundamental difference between the positive and negative charges, on the one hand, and the operations of addition and subtraction, on the other.

The systematic ambiguity of mathematical symbols is not exhausted by the signs of numerical charge and addition/subtraction. Numerals themselves can be ambiguous. The expression ‘555’ is hardly a 3-membered list of ‘5’s. When properly understood, each numeral is read differently: the first as ‘five hundred’, the second as ‘fifty’, the third as ‘five’. The ambiguity of the numeral is resolved by position. These kinds of systematic ambiguity are a source of both great expressive economy and expressive power. There is no little irony in the fact that many modern logicians, who aim to mimic the undeniable strengths of mathematics in their own formal systems, do so by contrasting such formalisms with the informality of natural language on the basis of the latter’s being plagued by such unwelcome features as vagueness and ... ambiguity. Modern mathematical logicians hope to take inspiration from the formal science of mathematics; and they usually do. But when it comes to negation – not so much. Standard systems of logic fail to take adequate note of the difference between genuine lacking and merely apparent lack. The result is a formal language that has negators (actually only one) but no formative for positive charge (since positive charge is simply taken as the default charge). In the kinds of formal logical languages favored today, negation is always sentential. One might put a sign of negation at various places in a sentence, but the inevitable result will always be the negation of the entire sentence. In ordinary English we could “negate” the sentence ‘Some boys like all footballers’ in a number of ways: ‘Some boys don’t (do not) like all footballers’, ‘Some boys dislike all footballers’, ‘No boys like all footballers’, etc.
The modern logician will give a single paraphrase for all of them: ‘Not: some boys like all footballers’. The ‘not’ here is taken as elliptical for ‘it is not the case that’ and is usually symbolized by a unary formative for sentential negation: ~ . “‘~p’ means ‘not-p,’ which means the negation of ‘p’” (Whitehead and Russell 1910, 6). “The identification of ‘~’ with ‘it is not the case’ is to be preferred to its identification merely as ‘not’” (Strawson 1952, 79). This view that all logical negation is sentential was firmly established in modern logic by Gottlob Frege and those who followed (especially Bertrand Russell and W.V. Quine), but it is as old as the ancient Stoic logic. That logic departed from the then-established Aristotelian theory that the opposition between a proposition and its negation was only one kind of logical opposition. In Categories, 11b17-23, Aristotle had distinguished four kinds of opposition: contradiction (between a sentence and its negation), contrariety (between a pair of terms incompatible with one another, e.g., ‘hot’/‘cold’, ‘red’/‘blue’, ‘happy’/‘sad’), correlation (between relative terms such as ‘half’/‘double’), and privation (between a term and its negation, e.g., ‘hot’/‘non-hot’, ‘red’/‘non-red’). Scholastic followers of Aristotle recognized that in natural language (or at least in scholarly medieval Latin) negation tends to attach to the copula, yielding ‘is not’ or ‘are not’. In such cases they held that the predicate term was “disjoined” from or “disagreed with” the subject term. Where ‘Some politicians are honest’ joins ‘politicians’ and ‘honest’, ‘Some politicians are not honest’ disjoins the two terms. A few centuries later Thomas Hobbes noted the similarity between the positive/negative character of mathematical expressions and the positive/negative character of the expressions of our thoughts: “By the ratiocination of our mind, we add and subtract in our silent thoughts, without the use of words” (Hobbes 1839, 3). And soon after, Leibniz was even clearer:

Thomas Hobbes, everywhere a profound examiner of principles, rightly stated that everything done by our mind is a computation by which is to be understood either the addition of a sum or the subtraction of a difference. ... So just as there are two primary signs of algebra and analytics, + and −, in the same way there are, as it were, two copulas, ‘is’ and ‘is not’; in the former case the mind compounds, in the latter it divides. In that sense, then, ‘is’ is not properly a copula, but part of the predicate; there are two copulas, one of which, ‘not’, is named, whilst the other is unnamed, but is included in ‘is’ as long as ‘not’ is not added to it. This has been the cause of the fact that ‘is’ has been regarded as a copula. We could use as an auxiliary the word ‘really’; e.g. ‘Man is really an animal’, ‘Man is not a stone’. (Leibniz 1966, iii)

This idea that signs of opposition such as + and − could play a central role in a system of formal logic was taken up by the 19th century British algebraic logicians (especially Boole and De Morgan). But their way of doing formal logic was soon eclipsed by the new mathematical logic that resulted from Frege’s innovations in logical syntax. And we’ve now just arrived at the base of this mountain of logic. So, before we begin our climb, let’s take a brief rest and look a bit more closely at the notion of term negations and contrariety. ‘Red’ and ‘blue’ are contrary terms. They can both apply falsely to the same thing (this banana in my hand is neither red nor blue), but they cannot both apply truly to the same thing (this apple can’t be both red and blue). However, ‘red’ and ‘blue’ are not logically contrary terms – they are merely (non-logically) contrary. By contrast, ‘red’ and ‘non-red’ are logical contraries. Any term and its negation are logical contraries. What is the relation between the logical and non-logical contraries of a given term? Note that some terms have just one non-logical contrary (‘male’/‘female’, ‘winner’/‘loser’); some terms have several (‘started on Monday’/‘started on Tuesday’, ‘started on Wednesday’, ‘started on Thursday’, ‘started on Friday’, ‘started

Aristotle’s Peak | 37

on Saturday’, ‘started on Sunday’); some terms have an infinite number of non-logical contraries (‘weighs 1 gram’/‘weighs 2 grams’, ‘weighs .07 grams’, ...). We can say that the logical contrary of a term is equivalent to the disjunction of its non-logical contraries. Whatever is non-red is either blue or green or white or yellow or ... . Note also that the negation of a negated term (‘non-non-red’) is equivalent to the negation of the logical contrary of that term (‘non-(blue or green or white or yellow or ...)’), which turns out to be equivalent to the conjunction of the logical contraries of each of the non-logical contraries of that term (‘non-blue and non-green and non-white and non-yellow and ...’). More formally (where ‘P1’, ‘P2’, ... ‘Pn’ are the non-logical contraries of ‘P’):

−P = P1 or P2 or … Pn
−(−P) = −(P1 or P2 or … Pn)
−(−P) = −P1 and −P2 and … −Pn

And, since ‘−(−P)’ = ‘P’:

P = −P1 and −P2 and … −Pn

3.2 Aristotle’s Peak

Our logic contains more from Aristotle than from any other author, for all the precepts of logic belong to Aristotle.
Arnauld

Logic has travelled a long way since Aristotle but not always for the better.
T. Smiley

Well, I have to admit, we’ve come to quite a mountain of logic. It happens to be named after Aristotle because he got here first. It’s big; it’s old; it can’t be avoided. It is the dominant feature of this country. For the best view of much of the country we have to climb it. And that means equipping ourselves with the appropriate cognitive tools. We need a formal logic. The trail gets a bit rocky here, so we need to be mindful of our footing. But we will take the shortest, easiest trail. Charles Darwin discovered many things during his time on the HMS Beagle. He discovered, for example, a variety of types of finches on the Galapagos Islands that were distinct from those on the mainland. Later, over his long career as a naturalist, he discovered many features of organisms such as barnacles, earthworms, etc. However, what Darwin did not discover was the origin of

38 | The Logic Mountain Range

species by means of natural selection (the “theory of evolution”). And neither did Alfred Wallace discover it. Theories, theses, accounts, explanations are certainly the outcomes of discoveries – often empirical studies such as Darwin’s – but they are not themselves discoveries. Such abstract things are, quite literally, the creations of those who first entertain them. They are propositions or coherent sets of propositions that can be grasped by any thinking being, but are initially brought into being by their authors. They are creations – they are not discoveries. Good theories in science are ones that offer a clear understanding of their foundations as well as of their consequences. Their foundations may include previous theories (or at least parts of such theories) as well as discoveries (including those leading to that foundational theory). For example, Kepler’s theory of elliptical orbits rested on Copernican heliocentrism as well as the empirical discoveries of Tycho Brahe. Good theories either state or make clear their salient consequences – they make good predictions and open new lines of research. That’s the way it is with good theorizing, not only in the empirical sciences, but also in other areas of study such as political theory, history, or criminal detection. There are, however, areas of study that rely minimally or not at all on empirical investigation. A purely formal study like pure mathematics is a good example. Mathematical theory often does depend on prior theorizing for its foundation. Thus Leibniz and Newton formulated the principles of the calculus on the basis of prior mathematical theories such as analytic geometry. But, as with Descartes’s claim of formulating the principles of analytic geometry in a dream, sometimes a formal theory is simply created in a way that appears de novo. At any rate, as with theories formulated in the empirical sciences, good formal theories are fecund in having valuable salient consequences. 
In the area of formal logic there is at least one very good example of a formal theory that really was created de novo, having no theoretical precedent and having consequences for both further theorizing and empirical study. Aristotle initiated the systematic study of formal logic in the 4th century bce. No such investigations had been done before, as he reports at the end of Sophistical Refutations (184b1-2): “[B]ut in the area of deduction nothing at all was available to us before we worked it out by investigations and practice over a long time.” But, of course, people had been logical long before that. They were, and are, logical in the sense that in ordinary, quotidian circumstances they are naturally adept at spotting inconsistencies and deducing appropriate conclusions from premises. Indeed, very young children master a large portion of their speech community’s vocabulary before entering school, and only a few years later have an impressive grasp of the grammar of their native language. Logical


ability comes along soon after, so that by adolescence most children require little time or effort to spot inconsistencies and deduce valid conclusions at a level of modest complexity. We are rational animals – but we are not always perfectly rational. Aristotle’s intent was to lay down the rules which, when followed, seemed to guarantee that, when a speaker made a claim, someone could say with certainty what other claim or claims have to be true if the original claim was true (validity). As well, if a speaker made more than one claim (say in the course of a debate), one could say with certainty whether those claims could all be true at the same time (consistency) or not (inconsistency). His system of formal logic, syllogistic, took a small number of kinds of arguments (a claim or claims along with a further claim that would have to be true if the original claim(s) were true) to be valid simply by virtue of their forms alone. The form of an argument was taken to be fully determined by the logical forms of its constituent statements, claims. Of course, the tricky part is determining what the logical form of a statement must be. A theory of logical syntax is meant to do just that. One way of conceiving of a formal logic is by thinking of it as a formal language. One hopes that it will be a language that reflects the logical aspects of non-formal, natural language. It is meant to be an abstraction from, or a model of, natural language. So it has a small, artificial vocabulary (or lexicon), a relatively simple grammar (theory of logical syntax), a proof theory (to account for things like validity and consistency), and a semantic theory meant to provide ways of interpreting expressions in the formal language. Traditional formal logic from its origins in Aristotle’s Prior Analytics to the end of the 19th century was for the most part a term logic. 
Generally, a term logic divides the lexicon into simple material, categorical expressions, i.e., terms (“the fat and the lean” of discourse (Ryle 1954, 116)), and formal, syncategorematic expressions, i.e., functors (the “joints or tendons of discourse” (Ryle 1954, 116)). The former have meanings (on their own, as traditional logicians said); the latter do not. Simple terms are logically homogeneous in the sense that they are not divided into distinct types by the term logician. Functors are of two types: unary and binary. They are applied to (material) expressions, terms (one or two at a time) to form new, more complex terms. In some versions, even entire statements are taken to be complex terms formed by a binary functor. For most traditional logicians, the main goal of a formal language was to model arguments and derivations made in an everyday natural language. Consequently, the logical syntax of such a logic took its clues from the grammar (or at least certain salient features of it) characteristic of the language used by the logician (e.g., ancient Greek for Aristotle and the Stoic Chrysippus, scholastic


Latin for the medieval logicians, and so forth). In his dialogue called the Sophist, Plato had wondered about the possibility of forming meaningful combinations (sentences) using any pair of terms; perhaps there are restrictions on such combinations (Sophist 261d). His response was that such combinations only come about when “verbs are mingled with nouns” (262c). Plato’s account of sentence formation amounts to a binary theory of logical form, logical syntax: sentences consist of pairs of expressions that are fit for distinct logical roles (nouns, onoma, and verbs, rhema). No statement can be formed by a pair of nouns; no statement can be formed by a pair of verbs. For Plato, vocabulary is heterogeneous, consisting of two distinct kinds of expressions. A noun and a verb are more than just a pair of terms. They form a sentence when in use, and the rules of grammar (in this case Greek) guarantee that they “fit” to form a single linguistic unit; they are mingled, mixed, combined, blended as Plato said (and this mingling was meant to reflect something in Reality – the mixing of Forms). Aristotle was initially convinced of this binary theory of logical syntax, which he had learned from his teacher. In Categories (1a16ff) and De Interpretatione (16a-17a37), he took sentences to be combinations of noun-verb pairs. Notice that on such a binary theory of logical syntax a simple sentence requires only two expressions. These are bound together to form a linguistic unit (the sentence) not by any third element, but simply by their grammatical roles. But Aristotle could not maintain this binary theory once he had built syllogistic logic in his Prior Analytics. Syllogistic is a genuine term logic. It assumes a vocabulary whose material, non-logical, expressions are simply terms – not distinguished in any way by grammatical role (nouns, verbs, adjectives, etc.) – the vocabulary is homogeneous. Aristotle had some good reasons for assuming a homogeneous vocabulary. 
For one thing, in any syllogism at least one term must appear in a sentence in the subject term position and in another sentence in the predicate term position. This is not possible if only nouns are reserved for the first position and verbs for the second. But, if terms are to be combined to form sentences, and if grammar is not the guarantor of sentential unity, then how are pairs of terms to be combined? Aristotle’s answer was a ternary theory of logical syntax. A statement must consist of three expressions: a pair of terms and a logical copula. The logical copula binds the two terms and in doing so yields a unified linguistic expression. A sentence cannot be formed just by pairs of terms; something must tie, bind, glue, connect, join, link, combine, fuse, copulate them. Quite literally, a sentence is a pair of copulating terms. Each term in a sentence is the terminus, end point (horos, at Prior Analytics 24b16), with the logical copula coming between the two terms. There were four


logical copulae for Aristotle. English versions are: ‘belongs to every’, ‘belongs to no’, ‘belongs to some’, and ‘does not belong to some’. Oddly, some have denied that these are actually logical copulae for Aristotle. Peter Geach has said that, in Prior Analytics, Aristotle continued to hold that a statement could consist of just a pair of terms unlinked. Expressions such as ‘belongs to’, ‘applies to’, ‘is said of’, ‘is predicated of’, ‘does not belong to’, etc. do “not supply a link between” the two terms, but were “meant only to give a sentence a lecturer can pronounce” (Geach 1972, 53). Geach aside, it is important to notice that Aristotle’s logical copulae incorporate both quantity and quality. Moreover, these apply to the statement as a whole, not just to one or the other of the terms. Quantity is the logical feature of a statement by virtue of which its truth is determined by all or some of the individuals referred to by its so-called subject term – the things to which the first term, the predicate term, is said to belong or not belong. Quality is the logical feature of a statement by virtue of which the predicate term is affirmed or denied of the referents of its subject term. For Aristotle, the general form of any statement is:

P applies to (hyparchei) some/every B

where ‘applies’ is generic for ‘belongs’ or ‘does not belong’. As we will soon see, the notion that a statement is a combination of a quantified term (a subject) and a qualified term (a predicate) was not Aristotle’s. That idea was due to later scholastic logicians. For Aristotle, quantity and quality apply to a statement as a whole, not to either of its constituent terms. It is also important to recall from our brief view of the Negativity Scene that for Aristotle (and all traditional logicians who followed), there are two kinds of negation. Terms have contraries (see Categories 12a26-12b5, De Interpretatione 23b23-24 and 24b7-10, and Metaphysics 1055a34). 
Two terms are contrary when they cannot both apply truly to the same thing (at the same time, in the same sense), though both may fail to apply. If a term doesn’t belong to the thing but could sensibly be said to belong to it, then that term is said to be privative with respect to the thing. Socrates is sighted, but he could be blind; so ‘blind’ is a privative term. By contrast, a stone is neither sighted nor non-sighted (blind). A term might have any number of contraries but it has only one privative (call it the logical contrary). Thus ‘red’ has as contraries ‘blue’, ‘green’, ‘white’, ‘black’, etc., while its logical contrary is just ‘non-red’. So a term and its logical contrary have opposite charges. Statements also come in oppositely charged pairs. A term can be affirmed or denied of all or some of another. It is the qualitative aspect of the logical copula that determines affirmation or denial. Thus, ‘belongs to some’ and ‘belongs to


every’ indicate affirmation; ‘belongs to no’ (= ‘does not belong to some’) and ‘does not belong to every’ indicate denial. A pair of statements with the same terms and with the same quantity but a different quality are mutually contradictory. ‘A belongs to some B’ and ‘A belongs to no B’ (= ‘A does not belong to some B’) are contradictories. ‘A belongs to every B’ and ‘A does not belong to every B’ are also contradictories. A pair of statements with the same terms and the same quality but different quantity are mutually contrary. ‘A belongs to every B’ and ‘A belongs to no B’ are contraries. Given a pair of contradictory statements, one must be true and the other false; given a pair of merely contrary statements, both could be false but they could not both be true. How are term negation and statement denial related? Consider a pair of statements having the same logical copula and the same two terms, except that the first term in one is the negation (logical contrary) of the first term in the other, e.g., ‘A belongs to every B’ and ‘non-A belongs to every B’. An example in English might be ‘Athletic belongs to every boy’ and ‘Non-athletic belongs to every boy’. It is clearly impossible for both of these to be true but they could both be false (imagine a boy who is only somewhat athletic). So they are contraries. Remember that ‘belongs to no’ is simply a slightly more colloquial version of ‘does not belong to some’. If ‘A’ does not belong to some thing, x, then ‘non-A’ must belong to x. If ‘A does not belong to some B’ is true, then ‘Non-A belongs to every B’ must be true. Generally, to deny a term of some/every B is to affirm its logical negation of every/some B. For Aristotle, just as terms are the primary elements of statements, statements (premises and conclusions) are the primary elements of inferences, syllogisms. 
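The contrariety just illustrated (‘Athletic belongs to every boy’ vs. ‘Non-athletic belongs to every boy’) can be verified over small set-models. The sketch below is an interpretive gloss, not the author’s formulation: it reads terms as sets over a toy universe and gives ‘belongs to every’ existential import, as Aristotle’s logic is usually taken to assume.

```python
# A model check of statement contrariety: 'A belongs to every B' and
# 'non-A belongs to every B' are never both true but sometimes both false.
import itertools

universe = {1, 2, 3, 4}

def subsets(s):
    return [set(c) for r in range(len(s) + 1)
            for c in itertools.combinations(sorted(s), r)]

def belongs_to_every(A, B):
    # 'A belongs to every B', with existential import for B
    return bool(B) and B.issubset(A)

both_true = both_false = False
for A in subsets(universe):
    for B in subsets(universe):
        non_A = universe - A               # term negation: logical contrary of A
        p = belongs_to_every(A, B)
        q = belongs_to_every(non_A, B)
        both_true = both_true or (p and q)
        both_false = both_false or (not p and not q)

# Contraries: cannot both be true, can both be false.
assert not both_true
assert both_false
```

The check succeeds for the same reason the text gives: B cannot be a nonempty subset of both A and its complement, while a B that straddles the two (the “somewhat athletic” boy) falsifies both statements at once.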
The primary tasks of syllogistic logic are (i) to determine which syllogisms are valid, (ii) to find the premises that would lead to a syllogistic conclusion, and (iii) to demonstrate the validity of valid syllogisms.

A syllogism is a logos [one or more things expressed, said] in which, certain things being put forward, something other than that follows of necessity because of these being so. By ‘because of these being so’ I mean ‘following through them’, and by ‘following through them’ I mean that no term is required outside for generating the necessity. (Prior Analytics 24b18-23)

A syllogism (the Greek word means a summing up, a computation, an addition), then, is an argument or inference that consists of premises (πρότασις, protasis, something put forward) and something else, a conclusion that follows from the premises by necessity.

I call a syllogism perfect if it requires no other term beyond these assumed for the necessity to be evident; and imperfect if it requires one or more terms that are necessary through


the things put forward, but have not been assumed through propositions [i.e., have not themselves been put forward]. (Prior Analytics 24b24-27)

Perfect syllogisms are inferences so obviously valid that no rational person requires more than an initial encounter to recognize their validity. Imperfect syllogisms require proof. The simplest syllogism consists of a pair of premises and a conclusion, each of the terms of which is also a constituent in one of the premises, and the other term of each premise is a term not found in the conclusion. The terms of the conclusion are called the “major” (the one applied to the other term) and the “minor”, and the term that occurs in each premise but not in the conclusion is called the “middle” (the premise with the major term is called the “major premise” and the premise with the minor term is called the “minor premise”). According to Marian Wesoły, “For Aristotle, to ‘analyze’ is to make an investigation of how to discover terms and premises from which to deduce the desired conclusion, or of how to resolve a conclusion into its terms constructing premises” (Wesoły 2012, 89). Aristotle’s analytics procedure consisted of resolving a “problem” (i.e., a conclusion) by “finding such predicative relations that connect the major with the minor by means of the middle one. The point of departure in the analysis is a given problem (conclusion), which is always known in advance, before the premises are decided on” (Wesoły 2012, 102).

Analysis then was the process designed to accomplish the second task set for syllogistic logic, the finding of premises that would lead to a given conclusion. This was done by discovering an appropriate middle term that, when connected predicatively to each term of the conclusion, would yield the appropriate pair of premises. How did Aristotle envisage proving valid but imperfect syllogisms? As it happens, he took just four syllogistic forms to be valid. They were:

(1) P belongs to every M        (2) P belongs to no M
    M belongs to every S            M belongs to every S
    So P belongs to every S         So P belongs to no S

(3) P belongs to every M        (4) P belongs to no M
    M belongs to some S             M belongs to some S
    So P belongs to some S          So P does not belong to some S

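The validity of these four forms can be checked by brute force over small set-models. As before, the set reading of ‘belongs to’ (with existential import on the universal copulae) is an interpretive assumption for the sketch, not Aristotle’s own formulation; the traditional mood names in the comments are the later medieval mnemonics.

```python
# Brute-force validity check of Aristotle's four perfect forms over
# all set-models on a 3-element universe.
import itertools

U = {0, 1, 2}

def subsets(s):
    return [set(c) for r in range(len(s) + 1)
            for c in itertools.combinations(sorted(s), r)]

every    = lambda P, M: bool(M) and M.issubset(P)  # P belongs to every M
some     = lambda P, M: bool(P & M)                # P belongs to some M
no       = lambda P, M: not (P & M)                # P belongs to no M
not_some = lambda P, M: bool(M - P)                # P does not belong to some M

forms = [
    lambda P, M, S: (every(P, M), every(M, S), every(P, S)),     # (1) Barbara
    lambda P, M, S: (no(P, M),    every(M, S), no(P, S)),        # (2) Celarent
    lambda P, M, S: (every(P, M), some(M, S),  some(P, S)),      # (3) Darii
    lambda P, M, S: (no(P, M),    some(M, S),  not_some(P, S)),  # (4) Ferio
]

for form in forms:
    for P, M, S in itertools.product(subsets(U), repeat=3):
        p1, p2, c = form(P, M, S)
        assert not (p1 and p2) or c   # true premises force a true conclusion
```

Exhausting all models on a three-element universe is of course no substitute for Aristotle’s own proofs, but it illustrates what “valid by virtue of form alone” amounts to: no assignment of classes to P, M, and S makes the premises true and the conclusion false.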
He then formulated a small number of rules for converting any statement to another statement, where each logically entailed the other (i.e., they were logically equivalent). For example, any statement of the form ‘A belongs to some B’ is equivalent to one of the form ‘B belongs to some A’. Another equivalence was


between statements of the forms ‘A does not belong to some B’ and ‘Non-A belongs to every B’ and between statements of the forms ‘A does not belong to every B’ and ‘Non-A belongs to some B’. By applying these and other such rules to one or more of the statements constituting a valid imperfect syllogism it is possible to construct an equivalent syllogism that is now in one of the four perfect forms. The imperfect syllogism is said to be “reduced” to a perfect syllogism. Reduction is the main method of proof. The Aristotelian way of paraphrasing a statement, such as ‘Some statue is bronze’ as ‘Bronze belongs to some statue’, was admittedly unnatural and awkward. Later medieval scholastic logicians (especially from the 11th century with Abelard to the 14th century with Ockham) sought to offer a more natural paraphrase, and, in the process, conceived of an alternative version of Aristotle’s ternary theory of logical syntax. The scholastic theory of logical syntax was the foundation of versions of syllogistic logic known as “traditional” logic. Those who came later, Leibniz, Boole, Frege, etc., had in mind this version of syllogistic term logic, which they sought to modify or replace. Traditional logic has often been referred to as “subject-predicate” logic. Frege, who, as we will soon see, wanted to replace that logic with his own, said that the “distinction of subject and predicate finds no place” in his new logic (Frege 1970b, 2). “Subject-predicate” logic is certainly an apt name for traditional logic, for it was the subject-predicate distinction that held the key to the traditionalist theory of logical syntax. The first thing the scholastics did was split Aristotle’s logical copulae. Aristotle had taken quantity and quality to be two logical features of a statement as a whole, and both were indicated by the copula. The scholastics saw quantity and quality as applying primarily to the terms of a statement. 
A statement was taken to have quantity and quality only secondarily, by virtue of the quantity and quality of its terms. So they divided the copula into two parts: a quantifier and a qualifier. Each of these attaches to one of the terms of a statement. A term with a quantifier was a subject (the term, then, was the “subject term”); a term with a qualifier was a predicate (its term was the “predicate term”). The general form of a statement, then, was:

quantifier term qualifier term

More particularly:

Some/every S is/is not P


According to [the medieval] analysis all categorical propositions are instances of the following scheme, regardless of their quantity:

[cat] [neg] [Q] S [neg] cop P

where bracketed parts of speech are optional, [neg] stands for negation (possibly even iterated), [Q] stands for signum quantitatis, i.e., some determiner, cop stands for a copula (in any tense) and S and P stand for (possibly very complex) subject and predicate term, respectively. (Klima 2001, 102)

It is important to note that even though Hobbes, Leibniz, and many others still today, have referred to ‘is’ and ‘is not’ (and their cognates and translations) as copulae (a practice started by Abelard), these are not logical copulae. For traditional logic they are merely parts, fragments of a split logical copula; they are qualifiers (see Englebretsen 1990 and Englebretsen 1996, 16-23). It is equally important to keep in mind, when considering traditional logic, that the constituents of a statement are still a pair of terms bound together into a single linguistic unit by a single logical copula – a copula which is, in this case, simply split into a quantifier and a qualifier. We can think of this theory of logical syntax as quaternary, with statements analysed logically as consisting of four elements: a pair of terms and two discrete parts of a formative element (which jointly constitute a logical copula). The fact that scholastic logicians were mindful of the nature of the split copula as a single formative element, even though its two parts occupy different locations in the statement, is testified to by the way the scholastics eventually came to (partially) symbolize statements. As with Aristotle, every statement eligible for entry into a syllogism was taken to be either an affirmation or a denial (i.e., affirmative or negative in quality) and either universal or particular (in quantity). The four general statement forms (called “categorical” forms), then, are: ‘Every S is P’, ‘No S is P’, ‘Some S is P’, and ‘Some S is not P’. The Latin for ‘I affirm” is affirmo; for ‘I deny’ it is nego. The scholastics came to use the first vowel of ‘affirmo’ as a sign for universal affirmation, the first vowel of ‘nego’ as a sign for universal denial, the second vowel of ‘affirmo’ as the sign for particular affirmation, and the second vowel of ‘nego’ as the sign for particular denial. 
The four categorical forms were referred to by these letters. Thus:

A: Every S is P
E: No S is P
I: Some S is P
O: Some S is not P


Moreover, these logicians recognized that, in spite of their own quaternary analysis, categoricals still could be seen for what Aristotle said they were – pairs of copulated terms. Accordingly, they often used the letters a, e, i, and o as signs for unsplit copulae, writing the four categorical forms as: ‘SaP’, ‘SeP’, ‘SiP’, and ‘SoP’. The later medieval period marked a time of extensive innovation and organization in syllogistic term logic. The logicians of this period worked out a rich array of semantic theories to accompany their theory of logical syntax. Under the pressure of teaching logic to young (and generally unenthusiastic) boys, they mastered the art of simplifying (sometimes oversimplifying) the rudiments of their logic, devising along the way clever mnemonic devices to aid the novices. Syllogistic logic is certainly a thing of great beauty. But, like most human creations, it is not perfect. It is relatively simple and natural, equipped with a theory of logical syntax that rarely requires a radical departure from the surface forms of natural language (especially when the quaternary analysis is employed). But it is limited in its power to adequately analyze certain kinds of ordinary natural language sentences, and it lacks sufficient power to provide a proof theory for some important types of ordinary inferences. It is, as its critics have often pointed out, expressively and inferentially weak. Traditional logicians had, at best, only limited success in analyzing three kinds of statements (and the inferences in which they are involved). First, they had little success with statements whose material elements are not terms but entire sentences (sentential clauses), such as ‘If every S is P, then some M is not P’, ‘Either some S is P or every S is not P’, etc. 
These were just the kinds of sentences (along with the inferences constituted by them) that the ancient Stoic logicians had concentrated on when they devised their alternative to Aristotle’s term logic. A logic such as this is now known as a “propositional” logic. Such a logic was beyond the scope of most versions of traditional term logic. Traditional logicians had a bit more confidence when it came to analyses involving singular sentences, sentences whose subject terms were proper names (‘Socrates’), singular pronouns (‘she’), singular demonstratives (‘that star’), or definite descriptions (‘the girl with the dragon tattoo’). A statement such as ‘Socrates is wise’ seems to have a quality but no quantity. How could it be construed as a categorical? Some traditional logicians said that such singular sentences have an implicit, hidden, suppressed universal quantity; others said the implicit quantity was particular. Their reasonings here made use of their various semantic theories, especially their ideas about the distribution of terms used in a statement. We’ll pass on by this particular bit of scenic delight for now, hoping


to get a much better view around the corner (Chapter 4) and at the falls (Chapter 6, part 2). Finally, traditional logic offered little insight into the logical treatment of relational sentences (and inferences involving them). Relational sentences have at least one transitive verb. A simple example is ‘Some boy chased every girl’. The obvious problem in trying to see this as a categorical is that it has too many subjects. Later, in the 17th century, the Port Royal logicians (A. Arnauld, P. Nicole, and C. Lancelot) and Leibniz made some headway here, but even in the 19th century, when Augustus De Morgan set out to build a logic of relationals, there was still not enough progress. Leibniz had tried to show how mathematical concepts and techniques could be adapted to perfect a formal system of logic. But throughout much of the 17th and 18th centuries the dominant view among philosophers was that logic is a study of how humans reason. The laws of logic are actually “the laws of thought.” Logic was considered to be a branch of psychology. During the 19th century, especially in Britain, a number of mathematicians began to challenge this “psychologism” in logic. George Boole (Boole 1952 and 1952a) and De Morgan (De Morgan 1926 and 1966), in particular, looked upon logic as a branch of mathematics, namely a kind of algebra. These “algebraic logicians” (e.g., W.S. Jevons, J.N. Keynes, W.E. Johnson, J. Venn, and others, including C.S. Peirce in America) viewed things like logical disjunction and conjunction as analogous to algebraic addition and multiplication. They took statements to be equations when analyzed into their proper logical forms. 
They also tended to interpret the terms as class names, with ‘0’ representing the empty class (the class with no members) and ‘1’ as the universal class (the class of everything that is a member of any of the classes determined by the terms of the statement (or inference)) – the “universe of discourse.” In Boole’s way of symbolizing, terms are represented as lowercase letters, juxtaposition of a pair of terms represents class intersection (conjunction), addition of a pair of terms represents class union (disjunction), subtraction of a term from the universal class represents class complement (negation), and ‘v’ represents an unspecified non-empty set. We could use this to formulate the four standard categorical forms:

A: x(1−y)=0
E: xy=0
I: xy=v
O: x(1−y)=v
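Boole’s four equations can be checked against their quantified readings with ordinary sets. The sketch below is illustrative, not Boole’s own notation: a three-element universe stands in for ‘1’, the empty set for ‘0’, intersection for juxtaposition, and complement for ‘1 − y’; note that empty terms are allowed here, so no existential import is assumed.

```python
# Checking Boole's class reading of the categoricals: over all subsets
# of a small universe, each equation matches the quantified statement
# it is meant to symbolize.
import itertools

universe = {0, 1, 2}

def subsets(s):
    return [set(c) for r in range(len(s) + 1)
            for c in itertools.combinations(sorted(s), r)]

comp = lambda s: universe - s          # Boole's 1 - s (class complement)

for x in subsets(universe):
    for y in subsets(universe):
        # A: Every x is y      <=>  x(1-y) = 0
        assert (not (x & comp(y))) == x.issubset(y)
        # E: No x is y         <=>  xy = 0
        assert (not (x & y)) == all(e not in y for e in x)
        # I: Some x is y       <=>  xy = v (some nonempty class)
        assert bool(x & y) == any(e in y for e in x)
        # O: Some x is not y   <=>  x(1-y) = v (some nonempty class)
        assert bool(x & comp(y)) == any(e not in y for e in x)
```

On this reading A and O come out as contradictories (one equates x(1−y) with the empty class, the other declares it nonempty), which is exactly the square-of-opposition relation the letters were coined to encode.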

This algebraic logic was still a syllogistic term logic. In its proof theory it abandoned the old distinction between perfect and imperfect syllogisms, and it


took the proof of any valid syllogism to be a matter of “eliminating” the middle term. Such elimination of middles amounts to the algebraic addition of premises to yield the conclusion. For example, a syllogism of the form I below can be symbolized by II:

I: No x is m          II: xm=0
   Every y is m           y(1−m)=0
   So, no x is y          ∴xy=0

Keeping in mind that ‘(1−m)’ is the complement (negation) of ‘m’, the premises of II add up to the conclusion. Boole’s symbolic language allows alternative interpretations. One of these takes each letter variable to be read as a propositional variable. On this interpretation, the formal system amounts to a logic of propositions (à la Stoic logic). Thus, for example, ‘x(1−y)=0’ could symbolize ‘If x then y’ and ‘xy=0’ could symbolize ‘x and y’. This naturally raised the question of which interpretation (terminal or propositional) was primary. This question amounts to the question of which logic is more basic, a question most algebraists did not find pressing, seeing their formal system as neutral between the two logics. But with Frege the official answer was fairly well established: propositional logic is foundational, basic logic. At any rate, what Boole, De Morgan, and the other algebraists achieved was the goal of building a fully formalized language, one with a semantic theory that allowed multiple interpretations, and a well-constructed algorithm, governed by specified rules, that could model a wide range of inferences made in the mode of non-formal natural language. That the algebraists did not accomplish even more was due to a number of factors, including their insistence that statements are, au fond, equations, and the overwhelming power of the new mathematical logic that was already on its way, a logic that gave up on terms and reverted to a binary theory of logical syntax. Eventually we will explore a modern version of term logic, one which, while taking much from Aristotle’s syllogistic, the scholastics’ traditional logic, and the algebraists’ logic, goes far beyond Aristotle’s Peak. But first, before exploring that new term logic, we have an important site to visit. 
It is from that next vantage point, one first mapped out by the pioneers working at the crossroads of formal logic and mathematics, that we will get a view of the new highway of modern mathematical logic.
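Boole’s middle-term elimination can be checked mechanically. The following is a minimal sketch in Python (our own illustration, not Boole’s notation): reading each letter as a 0/1-valued quantity, it brute-forces every assignment and confirms that whenever both premise equations hold, the conclusion equation holds as well.

```python
from itertools import product

# Boole reads class terms as 0/1-valued quantities: xm = 0 says
# "No x is m", y(1 - m) = 0 says "Every y is m", xy = 0 "No x is y".
def no_x_is_y_follows():
    for x, m, y in product((0, 1), repeat=3):
        premise1 = x * m == 0            # No x is m
        premise2 = y * (1 - m) == 0      # Every y is m
        conclusion = x * y == 0          # So, no x is y
        if premise1 and premise2 and not conclusion:
            return False                 # a counterexample was found
    return True

print(no_x_is_y_follows())  # True: no counterexample exists
```

Checking assignments pointwise in this way suffices for monadic class statements like these, which is why the brute-force search settles the question.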


3.3 Predication Without Copulation?

For recent times have seen the development of the calculus of logic, as it is called, or mathematical logic, a theory that has gone far beyond Aristotelian logic. It has been developed by mathematicians; professional philosophers have taken very little interest in it, presumably because they found it too mathematical. On the other hand, most mathematicians, too, have taken very little interest in it, because they found it too philosophical.
Thoralf Skolem

It was from mathematics that I started out.
Frege

A distinction between subject and predicate does not occur in my way of representing a judgment.
Frege

Once he had abandoned the binary theory in favor of his ternary, copulated terms theory, Aristotle remained committed to the new way with logical syntax. In building the syllogistic he took the kinds of sentences (statement making sentences, statements) that can enter into inferences as premises or conclusions to always be ultimately analyzable into a pair of terms bound together by a logical copula. This allowed any kind of term to occur in any place in a premise or conclusion – a necessary pre-condition for syllogistic reasoning. The scholastic logicians modified Aristotle’s theory of logical syntax. The result was the quaternary theory; but even on that analysis any term could be either a subject term or a predicate term. Inference was still syllogistic. People being what they are, and logicians being people, over the twenty-four centuries following Aristotle, logicians found many additional ways to amend, emend, even contort syllogistic logic. There is no gainsaying that over the centuries logic has had its ups and downs. It has been especially up during the past century or so (at least in the academy). Today’s systems of formal logic are being taught to college students by instructors in philosophy, mathematics, information technology, psychology, and even a few in law. And less formal systems of logic are taught in departments of literature, rhetorical studies, and ... philosophy. Moreover, formal logicians have been engaged in a steady campaign during the past few decades to extend their formal systems in ways that take into account all kinds of logical tasks mostly undreamt of in previous philosophies. This new, well-designed modern highway through this country was built on the insights of Frege. The initial aim was to build a formal logical language that could serve as an appropriate foundation for arithmetic, later extended to all of mathematics (thus “mathematical logic”). The central idea was that higher
branches of mathematics could rest on, be derived from, more basic branches, say arithmetic and geometry, which in turn could be derived by means of definitions and derivations from the formal logic. Moreover, mathematical proofs could be modelled on the types of derivations common in logic. This program was referred to as “logicism” and was strongly supported in the first couple of decades of the last century by influential logicians such as Russell. But there was a catch. Traditional formal logic, the logic initiated by Aristotle’s syllogistic, simply wasn’t up to the job. The “inferential power” of any system of formal logic is determined by the breadth of the kinds of arguments and derivations that system is adequate to analyze. The known versions of old logic were inferentially too weak. Frege was convinced that, if mathematics and mathematical reasoning were to have a firm foundation in formal logic, the logic required would have to be a new one. Since form follows function, the new logic incorporated a theory of logical syntax, logical form, inspired by mathematics itself. This was in marked contrast to the theory or theories of logical syntax at the heart of traditional logic. Plato may have looked to grammar as a guide to logical form, but the grammar of any natural language is far too fine-grained for the job assigned to a formal logic, which models natural language by means of simplification, abstraction, and generalization. So, while Frege opted, like Plato, for a binary analysis, he looked for guidance to a non-natural language – mathematics. Frege’s model for the logical form of a statement was the structure of function-argument expressions in mathematics. Consider the number 4. The numeral ‘4’ is a name for it. But so are ‘2²’, ‘the sum of 1 and 3’ and ‘the ratio of 8 to 2’. ‘2²’ consists of a “function expression,” ‘...²’, which clearly has a gap in it, and an “argument,” an expression that fits, fills, that gap. 
The argument has no gaps; it is complete (saturated, as Frege would say), just as the function expression is incomplete (unsaturated). Some function expressions have multiple gaps. ‘The ratio of 8 to 2’ consists of the function expression ‘the ratio of ... to ...’ and two arguments, ‘8’ and ‘2’. The “value” of a completed function expression, one whose gaps are all filled by arguments, is the object that it names. Thus 4 is the value of the completed function expression ‘the ratio of 8 to 2’. The resulting logic is generally referred to as the “predicate logic,” “predicate calculus,” “modern predicate logic” (MPL). As it happens the logicists’ dream of building mathematics on the foundation of MPL foundered in the 1930s when Kurt Gödel showed that any consistent formal system inferentially powerful enough to generate arithmetic must be incomplete (incapable of deriving all of the truths expressible in it). But could that logic nonetheless serve as a model for the logical features of natural language? The lexicon for the new formal language consists of formal expressions and material expressions. But now the material expressions are divided into arguments (called names) and function expressions (predicates). According to Frege, it is best to consider natural language statements as completed function expressions. In his semantic theory, names, complete expressions, have a meaning, sense (Sinn for Frege) and a referent (Bedeutung) (Frege 1970). The function expressions in natural language are called “predicates.” The referent of any name is an object. Function expressions, predicates, likewise have both sense and reference. The referent of any predicate is a concept (Frege 1970a). No object is ever a concept; no concept is ever an object. So, no concept has a name; no object is the referent of a predicate. Notice that a statement is complete; the gaps in its predicate are filled by names. So a statement is itself a name, with both a sense and a referent. For Frege, the sense of a statement is the thought expressed by that statement; its referent is its truth value – the True or the False (Frege 1970, 62-83). As referents, the truth values are objects. The Fregean theory of logical syntax is binary. It takes any simple sentence to consist of a predicate and a set of names completing that predicate. No logical copula is required. What guarantees the unity of a statement is not some third element binding the name and predicate together. Unity is achieved by virtue of the predicate having gaps and names being perfectly fit to fill those gaps – not glue holding stamp to envelope, but round pegs filling round holes. A simple statement such as ‘Hume doubts’ is construed as a predicate, ‘...doubts’, and a name, ‘Hume’. ‘Romeo loves Juliet’ is analyzed as a doubly-gapped predicate, ‘...loves...’, and two names, ‘Romeo’ and ‘Juliet’. 
Since completed predicates themselves are names, they also can be arguments, gap-fillers, in “higher” function expressions. Thus, in the statement ‘Romeo loves Juliet and Juliet is happy’, the function expression ‘...and...’ has two entire statements as its arguments. ‘Some man loves Juliet’ is analyzed as ‘Some thing is (such that) it is a man and it loves Juliet’, where ‘some thing is (such that)...’ is a function expression (viz., a particular quantifier) whose single argument is a statement whose own function expression (viz., a conjunctive truth-function) is ‘...and...’ with the two subsentences as arguments. A universal affirmative like ‘Every logician is rational’ has a logical form consisting of a universal quantifier, ‘every thing is (such that)...’ whose argument is a sentence whose own function expression (viz., a conditional truth-function) is ‘if...then’ with the two sub-sentences as arguments. A very wide range of statements can be analyzed by means of this theory of logical syntax – it’s impressively expressively powerful; it has both inferential and expressive power. This can be more clearly seen once the now logically regimented sentences from natural language are translated into the new artificial symbolism of MPL. Our sample sentences above come out as:

    Hume doubts                               Dh
    Romeo loves Juliet                        Lrj
    Romeo loves Juliet and Juliet is happy    Lrj & Hj
    Some man loves Juliet                     ∃x(Mx & Lxj)
    Every logician is rational                ∀x(Lx ⊃ Rx)

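The truth conditions MPL assigns to such symbolizations can be illustrated by evaluating them over a small finite model. In the Python sketch below, the domain, the extensions, and the choice of a second test sentence (‘Every man is happy’) are our own inventions for illustration, not examples from the text.

```python
# An illustrative finite model: a domain plus extensions for the
# predicates M(an), L(oves), H(appy); names denote individuals.
domain = {"romeo", "juliet", "hume"}
man = {"romeo", "hume"}
loves = {("romeo", "juliet")}            # Lxy: x loves y
happy = {"juliet"}

# ∃x(Mx & Lxj): some thing is a man and it loves Juliet
some_man_loves_juliet = any(x in man and (x, "juliet") in loves
                            for x in domain)

# ∀x(Mx ⊃ Hx): every man is happy (false in this model)
every_man_is_happy = all((x not in man) or (x in happy)
                         for x in domain)

print(some_man_loves_juliet)  # True: Romeo is a man who loves Juliet
print(every_man_is_happy)     # False: Romeo and Hume are not happy
```

The quantifiers become a search over the domain, and the conditional inside the universal quantifier becomes the ‘not-this-or-that’ test, which is exactly the regimentation Davidson found “astonishing.”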
The ‘x’s here are “individual variables” acting as pronouns with individual objects as their referents; the “individual constants,” ‘h’, ‘r’, ‘j’, are proper names; the quantifiers, ‘∀x’ and ‘∃x’, are said to “bind” the variables in the parenthetical sentences. In a relational, multi-gapped predicate, such as ‘L...’, the order of the proper names or pronouns filling the gaps in a predicate determines whether that relational predicate is interpreted as active or passive (thus ‘Lrj’ symbolizes ‘Romeo loves Juliet’ and ‘Juliet is loved by Romeo’; ‘Ljr’ symbolizes ‘Juliet loves Romeo’ and ‘Romeo is loved by Juliet’). The new logic’s merits are considerable. It might have failed to fulfill its foundational role for mathematics, but there is no reason to toss out a perfectly good tool as long as some job can be found for it. And, of course, ready at hand was the traditional job of using a formal logic to cast light on the kinds of quotidian reasoning tasks usually carried out in the medium of a natural, nonmathematical language. Though Frege and many of his followers never tired of denigrating natural language for its informality, lack of clarity, ambiguity, vagueness, etc., the function-argument theory of logical syntax is readily adapted to natural language statements. Little retrofitting is required. Speaking of the now standard logical analysis of universal affirmative statements in terms of a “deeper” structure revealed by its symbolic translation, involving a variable binding quantifier, a conditional, and nouns treated as predicates, Donald Davidson admitted that such an account is an “astonishing theory” in need of justification (Davidson 1980, 138). The standard theory of logical analysis is indeed “astonishing” and “in need of justification.” It takes the “deep” logical forms of statements made in a natural language to be remote from their apparent, “surface” forms. 
Logical form was invented to contrast with something else that is held to be apparent but merely the form we are led to assign to sentences by superficial analogy or traditional grammar. What meets the eye or ear in language has the charm, complexity, convenience, and the deceit of other conventions of the market place, but underlying it is the solid currency of a plainer, duller structure, without wit but also without pretence. This true coin, the deep structure, need never feature directly in the transactions of real life. (Davidson 1980, 137)

The contrast between this standard logical theory and the one underlying a term logic is striking. The alternative account, now mostly out of favor, does not artificially divide the material expressions of its vocabulary into “complete” and “incomplete” (letting all kinds of terms be fit to play any part), does not take the logical form of a natural language statement to be remote and hidden relative to its apparent form, does not take the logic of simple categorical statements to depend on the logic of conditionals or other forms of sentential compounds, does not bar the logical forms of natural language statements from featuring “directly in the transactions of real life,” and is in no way “astonishing.” Moreover, it is hardly the case that Davidson’s “solid currency” is in any way “plainer” or “duller.” MPL’s commitment to an array of baroque, unnatural, and often complex, devices (not to mention the accompanying account of inference that it requires) is what does indeed make it “astonishing.” This is not to deny, of course, that any theory of logic (or anything else) requires generalization and simplification. A certain level of regimentation is necessary. But surely the cautious introduction of such regimentation is preferable to the full-blown mobilization of complete regimentation that demands near universal conscription, full-dress uniforms, strict military discipline, and parade-ground drills, to use Ryle’s memorable image (Ryle 1954, 112). Traditional logic is simpler and more natural than modern mathematical logic. Still, it does not enjoy the expressive or inferential power of the new logic. But this hardly means that the powers enjoyed by MPL are beyond the capacity of any term logic. It is possible to construct a new formal term logic that, like traditional term logic, is relatively simple and natural. 
As well, it is equipped with a symbolic algorithm that infuses it with remarkable expressive power, and a calculus of inference for that algorithm that makes the logic inferentially powerful. We will take a look at that logic, “term functor logic” (TFL) next, forgoing any temptation to explore further the MPL highway and keeping to our original, less travelled path.

4 On the Term Functor Trail

[T]he fixation on first-order logic as the proper vehicle for analyzing the ‘logical form’ or the ‘meaning’ of sentences in a natural language is mistaken.
Suppes

[The central idea in] my rejection of first-order logic as the appropriate instrument for the analysis of natural language ... is that the syntax of first-order logic is too far removed from that of any natural language, to use it in a sensitive analysis of the meaning of ordinary utterances.
Suppes

4.1 Charging Up the Hill

A syllogism is an addition of three terms, just like a proposition is an addition of two terms.
Hobbes

L’expression simple sera algébrique ou elle ne sera pas. (“The simple expression will be algebraic, or it will not be.”)
Saussure

As we have seen, logicians, following Frege, began not with terms but with entire statements. The newer logic takes its lexicon to consist of material expressions and functors (functions). Most importantly, it divides the material lexicon into exclusive types: names (viz., proper names and singular pronouns) and predicates. The identity function on a pair of names is a special case. Formal expressions, logical constants, are of various types and apply always and only to entire statements. The distinction between names and predicates is purely semantic, determined by the number of objects denoted. Names are singular expressions, used to denote a single object. Predicates are general, denoting more than one object. Mathematical logic further divides sentences into two types: atomic and molecular. An atomic statement consists of a single predicate and an appropriate number of names; no formal expressions are involved. A molecular statement consists of one or more sub-statements combined by the use of one or more sentential functions. Today’s students of logic are first taught the “propositional” (or “sentential” or “statement” or “truth-functional”) logic. They learn how to discern valid from invalid inferences using some algorithm (such as truth-tables or truth-trees). Then they learn a proof procedure that applies a set of “transformation” rules to premises to yield new statements, eventually leading to the conclusion. Like the
ancient Stoic logicians, modern mathematical logicians take the propositional logic to be basic, primary. The “predicate calculus” depends on it because the syntax of sentences involving quantifiers must also involve elements of the language of the propositional logic, and because the rules of proof for the predicate calculus include the rules of proof for the propositional logic. The logic of “identity” is then appended to the resulting logic to provide the students with the standard version of mathematical logic. What the student comes away with is an ability to do each of the following: translate natural language statements (and thus the inferences constituted by them) into the artificial language of MPL, determine the formal validity of arguments in propositional logic, and prove (derive conclusions from premises) in the full “predicate logic with identity.” It should be noted, however, that, for students of logic from the time of Aristotle to the early 20th century, one skill to be acquired was that of discovering “missing” premises in enthymemes. As we have seen, Aristotle took a conclusion to be a “problem” that was given, and the task then was to find the premises sufficient for establishing that conclusion. Since the conclusion is constituted from the minor and major terms, that task was ultimately to find the right middle term. Where the modern student learns to go from given premises to a conclusion, the traditional student learned this as well as how to go from a conclusion to premises. That’s what Aristotle meant by “analysis.” TFL is, like syllogistic, a term logic. But it goes well beyond traditional logic. It turns out that this new term logic can analyze a very wide range of inferences, it can determine validity, it can prove, it does not require a logic of propositions to be more fundamental than a logic of predicates, and it does not treat identity as a special appendix. We begin by building a very simple artificial language. 
Its vocabulary consists of an unlimited supply of elementary terms, each represented by a letter (usually upper-case) of the Roman alphabet. The language is also supplied with a pair of “term-functors,” expressions that when applied to a term or pair of terms form a new, compound term. One such functor is unary and is represented by the minus sign, −. The other functor is binary and represented by the plus sign, +. The unary functor is placed in front of a term to form a new term. The binary functor is placed between a pair of terms to form a new term. For example, if A, B and C are terms, then so are the following (where parentheses are used as punctuation in commonsense ways): −A, −B, A+C, (−A)+B, (−C)+(+B), −(C+A), (−B)+(A+B), −((A+B)+(C+A)). Compound terms formed via the plus sign have certain characteristic formal features. The binary functor is a symmetric and associative relation among terms. Thus, for example, A+B and B+A are
equivalent; so are (A+B)+C and A+(B+C). The plus is not reflexive and is not transitive. It’s hard to imagine a simpler formal language. It is a language that is not good for very much. However, it does reflect a small bit of our natural language. Sticking with English, we do have forms of expressions in our language that are symmetric and associative but neither reflexive nor transitive. We say, ‘Tom is rich and happy’ where the compound term ‘rich and happy’ could just as well be replaced by ‘happy and rich’. A term such as ‘smart and beautiful but vain’ says as much as ‘smart but beautiful and vain’. Here the terms of conjunction share the formal features of our plus. We also say, ‘Tom is rich and Sarah is vain’, which could be said equally well by ‘Sarah is vain and Tom is rich’. In other words, we conjoin (usually with a word like ‘and’, but also ‘but’, ‘as well as’, etc.) both pairs of terms and entire sentences (viz., sentential clauses). And in most such cases the conjunction has the formal features of plus in our artificial language. This suggests that we could, when pressed, interpret our plus functor as a sign of conjunction since conjunctions share the formal features built into plus. There are other kinds of expressions that we normally use that are not conjunctions but which share the formal features of plus. Consider the sentence ‘Some logicians are artists’. Here the terms ‘logicians’ and ‘artists’ are operated on by the expression ‘some...are...’ to form the sentence. The expression ‘some...are...’ forms sentences from pairs of terms. It is a natural language term-functor. Moreover, it has the formal features of our plus. A word about our minus before continuing. It is obvious that the plus and minus signs of our language are meant to remind us of those same signs as they appear in arithmetic or algebra. Addition in elementary mathematics shares all the formal features of our plus. 
Our minus is likewise similar to the mathematical minus we are all familiar with. It can be interpreted via our natural language as negation, and negating is one of the things all of us are very good at. We negate simple terms (thus we use ‘unwed’, ‘nonpartisan’, ‘not smart’, ‘hopeless’, etc.), we negate compound terms (‘neither rich nor poor’), we negate entire sentences (‘Ed isn’t very smart’, ‘It is not a good day’, ‘Not a creature was stirring’). Notice that what has been said so far about terms is likewise said about sentences. Indeed, from our “terminist” point of view a sentence is just a compound term (though, of course, not all compound terms are sentences). So far we have this little formal language that can be used to model a small part of the logic of our natural language. Of course, it’s not much logic, resting as it does primarily on the meagre formal features of the plus. It’s time, then, to enrich our formal language, giving it new elements that will allow us to model not only
more kinds of natural language expressions, but to model the core ways in which we ordinarily manipulate our language in the process of reasoning. We saw above that a sentence such as ‘Some logicians are artists’ conjoins a pair of terms by means of the formative expression ‘some...are...’. And we saw that ‘some...are...’ could be formulated by our binary plus. The odd thing (well, one of the odd things) here is that the plus is a single expression but the ‘some...are...’ consists of a pair of words. Moreover, it would be natural to look for a related way to formulate sentences formed by conjoining pairs of terms by means of a slightly different expression, ‘all...are...’. Following the scholastics’ lead, let us begin the enrichment of our formal language, then, by splitting the binary plus into two parts, one representing expressions such as ‘some’ and the other representing expressions such as ‘are’. Our split binary plus will now consist of two separate signs (like the ‘some’ and ‘are’ of ‘some...are...’). The split binary will be: +...+.... We need not abandon the unsplit +. Examples of English versions of our split binary are: ‘some...are...’, ‘a(n)...is...’, ‘both...and...’. It is important to keep firmly in mind the fact that the two parts of our split functor are not themselves functors. Also note that the two fragments of the split copula are systematically ambiguous and that the ambiguity is benign, their different roles determined by position (just as in our positional numeric system). Compound terms (including sentences) cannot be formed from a pair of syntactically simpler terms with just part of a split formative. Both parts are required. For convenience only, we will still call the first part of such a split binary formative the quantifier and the second part will be the qualifier. 
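Read set-theoretically, the split and unsplit plus can be checked for exactly the formal features claimed for them. In the rough Python sketch below (the sets and their members are our own illustrations), ‘+X+Y’, ‘some X are Y’, holds when the two extensions overlap, and term conjunction ‘X+Y’ is intersection.

```python
# A set-theoretic reading of the binary plus, with terms as sets.
def some(x, y):          # split binary plus: +X+Y, "some X are Y"
    return bool(x & y)   # true iff the extensions overlap

def conj(x, y):          # unsplit plus on terms: "X and Y"
    return x & y         # the conjoined term's extension

logicians = {"ada", "kurt", "george"}
artists = {"ada", "lewis"}
poets = {"lewis", "ada"}

# Symmetry: some X are Y iff some Y are X
assert some(logicians, artists) == some(artists, logicians)

# Associativity of term conjunction: (A+B)+C = A+(B+C)
assert conj(conj(logicians, artists), poets) == \
       conj(logicians, conj(artists, poets))

print(some(logicians, artists))  # True: one individual is both
```

Reflexivity fails on this reading only in the degenerate case of an empty term, which matches the text’s caution that the plus is not reflexive in general.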
If we are to build a formal language that is meant to allow us to model the ways we reason in the medium of a natural language, we need to recognize at least one other kind of quantifier. Consider a simple sentence like ‘Some logicians are artists,’ which could be formulated in a straightforward way as ‘+L+A’. A closely related sentence would negate the second term: ‘Some logicians are not artists’, which would be formulated as ‘+L+(−A)’. Since every sentence is a kind of compound term, we can negate sentences just as we negate terms. The corresponding negations of our two sentences would be ‘−(+L+A)’ and ‘−(+L+(−A))’. These two forms can be interpreted as ‘Not: some logician is an artist’ and ‘Not: some logician is not an artist’, grammatically proper, but inelegant English sentences. But they can be rewritten as ‘No logician is an artist’ and ‘No logician is not an artist’. The first of these two is quite natural, the second less so. They could be further paraphrased as ‘All logicians are not artists’ and ‘All logicians are artists’. Now the second of these seems more natural than the first. Let’s pick out all the more natural sentences from our four: ‘Some logicians are artists’, ‘Some logicians are not artists’, ‘No logicians are artists’, and ‘All logicians are artists’. We saw that the third was the negation of the first, while the fourth was the negation of the second. But the third and fourth no longer look like negations. We can rectify this by taking the negative formulas for these two sentences (‘−(+L+A)’ and ‘−(+L+(−A))’) and driving (as in algebra) the initial minus sign into the formula to yield: ‘−L−A’ and ‘−L−(−A)’. But what are we to make of the formative signs here? Consider ‘−L−(−A)’ in light of the fact that compound terms (including sentences) are formed from pairs of terms by means of a binary functor. We split our binary plus functor to get +...+... (interpretable, for example, as ‘some...are...’). Clearly the first two minus signs in ‘−L−(−A)’ constitute a binary functor joining the two terms, ‘L’ and ‘−A’ to form the sentence. But what do they mean (i.e., what are their natural language analogues)? Let’s borrow a bit more from algebra. We can say that (i) any adjacent pair of similar signs can be replaced by a plus sign and (ii) any adjacent pair of opposite signs can be replaced by a minus as long as neither of the two signs is a quantifier (i.e., the second is a unary functor and the first is either also unary or it is a qualifier). But wait. So far our only quantifier is the first plus of the +...+... split binary functor, representing words like ‘some’. By equating ‘−L−(−A)’ with ‘−(+L+(−A))’ we have, in effect, defined a new quantifier (representing words like ‘every’). Allowing the second and third minus signs to cancel (yielding a plus) gives us the formula ‘−L+A’, which can be interpreted as ‘Every logician is an artist’, with the ‘−...+...’ constituting a new, defined, split binary functor (‘every...is...’). While the logical relation represented by ‘+...+...’ is symmetric and associative, the new ‘−...+...’ is reflexive and transitive. Let’s define two more functors. A unary plus is defined as follows: +X =df −(−X). 
A binary minus is defined as: X−Y =df −((−X)+Y). By introducing a binary minus our formal language is rendered much more expressive and useful. The binary minus is both reflexive and transitive (but neither symmetric nor associative). So we can, for example, derive Z−Y from X−Y and Z−X; we can take X−X to be a tautology. The formal features of our binary functors, then, allow us to make a number of types of derivations. Moreover, these particular formal features are among those applying to various natural language expressions and essential to ordinary reasoning. Thus, conjunctive expressions such as ‘and’ are, when used ordinarily, both symmetric and associative (‘Sam and Nathan’ = ‘Nathan and Sam’, ‘It’s cold and raining’ = ‘It’s raining and cold’, ‘It’s cold, windy and raining’ = ‘It’s cold and windy and it’s raining’). Expressions such as ‘some...is/are...’ are also symmetric and associative (‘Some singers are actors’ = ‘Some actors are singers’, ‘Some performers who are actors and singers are dancers’ = ‘Some performers who are actors are singers and dancers’). Aristotle
expressed these categoricals using (Greek versions of) ‘belongs to some’; the medieval scholastic logicians expressed it using ‘i’ (thus: ‘XiY’, ‘Some Y is X’). And just as our binary plus, via its formal features, reflects various natural language expressions sharing those features, the same holds for the binary minus. Ordinary expressions such as ‘if...then’ and ‘every...is...’ are both reflexive and transitive (‘If it’s raining then it’s raining’, ‘Every fool is a fool’ are tautologous; ‘If it’s raining then it’s cold’ and ‘If it’s cold then I’ll stay home’ jointly entail ‘If it’s raining then I’ll stay home’, and ‘All logicians are philosophers’ and ‘Every philosopher is wise’ jointly entail ‘All logicians are wise’). Aristotle expressed these latter kinds of categoricals using ‘belongs to every’; the scholastics used ‘a’ (‘YaX’, ‘Every X is Y’). It is obvious that the i and a functors are versions of our binary plus and binary minus. But this is still not enough. Consider the inference from X+Y and Z−X to Z+Y. This is the form of such simple valid inferences as ‘Some Y is X, every X is Z; so, some Y is Z’ (e.g., ‘Some logician is a philosopher, and every philosopher is wise; so, some logician is wise’). While the formative expressions in play here maintain their usual formal features, those features alone cannot account for the validity of such an inference. What is required is a rule of inference that, inspired again by Aristotle, the scholastics called the dictum de omni. In order to fully understand this rule we need first to follow the scholastic logicians for just a few more steps. Consider a statement such as ‘Every man is rational’. This was initially paraphrased as ‘Rational belongs to every man’, and parsed as ‘Rational / belongs to every / man’. 
‘Rational’ and ‘man’ are the (categorical) terms (literally, the termini of the statement), the material elements, and ‘belongs to every’ is the logical copula, the formative element, the glue that binds the two terms into a new, more complex, syntactical unit, a statement. (Our binary plus and binary minus are logical copulae.) Next the scholastics split the copula, parsing the statement anew as: ‘Rational belongs to / every man’. Thus far, the original paraphrase and its new parsing were seen as admittedly unnatural. To render them closer to natural language, they next reordered the elements: ‘Every man / rational belongs to’, then reordered the second element to yield: ‘Every man / belongs to rational’. Following Abelard, the scholastics then replaced the quite unnatural expression ‘belongs to’ with a grammatically appropriate version of ‘to be’ (which is what they officially termed the logical copula), to arrive finally at: ‘Every man is rational’, a perfectly natural statement consisting of two parts which they called the subject (‘every man’) and the predicate (‘is rational’). The subject was further divided into the quantifier (‘every’) and the subject term
(‘man’) and the predicate was divided into the qualifier (some version of ‘is’) and the predicate term (‘rational’). The scholastic logicians were hardly content with applying their creativity and insight just to questions of logical syntax. They were even more devoted to issues of semantics. Vast numbers of (often byzantine) semantic schemes were devised. These logicians were, at the very least, masters of distinctions. The core notion in such theories was that a term used in statements has meaning in the sense that it can supposit, stand for, something else. Theories of supposition tended to distinguish among a number of different ways a term could stand for something. Some of these distinctions continued to concern many logicians up to the 20th century and beyond: the distinction between the connotation of a term and its denotation (or its sense and reference, Sinn and Bedeutung) is one; another is the distinction between those terms in a statement that are distributed and those that are undistributed. It is this last that is of interest to us now. Roughly, a term used in a statement is said to be distributed in that statement just in case it stands for its entire denotation; otherwise it is undistributed. Terms that are universally quantified are distributed and terms that are particularly quantified are undistributed. That takes care of subject terms, but what of predicate terms? Here the idea is that a predicate term is distributed in a statement whenever that statement entails a statement in which that term is universally quantified. For example, in ‘Every logician is a philosopher’ the term ‘logician’ is clearly distributed; ‘philosopher’ is undistributed because it is not universally quantified here, nor is it universally quantified in any statement entailed by the original statement. 
By contrast, in ‘No logician is a fool’ both ‘logician’ and ‘fool’ are distributed since the first is already universally quantified (‘No logician is a fool’ = ‘Every logician is not a fool’) and the second is distributed in ‘No fool is a logician’, which can be immediately derived from the original. As we said, this notion of distribution is rough. We will smooth it a bit later on.

The notion of distribution plays a central role in the application of the dictum de omni rule. The dictum has had a long and checkered history; it has been championed and challenged, and this in spite of the fact that it has been difficult to find agreement even on how to state it. For now, we will say that the rule allows one to derive from a pair of statements, at least one of which is universally quantified, a new statement which is just like the other statement except that the predicate term of the universal statement has replaced the subject term of the universal statement where it occurs undistributed in the other statement. Let’s return to the inference we set aside earlier: ‘Some logician is a philosopher, and every philosopher is wise; so, some logician is wise’. This valid inference conforms to the dictum. At least one of the premises is universal. Its predicate term (‘wise’) is substituted for its subject term (‘philosopher’) in the other premise, where that subject term (‘philosopher’) occurs undistributed, yielding the conclusion. Traditional logicians would say that the major term is substituted for the middle term in the minor premise to yield the conclusion. The 19th-century algebraists would describe this by saying that the tokens of the middle term have ‘cancelled out each other’.

We can use some of these traditional insights not only to augment and strengthen our own little formal language of pluses and minuses, but, in turn, to use this new formal language to cast clarifying light on those traditional notions of distribution and the dictum. Given our two unary functors, we can see that any term is either positive (in the range of an even number of unary minuses, or none) or negative (in the range of an odd number of unary minuses). Since every statement is itself a (complex) term, every statement is either positive or negative. So, in any simple statement using a split copula there will be five formative elements: a sign indicating whether the statement is positive or negative, a quantifier, a sign indicating whether the subject term is positive or negative, a sign indicating the qualifier, and a sign indicating whether the predicate term is positive or negative. Let us say that the sign of a term being positive or negative indicates the term’s charge. Every term, simple or complex (including entire statements), is charged. Further note that every quantifier is either particular or universal. The general form of any statement can be indicated as follows: ±[±(±S)+(±P)], which might be read: ‘It is/isn’t the case that some/every (non)S is (non)P’. Two conventions can be adopted, both common in ordinary discourse. Unary pluses, signs of positive charge, can be safely suppressed.
Also, the qualifier and the charge of the predicate term can be amalgamated. When both are the same they can amalgamate to yield a positive qualifier; when the signs differ they amalgamate to yield a negative qualifier. It is important to keep in mind that only unary pluses can be suppressed; no minuses (unary or binary) can be suppressed, and no quantifiers (particular or universal) can be suppressed. This streamlining, suppressing certain unary pluses, in a formula reflects a corresponding practice in our natural language. It is easy now to formulate the standard categorical statements: A: –S+P, E: –S–P, I: +S+P, O: +S–P. We could, as an example, formulate our inference about wise logicians as: +L+P, –P+W; ∴+L+W. The contradictory of any statement is its negation. In other words, two statements are contradictory just in case they are exactly alike except for their charge. The negation of the I categorical (+S+P) is –[+S+P], which, once the external minus is distributed inside the
brackets, yields –S–P, an E categorical. In like manner, O and A can be shown to be contradictory. We will make use of this practice of distributing external signs of charge. Let us say that a statement has positive/particular valence whenever either its overall charge is positive and its quantity is particular or its overall charge is negative and its quantity is universal; a statement has negative/universal valence whenever either its overall charge is positive and its quantity is universal or its overall charge is negative and its quantity is particular. Valence, in other words, can easily be determined by looking at the first two signs of a statement’s general form (even if the positive charge sign happens to be suppressed). If these two signs are the same (both plus or both minus), the statement is positive/particular in valence; if they are different, the statement is negative/universal in valence. Often pairs of statements can be formulated so that the two formulas are algebraically equal. For example, –S+P and –[+S–P] are algebraically equal. So are +S+P and –(–S)+P. The notions of valence and algebraic equivalence permit us to formulate a new principle:

Logical Equivalence: Two statements are logically equivalent just in case they have the same valence and are algebraically equal.
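As an aside, the valence-and-algebra test behind this principle is mechanical enough to sketch in a few lines of code. The encoding below is my own illustration, not the author's notation: a simple statement of the general form ±[±(±S)+(±P)] is represented by four signs, each +1 or -1.

```python
# A sketch of the Logical Equivalence test. A statement of the general
# form +/-[+/-(+/-S) + (+/-P)] is encoded as four signs, each +1 or -1:
# (overall charge, quantifier, subject-term charge, predicate-term charge).
# This encoding is an illustration only, not the book's official notation.

def valence(stmt):
    """Positive/particular if the first two signs agree; else negative/universal."""
    charge, quant, _s, _p = stmt
    return "positive/particular" if charge == quant else "negative/universal"

def algebra(stmt):
    """Distribute the external signs inward, yielding coefficients for S and P."""
    charge, quant, s, p = stmt
    return (charge * quant * s, charge * p)

def logically_equivalent(a, b):
    """Same valence and algebraically equal."""
    return valence(a) == valence(b) and algebra(a) == algebra(b)

# -S+P versus -[+S-P]: algebraically equal and same valence, so equivalent
assert logically_equivalent((+1, -1, +1, +1), (-1, +1, +1, -1))
# +S+P versus -(-S)+P: algebraically equal, but the valences differ
assert not logically_equivalent((+1, +1, +1, +1), (+1, -1, -1, +1))
```

The two assertions correspond to the two example pairs just given.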

Thus, in the two cases above, the first pair are logically equivalent but the second pair are not (remember that the external signs of positive charge are suppressed here). Singular terms, terms denoting just one unique individual object, seem to pose a challenge to the term logician. Recall that for a term logic the lexicon of material expressions is homogeneous; any term, singular, general, mass noun, count noun, concrete, abstract, etc., can occur in a statement in any terminal position (i.e., as a subject term or as a predicate term). A consequence of this is that any term might appear sometimes quantified, other times qualified. All this is, of course, anathema for most modern mathematical logicians. Frege had made it strikingly clear that names (arguments, singular terms), including proper names and singular pronouns, are complete (saturated), while predicates (function expressions), are incomplete (unsaturated). Names refer to objects; predicates refer to concepts. Since an entire statement is complete, not containing any gaps to be filled by names, it is itself a name (of its truth value). The contrast between names and predicates (and its corresponding contrast between objects and concepts) is absolute. In particular, no name (qua name) could ever be used as a predicate.


Consider the statement ‘Aaron is tall’. For the mathematical logician this is a simple atomic statement analyzable into a (one-gapped) predicate, ‘...is tall’ and a name fit to fill that gap, ‘Aaron’. By contrast, the term logician eschews any atomic/molecular distinction (Sommers 1982, chapter 1). Every statement is a complex term, constructed from a pair of terms (themselves either simple or complex) bound together by a (usually split) logical copula. This means that even singular statements, like ‘Aaron is tall’, must have a logical form that reveals this ternary structure. If ‘is’ is taken in this statement as a qualifier, then the question arises: where is the quantifier? Some traditional logicians took singular sentences to have an implicit particular quantifier, but most held that the implicit quantifier was universal (since then a singular subject would be distributed, referring to its entire denotation). Leibniz suggested that such sentences can be viewed as having either quantifier arbitrarily (Leibniz 1966). Since it mattered not, for logical purposes, which quantifier is understood, the quantifier is suppressed in natural language (‘Some Aaron’ = ‘Every Aaron’ = ‘Aaron’). This notion of quantified singular terms having a “wild” quantity turns out to be a fruitful idea. Suppose I assert ‘Aaron is tall’ and ‘Aaron is a boy’, from which I want to derive ‘Some boy is tall’. I can do this of course, but how does the term logician account for it? The mathematical logician gives an account making use of a rule of statement logic, Conjunctive Addition, and a rule for introducing an “existential” quantifier, Existential Generalization. The term logician, following Leibniz, simply lets the two occurrences of ‘Aaron’ have different quantifiers, and then applies the dictum de omni. 
Mathematical logicians have enormous reservations about the quantification of singular subjects, but these are nothing compared to their reservations about the qualification of singulars. Singular terms are viewed as simply unfit for predication (see Frederick 2013). Since the term logician’s lexicon is homogeneous (any term can appear in any logical role), term logic must admit singular terms to predicate term positions, i.e., allow them to be qualified. Frege’s rock-hard distinction between names and predicates is breezily ignored by a term logic. Consider the statement ‘Twain is Clemens’. It looks as if it consists of a pair of singular terms flanking a qualifier. But logicians from Frege and Russell onward have warned that when it comes to natural language appearances are often (usually) deceiving. Such logicians analyze our statement into a pair of singular terms flanking a very special predicate expression. While the ‘is’ plays virtually no role most of the time in modern systems of formal logic, it becomes suddenly all-important in cases such as this one. It is taken to be a sign of identity, a relation between an object and itself. Unlike other predicate expressions,
this one is given its own notation (=), special limitations are put on the interpretation of statements in which it is involved, and special rules are employed for inferences involving such statements. Suffice it to say, the term logician, armed with the knowledge that, just as in ordinary language, singular terms can appear in any logical role (as arbitrarily quantified subject terms or as qualified predicate terms), has no need of any special “identity theory.” A so-called identity statement such as ‘Twain is Clemens’ has the logical form ‘Some/every T is C’. That’s it. ‘But surely,’ the modern logician will say, ‘the logical form of such a statement must reveal the fact that identity is an equivalence relation, and our use of = reminds one of this.’ If all that one means by an equivalence relation is that the relation in statements like ‘Twain is Clemens’ is reflexive, symmetric, and transitive, then nothing more is required of the term logician. Let A, B, and C be any singular terms. Every A is A. Some A is B if and only if some B is A. If every A is B and every B is C, then every A is C. What more is required?

Traditional term logic tried to fit all statements into one of the four classic categorical forms. Three kinds of statements seemed ill-suited for such a fit: singular statements, relational statements, and compound statements (e.g., conjunctions, conditionals, disjunctions). We’ve seen that singular statements (including so-called identities) pose no serious challenge. Things seem not nearly so easy when it comes to relationals. Categoricals are supposed to be composed of a pair of terms (possibly themselves complex) joined by a logical copula. If the copula is split, then the statement is construed as a subject and a predicate. But now consider a statement such as ‘Some general is losing every battle’. It seems to have too many terms; and two of them are quantified; and what about that relative term ‘losing’?
Freed from any analysis in terms of subjects and predicates, the mathematical logician can analyze such a statement using statement functions such as conjunction and conditionalization, individual variables (pronouns), and quantifiers applied to entire statements and binding those pronouns. Still, such cycles and epicycles are not required in a logic of terms. Remember: every statement is a complex term; any term used in a statement may be a complex term; every complex term is a copulated pair of terms. Our sample relational sentence consists of a subject (‘some general’) and a predicate (‘is losing every battle’). That predicate is itself complex, consisting in this case of a pair of terms (‘losing’ and ‘battle’) joined by an unsplit logical copula (‘every’). In natural language, expressions such as ‘some’ and ‘every’ (e.g., ‘a(n)’, ‘any’, ‘every’, etc.) are used both as quantifiers and as (abbreviated) unsplit copulae. Still, that’s not enough.


Let’s look at how modern logicians analyze a simple relational such as ‘Romeo loves Juliet’. The word ‘...loves...’ is taken as a predicate with two gaps (a ‘two-place predicate’), and the two names, ‘Romeo’ and ‘Juliet’, fill the two gaps. Symbolized: ‘Lrj’. Notice that the order of the two names is important. Reversing the order yields not an equivalent statement but rather the converse ‘Juliet loves Romeo’ (‘Ljr’). In this formal language the arguments (names or individual variables) play at least two logical roles: their number indicates the “adicity” (number of gaps) of the relational predicate, and their order indicates the “direction” of the relation (in this case, who is loving and who is being loved). Adicity, needless to say, is of no concern in a term logic. But it is certainly important for any formal language hoping to capture the logical features of natural language to be able to reflect relational direction. And this is easily done. We introduce into our formal language numerical subscripts in the following way: a common numerical subscript is attached to any two terms in a statement that are copulated in that statement or in any statement that it entails. Whenever two terms are the only terms in a statement the numerical subscripts can be suppressed. For example, the statement ‘Every man is rational’ could be formulated as –M1+R1 (it doesn’t matter what numerals are used as long as they are the same), and more simply as –M+R. Numerical subscripts do their heavy lifting when it comes to relationals. ‘Romeo loves Juliet’ would be formulated as ±R1+(L12±J2), while ‘Juliet loves Romeo’ could be formulated as ±J2+(L21±R1) or even ±J1+(L12±R2). ‘Some general is losing every battle’ would be formulated as +G1+(L12–B2). ‘A man gave a rose to every woman who hated him’ would be formulated as +M1+((G12+R2)13–(W3+H31)).
In such statements, where even the relational term is complex, we could simplify the formulation by amalgamating the subscripts on the relationals to give us +M1+((G123+R2)–(W3+H31)). Where originally ‘gave a rose to’ was taken as a two-place relation between the man and the women who hate him, we now treat ‘gave’ as a three-place relation between the man, the rose, and the women. We’ve seen that pairs of terms that are copulated in a statement entailed by an original statement are (perhaps tacitly) subscripted in the original statement by a common numeral. From our rose-giving statement we can derive such statements as ‘A man gave a rose’, ‘A man gave a rose to every woman who hated him’, etc., but we cannot derive, e.g., ‘A man is a rose’, since in its formulation, +M1+R2, the terms share no common subscript.

Modern mathematical logicians take the logic of compound statements (the propositional calculus) to be primary, basic, with the predicate calculus resting on it. Traditional logicians tried hard to incorporate the logic of compounds into term logic, in effect, reversing today’s standard order. For example, Leibniz
thought that if he could construe compound statements such as conjunctions and conditionals as categoricals it would make his attempt to build a term logic much easier. Traditional logicians already recognized strong similarities between conjunctive and particular statements and between conditional and universal statements. Even Frege allowed that any statement, p, could be understood as implicitly ascribing a predicate ‘is true’, reading a conditional of the form ‘If p then q’ as a categorical: ‘No case of p being true is a case of q being false’ or, as Frege would have it: “... the case does not occur in which the antecedent stands for the True and the consequent for the False” (Frege 1970, 74). At any rate, we have already seen that a term logic need not distinguish between different types of statements. Every statement is a complex term, a pair of terms (complex or not) bound together by a split or unsplit copula. Moreover, those copulae apply to any term-pair, simple or complex, sentential or non-sentential. A formula such as +X+Y can be the form of a particular statement (e.g., ‘Some singers are actors’) or a conjunctive statement (‘It’s raining and it’s cold’). All that matters, from the logical point of view, is that the formulation of a natural language statement reflect such formal features as reflexivity, transitivity, symmetry, etc. – and that is just what our pluses and minuses are meant to do. We saw that in the case of relationals both split and unsplit copulae are commonly used in natural language and are likewise available in our formal language. This availability is especially obvious in the case of natural language expressions used to form compound statements. We have, for example, both split and unsplit versions of natural language connectives for conjunctions (e.g., ‘and’/‘both...and’), disjunctions (‘or’/‘either...or’), and conditionals (‘only if’/‘if...then’).
So the logic of compound statements can be incorporated into a general logic of terms. But in doing this one needs to take care. There are two important disanalogies between statements and other complex terms. On the surface, these disanalogies appear to reflect poorly on any prospects for fully incorporating statement logic into term logic. A proper understanding of the nature of sentential terms will show, however, that the logic of statements is a special branch of term logic (Sommers 1993).

The first disanalogy: while a particular statement does not logically entail its corresponding universal, a conjunction does entail its corresponding conditional. ‘(Both) A and B’ entails ‘If A then B’ but ‘Some A is B’ does not entail ‘Every A is B’. The second disanalogy: while a particular affirmative is logically compatible with its corresponding particular negative, a conjunction is not compatible with the conjunction of its first conjunct and the negation of its second conjunct. ‘Some A is B’ and ‘Some A is not B’ are compatible, but ‘(Both) A and B’ and ‘(Both) A and not B’ are not compatible. It seems that these disanalogies rest on the difference between sentential terms and non-sentential terms. If we formulate the statements in the first disanalogy we get +A+B entailing –A+B when the terms are read as sentential but not when they are read as non-sentential. A similar distinction seems to hold for the second disanalogy. +A+B and +A–B are compatible when the terms are read as non-sentential but not when they are read as sentential.

But, just for a moment, consider what happens if our term A is singular. In that case the first disanalogy evaporates. ‘Some A is B’ does entail ‘Every A is B’ when A is singular since singular subject terms, as Leibniz saw, can be given arbitrary quantity. The same holds in the case of the second disanalogy. Generally, +A+B and +A–B can both be true – unless A is singular, in which case they are logically incompatible. But how does this help? It suggests that the logic of singulars and the logic of compound statements are in the same boat when it comes to these disanalogies. The disanalogies disappear when singulars are introduced. If sentential terms could be construed as singular terms, then the disanalogies would disappear altogether and the logic of compound statements, the logic of sentential terms, would become just a special branch of term logic.

But can sentential terms be construed as singulars? Every statement is a sentence used to make a truth-claim. The claim is that the proposition being expressed by the sentence is true. True of what? Every statement is made relative to some specifiable (not always explicitly specified) domain of discourse. To make a statement is to claim something about the domain relative to which it is made – that’s the truth-claim.
In our ordinary use of language, our default domain of discourse is simply what we (and, we hope, our audience) take to be the actual world (or some salient part of it). To say that some singers are actors is to claim that there are singing actors in the world. To say that a filly won the derby is to claim that the world (at least the equine part of it) has as one of its constituents a derby-winning filly. Generally speaking, to make a statement of the form +A+B is to claim that the world has at least one thing that is A and B as a constituent; to make a statement of the form –A+B is to claim that the world has no thing in it that is A but not B. A more revealing way of expressing these claims is this: +A+B claims that something characterized by A is characterized by B; –A+B claims that everything characterized by A is characterized by B. But, when A and B are sentential terms, what is being characterized? In ordinary discourse, what is being characterized is the world. We could read our formulas as ‘Some A world is B’ and ‘Every A world is B’. But, as there is just one world, the actual world, these subject terms are singular and
have, therefore, arbitrary quantity. The logic of compound statements, like the logic of singular terms, is just a special branch of a general term logic. Indeed, we can think of the logic of compound statements as a part of the logic of singular terms, which itself is part of the logic of terms.

Let’s return, finally, to distribution and the dictum. As we saw, the traditional notion of distribution was less than perfectly clear. It has been much maligned by post-Fregean logicians such as Peter Geach, who characterized it as “incoherent” (Geach 1962, 4; see also Geach 1976 and the lively response to Geach by Terence Parsons in Parsons 2006a). Nonetheless, a number of logicians have now come to the defense of the theory of term distribution (Makinson 1969; Williamson 1971; Sommers 1975 and 1982, 51-52, 62, 181-182; Katz and Martinich 1976; Friedman 1978; Rearden 1984; Englebretsen 1985; Wilson 1987; Sommers and Englebretsen 2000; Parsons 2006a; Hodges 2009; Alvarez and Correia 2012; Martin 2013). Universally quantified subject terms are distributed; particularly quantified subject terms are not distributed. An unquantified term is distributed in a statement just in case that statement entails a statement in which that term is universally quantified; otherwise it is undistributed. Our formal language of pluses and minuses can make things clearer and easier. Recall the general form of any statement (with the copula split): ±[±(±S)+(±P)]. As we saw, any term used in a statement is either in the range of an even number of minuses (including none) or in the range of an odd number of minuses. As Wilfrid Hodges has written, “Briefly, a term in a sentence is distributed if it occurs only negatively, and undistributed if it occurs only positively” (Hodges 2009, 603). Whether a term is distributed or undistributed in a statement is simply a matter of determining whether the number of minuses it is in the range of is odd or not.
If it is odd, then the term is distributed; if it isn’t, then the term is undistributed. For example, formulating the statement ‘It is not the case that every logician is a philosopher’ as –(–L+P) shows that ‘logician’, being in the range of an even number of minuses, is undistributed, and ‘philosopher’, being in the range of an odd number of minuses, is distributed. In our rose-giving example from above, our formulation reveals that ‘man’, ‘gave’, and ‘rose’ are undistributed and ‘woman’ and ‘hated’ are distributed. Since singular terms can be quantified arbitrarily, when they are quantified in a statement they can likewise be taken to be distributed or undistributed arbitrarily (when they occur unquantified, of course, their distribution value is determined just as with any other term).

Thus far, we have given the bare outlines of a formal language that has been built with the twin aims of simplicity and naturalness. But any formal language must be more than just relatively simple and natural. It must have
sufficient inference power. The Fregean revolution in logic was successful because, even though the new logic was far more complex than the old (and often aggressively non-natural), it was powerful. It could account for a very wide range of inferences in a systematic way. Nonetheless, it is possible to build a term logic that preserves much of the simplicity and naturalness of the old traditional logic, but which can match the power of the newer mathematical logic. There is not enough space here to lay out that entire system, but, by way of illustration, we can look at the central rule of inference it employs – the dictum de omni, a rule derived from Aristotle (Cat. 1b9-15, 25b32-35, 32b-33a5, Pr. An. 24b26-30). Recently Robert van Rooij has explored an alternative formulation of term logic, one inspired by Sommers’ version, but eschewing term functors (Rooij 2012).
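Before moving on, the parity test for distribution described above is mechanical enough to put into code. The little parser below is my own sketch, not anything from the text; it assumes single-letter terms, at most one occurrence of each letter, and no numerical subscripts.

```python
def distribution(formula):
    """For each term letter in a plus/minus formula, count the minuses in
    whose range it falls: odd = distributed, even = undistributed.
    A sketch only: single-letter terms, one occurrence each, no subscripts."""
    s = formula.replace(' ', '').replace('[', '(').replace(']', ')')
    counts = {}

    def unit(i, minuses):
        while s[i] in '+-':              # leading functor signs
            minuses += (s[i] == '-')
            i += 1
        if s[i] == '(':                  # a minus before '(' ranges over the group
            i += 1
            while s[i] != ')':
                i = unit(i, minuses)
            return i + 1
        counts[s[i]] = minuses           # a term letter
        return i + 1

    i = 0
    while i < len(s):
        i = unit(i, 0)
    return {t: 'distributed' if n % 2 else 'undistributed'
            for t, n in counts.items()}

# 'It is not the case that every logician is a philosopher': -(-L+P)
assert distribution('-(-L+P)') == {'L': 'undistributed', 'P': 'distributed'}
# The O form +S-P: subject undistributed, predicate distributed
assert distribution('+S-P') == {'S': 'undistributed', 'P': 'distributed'}
```

The assertions reproduce the verdicts given in the text for –(–L+P) and for the O categorical.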

4.2 A Better View

There are now two systems of notation, giving the same formal results, one of which gives them with self-evident force and meaning, the other by dark and symbolic processes. The burden of proof is shifted, and it must be for the author or supporters of the dark system to show that it is in some way superior to the evident system.
Jevons

We saw earlier that our formal language preserves enough of the formal features of certain natural language expressions to allow a number of kinds of immediate inference. Our incorporation of singulars, relationals, and compound statements allows us to extend this power of immediate inference even further. The dictum is a rule that governs mediate inference, the inference of a statement from a pair of statements already available (e.g., as premises, axioms, hidden assumptions, previously derived statements, etc.). A standard scholastic formulation was: Quod de aliquo omni dicitur/negatur, dicitur/negatur etiam de qualibet eius parte. Or, as Aristotle would say, “What is said of something is likewise said of what that something is said of.” Or, as I want to put it, “A term affirmed/denied of a universal subject can be affirmed/denied of any subject that the first subject’s term is affirmed of.” It can be stated simply now, given our language of pluses and minuses (and thus distributed and undistributed terms).

Dictum de omni: Given any universal statement, the predicate term (whether positively or negatively charged) can be substituted for the subject term of that statement in any other statement in which that subject term occurs undistributed.
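To see the rule in action, here is a toy mechanization for simple categoricals. The four-slot encoding is my own illustration (it is not the author's notation and ignores term charges and complex terms), with '-' standing for universal/negative and '+' for particular/affirmative.

```python
# A toy sketch of the dictum de omni for simple categoricals, encoded as
# (quantifier, subject term, qualifier, predicate term). Illustrative only.

def dictum(universal, other):
    """Substitute the universal premise's predicate term for its subject
    term wherever the latter occurs undistributed in the other premise
    (i.e., as a particularly quantified subject or an affirmed predicate)."""
    q_u, s_u, _c_u, p_u = universal
    assert q_u == '-', "the first premise must be universally quantified"
    q, s, c, p = other
    if q == '+' and s == s_u:       # particular subject: undistributed
        s = p_u
    if c == '+' and p == s_u:       # affirmed predicate: undistributed
        p = p_u
    return (q, s, c, p)

# Some logician is a philosopher; every philosopher is wise;
# so, some logician is wise:  +L+P, -P+W, therefore +L+W
assert dictum(('-', 'P', '+', 'W'), ('+', 'L', '+', 'P')) == ('+', 'L', '+', 'W')
# Barbara: -L+P, -P+W, therefore -L+W
assert dictum(('-', 'P', '+', 'W'), ('-', 'L', '+', 'P')) == ('-', 'L', '+', 'W')
# Wild quantity: reading 'Aaron is a boy' universally (-A+B) and
# 'Aaron is tall' particularly (+A+T) yields 'Some boy is tall' (+B+T)
assert dictum(('-', 'A', '+', 'B'), ('+', 'A', '+', 'T')) == ('+', 'B', '+', 'T')
```

The last assertion replays the Aaron inference from the previous section, with the two occurrences of the singular term given different quantities, as Leibniz suggested.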


The first figure classic syllogisms satisfy this rule. And as Aristotle showed, all other valid classic syllogisms are reducible to (derivable from) these, supplemented by the use of elementary rules of immediate inference. This is one reason that traditional logicians, including Leibniz, saw the dictum as the rule of mediate inference. But that’s not enough. The rule applies in cases of inferences involving singulars, relationals, and compound statements. Consider an inference involving singulars that is usually taken to be outside the capacity of any term logic: ‘Twain is Clemens, Twain was a Missourian; so Clemens was a Missourian’. We can formulate this as: ±T+C, ±T+M; ±C+M. In each statement the quantity (thus distribution) of the subject term is arbitrary. The term, M, affirmed of the distributed term T in the second premise is substituted for that term taken as undistributed in the first premise, yielding, via the dictum, +M+C, which is then immediately converted to the conclusion by virtue of the symmetry of the plus copula. De Morgan and other logicians in the 19th century (and of course earlier) worried about traditional logic’s inability to account for an inference such as: ‘Every horse is an animal; so, every head of a horse is a head of an animal’. Let’s begin by formalizing (using K for ‘head of’): –H+A; –(K+H)+(K+A). In this case there is a hidden tautologous premise, something that ‘goes without saying’: ‘Every head of a horse is a head of a horse’ (–(K+H)+(K+H)). The dictum allows us to substitute A for H (explicit premise) for any undistributed occurrence of H in another statement (viz., the second occurrence of H in the hidden premise), yielding the conclusion (for an idea of how the scholastic logicians might treat this, see Parsons 2013). Consider next the inference: ‘Carlee loves a boy, every boy admires some hockey player, every hockey player is poetic, whoever admires someone poetic is sensitive; so, Carlee loves someone who is sensitive’. 
We begin by formulating this as (nothing is lost here by suppressing subscripts): ±C+(L+B), –B+(A+H), –H+P, –(A+P)+S; ±C+(L+S). Applying the dictum to the second and third premises yields –B+(A+P); applying it to this and the fourth premise yields –B+S; and applying it, finally, to this and the first premise gives us the conclusion.

1. ±C+(L+B)   premise
2. –B+(A+H)   premise
3. –H+P   premise
4. –(A+P)+S   premise
5. –B+(A+P)   2, 3 dictum
6. –B+S   4, 5 dictum
∴ 7. ±C+(L+S)   1, 6 dictum
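The derivation can even be replayed as naive text substitution. The sketch below is my own; it simply puts a universal premise's predicate term in place of its subject term, relying on the fact that, at each step here, the replaced occurrence happens to be undistributed (the condition the parity test would have to verify in general).

```python
# Replaying the Carlee derivation (subscripts suppressed) by substituting
# the predicate term of a universal premise for its subject term.
# Undistributedness of the replaced occurrence is assumed, not checked.

def substitute(statement, subject, predicate):
    """Replace the universal premise's subject term by its predicate term."""
    return statement.replace(subject, predicate)

step5 = substitute('-B+(A+H)', 'H', 'P')     # from premises 2 and 3
step6 = substitute(step5, '(A+P)', 'S')      # from step 5 and premise 4
step7 = substitute('+C+(L+B)', 'B', 'S')     # from step 6 and premise 1,
                                             # the wild quantifier read as '+'
assert step5 == '-B+(A+P)'
assert step6 == '-B+S'
assert step7 == '+C+(L+S)'
```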


Standard valid argument forms used as rules in most versions of today’s mathematical logic can be shown to satisfy our dictum. Modus Ponens and Existential Generalization are just two examples. The first is formulated in our term logic as: –P+Q (‘If P then Q’, ‘Every P world is Q’), ±W+P (‘P’, ‘The world is P’); so, ±W+Q. We can even simplify this, making it more familiar, by suppressing the occurrences of ‘the world’, giving us: –P+Q, +P; +Q. Here, since Q applies to P universally in the first premise, it can be substituted for P wherever P is undistributed, e.g., the second premise, to give us the conclusion. Existential Generalization purports to derive ‘Something is F’ directly from ‘A is F’ (where A is a name). Our term logic takes such an inference to be mediate, with a second, hidden, innocuous premise: ±A+T. Again the dictum does the work here.

Earlier we saw that two statements are logically equivalent just in case they are both algebraically equal and share the same valence (i.e., both are positive/particular or both negative/universal). We can use these notions of algebraic equivalence and shared valence to specify the necessary and sufficient conditions for argument validity.

Validity: An argument is valid if and only if the number of premises (explicit or otherwise) with positive/particular valence equals the number of conclusions with positive/particular valence (i.e., either one or zero) and the sum of the premises algebraically equals the conclusion.
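The counting procedure this principle licenses can be sketched directly. The encoding below (a statement as its valence plus a map from terms to their algebraic coefficients, once all signs are distributed inward) is my own illustration, not the author's notation.

```python
# A sketch of the Validity test. Each statement is given as
# (valence, {term: coefficient}); the encoding is illustrative only.
from collections import Counter

def valid(premises, conclusion):
    """The count of positive/particular premises must match the conclusion's
    (one or zero), and the premises must sum algebraically to the conclusion."""
    particulars = sum(v == 'particular' for v, _terms in premises)
    total = Counter()
    for _v, terms in premises:
        total.update(terms)                      # algebraic sum of premises
    same_algebra = ({t: n for t, n in total.items() if n} ==
                    {t: n for t, n in conclusion[1].items() if n})
    return particulars == (conclusion[0] == 'particular') and same_algebra

# Some logician is a philosopher (+L+P); every philosopher is wise (-P+W);
# so, some logician is wise (+L+W): the middle term P cancels out.
premises = [('particular', {'L': 1, 'P': 1}),
            ('universal',  {'P': -1, 'W': 1})]
assert valid(premises, ('particular', {'L': 1, 'W': 1}))
assert not valid(premises, ('universal', {'L': -1, 'W': 1}))
```

Note how the middle term’s coefficients sum to zero, the 19th-century algebraists’ ‘cancelling out’ made literal.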

The application of Validity amounts to a decision procedure for arguments. Once the argument is symbolized, it amounts to counting minus signs and using simple algebra. It is simple and fast. By contrast, simple and fast are rare qualities to be found in modern predicate logic. Nor would most advocates for doing logic that way deny that its language is often quite unnatural. But who would dare quibble about the expressive powers of such a formal language? For its power here far exceeds that of traditional logic. And yet ... Aristotle took the perfect (complete) syllogisms of the first figure to be obviously valid to any rational person. No proof was required; no calculation; no counting of negators; etc. Indeed, some inferences are so intuitively obvious and simple that they can be seen immediately to be valid. In fact, some purported inferences are so intuitively and obviously invalid that initial observation of them is enough to reject their validity. In either case, the logical formulation of such simple and obviously valid/invalid inferences ought to reflect these facts. As it happens, there are some simple and obvious valid inferences that seem beyond the powers of standard MPL to illuminate by formalization. Quine said that the formal language of MPL aims to do nothing more than reveal the truth conditions of statements to be formalized, adding,
“And a very good thought that is” (Quine 1970, 36). Unfortunately, limiting formalization to nothing more than the revelation of truth conditions is just what limits the powers to illuminate more. Consider the simple and obvious inference from a given relational statement in active voice to its passive voice counterpart: ‘Kant read Hume, so Hume was read by Kant’. Simple, obviously, and intuitively valid. Here is how it is formulated in the standard logic today:

Rkh ∴ Rkh

Obviously – trivially – valid. But our intuitions tell us that the premise is primarily about Kant while the conclusion is primarily about Hume. The active/passive distinction seems important to us. But modern logicians are happy to follow Frege in dispensing with that distinction as “of no concern to logic” since the two statements “express the same thought” (Frege 1979, 141). Our new term logic, TFL, sheds more light by its formulation of such an inference, exhibiting the formal difference between the premise and the conclusion:

±K1+(R12±H2) ∴ ±H2+(R12±K1)

(A simple proof here requires an application of commutation and two of association.) Another way in which MPL formulations are limited is due to its restriction to a fixed “adicity” for any predicate in a given inference. For example, if in a given inference a predicate such as ‘read’ is taken to be 2-place (e.g., ‘Kant read Hume’), it must be taken as a 2-place predicate in any of its other occurrences in that inference. Thus it could not be read as a monadic (1-place) predicate elsewhere (e.g., ‘Kant read’). The simple, and obviously valid inference from ‘Kant read Hume’ to ‘Kant read’ would either be formally invalid or inexpressible in MPL language. Consider the following inference: ‘Brutus stabbed Caesar with a knife, so Brutus stabbed Caesar’.
The modern logician, in attempting to formulate this, must either treat ‘stabbed’ as a 3-place predicate in both occurrences (perhaps paraphrasing the conclusion as ‘Brutus stabbed Caesar (with something)’):

Sbck ∴ Sbc or ∴ ∃xSbcx
or treat ‘stabbed’ as ambiguous (say, with ‘S3’ for the 3-place ‘stabbed’ and ‘S2’ for the 2-place version):

S3bck ∴ S2bc

More light is shed by the TFL formalization, which preserves our inclination to read ‘stabbed’ as unambiguous:

±B1+((S123 ±C3)+K3) ∴ ±B1+(S123 ±C3)

Finally, consider the simple inference ‘Socrates taught a teacher of Aristotle, so one whom Socrates taught taught Aristotle’. As with the first example, MPL has difficulty exhibiting the formal difference between premise and conclusion here:

∃x(Tsx & Txa) ∴ ∃x(Tsx & Txa)

TFL gives us a better view of what’s going on:

±S1+(T12+(T23 ±A3)) ∴ +(±S1+T12)+(T23 ±A3)

4.3 Deriver’s License Required Mathematicians are a species of Frenchmen: if you say something to them they translate it into their own language and presto! It is something entirely different. Goethe

We have seen just how important the notion of distribution is to our formulation of the key rule of syllogistic inference, the dictum de omni. Indeed, a term logic like TFL can be formulated taking all formative expressions as signs of distribution values, with ‘−’ for distributed and ‘+’ for undistributed. Boole took his primary rule of deduction (viz., equals can be substituted for equals) to be a version of the dictum (Corcoran and Wood 1980, 615-616). Suffice it to say that his rule was not the classic dictum. But it does at least preserve the insight that it’s a substitution principle (though not exclusively of equals for equals). What
Boole did was take the elimination (of middle terms in a syllogism) to amount to the elimination of a variable from a pair of 3-variable equations. In fact, he took syllogistic inference to be a matter of equation solution rather than statement deduction (Corcoran and Wood 1980, 619-624). Consider a simple syllogism of the form ‘Every A is B; every B is C; therefore, every A is C’. What’s the best way to describe how one would get from the premises to the conclusion? The two instances of the middle term, ‘B’, are eliminated? The term, ‘C’, is substituted for ‘B’ in the first premise? Elimination or substitution? Each is an accurate description of what is going on when we draw conclusions from the premises of such valid syllogisms. But the real question about deductive inference isn’t that of what is being done, but of how it is being done. We don’t want just a description – we want an account, an explanation. For example, Kepler’s three laws describe (closely enough) planetary motion; Newton’s theory of gravity explains it (closely enough). Many people before him described species evolution; Darwin’s theory of natural selection explains it. More immediately, most people know how to multiply and divide; not that many can explain how it is done. So what about our simple syllogism? I want to claim that the elimination of middle terms is a shorthand way of describing the results of term-substitution. More importantly, I want to claim that the kind of term-substitution that takes place here can best be explained as the result of applying (when appropriately licensed) the dictum. In effect, the dictum allows us to substitute the predicate term of a universal affirmative or negative categorical premise for the subject term in any other proposition in which that subject term occurs undistributively. It is the universal premise that licenses such a substitution. 
Before continuing, we need to get clear about the concept of license and say how it relates to the notion of distribution. When the 19th century algebraic logicians began to build and refine their versions of formal logic, they did not make use of some of the innovations of the medieval logicians. With the exception of Christine Ladd-Franklin (Ladd-Franklin 1883), they made little use of the notion of distribution. Unlike the predicate calculus, a term logic flounders without an adequate account of distribution. As almost all logicians (and even some others who have wandered into Wonderland or gone through the Looking Glass) know, Lewis Carroll (aka Charles Dodgson) was a mathematician and logician. Near the end of his life he wrote a short dialogue for Mind called “What the Tortoise Said to Achilles” (Carroll 1895). A fairly large literature has grown around it. Briefly, what the Tortoise tries to do is get Achilles to admit that in order to draw the conclusion (essentially of the form) ‘This A is C’ from premises of the forms ‘This A is B’ and ‘Every
B is C’ he must first add the premise ‘If this A is B and every B is C, then this A is C’. Of course, once Achilles allows this he is faced with a new inference (with three premises) and is challenged by the Tortoise to admit still a fourth premise, and then ... ad infinitum. Though there has been much debate about just what logical point Carroll was trying to make, at least part of the answer is that he was illustrating the importance of avoiding confusing premises with rules of inference. If every rule used to draw a conclusion from a set of premises is also a premise, then an additional rule must be used to draw the conclusion from this augmented set of premises. This new rule must then be added as a further premise, and so on, and so on – an infinite regress. Gilbert Ryle (Ryle 1950) interpreted Carroll as teaching a lesson concerning laws in natural science (which are stated in the form of universal statements). An infinite regress would be initiated if one were to confound them with premises in scientific deductions. These universal statements are to be seen as “inference licenses,” which are to scientific explanations what logical rules are to formal deductions. Sidestepping Ryle’s take on scientific laws, I want to borrow his notion of license, confined now just to the field of logical (viz., syllogistic) inference. Generally, a license is a permission; it permits one to do something under certain specified conditions. A license has, in effect, the form: ‘Under such and such conditions, you can do the following...’. By my reading of Carroll (Englebretsen forthcoming a), one does need to be diligent in distinguishing argument premises from inference rules – but a license is likewise to be distinguished from such rules. So:

1. Every license is a universal premise.
2. No particular premise is a license.
3. Not every universal premise is a license.
4. No premise is a rule.
5. No license is a rule.

I’ve said that the dictum applies whenever it’s licensed.
The license for it is a universal premise. An inspection of Aristotle’s four perfect syllogisms reveals that in each case there is a universal premise and the conclusion is the result of substituting the predicate of that premise for the middle term (the term that appears in both premises but not the conclusion) that occurs undistributed in the other premise. Since all valid syllogisms are reducible to those perfect syllogisms, it follows that all (and only) valid syllogisms license the application of the dictum. More often than not the suppressed premise of an enthymeme is the license. Licenses are always universal and are often necessarily true. The latter characteristic invites suppression – such a proposition “goes without saying.” Suppose I argue that Ralph is not married because he’s a Catholic priest. The
hidden premise, the thing that doesn’t need to be said is ‘No Catholic priest is married’ (or ‘Every Catholic priest is unmarried’). This is the license that, by the dictum, permits the substitution of ‘unmarried’ for ‘priest’ in the explicit premise (where ‘priest’ is undistributed) to yield the conclusion. In the famous example ‘Every horse is an animal, so every head of a horse is a head of an animal’, the license is the explicit premise. The other premise is the suppressed ‘Every head of a horse is a head of a horse’ (something plainly too obvious and trivial to need stating). Here one is licensed to substitute ‘animal’ for ‘horse’ when ‘horse’ occurs undistributed. In the missing premise, ‘horse’ occurs twice, but only the second occurrence is undistributed. Substituting ‘animal’ for that second occurrence of ‘horse’ yields the desired conclusion. As we just saw, the dictum de omni, seen as a rule for term substitution when licensed, applies even where the middle term occurs as part of a more complex sub-sentential expression. Consider one more argument: ‘Every logician is a philosopher; every admirer of all logicians is a fool; therefore, every admirer of all philosophers is a fool’. The proof of this reveals a double reliance on the dictum. Let the first premise be line 1. Line 2 is the tacit, but innocuous ‘Every admirer of all logicians admires all logicians’. Now line 1 licenses the substitution of ‘philosopher’ for ‘logician’ in any premise making use of an undistributed ‘logician’. In the second line ‘logician’ occurs twice, undistributed the first time and distributed the second. So line 1 licenses the substitution of ‘philosopher’ for ‘logician’ in line 2 to yield, by the dictum, line 3: ‘Every admirer of all philosophers admires all logicians’. Now the second explicit premise (‘Every admirer of all logicians is a fool’) is line 4. This line is a second license. 
It permits the substitution of ‘fool’ for the complex term ‘admirer of all logicians’ in any statement making use of an undistributed ‘admires all logicians’. And that term does indeed occur undistributed in line 3 (‘Every admirer of all philosophers admires all logicians’). So line 4 licenses the requisite substitution in line 3 to yield, by the dictum, the conclusion ‘Every admirer of all philosophers is a fool’. To summarize: to validly draw a syllogistic conclusion, three things are required. First, a universal premise, which acts as a license for substitution. Second, another premise which includes an undistributed token of the subject term of the license. Third, a rule (the dictum de omni) that is licensed by the universal premise to substitute the predicate of the license for that undistributed term as it occurs in the other premise.
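The substitution step itself can be sketched in a few lines of code. This is my own encoding (signed term pairs, with '+' marking an undistributed occurrence), and it covers only simple terms; complex terms like ‘admirer of all logicians’ would need a nested representation.

```python
def apply_dictum(license, stmt):
    """Apply the dictum de omni. license = (subj, pred), read as the universal
    'Every subj is pred'; it permits substituting pred for each undistributed
    (plus-signed) occurrence of subj in the other statement."""
    subj, pred = license
    return [(sign, pred if (sign == +1 and term == subj) else term)
            for sign, term in stmt]

# Ralph example: explicit premise 'Ralph is a Catholic priest' (+Ralph+Priest),
# suppressed license 'Every Catholic priest is unmarried'.
premise = [(+1, 'Ralph'), (+1, 'Priest')]
conclusion = apply_dictum(('Priest', 'Unmarried'), premise)
assert conclusion == [(+1, 'Ralph'), (+1, 'Unmarried')]  # 'Ralph is unmarried'
```

Note that a distributed (minus-signed) occurrence of the subject term is left untouched, which is just what the dictum requires.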


4.4 Down the Hill Without a Variable I think part of the appeal of mathematical logic is that the formulas look mysterious – You write backward Es! Hilary Putnam

As it happens, TFL is not the only candidate for a term logic to be built in recent times. Ironically, Quine, the “puritanical” (Speranza and Horn 2012) champion of MPL, developed his own version of term logic. Before continuing, it should be noted that Quine’s version of term logic (called Predicate Functor Algebra, PFA) was never meant to rival the standard predicate calculus. It was meant primarily to reveal the central role of singular personal pronouns (in their guise as bound individual variables) in the formal language. He did this by eliminating them (Quine 1936, 1936a, 1937, 1959, 1960, 1976, 1976a, 1971, 1981, 1981a; Noah 1980, 1982, 1987; Böttner and Thümmel 2000). As a good Fregean logician, one committed to the standard predicate logic, Quine never tired of emphasizing the importance of bound individual variables (the formal analogues of pronouns). These variables carry the “burden of reference.” Their values (i.e., what they are allowed to refer to) are the things to which a user of the sentence, formulated in the language of MPL, is “ontologically committed.” Of course, pronouns are fairly commonly used in natural language. But MPL is far more profligate with them (or their formal counterparts) than is any natural language. Simple statements with no need for pronouns (e.g., ‘All logicians are fools’) are said to have a logical form that requires them (e.g., ∀x(Lx ⊃ Fx), read as ‘Every thing is such that if it is a logician then it is a fool’). But where did those pronouns, those ‘it’s, come from – not to mention that ‘if ... then’? Natural pronouns are generally used to pick up the reference of other, antecedent, noun phrases. In ‘All the judges were tainted; they had received bribes’ the pronoun ‘they’ refers back to all the judges (which ‘All the judges’ had referred to). But what do those ‘it’s refer to in ‘Every thing is such that if it is a logician then it is a fool’? In other words, what do the variables in ∀x(Lx ⊃ Fx) refer to?
What is the referring phrase from which they inherit, pick up, their reference? Answer: the quantifier that binds those variables – ∀x – ‘every thing’ (see Oderberg 2005a). So variables (and the quantifiers that bind them) are of central importance for the language of MPL. Quine built an alternative formal logic that dispenses with the apparatus of variables in order to reveal just how central they must be. Suppose you move from the tropics to my country – Québec. In your new home’s basement you find a big machine. It takes up valuable space, it costs money to use and maintain, it makes noise, you’ve never needed anything like it before. Could you dispense with your furnace? To see just how important a
good home heating system is in a place like Québec, try getting rid of yours. Then what? What would you have to do to get what the heating system had provided? More insulation, warmer clothes, more sweaters, more wool socks, more blankets, more hot drinks, more dirty, smoky fires in the fireplace, more fuel for those fires, more shivers, shakes and runny noses. That’s what Quine did to MPL; he took out the machine of variables and quantifiers and figured out what was needed to replace them. PFA, Predicate Functor Algebra, is meant to do what MPL does – but without the big machine in the basement. Here is what he realized. Relative pronominal clauses (e.g., ‘that runs’, ‘who is reading’, ‘which is green’, ‘which is growing’ – paradigmatically: ‘such that ...’) form complex general terms when extracted from complex sentences. That’s how we got ‘such that it is a logician’ from ‘All logicians are fools’. ‘Some man is a father’ yields ‘some thing such that it is a man and it is a father’ (with ‘such that it is a man’ being one of the extracted complex general terms). Bound variables, then, are part of the mechanism for constructing complex general terms with a relative pronominal clause (like ‘such that it’). In the original sentence, reference is made to an individual. The complex general term that can be extracted can then be predicated of that individual – without loss of information in the process. This role of producing complex general terms from sentences is what Quine took to be the primary function of bound variables. It is more basic than their role in quantification (i.e., their role as pronominal referring expressions with quantifiers as their antecedents). Variables are eliminated, complex predicates are introduced, and the work that had been assigned to the bound variable machine is now taken up by predicate functors that apply to those complex predicates left behind.
The result is a formal language that essentially consists of predicates (i.e., general terms) and predicate functors. Since the set of predicates is now logically homogeneous (all non-formative expressions are general terms) and each of the functors is either unary (applying to predicates one at a time) or binary (applying to pairs of predicates), and since the binary predicate functors form new, syntactically more complex predicates from pairs of predicates, and since sentences are the result of the application of such functors to such pairs, the logical syntax of PFA is ternary. One might just as well call predicates here ‘terms’. This is a term logic. Unlike TFL, built to model the inferences we commonly make in the medium of a natural language with a simple logical syntax reflecting the logical features of that natural language, Quine’s version of term logic was built to illustrate the central role of bound variables in MPL, aimed to capture that formal language in the language of predicates and functors applied to them, and still retains the complexity and unnaturalness of the language of quantifiers and variables.
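Since PFA is described here only informally, the following is merely a suggestive sketch of the variable-free idea: predicates are modeled extensionally as sets, and two illustrative functors do work that MPL assigns to bound variables. The names `inv` and `der` are mine, loosely after Quine's inversion and derelativization ("cropping") functors, not his notation.

```python
# Predicates modeled extensionally: a binary predicate is a set of ordered
# pairs, a unary predicate a set of individuals. No variables appear below;
# the functors alone rearrange and reduce argument places.

def inv(r):
    """Swap argument places: the converse of a binary predicate."""
    return {(y, x) for (x, y) in r}

def der(r):
    """Derelativize: from the binary 'x R y' to the unary 'x R (something)'."""
    return {x for (x, _) in r}

READ = {('Kant', 'Hume')}                 # 'Kant read Hume'
assert ('Hume', 'Kant') in inv(READ)      # 'Hume was read by Kant'
assert 'Kant' in der(READ)                # 'Kant read'
```

Notice that the active/passive and adicity examples from the previous section fall out directly: the converse and the cropped predicate are formed without any ‘it’ or ‘∃x’.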


We’ve only made a brief stop to view the kind of term logic that can be built as a simpler, more natural, and as powerful (perhaps more so) alternative to the standard logic now in place. In its term functor form, it seems to have a far better claim to being considered the logic of natural language – natural logic.

5 The Four Corners You are a Line, but I am a Line of Lines, called in my country a Square. E.A. Abbott

5.1 Around the Square Fie, fie, how franticly I square my talk! Shakespeare

There is, of course, much more to say about this slightly enriched formal tool that we have just acquired. It turns out that it can be augmented in a number of ways to produce a truly powerful formal language adequate to the needs of any formal logic. It is expressively powerful (able to formulate a very wide variety of natural language expressions) and it is inferentially powerful (able to model many of the ways in which we ordinarily reason correctly) (see especially Sommers 1970, 1982 and 1990; Englebretsen 1996; Sommers and Englebretsen 2000; Oderberg 2005). What we want to look at now from this new vantage point is the possibility of this logical language’s ability to shed new light on a very old topic, one centrally located in the range of logic. The topic at issue is the famous Square of Opposition and the logical relations among what Aristotle, in De Interpretatione (19b19-19b29), called “the four” that are meant to be represented on the four corners of the square (see Englebretsen 1976, 1984, and forthcoming). The “traditional” square is supposed to offer a perspicuous visual display of the various logical relations that hold among the following family of sentences (where the terms are replaced by term-letters, which act as variables to produce a sentence-scheme rather than an actual natural language sentence):

1. Some S are P
2. Some S are not P
3. No S are P
4. All S are P

Each of these was given a specific name. The first three were called, in order, I, O, and E. Though the old logicians called the fourth one A, we will call it a. Fear not; the old name, A, will not be lost entirely. A Traditional Square of Opposition would look (almost) like this:
a: All S are P                          E: No S are P (= Not: some S are P)

I: Some S are P                         O: Some S are not P

Fig. 1: Traditional Square of Opposition

Let us replace the sentences with their formulations in our artificial language:

a: −S+P                                 E: −(+S+P)

I: +S+P                                 O: +S−P

Fig. 2: Formal Traditional Square of Opposition


According to tradition, forms at diagonally opposite corners of the square are logical contradictories. This means that, given such a pair of sentences, at least one is true (the “Law of Excluded Middle”) and it cannot be the case that both are true (the “Law of Noncontradiction”). The top two forms were said to be logical contraries (they could both be false but they could not both be true). The bottom two forms were said to be subcontraries (they could both be true but they could not both be false). Finally, tradition generally held that each of the top forms logically entailed the form directly below it, the lower forms being called the subalternates of the upper forms. The traditional square derives from Aristotle via Boethius (Boethius 187780). In his translation of Aristotle’s De Interpretatione, Boethius rendered Aristotle’s Greek version of the universal affirmation and its contradictory as ‘omnis homo albus est’ and ‘non omnis homo albus est’. However, in his commentaries he takes the particular negative not in terms of the negation of its contradictory, but as a genuinely logical particular (i.e., O). Tradition has generally followed the commentary rather than the translation (even though modern translations such as Ackrill’s parallel the original) (Ackrill 1963). Based on the translation, and using o for the negation of a, one can conjecture that there was an original Aristotelian Square of Opposition:

a                                       E

I                                       O

Fig. 3: Aristotelian Square of Opposition


The traditional square is a thing of beauty. However, as we all know (and are now told by biologists and psychologists), we tend to find more beauty in symmetry than in asymmetry. And a close look at the traditional square reveals a small bit of asymmetry. As we would expect, contradictories amount to the negations of one another. The I and E forms are clearly contradictories. Given that ‘no’ is a contraction for ‘not some/a(n)’, E is the negation of I. But what of the contradictory pair of O and a? Notice that for the I, O, and E forms the only quantifier that occurs is ‘some’ (+). The proper negation of O would be ‘−(+S−P)’ (‘Not some S is not P’, ‘Not an S is not P’, ‘No S is not P’). That form is what deserves the name A. A “primary” square of opposition would exhibit only sentential forms making use of the quantifier ‘some’ (+). In other words:

A: No S are not P                       E: No S are P

I: Some S are P                         O: Some S are not P

Fig. 4: Primary Square of Opposition

This square has genuine logical symmetry. Unfortunately, it leaves us longing for those forms using the universal quantifier (−). The primary square displays particular forms and their negations. There are two universal forms to be considered: ‘All S are P’ (−S+P), our a, and ‘All S are not P’ (−S−P), which we will call e. How can these two forms, a and e, be featured on the square? As it turns out, in the vast majority of cases we can stick to the definitions found in our plus/minus logic, saying that an a can be defined as A and that an e can be
defined as E. When everything goes right we can take advantage of these definitions, allowing the equivalence of a and A and of e and E. This could be displayed by the “normal” square of opposition:

A: No S are not P                       E: No S are P

a: All S are P                          e: All S are not P

I: Some S are P                         O: Some S are not P

Fig. 5: Normal Square of Opposition

Unsurprisingly, as we all know, things don’t always go right. Notice that on the normal square particularly quantified sentences have as their contradictories not only their negations but also universals that are logically equivalent to those negations. Recall the Laws of Noncontradiction (LNC) and Excluded Middle (LEM). We can formulate them as:

LNC A sentence and its contradictory cannot both be true.
LEM Either a sentence or its contradictory is true.

As long as everything goes right, a=A and e=E and we can say that the contradictories of I and O sentences are their negations. Yet we have alluded so far to the fact that things do not always go right. Let’s prepare ourselves for the worst by formulating laws that govern universal sentences (independently of how they might be logically defined). One law that we would naturally want to accept is the principle that a pair of sentences of the forms ‘Some S are P’ and ‘All
S are not P’ could not possibly both be true (likewise for the pair ‘Some S are not P’ and ‘All S are P’). Call this the Law of Quantified Opposition:

LQO A sentence and its quantified opposite cannot both be true.

Intuition tempts us to say that, given a subject, any predicate is such that either it or its negation is true of that subject. In terms of the bit of formal logic we have forged above, this means that given any particularly quantified sentence and a second sentence exactly like it except that the term following the qualifier is the negation of the term following the qualifier of the first sentence, one of the two sentences is true. Thus we want to say that either ‘Some logicians are boring’ or ‘Some logicians are not boring’ is true (perhaps both are). Call this the Law of Subcontrariety:

LSC Either a particular sentence or its subcontrary is true.

This law has a companion. We want to say that a universal sentence and its logical contrary cannot both be true. The law says that logical contraries are incompatible. Call it the Law of Incompatibility:

LIC A universal sentence and its contrary cannot both be true.

All of the laws formulated thus far govern the normal square of opposition. However, when things go wrong, normalcy is no longer guaranteed. There are at least three ways in which things can go wrong (Sommers 1969, 284). In each case the normal square fails to apply (in the sense that one of the laws fails to hold). Moreover, in these cases, the failure of the law is the result of our inability to define universals (a and e) in terms of the respective negations (A and E).


5.2 Is Something There? The more realistic view seems to be that the existence of children of John’s is a necessary pre-condition not merely of the truth of what is said, but of its being either true or false. And this suggests the possibility of interpreting all the four Aristotelian forms on these lines: that is, as forms such that the question of whether statements exemplifying them are true or false is one that does not arise unless the subject class has members. Strawson Presupposition has all the advantages of theft over honest labour. Russell

Modern logicians, needless to say, do not formulate natural language sentences the way we have suggested. For better or worse, they standardize the traditional sentences of the square as the Modern Square of Opposition:

(∀x)(Sx → Px): Every x is such that if it is S then it is P        (∀x)(Sx → ~Px): Every x is such that if it is S then it is not P

(∃x)(Sx & Px): There exists some x such that it is both S and P    (∃x)(Sx & ~Px): There exists some x such that it is both S and not P

Fig. 6: Modern Square of Opposition

This square differs from the other squares in many ways. For example, notice that the quantifiers, which we introduced as inseparable pairs (along with the
qualifiers) of split binary term functors, have been abandoned in favor of new quantifiers (which are taken to be functions on entire open sentences, binding the individual variables that occur in those sentences). Moreover, the particular quantifier has been replaced by the “existential” quantifier. This means that the new I and O forms have “existential import.” To assert such a sentence (use it to make a statement) is to make an existence claim. In contrast, their contradictories do not have existential import. This means that the traditional relation of subalternation does not hold for the modern square. It can be argued that the principle of subalternation is actually enthymemic, using a missing, tacit premise – “Aristotle’s proviso” as Alvarez and Correia call it (Alvarez and Correia 2012, 304). Thus ‘−A+B’ does entail ‘+A+B’, but only on the assumption of a hidden premise: ‘+A+A’. We could read this as ‘Some A is A’, ‘Something is A’, ‘There is an A’. The same holds, mutatis mutandis, for an inference from ‘−A−B’ to ‘+A−B’. In any case, without taking subalternation as a specified rule of inference, it is a tacit (and non-tautological) premise – an Aristotelian proviso – that is required for the inference of any particular from its corresponding universal. Our particular quantifier is not existential. But we do need to ask what happens with sentences of the forms ‘Some S are P’ and ‘Some S are not P’ when there is no S. There are no satyrs. Let’s agree with modern logicians, then, that ‘Some satyrs are poets’ and ‘Some satyrs are not poets’ are both false. More generally, let us say that when there is no S it follows that the I and O forms, ‘+S+P’ and ‘+S−P’, are both false. But the modern logician goes on to say that in such cases the corresponding universal forms are true (since they have, in effect, been defined as the negations of the I and O forms).
In other words, the modern square is constructed using the a, e, I and O forms, with a and e defined in terms of A and E. We will not follow the modern logicians that far. Let us say that in cases where I and O are both false, their respective negations are true (by LNC and LEM), but, since LSC fails, a and e can no longer be defined (in terms of A and E). When things go wrong this way, when there is nothing satisfying the particularly quantified term, no corresponding universal can even be defined. Such universals are inexpressive and have no truth value. Cases where LSC fails to hold are vacuous.
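These vacuous cases can be checked against a toy model of my own devising. It evaluates the four primary forms extensionally over finite sets, so it builds in the existential reading of the particular forms just discussed; it is a sketch for checking the laws, not a representation of TFL's non-existential quantifier.

```python
def I(S, P): return any(x in P for x in S)        # Some S are P
def O(S, P): return any(x not in P for x in S)    # Some S are not P
def A(S, P): return not O(S, P)                   # negation of O: No S are not P
def E(S, P): return not I(S, P)                   # negation of I: No S are P

logicians, fools = {'l1', 'l2'}, {'l1'}
satyrs, poets = set(), {'p1'}

# Normal case: LSC holds (I or O is true).
assert I(logicians, fools) or O(logicians, fools)
# Vacuous case: with no satyrs, I and O are both false, so LSC fails,
# while their negations A and E are both true (LNC and LEM still hold).
assert not I(satyrs, poets) and not O(satyrs, poets)
assert A(satyrs, poets) and E(satyrs, poets)
```

The model also makes plain why a and e go undefined here: there is nothing in the vacuous case from which to recover universals equivalent to A and E.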


5.3 We Could be Lost Clearly, therefore, not everything is or happens of necessity: some things happen as chance has it, and of the affirmation and the negation neither is true rather than the other; with other things it is one rather than the other and as a rule, but still it is possible for the other to happen instead. Aristotle

Existence failure is not the only way things can go wrong. LSC is also jeopardized by lack of sufficient determination of a subject by a predicate. Consider now the sentence ‘A sea battle will occur in the Gulf of Saint Lawrence on 30 March 2020’. As of now, the truth value of such a sentence is underdetermined. Suppose we display it on a primary square (in the I position). LNC and LEM would still hold (as they always do), but LSC would fail. We could not (now) assign truth to either ‘A sea battle will occur in the Gulf of Saint Lawrence on 30 March 2020’ or ‘A sea battle will not occur in the Gulf of Saint Lawrence on 30 March 2020’. When things go wrong in this second way, we have no grounds for defining the corresponding universals a and e.

5.4 Nonsense! Logic sometimes creates monsters. Poincaré

Think about the orange I squeezed for my breakfast juice this morning. There are many things we can say about it. Some are true: ‘The orange is fresh’, ‘It is sweet’, ‘It has many seeds’. Some are false: ‘The orange is purple’, ‘It is sour’, ‘It’s not mine’. Some are just nonsense: ‘The orange is courageous’, ‘It is illiterate’, ‘It is a prime factor of 42’. From our logical point of view, pairs of terms are operated on by (split or unsplit) binary term functors to form syntactically more complex compound terms (some of which are sentences). In many cases (most of the ones familiar to us from our ordinary discourse) the compounding of a pair of terms yields a sensible term (e.g., ‘fresh and juicy’, ‘Some oranges have no seeds’, etc.). However, most term pairs simply do not “go together.” Most such pairs are like ‘orange’/‘courageous’ (or ‘pencil’/‘cry’, or ‘2’/‘red’). So the fact is that we can sometimes form sentences that are just plain nonsense. They are what Ryle had called “category mistakes” (Ryle 1938 and chapter 1 of 1949; Sommers 1959, 1963, and chapter 13 of 1982; Englebretsen 1972, 2005, and 2012, 47-51). Nonsense is a third way in which things can go badly. Sentences without

90 | The Four Corners

sense cannot be true. But are they false? And what of them and their siblings on the square? Suppose I say, ‘Some numbers are drinkers’. You naturally want to dispute this. You will allude to the fact that numbers aren’t the right sort of things to be called drinkers, that the terms ‘number’ and ‘drinker’ don’t go together. If you happen to be less patient (or less inclined to philosophizing), you might simply deny what I have said. How would you do that? ‘Some numbers aren’t drinkers’? ‘It’s not the case that some numbers are drinkers’? Notice that the second is the logical negation (thus contradictory) of my sentence. The first option, however, is ambiguous. ‘Some numbers aren’t drinkers’ could be paraphrased as either ‘Some numbers are nondrinkers’ or as ‘It is not the case that some numbers are drinkers’ (just the second option). But ‘Some numbers are nondrinkers’ is just as senseless as my original sentence. It seems reasonable to say that you can deny what I said by asserting its negation, and that such a negation is true. Consequently, ‘Some numbers are drinkers’ and ‘Some numbers are nondrinkers’ are both false, while ‘It’s not the case that some numbers are drinkers’ and ‘It’s not the case that some numbers are nondrinkers’ are both true. Moreover, these last two are more simply paraphrased as ‘No numbers are drinkers’ and ‘No numbers are nondrinkers’. In general, when the terms ‘S’ and ‘P’ cannot be sensibly compounded, the forms ‘Some S are P’ and ‘Some S are not P’ are both false and their negations are both true (i.e., they are false in their I and O forms and true in their A and E forms) – and the forms ‘All S are P’ and ‘All S are not P’ (a and e) are simply undefined. Thus, once more, while the other logical laws either hold or don’t apply, LSC has failed.
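The truth-value pattern just described can be pictured with a small model (my illustration, not the author's formal machinery): when ‘S’ and ‘P’ don't go together, neither ‘P’ nor ‘non-P’ applies to any S, so both particular forms come out false and their negations true.

```python
# A small illustrative model of the truth values the text assigns when two
# terms cannot be sensibly compounded: with 'S' = numbers and 'P' = drinkers,
# neither 'drinker' nor 'nondrinker' applies to any number, so both extensions
# are empty relative to the numbers.
numbers = {"1", "2", "3"}
drinkers = set()          # no number falls under 'drinker' ...
nondrinkers = set()       # ... nor under 'nondrinker': the terms don't "go together"

I = bool(numbers & drinkers)       # 'Some numbers are drinkers'
O = bool(numbers & nondrinkers)    # 'Some numbers are nondrinkers'
E = not I                          # 'No numbers are drinkers' (negation of I)
A = not O                          # 'No numbers are nondrinkers' (negation of O)

print(I, O, A, E)   # False False True True — LSC fails; a and e remain undefined
```

As in the text, the I and O forms are both false, their A and E negations both true, and nothing in the model gives grounds for defining a or e.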

5.5 The Hexagonal Square Thus the son of a Square is a Pentagon; the son of a Pentagon, a Hexagon, and so on. E.A. Abbott

As we have seen now, the a and e forms have a home on the normal square just as long as things are ... well ... normal; as long as a and e can be defined (and thus equated with A and E). In short, as long as nothing goes wrong (e.g., there is no failure of existence, no lack of determination, no nonsense), a and e can be defined and, consequently, LSC holds. But suppose we have no idea whether our sentences are normal or not. We want to be able to display all the logical relations that hold (or might hold) among any six-membered family of sentences having the A, E, I, O, a and e forms. What is required is a place on the square reserved for the last two forms – but without prejudice about their definability. Thus the Hexagon of Opposition:

a                 e

A                 E

I                 O

Fig. 7: Hexagon of Opposition

In normal cases, LNC, LEM, LQO, and LIC all hold and can be seen on the hexagon. In normal cases, a entails A, a entails I, e entails E, e entails O, A entails a, E entails e, and the hexagon just collapses into the normal square. When things go well, LSC holds, so either I or O is true and either A/a or E/e is false. Thus, in such cases, all sentences displayed on one side of the square are true and all the sentences displayed on the other side are false. In abnormal cases, a and e are not defined, the hexagon does not collapse into the square, I and O are both false, A and E are both true, and, being undefined, a and e have no truth value and are out of place (with nowhere else to go).

5.6 Squaring the Square What happens to the hole when the cheese is gone? Bertolt Brecht

So far we’ve found a place on the square for the A, E, I, O, a and e forms by turning the square into a hexagon. We defined the first two of these forms as negations of the second two; we defined the final two, in effect, by “driving in” (distributing) the initial unary minus of negation into each of the first two. Thus we took I and O as logically primitive. What we have not yet made room for on the hexagon are the negations of the a and e forms. Notice that, given our formulations, I, O, a and e are un-negated, while A and E are negated sentences. What of the negations of a and e? The negation of a has the form ‘−(−S+P)’; the negation of e has the form ‘−(−S−P)’. Call these o and i, respectively. Since a and e are themselves non-primitive, i and o are likewise non-primitive. Moreover, since in non-normal cases a and e are undefined, it follows that i and o are also undefined in those cases. In the normal cases we can say: A=a, E=e, I=i, and O=o. We could display all eight members of this extended family of forms on a Squared Square of Opposition:

a: −S+P                          e: −S−P

      A: −(+S−P)      E: −(+S+P)

      I: +S+P         O: +S−P

i: −(−S−P)                       o: −(−S+P)

Fig. 8: Squared Square of Opposition

Again, as long as things go well, the outer square simply collapses into the inner square – all is normal. When things go awry, the outer square simply vanishes like the Cheshire Cat. As it happens, this squared square allows us to shed light on some controversies. For example, Terence Parsons (Parsons 2006) has proposed an invalid inference which could, contrary to expectations, be proven using a version of traditional logic, namely, the one proposed by P.F. Strawson (Strawson 1952). The premise of Parsons’s inference is ‘No man is a chimera’ (which is true) and the inferred sentence is ‘Some non-man is a chimera’ (which is false since there are non-men but there are no chimeras). Applying simple conversion to the premise yields the logically equivalent ‘No chimera is a man’. This is then obverted to get ‘Every chimera is a non-man’. Applying subalternation then gives ‘Some chimera is a non-man’. Finally, this last is converted to produce ‘Some non-man is a chimera’. There is no doubt that something has gone wrong in this proof. Let’s formulate it in our plus/minus system.

1. −(+M+C)     premise
2. −(+C+M)     conversion on 1
3. −C−M        obversion on 2
4. +C−M        subalternation on 3
5. +(−M)+C     conversion on 4

Things seem innocent enough. But let’s describe the forms of each step. 1 and 2 have E forms; 4 and 5 are O in form. There are no grounds for rejecting the steps from 1 to 2, 3 to 4, or 4 to 5. This leaves the step from 2 to 3. Now 3 is an e form sentence; it is universally quantified, and, as we now know, universally quantified sentences (of a or e form) and their negations (i and o) are defined in normal cases and undefined in non-normal cases. Since 3 is vacuous (there being no chimeras) it has no truth value, so nothing – in particular, 4 (thus 5) – follows from 3. The premise ‘No man is a chimera’ is true because it is the negation of ‘Some man is a chimera’, which is false. But ‘Some chimera is a man’ and ‘Some chimera is not a man’ are likewise both false (there being no chimeras), making the whole family of line 3 vacuous in the sense of having no defined a, e, i, o forms. So, the outer square of the squared square disappears. The e form (line 3), along with its lowercase siblings, has left the building.
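The invalidity of Parsons's inference can be confirmed with a small countermodel (my own, with an invented mini-universe): a world containing men and non-men but no chimeras makes the premise true and the conclusion false.

```python
# A countermodel sketch confirming that Parsons's inference is invalid:
# the premise comes out true and the conclusion false when there are men
# and non-men but no chimeras.
universe = ["socrates", "plato", "fido"]   # two men and a dog
is_man = {"socrates", "plato"}
is_chimera = set()                         # no chimeras exist

def some(subj, pred):
    """I form '+S+P': true iff the two extensions overlap in the universe."""
    return any(x in subj and x in pred for x in universe)

def no(subj, pred):
    """E form '-(+S+P)': the negation of the I form."""
    return not some(subj, pred)

non_man = {x for x in universe if x not in is_man}

premise = no(is_man, is_chimera)           # 'No man is a chimera'
conclusion = some(non_man, is_chimera)     # 'Some non-man is a chimera'
print(premise, conclusion)                 # True False
```

Since a true premise leads to a false conclusion in this model, no sound system can license the derivation; on the author's diagnosis, the step that fails is the subalternation from the undefined e form.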
A final note before continuing: in all cases of the squared square, either the outer square collapses into the inner square (the normal cases) or it vanishes (the non-normal cases), which is perhaps why logicians have paid so little attention to it.


5.7 The Punch Line For the Snark was a Boojum, you see. Lewis Carroll

Thus far, all of our discussion has centered on terms assumed to be general. Our squares and hexagon exhibit the logical relations holding among families of sentences whose two terms are never singular. But could a square accommodate singular sentences? Suppose ‘S’ in our sentence forms could be interpreted as a singular term (‘Socrates’, ‘the first man on Mars’, etc.). It seems reasonable to concede that even where the subject term is singular we can have both normal and non-normal cases. In abnormal cases things can go wrong in any of the ways we have discussed. In the case of ‘The youngest daughter of Prince Charles lives in Switzerland’ the singular term is vacuous. In ‘Brad Pitt will become a tree surgeon next year’ the subject is underdetermined with respect to his future. And ‘Socrates is prime’ is nonsense. But the question remains how to logically formulate singular sentences. In particular, how are they to be construed as quantified (an essential requirement for entry to the square)? One answer is that singulars can be logically parsed as particulars, differing from standard particulars in that they entail (on semantic, not formal, grounds) their corresponding universals. This is the so-called “wild quantity” thesis (see Leibniz 1966; Sommers 1967, 1969a, 1976, 1993; Englebretsen 1980, 1986, 1986a, 1988). This would mean, say, that ‘Socrates is wise’ has as its logical form ‘+S+W’, and that it entails ‘Every Socrates is wise’ (−S+W). In what follows we can remind ourselves that a term is taken to be singular by writing it as a lowercase letter (thus our singular sentences become ‘+s+W’ and ‘−s+W’). So let us construct a Singular Hexagon of Opposition to begin with:


A: −(+s−W)        E: −(+s+W)

a: −s+W           e: −s−W

I: +s+W           O: +s−W

Fig. 9: Singular Hexagon of Opposition

As usual, when things go wrong, a and e are undefined, LSC fails, etc. But, when things are normal, not only is a defined as A and e as E (so that a and A mutually entail one another, as do e and E), but I entails (semantically) a and O likewise entails e. This means that the singular hexagon doesn’t just collapse into the singular analogue of a normal square. Since, when defined at all, a entails I and e entails O, when nothing is amiss with singulars, the singular hexagon Snark collapses into a single line Boojum, the Singular Line of Opposition:

A, I, a _________________ E, O, e

Fig. 10: Singular Line of Opposition
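The wild quantity thesis can be given a minimal set-theoretic sketch (the extensions are hypothetical): for a singular term denoting exactly one individual, the particular and universal readings agree in truth value, while a vacuous singular term pulls them apart, which is the non-normal case.

```python
# A minimal set-theoretic sketch of "wild quantity": when a singular term
# denotes exactly one individual, the particular reading '+s+W' and the
# universal reading '-s+W' agree in truth value.
universe = {"socrates", "plato", "fido"}
socrates = {"socrates"}                 # singular term: a one-membered denotation
wise = {"socrates", "plato"}

particular = bool(socrates & wise)      # +s+W: 'Some Socrates is wise'
universal = socrates <= wise            # -s+W: 'Every Socrates is wise'
print(particular, universal)            # True True — the quantity is "wild"

# With a vacuous singular term the two readings come apart (the non-normal case):
vacuous = set()                         # e.g. a term with no denotation
print(bool(vacuous & wise), vacuous <= wise)   # False True
```

The agreement of the two readings in the one-membered case is what lets the singular hexagon collapse into the single line of Fig. 10.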

5.8 Truth or Dare There’s a guy who asserted both p and not-p, and then drew out all the consequences. Sidney Morgenbesser

The Law of Subcontrariety (LSC) fails when both members of an I and O pair fail to be true. Ryle, Strawson, and Sommers have all argued that such statements lack the necessary “anaphoric background” that licenses their referring terms to have a determinate reference. In his Categories (13b14-19), Aristotle showed that if Socrates exists then either ‘Socrates is ill’ or ‘Socrates is well (not-ill)’ is true; but if Socrates doesn’t exist then neither statement is true. This required Aristotle to be careful to distinguish the contradictory of a statement from the logical contrary of that statement. ‘Socrates is not-ill’ is the contrary, but not the contradictory, of ‘Socrates is ill’. The failure of Socrates to exist has, as he saw, an effect on the contrary pair of statements, but it has no effect on the pair of contradictory statements. Pairs of contradictories are always bound by the Laws of Excluded Middle and Noncontradiction, which demand that one, and only one, of the pair must be true. When there is no Socrates, the term ‘Socrates’ cannot be used to make a determinate reference, rendering both contrary statements ‘Socrates is ill’ and ‘Socrates is not-ill’ false, but leaving the contradictories of these, ‘Not: Socrates is ill’ and ‘Not: Socrates is not-ill’, both true. ‘The present King of France is wise’ and ‘The present King of France is unwise’ are vacuous because the expression ‘the present King of France’ can make a determinate reference only if the presupposition that there exists a present King of France is true. That presupposition would, were it true, provide the required anaphoric background. The same explanation accounts for the vacuousity of a pair like ‘Some mermaids are married’ and ‘Some mermaids are unmarried’. Since there are no mermaids, the pair lacks the anaphoric background (the appropriate true presupposition) necessary to determine reference. It’s this failure of determinate reference that leads to the I/O falsity and, in turn, the indefinability of the corresponding a and e (and, of course, i and o) forms.
A sentence such as ‘All mermaids are unmarried’, being undefined, is inexpressive; it expresses no proposition. When we visited the Liar’s Lookout, we also encountered inexpressive sentences. These semantically paradoxical sentences, along with category mistakes, vacuous sentences, future contingent sentences, etc., all fail to have truth values. This might well lead one to conclude that the old Law of Bivalence should be sent to the permanent recycling bin. However, as we have seen, Chrysippus and many subsequent logicians recognized the important distinction between statement-making sentences and what those sentences express – propositions. Any truth value that a statement might have (if it has any truth value at all) must be inherited from the truth value of the proposition it expresses. Propositions are the proper bearers of truth or falsity. Bivalence does govern statement-making sentences – at least in the normal cases. But abnormal cases, as we now know, are something else. By contrast, propositions are just the things to be subject to the Law of Bivalence. There are exactly two truth values: true and false. Every proposition is either true or false and no proposition is both true and false. It’s the Law (see Pérez-Ilzarbe and Cerezo 2012, 11-14). The abnormal cases we saw on the square are simply those sentences that fail to inherit a truth value (because they fail to express any proposition due to lack of a determinate propositional depth, or fail to be supplied with an anaphoric background that determines reference, or fail to presently have a factual ground for determining the truth value of the propositions they express).

5.9 Proposition Opposition There are more things in heaven and earth than are dreamt of in your philosophy, my dear logician. Hans Reichenbach

Recall that in our look at the Plus/Minus Logic, when we were on the Term Functor Trail, we noted that since sentences are terms the logic of statements (statement-making sentences) can be taken as a special branch of term logic. It is special because of the pair of disanalogies we encountered there, disanalogies that were resolved by the recognition that sentential terms are singular, denoting just one thing – the universe of discourse relative to which they are being used. All of this can now be illustrated using a squared square, one which then collapses to become a singular line.


a: −p+q                          e: −p−q

      A: −(+p−q)      E: −(+p+q)

      I: +p+q         O: +p−q

i: −(−p−q)                       o: −(−p+q)

Fig. 11: Propositional Square of Opposition

And thus the last line:

*p+q _____________________ *p−q

Fig. 12: Singular Line of Propositional Opposition
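As a rough illustration (my own boolean gloss, not the author's term-functor semantics), read ‘+p+q’ as ‘both p and q’ and take A and E as its negations per the square; the "vacuous" case, where p is false, then mirrors the failure of subcontrariety seen earlier.

```python
# An illustrative boolean gloss of the propositional forms on the square:
# '+p+q' read as conjunction, '+p-q' as 'p and not q', with A and E as
# their respective negations.
def forms(p, q):
    I = p and q          # +p+q
    O = p and not q      # +p-q
    A = not O            # -(+p-q)
    E = not I            # -(+p+q)
    return A, E, I, O

# Normal case (p true): exactly one of I, O is true, as subcontrariety demands.
print(forms(True, False))   # (False, True, False, True)

# "Vacuous" case (p false): I and O both false, A and E both true —
# the propositional analogue of subcontrariety failure on the square.
print(forms(False, True))   # (True, True, False, False)
```

On this gloss the collapse to the singular line corresponds to the fact that a statement denotes just one universe, so its quantity is wild.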

6 Referential Falls Don’t talk unless you can improve the silence. Jorge Borges

6.1 Into the Semantic Forest Words, mademoiselle, are only the outer clothing of ideas. Agatha Christie

Our trek through the logical range has not been without difficulties and dangers, but at least syntactic problems are fairly easily illuminated. We are now going to descend into a much darker area, a dense forest of semantics, where we seldom have a view of the forest and only occasionally have a clear view of some trees. For most mainstream modern logicians a “universe of discourse” is a set of objects that are potential “values” for the bound variables of that discourse (i.e., formulated statements). The objects are usually taken (implicitly) to be nothing but bare particulars – things. Thus the classical quantifiers: ‘every thing is such that’ (∀x) and ‘some thing is such that’ (∃x). Moreover, these bound variables carry the entire “burden of reference,” as Quine insisted. Frege’s inviolable ontological distinction between objects and concepts rested on his logical distinction between saturated/complete and unsaturated/incomplete expressions (see Sommers 1982, 37ff). The latter, recall, are predicates or other function expressions containing gaps (e.g., ‘... loves ...’); the former are names (singular referring expressions such as proper names, personal pronouns, etc.) suitable for filling those gaps. In effect, then, the distinction between singular terms and general terms is central to the semantics of MPL. Singular terms refer (to single, individual objects, things). What do general terms do? The Fregean line, of course, was that they also somehow refer. But while singular terms refer to objects, general terms “refer” to concepts. In the formulation of ‘Some man is bald’ – (∃x)(Mx & Bx) – ‘Some thing is such that it is a man and it is bald’ – the bound variables (pronouns) refer to an unspecified bare particular object in the universe of discourse, while the predicates (‘... is a man’ and ‘... is bald’) refer to the concepts of being a man and being bald. Now modern semantic theories come in a number of varieties. 
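The MPL reading just described, with quantifiers ranging over a universe of bare particulars, can be sketched as follows (the three-object universe and the extensions of M and B are invented for illustration).

```python
# A sketch of the MPL reading: quantifiers range over a universe of bare
# particulars; predicates are unary functions on them.
universe = ["a", "b", "c"]

def exists(pred):                      # 'some thing is such that' (∃x)
    return any(pred(x) for x in universe)

def every(pred):                       # 'every thing is such that' (∀x)
    return all(pred(x) for x in universe)

M = lambda x: x in {"a", "b"}          # '... is a man'
B = lambda x: x in {"b"}               # '... is bald'

# 'Some man is bald': (∃x)(Mx & Bx)
print(exists(lambda x: M(x) and B(x)))   # True
```

Note how the bound variable alone does the referring here; ‘man’ and ‘bald’ enter only predicatively, which is just the Fregean point taken up below.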
The sort most commonly attached to modern formal logic comes from a wide range of sources (from Aristotle to Boole). Frege is clearly the most immediate source. So let’s go a little deeper into this forest to look at some further things he had to say about certain semantic issues. Frege held that concepts are “predicative in nature” (Frege 1970a, 43), so that even in the case of the expression ‘man’ in ‘Some man is bald’, though one might think that it is a “subject-concept,” it is no such thing. It is a general term and therefore predicative. That’s why (as Quine says in his work on Predicate Functor Algebra) ‘man’ is logically parsed as ‘... is a man’ (see Frege 1970a, 47). Frege’s objects are never concepts, but they do “fall under” concepts. The kind of reference (Bedeutung) that a general term has is to a concept. Objects “fall under” concepts, but they are not literally referred to by general terms. Were one to literally refer, by use of a term such as ‘the concept of C’, the proper object of that reference would not be the concept. Only objects can be literally referred to. Names, singular terms, are the kind of expression that refer to (designate, stand for) objects. But names also have a sense. The sense of a name is its “mode of presentation” (Frege 1970, 57). It is the manner in which such an expression presents its object of reference. For example, ‘3×2’ and ‘5+1’ both refer to the number 6, but they do so in different ways (i.e., by virtue of different senses; the first is a product and the second is a sum). Frege was careful to distinguish the sense of an expression from the idea associated with it. You and I might very well have different ideas of Aristotle – we don’t share a common “internal image” of Aristotle. How could we? Ideas are private, psychological, “subjective” entities. The sense of a name such as ‘Aristotle’, by contrast, is objective, public, “the common property of many” people – including you and me (Frege 1970, 59). Keep in mind that for Frege the fundamental distinction is between complete and incomplete expressions.
Names are complete; function expressions, including predicates (general terms) are incomplete. A sentence is the result of completing a predicate by filling its gaps with names. Consequently, “Every declarative sentence is a proper name” (Frege 1970, 63). Moreover, the logician is not concerned with just any type of sentence. In contrast with imperatives, commands, etc., “Only those sentences in which we communicate or state something comes into question” (Frege 1967, 21). Let’s simply call these kinds of sentences statements. A statement has both a sense and a reference. The sense of a statement is a thought. A thought is the “content” of the statement (Frege 1967, 21). The sense of a statement, like the sense of any name, is its mode of presenting its objects of reference. When searching for a formal logic that could serve as the foundation of mathematics, Frege rejected the systems of traditional and algebraic logic. He took these to be plagued by “psychologism,” the unwarranted importation of


psychological features into logic, the misguided view that logic should provide a description of how we think. Boole had even titled one of his works The Laws of Thought. To be sure, we do think; we do have thoughts. But they are necessarily private, subjective. They are to be contrasted with Thoughts, the publically available, objective senses expressed by the statements we use. Importantly, Thoughts are “not produced” but “we apprehend them” (Frege 1967, 35). For “the thinker does not create them but must take them as they are” (38). Thoughts are like unknown galaxies, things waiting (perhaps forever) to be discovered. “Thoughts are not mental entities, and thinking is not an inner generation of such entities but the grasping of thoughts which are already present objectively” (Frege 1997, 302). “We are not owners of thoughts as we are owners of our ideas ... we do not produce thoughts, we grasp them. ... And yet the thinker does not create them but must take them as they are.” (Frege 1997, 341-344). (See Englebretsen 2012, 98-102 for a critique of this view.) But what about the reference of a statement? What object or objects could a statement refer to? It turns out that Thoughts, the senses expressed by statements, are the proper objects of so-called propositional attitudes. They are what are believed, doubted, commanded, known, etc. More importantly, they are the proper bearers of truth or falsity; “when we call a sentence true we really mean its sense is. From which it follows that it is for the sense of a sentence that the question of truth arises in general. ... I call a Thought something for which the question of truth arises” (Frege 1967, 19-20). So, for Frege, Thoughts are the senses of statements. And the references of those statements ...? “It is the striving for truth that drives us to advance from the sense to the reference” (Frege 1970, 63). That reference turns out to be nothing other than the truth value, the truth or falsity of the statement. 
For “all true sentences have the same reference and so, on the other hand, do all false sentences” (Frege 1970, 65). Every declarative sentence is a proper name (Frege 1970, 63). It follows that, given that objects are what names refer to, truth values are objects – the True and the False. All true statements refer to the former; all false statements refer to the latter. No doubt, resorting to outré objects like the True and the False in order to account for statement reference has some odd consequences for one’s ontology. Frege’s ontology (which he thought was revealed by his logic) consists, in the first instance, of physical objects (public objects that can be perceived) and mental objects (ideas). But there were other things as well, things that were objective and public like physical objects, but abstract and not subjective in the way mental objects are. Such things include the truth values, senses, and Thoughts. They belong to a third ontological realm. “So the result seems to be: thoughts are neither things of the outer world nor ideas. A third realm must be recognized” (Frege 1970, 29). The forest seems to be getting darker. The sense of a statement is its mode of presenting its truth value (its reference). But there are cases in which a statement might not have its “customary” reference. There are so-called oblique cases, sentential contexts involving propositional attitudes and indirect quotation. Citing Leibniz’s Law (in effect: Expressions having the same reference can be substituted for one another in any statement without altering the statement’s truth value), Frege asks “What else but the truth value could be found, that belongs quite generally to every sentence ...?” (Frege 1970, 64). He considers the statements ‘Copernicus believed that the planetary orbits are circles’ and ‘Copernicus believed that the apparent motion of the sun is produced by the real motion of the Earth’. Here the two subordinate clauses have different truth values; yet each can be substituted for the other in the statements containing them – without altering the truth of the resulting statements. Consider a more obvious case. Emma thinks that Twain is humorous. Twain is Clemens. But Emma, unaware of this identity, doesn’t think that Clemens is humorous. It would be a logical mistake to draw from ‘Emma thinks that Twain is humorous’ and ‘Twain is Clemens’ the conclusion ‘Emma thinks that Clemens is humorous’. And this is so even though both subordinate clauses have the same reference – both are true, both refer to the same object (the True). Frege’s answer is that in such oblique cases “it is not permissible to replace one expression in the subordinate clause by another having the same customary reference, but only one having the same indirect reference; i.e. the same customary sense” (Frege 1970, 67). The customary reference of ‘the planetary orbits are circles’ is the True, the object one is directed to by the customary sense of the statement.
But when such a sentence occurs only as a part of an oblique statement (as in the Copernicus statements), it no longer has its customary reference. Now its reference is indirect; its indirect reference is just what was its customary sense. Thus, “the reference of a sentence is not always its truth value” (Frege 1970, 67), “i.e. not a truth value but a thought, a command, a request, a question” (Frege 1970, 68). Immediately after disposing of the problem of indirect reference for statements, Frege alludes to the problem of vacuous reference. Consider this: thus far, no one has walked on Mars, yet we can say ‘The astronaut who walked on Mars is a hero’. The statement is certainly not true. Nonetheless, it does have a sense (it’s not nonsense like ‘The astronaut who walked on Mars is a prime factor of 12’). It has no reference, no truth value. It is referentially vacuous. Frege’s explanation for this is brief and to the point. “If anything is asserted there is always an obvious presupposition that the simple or compound proper names


used have reference” (Frege 1970, 69; see also Frege 1964, 83). Here “simple proper names” are just ordinary names such as ‘Aristotle’, ‘Copernicus’, ‘Quine’; “compound proper names” are definite descriptions such as ‘the astronaut who walked on Mars’, ‘the mayor of London’, ‘the smallest prime number’. Since sentences are proper names, when they are asserted (i.e., when they are statements), “there is an obvious presupposition that they have reference.” When they contain vacuous names, names without any direct reference, such statements are themselves vacuous. Yet they must be accompanied by a presupposition (generally implicit) that they have a reference. Since that reference cannot be the True it must be the False. A version of this explanation of vacuousity in terms of faulty presupposition was taken up most famously a half century later by Strawson (Strawson 1950). We haven’t followed Frege very far down the semantic path he forged. Still, it hardly needs emphasizing at this point that term logicians are inclined to follow a slightly different path through the semantic forest. One can hardly deny that we use expressions, terms (words, phrases, sentences, etc.) to express. To express what? Let us say that, in general, we express concepts (Frege’s senses). We say ‘Some philosophers are fools’, thereby expressing, among other things, the concept of foolishness. This sense of ‘concept’ takes concepts as objective and available to public scrutiny. It contrasts with the subjective, private sense of concept (something like Frege’s ideas). Concepts, qua things expressed, may be simple or complex. The concept of foolish philosophers is more complex than the concept of foolishness. Concepts come about by being expressed. Though there are surely unexpressed ideas (we sometimes keep them to ourselves), there are no unexpressed concepts. Frege held that concepts were simply there, waiting to be expressed. But this can’t be right. 
Just as, for example, an uncommitted act is no act at all, an unexpressed concept is no concept at all. When I say that I am looking for a three-eyed, bald, green dog with a gold tooth, reading Plato, I express a (complex) concept for the first time in history (I assume). Thus, in saying what I say I bring into being that concept, an objective, public concept that you and I now share. So speaking is creative – but it only creates concepts. That’s why talk is so cheap. Some (far from all) concepts correspond to properties of objects. The concept of foolishness certainly does (who would deny the existence of fools?). My new complex concept of a Plato-reading dog, expressed above, does not correspond to any property of any object (though each of its constituent concepts does). Properties obtain of, characterize, hold of, are true of objects. There are no un-had properties. The concept of human perfection corresponds to no property since there are no humans having the property of perfection. When we use


a term corresponding to a property, our expression not only expresses the appropriate concept but signifies that corresponding property. We can add that our term denotes the objects having the property (if any) signified. A term like ‘foolish’ expresses the concept (has the sense) of foolishness, signifies the property of foolishness, and denotes fools. A used term such as ‘human perfection’ succeeds in expressing a concept (it has a sense), but it is significatively and denotatively vacuous. On the theory being advocated here, a used expression normally (i.e., when neither significatively nor denotatively vacuous) has at least three semantic features: it expresses a concept, it signifies a corresponding property, and it denotes any objects having that property. The distinction between the second and third of these features is the one made by Frege between sense and reference. It is the distinction between the first and second feature that is subtle and (perhaps) unfamiliar to many contemporary philosophers and logicians. For those of earlier eras, it is grounded on the distinction between saying and being. Concepts depend on being expressed (saying); properties depend on being had by objects (being). Expressing a concept is simply a matter of using a term properly, just a matter of saying. Signifying a property (and therefore denoting things) is a matter not only of expressing the right concept but of the way things are. It requires that certain things be some way. It is also important to note here that abstract substantives, such as ‘foolishness’, ‘wisdom’, ‘old’, are ambiguous. They are used sometimes for concepts (the concept normally expressed by use of the term ‘fool’ is foolishness). They are also used for the properties corresponding to those concepts (the property signified by use of the term ‘fool’ is foolishness). The concept of ϕ and the property of ϕ correspond with one another but are not identical. A further word about denotation is now in order. 
While contemporary logicians tend to take the universe of discourse required for the interpretation of statements to be constituted by a non-empty set of bare, un-propertied objects (objects having no characteristics), earlier, pre-Fregean logicians did not. Traditional logicians, including the 19th-century algebraists, saw universes of discourse as totalities of propertied objects, the constituents of which are determined by contexts of discourse. Let us say that a universe of discourse is a totality of objects. Some totalities are fixed. The addition or subtraction of any constituent results in a new, different totality. Sets are fixed totalities. Other totalities are variable. The addition or subtraction of any constituent need not result in a new, different totality. The cells constituting your body, the members of the U.N., the cutlery in my kitchen, and the players on a sports team all constitute variable totalities. The real world is such a totality. Any spatial or temporal part of the world is such a totality. Every possible or imagined world is a variable totality. Whenever we speak, we do so relative to some specifiable (though rarely specified) universe of discourse. And any totality (fixed or variable) can be a universe of discourse. When I say that no even number greater than 2 is prime, I speak relative to the infinite but fixed universe of natural numbers. When I tell you that some philosophers are fools, my universe is understood by me (and I assume by you) to be the real world or some salient part of it. When I say that some of the guests at my dinner party stole my cutlery, I say so relative to the totality of people who attended my party. When I complain that the notebook is not in the desk drawer, my universe of discourse is the small totality of items in that drawer. With this notion of a universe of discourse in mind, we immediately note that whenever a speaker uses a term, the denotation of that term (if it has any denotation at all) is restricted to things in the current domain of discourse. If I say that Pegasus had wings, my universe of discourse is the unreal world of Greek mythology. If I say that the 2013 winner of the Kentucky Derby was a Canadian horse, my universe of discourse is a part of the real world (viz., the entries in the 2013 Kentucky Derby). If I say that no horses have wings, my universe of discourse is the real world. When I say that some horses have wings, my universe of discourse is the union of all totalities (real or not) that have horses as constituents. A term, T, used relative to a universe of discourse, U, signifying the property P, denotes all the things in U that have P; it does not denote anything that is not a constituent of U. And this is so even if there are P-things in universes of discourse other than U. Now what makes all of this interesting and important for logic is that a statement is a kind of used term.
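The clause just stated (that a term T, used relative to a universe of discourse U and signifying the property P, denotes all and only the things in U that have P) can be made concrete in a short sketch. The Python below is a toy model of my own, not anything from the term-logic literature; the universes and the property are invented examples.

```python
# A toy model of term denotation relative to a universe of discourse.
# A "property" is modeled as a predicate (a Boolean function on objects);
# a term's denotation is computed only over the current universe.

def denotation(signified_property, universe):
    """All constituents of `universe` having the signified property."""
    return {x for x in universe if signified_property(x)}

# Two different universes of discourse (invented examples):
real_world = {"Secretariat", "Socrates", "Kripke"}
greek_myth = {"Pegasus", "Zeus", "Medusa"}

winged_horse = lambda x: x == "Pegasus"  # toy stand-in for a property

# The same term denotes differently relative to different universes:
assert denotation(winged_horse, greek_myth) == {"Pegasus"}
assert denotation(winged_horse, real_world) == set()  # denotatively vacuous here
```

The sketch makes concrete the last point of the paragraph above: a term used relative to U denotes nothing outside U, even if P-things are constituents of other universes of discourse.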
A statement (as we saw on the Term Functor Trail) is a sentence used to make a truth-claim (in contrast with questions, commands, prayers, etc.). Since statements are (complex) terms (sentential terms), they share the semantic features of terms in general. Statements express concepts, signify properties, and denote things having those properties. When a concept is expressed by a statement, we call it a proposition. We can think of a proposition as the sense of a statement. Propositions are the proper bearers of truth and falsity, and they are the proper objects of propositional attitudes. In normal (non-vacuous) cases, statements also signify and denote. This notion of proposition is neatly summarized in the following five theses (in Cappelen and Hawthorne 2009, 1):

T1: There are propositions and they instantiate the fundamental monadic properties of truth simpliciter and falsity simpliciter.
T2: The semantic values of declarative sentences relative to contexts of utterance are propositions.
T3: Propositions are, unsurprisingly, the objects of propositional attitudes, such as belief, hope, wish, doubt, etc.
T4: Propositions are the objects of illocutionary acts; they are, e.g., what we assert and deny.
T5: Propositions are the objects of agreement and disagreement.

Terms (if they signify at all) signify properties. Sentential terms (including statements) must (if they signify anything) signify properties. Totalities have two kinds of properties. Some of these they have as a whole. Others they have by virtue of their constituents. Consider the salad I ate for lunch today. It was healthy and fresh, and it was not expensive. The salad had these positive and negative properties as a whole. The salad also had properties just by virtue of what ingredients it had or lacked. It had lettuce, a tomato, peppers, and no onions. It was positively characterized by the presence in it of lettuce, peppers, and a tomato; it was negatively characterized by the absence of onions (and much else besides). The properties that a totality has by virtue of what is included in or excluded from its constituents are positive or negative constitutive properties. Statements are made relative to a specifiable universe of discourse. They express propositions and signify those constitutive properties that correspond to those propositions. They denote what has those properties – their universes of discourse. Any statement can be logically construed as a claim of existence or nonexistence. Thus, the statement ‘Some lions are cowardly’ can be taken as ‘There are some cowardly lions’, expressing the proposition that there are some cowardly lions. The statement ‘Every man is mortal’ can be construed as expressing the proposition that there are no immortal men. Suppose I say, relative to the universe of discourse consisting of the things in my desk drawer, ‘There is a roll of tape’.
In doing so I express the proposition that there is a roll of tape (in my desk drawer). In this context, if there is a roll of tape, then my proposition corresponds to the positive constitutive property my statement signifies (viz., the presence in the desk of a roll of tape, the presence of a roll of tape as a constituent of the universe of discourse), and it denotes what has that property (viz., the items in my desk drawer). When I say, ‘There is no notebook’ (if there is no notebook in my desk drawer), my statement signifies the negative constitutive property (viz., the absence of a notebook from the universe of discourse). When such correspondences hold between an expressed proposition and a signified constitutive property, the proposition (and consequently the statement) is true, and the property is a fact (one that makes it true by virtue of that correspondence). When no fact corresponds to a proposition (i.e., when the statement is significatively and denotatively vacuous – like the term ‘human perfection’), the proposition is false. Facts are the (constitutive) properties (of the relevant universe of discourse) to which true propositions (and thus the statements that express them) correspond. Critics of so-called correspondence theories of truth often point out that facts, whatever they are, are certainly not to be found in the world (or any other universe of discourse). And the critics are right in doing so. However, this does not mean that true propositions have nothing to correspond to. They do correspond to facts – and facts are not in the world; facts are constitutive properties of the world. Facts construed in this way are just the kind of objective, non-linguistic correlates of true statements that correspondence theories of truth require. (For much more on this particular version of a theory of truth by correspondence see especially Sommers 1996, Sommers and Englebretsen 2000, and Englebretsen 2006 and 2010a.) Our path thus far through the semantic forest has paralleled Frege’s path from time to time, but ultimately our path has diverged from his in some crucially important ways. An obvious but significant difference is that, while both semantic theories take statements to have extensions (objects of reference for Frege, objects of denotation for the term logician), the natures of these objects are radically different. Taking statements as (logically) names, Frege has to discern just what objects could be named by them. The positing of objective abstract objects – the True and the False – is his answer.
By contrast, the term logician adopts a semantic theory that accounts for the denotation of statements (a kind of complex term) consonant with the account of denotation for terms in general: they denote whatever has the property they signify. Statements denote the things that have the constitutive properties they signify – the universes of discourse relative to which they are made. And this seems much closer to our unschooled intuitions about what it is we are talking about when we say such things as ‘Some philosophers are fools’, ‘Not all the guests were drunk’, or ‘No horses have wings’. In each case we can be said to be talking about the real world (having foolish philosophers among its constituents), the party attendees (having some sober constituents), or the real equine world (lacking any winged constituents). Saying that, in so speaking, we are making reference to the truth values of what we say is a much farther stretch. What we say is indeed true or false, but it’s the way things are, the ways the relevant universes of discourse are constituted, that determines such truth values – not the values themselves. Fregean semantic theses also contribute to a general view of the nature of logic and its relation to ontology, one quite different from the view taken by term logicians. For one thing, the mathematical logic that resulted from Frege’s original insights was forged in the fires of anti-psychologism, resisting “the corrupting incursion of psychology into logic” (Frege 1964, 12). Frege had in mind primarily the 19th-century British logicians like Boole, the author of a book titled The Laws of Thought, who seemed to believe that logic is an account of how we actually reason rather than how we ought to reason. Moreover, these “psychological logicians,” assuming that logic deals with subjective ideas rather than objective Thoughts, were led into ontological confusion. “[F]or me there is a domain of what is objective, which is distinct from that of what is actual, whereas the psychological logicians without ado take what is not actual to be subjective” (Frege 1964, 16). For him, if logic is to be a fully general science, then it must be seen as a science of objective truths, independent both of subject matter and opinion. In a brief 1897 essay entitled “Logic,” Frege argued that a scientific law, such as the law of gravitation, must be something more than any subjective idea: rather, it is a single objective abstract truth (a Thought) that any number of persons might view (“grasp”) independently of any of their various subjective ideas of gravitation (Frege 1979, 145). Logic is fully objective, universal, and completely independent of us – of how we actually reason, of how we evolved the capacity to reason at all. As we proceed, we will eventually have an opportunity to look back at this particular curiosity and see it from a different angle. Suffice it to say for now that others have not shared Frege’s radical anti-psychologism (see, for example, Cooper 2001, Sommers 2008, Sommers 2008a, and Lehan-Streisel 2012). Logicism is the thesis that the proper foundation of mathematics (in particular, arithmetic) is formal logic.
We’ve already seen how Frege, in pursuit of defending this thesis, built a theory of logical syntax modeled on that of function-argument expressions in mathematical analysis. If arithmetic takes numerical structures, and thus ultimately numbers themselves, as the objects of study, it implicitly embraces an ontology of abstract, objective things. Thus Frege’s logic is a formal, logically correct, scientific language (a Begriffsschrift) adequate for the articulation of the most general truths (see, for example, Frege 1964, 90). Such truths are objective (i.e., they are Thoughts naming the True). Consequently, they must involve reference to individual things that are ontologically independent of one another and independent as well of what anyone thinks or says – things in the third realm, including Thoughts, truth values, senses, numbers, etc. This is Frege’s version of Platonism, an ontology of objective abstract things. No one could gainsay the fact that artificial, formal languages such as the language of arithmetic, Leibniz’s characteristica universalis, and Frege’s Begriffsschrift are markedly different from the natural languages you and I use every day. Our languages are truly natural; they evolve; they are historical; they are social constructs. No one designed any natural language. They are used to engage in an untold number of tasks (including reasoning). Formal languages are intentionally designed. They are designed for specific purposes. Artificial formal logical languages aim to model pure reason, not the reasoning we do ordinarily, but the reasoning we ought to do (especially when our goal is scientific truth). MPL, the modern version of Frege’s formal language, puts a great deal of its semantics and ontology into its syntax. A formal language begins with a chosen lexicon (vocabulary) of basic (unanalyzed) expressions, goes on to add a theory of syntax (a grammar), a set of formation rules that account for how items in the lexicon are to be combined to form more complex expressions (especially sentences), then adds a semantic theory designed to exhibit how expressions of the language are to be interpreted, and finally goes on to a proof theory, a set of rules of inference that are meant to account for logical derivations. There are disagreements today about key elements of proof theory and semantic theory. But there is little disagreement among mainstream logicians when it comes to the syntax of a language adequate to the demands of a formal logic. As we saw earlier, MPL begins formulating its syntax by distinguishing between complete expressions (Fregean names) and incomplete expressions (function expressions, including predicates and relational terms). This amounts to a distinction between singular and general terms. That distinction is, in effect, a distinction between two kinds of denotation: singular terms purport to denote just one thing, general terms can denote more than one thing. And denotation is a matter of semantics. Modern term logic, TFL, tries to keep its syntax semantically silent.
This starts with the lexicon. While MPL has a lexicon already consisting of two semantically distinct kinds of expressions, TFL makes do with a lexicon of undifferentiated expressions, terms. Denotation is left for semantics to deal with later. Whether a term used in a sentence denotes at all, and, if so, whether it denotes one or more things in the appropriate universe of discourse, is never reflected in TFL’s logical syntax. Even inferences involving wild quantity are material, independent of form alone. Semantics is not the only thing that intrudes on the syntax of MPL. As we saw, ontological considerations also make an appearance. Already, in specifying the lexicon for his envisioned formal logic, Frege had presumed an ontological distinction: concepts/objects. And the central ontological question – What is there? What exists? – was taken by his MPL heirs to be answered by an understanding of logical syntax. Hume, and then Kant, first alerted philosophers to the fact that existence is not a property that any individual thing has or lacks. Talk, of course, is cheap. We can say of any such thing that it exists (or doesn’t exist), but such ascriptions are misleading – ‘exists’ and ‘doesn’t exist’ are not “real predicates.” But then what? Can anything be said properly to exist or not? And, if so, what? Frege’s answer was that, since it doesn’t apply to objects, existence is a property that applies only to concepts. To say that horses exist and unicorns do not is, when such saying is logically corrected, to say that there are objects satisfying the horse concept but not the unicorn concept. The former concept, unlike the latter, is not empty. Existence is a property of non-empty concepts (see Frege 1953, §53). Bertrand Russell gave a similar response to the question that Hume and Kant bequeathed to logicians and philosophers. But where Frege took concepts as the proper bearers of existence/non-existence, Russell raised the ante from the ontological level (concepts) to the logical. To say that horses exist is to say that the “propositional function” (in effect, the predicate expression that Frege had taken to refer to the concept) ‘x is a horse’ is “sometimes true” (depending on which determinate expression (name) replaces the variable) (see Russell 1918, Lecture V). Eventually, Quine cut to the chase, simplifying just how logical form can exhibit our ontological commitment (what we take to exist): “To be is to be the value of a variable” (Quine 1953, 15). Only a few adherents to MPL have disagreed. Nevertheless, modern term logicians have taken a different path through the semantic forest – especially here. As Hume and Kant taught, existence is not properly a property of concepts, propositional functions, etc. It is, as we have seen, a property of relevant universes of discourse. It is a constitutive property.
To say, relative to a specifiable universe of discourse, that horses exist is simply to say of that universe that there are horses among its constituents. Furthermore, this is not in any way reflected in the logical forms of such statements. For the term logician, whatever ontological insights might be forthcoming, they do not originate simply at the level of logical syntax (much less at the stage of lexical specification).
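The rival treatments of ‘Horses exist’ canvassed above can be set side by side schematically. The notation is my own summary gloss, not a formalism used by any of the authors discussed:

```latex
\begin{align*}
\text{Frege:}   &\quad \text{the concept \textit{horse} is non-empty}\\
\text{Russell:} &\quad \text{the propositional function `$x$ is a horse' is sometimes true}\\
\text{Quine:}   &\quad \exists x\, \mathrm{Horse}(x) \quad \text{(a bound variable takes a horse as value)}\\
\text{TFL:}     &\quad \text{the universe of discourse } U \text{ has the constitutive property}\\
                &\quad \text{of having horses among its constituents}
\end{align*}
```

The first three locate existence at the level of concepts, propositional functions, or bound variables; only the last treats it as a property of the universe of discourse itself.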


6.2 The Curious Case of the Singular Term “Is there any point to which you would wish to draw my attention?” “To the curious incident of the dog in the night-time.” “The dog did nothing in the night-time.” “That was the curious incident,” remarked Sherlock Holmes. A.C. Doyle

The Punch Line that we saw at the Four Corners recognizes that singular terms are special. But really, ... what’s so special about them? Singular terms are taken to be expressions used to denote just one thing (as opposed to general terms used to denote more than one thing). Normally these are proper names, singular pronouns, definite descriptions and the like. Logicians have certainly thought for a very long time now that singulars are logically special. In particular, they have seen such expressions as barred from certain roles reserved only for general terms. This does not necessarily amount to a denigration of the logical status of singulars. After all, logicians and philosophers from Plato to Frege have tended to treat sentences with singular subjects as paradigmatic for basic logical simplicity (‘Theatetus sits’, ‘2 is prime’). So, at least for many logicians, even though sentences with singular subjects are essential to the simplest model of logical form, one that is then supplemented by appendages meant to accommodate more logically complex sentences, including those with general subject terms, there are logically important things one can do with general terms that cannot be done at all with singular terms. We shall look at these things in a moment. For post-Fregean logicians, singular expressions are both semantically (reflected already in the division found in their lexicon) and syntactically special. Remember that for Frege singular terms are syntactically complete (and thus impredicative); general terms are incomplete and predicative. Semantically, a singular term is satisfied by, true of, just one object; general terms can satisfy many objects. The absolute distinction between complete and incomplete expressions guarantees the absolute distinction between singular and general expressions. For MPL logicians, singular terms do a lot of heavy lifting. 
In Quine’s well-known words, they “carry the burden of reference” when placed in the logical uniform of a bound variable. At the same time, they are barred from some of the lighter (but logically flashier) work carried out by their non-singular cousins. The idea that genuine reference ultimately rests on the shoulders of singular terms (viz., individual bound variables) leads to the assumption that all reference is logically singular. Sommers has called this the “Fregean Dogma”
(Sommers 1967). It is the source of some of the convoluted turns found in today’s standard logic. So, now, what is it that cannot be done with singulars that makes them special according to the usual stories logicians tell today? Here is what they say we can do with general terms that we can’t do with singulars: negate, conjoin, disjoin, and – most importantly – predicate. They also tend to go on to hold that the effect of negating a general term in a sentence must logically amount to the negating of the entire sentence (thus, all logical negation is sentential). They then add that conjoining or disjoining pairs of general terms in a sentence logically amounts to conjoining or disjoining a pair of sentences (thus, all conjunction or disjunction is sentential). But they do sometimes recognize that, at least when it comes to our natural, everyday language, we do negate, conjoin, etc. general terms. It’s just that these surface features of natural language mask a deeper, more genuinely logical language that requires such operations to be sentential. This restriction might look bad for the logical status of general terms, but things are far worse for singular terms. For these logicians have difficulty even recognizing what most of us users of natural language do all the time: negate, conjoin, disjoin, and even predicate singular terms. As we made our ascent up Aristotle’s Peak (Chapter 3) we encountered two distinct theories meant to account for the “problem of propositional unity” (What makes a sentence a single linguistic unit rather than just a string of its component words?). Recall that the binary theory, favored by Plato, the early Aristotle, and Frege, holds that a sentence consists of a pair of expressions that are fit to play different but complementary roles. In the modern version, general terms are incomplete, containing gaps; names (singular expressions) are perfectly suited to fill those gaps. 
A sentence is a linguistic unit by virtue of a gapped expression having its gaps filled by gapless expressions. The ternary theory makes no such distinction among expressions. According to this theory, a sentence consists of a pair of terms bound together to form a single linguistic unit by a binding functor – a logical copula. While the binary theory is immaculate, the ternary theory requires copulation to generate complex expressions such as sentences. The modern binary theory leads to the Fregean Dogma. Since logically simple sentences must consist of a predicate expression completed by one or more (as required) singular expressions (i.e., a name, pronoun, variable), such sentences must be singular. ‘Socrates is wise’ is a singular sentence with ‘Socrates’ referring to Socrates. ‘The present U.S. president is a Democrat’ is a singular sentence that is logically regimented as ‘Some thing, x, is such that x is a present U.S. president and nothing that is not x is a present U.S. president and x is a Democrat’, where each of the logically simple constituent sub-sentences
has a singular term, ‘x’, referring to an object in the universe of discourse. ‘Every logician is rational’ is logically regimented as ‘Every thing, x, is such that if x is a logician then x is rational’, where both of the sub-sentences have ‘x’ referring to an object in the universe of discourse. The Fregean Dogma rests on this binary theory. And on the Fregean Dogma rests a further thesis, one that is used to justify the various logical restrictions placed on the things singular terms are allowed to do: the Asymmetry Thesis. The Asymmetry Thesis holds that in a logically simple sentence consisting of a singular term and a general term, a subject and a predicate, the two terms are asymmetric in that each plays a distinct role in the sentence, such that the former cannot (unlike the latter) be negated, conjoined, etc. (For the defense of asymmetry see especially Geach 1962, 1969, 1972, 1975; Strawson 1952, 1957, 1959, 1961, 1970, 1974; Heintz 1984. For critiques see Grimm 1966, Hale 1979, Nemirow 1979, Linsky and King-Farlow 1984, Zemach 1981, Englebretsen 1985a and 1987b.) Early mathematical logicians like Frege and Russell held to this thesis. Its distinction between singular and general terms is the heir of Frege’s object/concept distinction, and it harkens back to the traditional metaphysical distinction between particulars and universals. In 1925 Frank Ramsey argued that the asymmetry between subjects and predicates was an illusion common to both traditional and modern logical theories.

Both the disputed theories make an important assumption which, to my mind, has only to be questioned to be doubted. They assume a fundamental antithesis between subject and predicate, that if a proposition consists of two terms copulated, these two terms must be functioning in different ways, one as subject, the other as predicate. (Ramsey 1925, 404)

The two different sentences ‘Socrates is wise’ and ‘Wisdom is a characteristic of Socrates’ both “assert the same fact and express the same proposition ... they have the same meaning. ... Here there is no essential distinction between the subject of a proposition and its predicate, and no fundamental classification of objects can be based on such a distinction” (Ramsey 1925, 404). In spite of Ramsey’s argument, the asymmetry thesis has continued to dominate the thinking of most mainline logicians. It’s no mystery, of course, why logicians deny the predicability of singular terms. From the point of view of modern mathematical logic emanating from Frege, only general terms are predicational in nature. This is shown by their incompleteness. The binary theory of logical syntax guarantees that general terms alone can be predicates. According to MPL, an atomic sentence is logically simple in the sense that it involves no formative expression (especially no copula). Formative expressions always operate on entire sentences. Consequently, the only kinds of expressions that can be logically negated or logically compounded (conjoined, etc.) are sentential expressions. Logical negation, conjunction, disjunction, etc. are the results of applying such operators to the main function expression in the sentence or else to the entire sentence. The sentence ‘Mars is red’ could be negated by negating the main function expression (viz., the predicate) to yield ‘Mars is not red’. But this is simply equivalent to the explicitly sentential negation ‘It is not the case that Mars is red’. ‘Everything is red’ can likewise be negated by negating the main function expression, which in this case is not ‘... is red’ but rather the universal quantifier. Again, the result of such negation is sentential: ‘It is not the case that everything is red’. So, no matter how one negates a sentence, it is never the result of negating any singular term in that sentence. And similar reasoning applies to the ways pairs of sentences can be logically compounded. No one has offered more striking and memorable arguments for such prohibitions against the logical negation or compounding of singular terms than P.F. Strawson. In mathematics and formal logic one often finds proofs of valid arguments that are indirect. In such a proof, one assumes the premises, assumes as well the negation of the conclusion of the argument, and then applies the available rules of proof to derive something that cannot be true – a contradiction or absurdity. Consequently, such proofs are said to be by reductio ad absurdum (reduction to absurdity). That is how Strawson aimed to show that singular terms cannot be negated or compounded. Let’s concentrate for now on his attack on singular term negation. His strategy is to assume that some singular term is logically negated, then derive a logical absurdity from such an assumption. Why does Strawson want to deny that singular terms can be negated?
He claims that general terms “come in incompatibility groups” (or “incompatibility ranges”) but singular terms do not (Strawson 1970, 102-103; Strawson 1974, 19). What he means by an incompatibility group is just the set of mutually incompatible terms, the set of terms that are all non-logically contrary to one another (e.g., ‘red’, ‘blue’, ‘green’, ‘white’, ‘yellow’ ...). Given a general term belonging to such a set, one can negate it to form its logical contrary. Thus, as we’ve already seen, the logical contrary of ‘red’ is ‘non-red’, which is equivalent to the disjunction of all the terms non-logically contrary to ‘red’ (i.e., ‘blue’, ‘green’, etc.). This incompatibility between a general term and its contraries is inherited by the incompatibility between a sentence predicating that general term and any sentence predicating one of that term’s contraries. This is so because, according to MPL, negation of a main function expression (in this case the predicate) amounts to the negation of the entire sentence (Strawson 1970, 104-105). By contrast, singular terms have no contraries (logical or otherwise), for no term is
in any way incompatible with a singular term. This asymmetry “seem[s] to be obvious and (nearly) as fundamental as anything in philosophy can be” (Strawson 1970, 102). It is at this point that the reductio ad absurdum proof makes its stand. Suppose that singular terms can be negated. For example, let the singular term ‘Kripke’ have as its negation (its logical contrary) ‘non-Kripke’, where any term and its negation are logically incompatible. Assume also that the Fregean Dogma (singular terms refer to single objects) is correct (something the majority of modern logicians assume – that’s why it’s a dogma). Assume as well that the negation of a singular term is another singular term. From these assumptions it follows that ‘non-Kripke’ must refer to an individual, indeed an individual whose properties are all incompatible with the properties of Kripke. Thus, if Kripke is a logician, then non-Kripke must be a non-logician. If Kripke is right-handed, then non-Kripke is left-handed. If Kripke is an English-speaker, then non-Kripke is a non-English-speaker. If Kripke is a Nebraskan, then non-Kripke is a non-Nebraskan. Now if non-Kripke lacks all the properties that Kripke has, it is also the case that non-Kripke has all the properties that Kripke lacks. If Kripke isn’t French, then non-Kripke is French. If Kripke is married, then non-Kripke is single. So far so good. But consider this. If Kripke is not blue, then non-Kripke is blue. If Kripke isn’t green, then non-Kripke is green. If Kripke isn’t red, then non-Kripke is. If Kripke is not six feet tall, then non-Kripke is. If Kripke is not seven feet tall, then non-Kripke is. If Kripke is not eight feet tall, then non-Kripke is. Just try to imagine non-Kripke. Whatever non-Kripke is, non-Kripke is blue and green and red, six feet, seven feet, and eight feet tall. It’s nonsense, absurd. There could not be any object that has such incompatible properties. There could not be a non-Kripke.
There could not be any thing to which ‘non-Kripke’ could refer. The negation of a singular term is impossible because it is absurd; it is absurd because it would result in a new singular term that would refer to an absurdly impossible object. Or maybe not. Strawsonian arguments against the conjunction and disjunction of singular terms are also indirect, meant to reveal absurdities. Suppose a pair of singular terms can be conjoined. For example, let ‘Wittgenstein and Kripke’ be the result of conjoining the two singular terms ‘Wittgenstein’ and ‘Kripke’. Assume the Fregean Dogma. Assume also that the conjunction of a pair of singular terms is itself another singular term. Finally, assume that given any object and any property, that object either has or lacks that property. From this it follows that ‘Wittgenstein and Kripke’ must refer to an individual, one that has all and only the properties that both Wittgenstein and Kripke have (see Strawson 1970, 111n). If both Wittgenstein and Kripke are philosophers, then whatever is referred to by ‘Wittgenstein and Kripke’ must be a philosopher. If Wittgenstein is Austrian but Kripke is American, then whatever is referred to by ‘Wittgenstein and Kripke’ must be neither Austrian nor American. So far, so good. But if Wittgenstein is unmarried and Kripke is married, then whatever is referred to by ‘Wittgenstein and Kripke’ is neither married nor unmarried. But this is contrary to the set of assumptions that led us here. An object that is neither married nor unmarried is (so the argument goes) impossible. In a similar manner, the disjunction of a pair of singular terms is ruled absurd. Suppose that a pair of singular terms can be disjoined. For example, let ‘Wittgenstein or Kripke’ be the result of disjoining the two singular terms ‘Wittgenstein’ and ‘Kripke’. Assume the Fregean Dogma. Assume also that the disjunction of a pair of singular terms is itself another singular term. Finally, assume that given any object and any property, that object either has or lacks that property. From this it follows that ‘Wittgenstein or Kripke’ must refer to an individual, one that has all and only the properties that either Wittgenstein or Kripke have. But if Wittgenstein is Austrian and Kripke is not, then whatever is referred to by ‘Wittgenstein or Kripke’ must be both Austrian and not Austrian. But an object that both has and lacks a given property is impossible. So, the conjunction of a pair of singular terms is impossible because it is absurd; it is absurd because it would result in a new singular term that would refer to an absurdly impossible object. Likewise, the disjunction of a pair of singular terms is impossible because it is absurd; it is absurd because it would result in a new singular term that would refer to an absurdly impossible object. “[T]here are no such things as compound (i.e., conjoined or disjoined) subjects” (Strawson 1970, 100; also see Strawson 1974, 4-9).
Consequently there is nothing any conjoined or disjoined pair of singular terms could possibly refer to. Or, again, maybe not. Even a casual inspection of the ways we ordinarily speak in everyday discourse reveals that the negation of singular terms is common, natural, and simple. What is surprising is that so many brilliant logicians and philosophers have believed that it is impossible. Of course, they only do this during their working hours. The rest of the time they speak just like us. In contrast to MPL, the term logician starts with an undivided lexicon of non-formative terms – all kinds of terms. Semantically, singular terms are satisfied by single objects; general terms can be satisfied by many objects. But that semantic difference is not reflected in a syntactic difference. Any term – singular, general, count noun, mass noun, abstract, concrete, etc. – can be used in a more complex expression (e.g., a statement) as a quantified term (i.e., a logical subject term or a logical object term of a relational) or as a qualified term (a predicate term or
a relational). Whenever a quantified term is singular, the term logician, on the basis of its semantic singularity, treats the logical quantity of that term as wild. If we happen to know that a subject or object term is singular, then, on the basis of that knowledge alone, we can arbitrarily assign it a quantity as we see fit. That’s the only thing special about singular terms, and nothing like syntactical completeness is required. The semantic theory that accompanies TFL assigns the semantic relation of denotation to any kind of term. Denotation is always relative to a specifiable universe of discourse. A used term denotes whatever is a constituent of that universe as long as it has the property signified by that term. A sentential term, used relative to a specifiable universe of discourse, denotes that universe as long as it has the constitutive property signified by the sentence. When the sentence is used to make a truth-claim (i.e., when it is a statement) and succeeds in denoting the relevant universe, it is true; when it fails to so denote, it is false. Reference is a different story. Only quantified terms refer. The reference of a quantified term is the result of a joint effort by both the term and its quantifier. Each member of the team plays its part. The contribution of the term is its denotation; the contribution of the quantifier is its restriction of that denotation. A universally quantified term refers to the whole of the denotation of the term itself; a particularly quantified term refers to a part (but perhaps the whole) of the denotation of the term itself. Consider the sentences ‘All logicians are rational’ and ‘Some logicians are mathematicians’, each used relative to a universe of discourse that has among its constituents all the logicians who have ever lived in the real world. In both sentences, the term ‘logicians’ denotes all the logicians who have ever lived in the real world.
In the first sentence, the quantified term ‘all logicians’ refers to just what the term itself denotes – all the logicians who have ever lived in the real world. In the second sentence, the quantified term ‘some logicians’ refers to an unspecified part of that denotation. Note that when a term is not quantified (i.e., when it is qualified), it makes no reference – but it nonetheless retains its denotation. This distinction between what a term per se denotes and what it refers to is useful when it comes to understanding the traditional logician’s distinction between distributed and undistributed terms. A necessary condition for a quantified term having as its reference the entire denotation of the term itself is that it is distributed; a necessary condition for a quantified term having as its reference an unspecified part of the denotation of the term itself is that it is undistributed. Consider now the sentence ‘Socrates is wise’. The term ‘Socrates’ is not overtly quantified. But, by any logical theory, that term as used in this sentence, relative to the real world of ancient Athens, refers to Socrates. ‘Socrates’, being a singular term, denotes just
one thing in that universe – Socrates. Here the reference is the whole of the denotation. Thus ‘Socrates’ is a distributed term here. But, as well, ‘Socrates’, being a singular term, refers to a part of its denotation (which has only one part), viz., Socrates. Thus ‘Socrates’ is an undistributed term here. In ‘Plato learned from Socrates’, the term ‘Socrates’ is now the object term of the relational ‘learned from’ and is once more only tacitly quantified. But it behaves logically just as it does when it is a subject term. That’s the way it is (logically) with singular subject terms and object terms of a relational. They are arbitrarily distributed or undistributed, and this is reflected in their (implicit) wild quantity. In effect: ‘every Socrates’ = ‘some Socrates’ = ‘Socrates’ (see Englebretsen 1980a). By the lights of TFL, any (non-formative) term can play any logical role in a logically complex term (particularly in a sentential term). This means that, contrary to what the asymmetrists such as Strawson and Geach say, even singular terms can be qualified, i.e., play the role of logical predicate. Sentences such as ‘The teacher of Plato is Socrates’, ‘Mark Twain is Samuel Clemens’, ‘The Morning Star is the Evening Star’ and ‘Quine is not Aquinas’ have qualified singular terms as their predicates. Modern Predicate Logic treats them all as being “identity statements” with the copula (‘is’ or ‘is not’) a special relational expression of identity or non-identity. For MPL, expressions such as ‘is’, ‘is not’, etc., are always ambiguous. Among the roles they can play are predication (‘Socrates is wise’) and identity (‘Twain is Clemens’). These are formulated as: ‘Ws’ and ‘t = c’. The so-called ‘is’ of identity can’t be taken as the ‘is’ of predication because that would allow singular terms to be predicate terms, and Frege taught that only general terms (function expressions) are predicational, for “a proper name can never be a predicative expression” (Frege 1970a, 50).
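The interplay of denotation and reference described above can be put into a small computational toy. The sketch below is illustrative only, not TFL’s official apparatus: the universe, the property assignments, and the representation of a denotation’s “parts” as subsets are all invented for the example.

```python
from itertools import combinations

# Toy model of TFL-style denotation and reference (illustrative only).
# A universe of discourse is a set; a term's denotation is the subset of
# the universe satisfying it; a quantifier restricts that denotation.

universe = {"Aristotle", "Frege", "Quine", "Socrates"}

DENOTATIONS = {
    "logician": {"Aristotle", "Frege", "Quine"},
    "Socrates": {"Socrates"},  # singular term: exactly one satisfier
}

def denotation(term):
    """A term denotes whatever in the universe has the property it signifies."""
    return DENOTATIONS[term] & universe

def reference(quantity, term):
    """'every T' refers to the whole denotation of T; 'some T' refers to
    some part of it (modeled here as the list of admissible nonempty parts)."""
    d = sorted(denotation(term))
    if quantity == "every":
        return [set(d)]
    return [set(c) for r in range(1, len(d) + 1) for c in combinations(d, r)]

# Wild quantity: a singular term's denotation has only one nonempty part,
# so 'every Socrates' and 'some Socrates' determine the same reference.
print(reference("every", "Socrates") == reference("some", "Socrates"))  # True
```

The final check mirrors the wild-quantity point in the text: because a singular term’s denotation has exactly one part, universal and particular quantification coincide for it.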
So, given any sentence of the form ‘S is X’, whether the ‘is’ is to be read, on the one hand, as a qualifier, copula, sign of predication or, on the other hand, as a sign of identity depends entirely on the denotation of the term ‘X’. If ‘X’ is a general term, then ‘is’ is predicational (and, in fact, redundant once the binary theory of logical syntax is adopted – note how it has no place in a formula like ‘Ws’). If, however, ‘X’ is a singular term, then, far from being a redundant grammatical flourish, it becomes the main function expression in the sentence. It is the sign of a special relation – identity. We saw, while walking the Term Functor Trail, how the entire apparatus of identity theory, with its special relational expression, its special rules of proof, and its special restrictions on the semantic modeling of statements, is quite unnecessary and foreign to the logic of our natural, ordinary language.


6.3 Not, Else, But, Other Than, Except

Entia non sunt subtrahenda praeter necessitatem. [Entities are not to be subtracted beyond necessity.]
A.N. Prior

The Asymmetry Thesis denies that singular terms can be sensibly negated. And philosophers like Strawson have provided arguments that seem to strongly support that contention. How can a term logic account for negative singular terms without leading to the conclusion that such terms must refer to inconsistent, absurd, impossible objects? Of course, there is no question about the possibility of negating general terms and saying what the denotation (and, if quantified, what the reference) of such terms would be. Consider the sentence ‘All non-citizens are barred from office’. Here the negated general term ‘non-citizen’ denotes everyone (in the relevant universe of discourse) who is not a citizen; it denotes everyone else, everyone but citizens, everyone other than citizens, everyone except citizens. The negation of a general term (whether or not that amounts, as per MPL, to the negation of the sentence using that term) is itself a general term. The official line today is that, just as the negation of a general term is itself a general term, the negation of a singular term is itself a singular term (one that would have to refer to an impossible object). But the official line is mistaken. The fact is that the negation of a singular term is not itself a singular term – it is a general term. Consider the sentence ‘Whoever shot the sheriff is not Tom’. Who in the world is ‘not-Tom’? We saw how Strawson’s arguments claimed to establish that, since ‘not-Tom’ is a singular term purporting to refer to an individual, and since such an individual would have to have all the properties Tom lacks and lack all the properties Tom has (all of which is impossible), there can be no reference for ‘not-Tom’ – or for any other negated singular term. Nonetheless, ‘not-Tom’ is not a singular term and does not purport to refer to an individual; it denotes not just one thing – it denotes many things. It denotes everyone in the relevant universe of discourse other than Tom.
To be not-Tom is just to be someone else. It is to be anyone in the universe of discourse who does not share all and only Tom’s properties, anyone who has at least one property that Tom lacks or lacks at least one property that Tom has. Even Tom’s twin Tim, though he shares a very large number of properties with Tom, doesn’t share them all – he is not Tom, he is one of the many (in the universe of discourse) who are not-Tom. We users of ordinary language negate singular terms all the time. We may not say ‘not-Tom’ the way logicians advise, but we do say such things as ‘Someone else brought her, but Tom took her home’, ‘Everyone but Tom came to the party’, ‘Sue came with someone other than Tim’s twin’, ‘She kissed all the men
except the host’, ‘She admired everyone but him’. Moreover, our ordinary linguistic practices are grounded in our natural semantics. According to this, a general term like ‘man’, when used in a sentence, relative to some specifiable universe of discourse, denotes all the men in that universe. ‘Non-man’ denotes everything in that universe that is not a man. ‘Solar planet’ denotes the planets orbiting the Sun. In ‘Every planet but Earth is uninhabited’ the expression ‘planet but Earth’ denotes all the solar planets but Earth, that are not Earth, other than Earth, except Earth. ‘Tom’ denotes Tom. ‘Not-Tom’ denotes everyone in the relevant universe of discourse that is not Tom. We don’t ordinarily use expressions like ‘not-Tom’, but we certainly do use expressions whose logical form is ‘not-Tom’, expressions such as ‘other than Tom’, ‘except Tom’, and others involving ‘but’, ‘else’. Logically negated singular terms are simply specially designed general terms, used to denote what is in the relevant universe of discourse minus the individual denoted by the un-negated singular term. And where is the absurdity in that? (For more on how to negate singular terms see: D.S. Clark 1983, Englebretsen 1985b, 1985c, Zemach 1981, 1985.) The Asymmetry Thesis holds that singular terms cannot be logically predicated, negated, conjoined or disjoined. Term Functor Logic rejects the thesis. Singular terms can be predicated and they can be negated. We’ve seen how this happens in our everyday uses of natural language and how that can be reflected in a logic of terms. Our next stop will take us to where we can get a good look at how TFL allows the compounding of pairs of singular terms to form new conjunctive and disjunctive terms.
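The everyday semantics just described is almost trivially easy to model. Here is a minimal sketch, with an invented five-person universe of discourse (the names are illustrative, not anything from the text beyond Tom and his twin Tim):

```python
# Negating a singular term yields a GENERAL term: 'not-Tom' denotes
# everything in the universe of discourse other than Tom.

universe = {"Tom", "Tim", "Sue", "Dick", "Harry"}

def negate_singular(name):
    """Denotation of 'not-<name>' relative to the universe: everyone else."""
    return universe - {name}

not_tom = negate_singular("Tom")
print(sorted(not_tom))        # ['Dick', 'Harry', 'Sue', 'Tim']
print("Tim" in not_tom)       # True: even Tom's twin is not-Tom
print("Tom" in not_tom)       # False
```

No impossible object anywhere in sight: the negated singular term simply denotes the universe minus one individual, exactly as ‘else’, ‘but’, ‘other than’, and ‘except’ do in ordinary speech.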

7 Strawberry Fields

[A]n initial ‘baptism’ takes place. Here the object may be named by ostension, or the reference of the name may be fixed by a description. When the name is ‘passed from link to link’, the receiver of the name must, I think, intend when he learns it to use it with the same reference as the man from whom he has learned it.
Kripke

... names are rigid designators.
Kripke

Several excuses are always less convincing than one.
Aldous Huxley

7.1 Naked Denotation

The best thing is to look natural, but it takes makeup to look natural.
Calvin Klein

The issue of compounding singular terms was a topic of interest among some logicians and philosophers of logic and language during the ’60s and ’70s (see especially Gleitman 1965, Strawson 1970 and 1974, and Massey 1976). But there has been renewed and increased attention paid to the topic during the early 21st century. We are going to take a look at the more recent discussion later, but first I want to go back to say something more substantial about those earlier (dare I say, age of Aquarius) accounts that relied so heavily on the Fregean Dogma and the Asymmetry Thesis. Among other things we have seen thus far is the fact that the standard mathematical logician holds (a) that general terms can be grammatically compounded, (b) such compounding is logically the compounding of sentences, (c) singular terms might be grammatically compounded, (d) such compounding is logically absurd. Nonetheless, there are inferences that cannot be adequately analyzed by the standard theory, but they are not beyond the analytic range of our term functor logic. Let’s begin with inferences involving statements whose grammatical subjects or objects are conjoined singular terms (so-called phrasal conjunctions). Examples are:

1. Socrates and Plato are Greek
2. Socrates and Plato weigh 150 kilos
3. Aristotle admires Socrates and Plato
4. The piano was carried upstairs by Al and Betty
5. Plato and his teacher are Greek
6. The teacher of Plato and the teacher of Aristotle are Greek

The standard view analyzes these as conjunctions of pairs of sub-sentences. This works pretty well for some sentences, such as 1, 3, 5 and 6. Thus:

1.1 Socrates is Greek and Plato is Greek
3.1 Aristotle admires Socrates and Aristotle admires Plato
5.1 Plato is Greek and his teacher is Greek
6.1 The teacher of Plato is Greek and the teacher of Aristotle is Greek

There are perfectly good valid inferences from 1 to 1.1, from 3 to 3.1, from 5 to 5.1 and from 6 to 6.1. But we would generally be reluctant to make inferences from 2 to 2.1 and from 4 to 4.1.

2.1 Socrates weighs 150 kilos and Plato weighs 150 kilos
4.1 The piano was carried upstairs by Al and the piano was carried upstairs by Betty

What is required for an adequate analysis of these kinds of inferences is a proper account of the logical forms of sentences containing phrasal conjunctions. That the phrasal conjunctions in 2 and 4 are indissoluble ought to be reflected in their logical forms. Simply put, the contemporary theory of logical syntax embedded in MPL is unable to reveal a difference in logical form between the phrasal conjunction in 1 and the phrasal conjunction in 2 (see Gleitman 1965). Similar concerns plague phrasal disjunctions as well. So, what should the logical form of sentences with indissoluble subject or object terms consisting of compounds of singular terms be? Generally, if ‘T’ is a term with a, b, c ... as its denotation (relative to a specified universe of discourse), then there is (or we can always arbitrarily create) another term ‘⌊a, b, c…⌋’ with the same denotation. The converse need not hold. When ‘T’ is a general term there is little difficulty (and usually no need) to use such a replacement term. Suppose our universe is the actual world during the past few millennia. We have a good term, ‘logician’, to use when we want to say such things as ‘Every logician is wise’, ‘Aristotle is a logician’, ‘Heidegger admires no logician’, etc. Nevertheless, we could coin a new term that has exactly the same denotation as ‘logician’ – ‘⌊Chrysippus, Aristotle, Abelard, Leibniz, Boole, Frege, Quine…⌋’. Though the two terms have the same denotation, the denotation of the second is explicit, bare, naked, for all to see; the denotation of
‘logician’ is implicit, clothed. Still, one would be reluctant to replace general terms with implicit denotation in favor of terms of explicit denotation. Most of the general terms we use have such large denotations that such replacement would be highly impractical. However, if the denotation happens to be very small, then there might well be a reason to use a term that shows its denotation explicitly. ‘Obama’s daughters’ is a general term that denotes (in the actual world) just two individuals. Often a term that explicitly denotes Sasha and Malia would do just as well as ‘Obama’s daughters’. One can easily replace ‘⌊Russell, Whitehead⌋ were British’ with ‘The authors of Principia Mathematica were British’, or ‘⌊3, 5, 7⌋ are odd’ with ‘Every prime number between 2 and 9 is odd’. Yet there is no comparable substitute for ‘⌊Kant, Frege⌋ were Prussian’. Sometimes we want to refer to things for which we have no implicitly denoting term in natural language. It’s as if we have the denotation but not the term. Of course, a term of explicit denotation is always theoretically available. We have to make do with what we have. There are some rare examples of newly coined general terms of implicit denotation aimed, in part, to avoid a corresponding term of explicit denotation (‘Bourbaki’ is a good example). English sentences whose logical forms can be rendered as ‘Every ⌊a, b…⌋ is F’ consist of a subject and a predicate. The latter is a qualified term; the former is a quantified term, and that term just happens to be a general term compounded from singular terms. Just like any universally quantified term, when such a term of explicit denotation is universally quantified, it refers to its entire denotation. Thus, ‘Every ⌊a, b…⌋’ refers distributively, i.e., to a and b and ... .
The logical forms, then, of 1, 3, 5 and 6 are:

1.2 Every ⌊Socrates, Plato⌋ is Greek
3.2 Aristotle admires every ⌊Socrates, Plato⌋
5.2 Every ⌊Plato, his teacher⌋ is Greek
6.2 Every ⌊teacher of Plato, teacher of Aristotle⌋ is Greek

I promise to get to the logical forms of 2 and 4 in the second part of this chapter. The particular quantification of a term of explicit denotation is a phrasal disjunction. Just like any particularly quantified term, when such a term of explicit denotation is particularly quantified, it refers to some unspecified part of its denotation. Thus, ‘Some ⌊a, b…⌋’ refers undistributively, i.e., to a or b or ... . The sentence ‘At least one author of Principia Mathematica was a lord’ could be paraphrased as ‘Some ⌊Russell, Whitehead⌋ was a lord’, which is rendered more colloquially as ‘Russell or Whitehead was a lord’. We can think of ‘or’ as a mark
of undistributed reference here – just as ‘and’ is a mark of distribution when 1.2 above is rendered colloquially as ‘Socrates and Plato are Greek’. Before going on, it should be pointed out that since, for any singular term ‘s’, ‘s’ = ‘⌊s⌋’, we have another argument for the wild quantity of singulars when quantified, i.e., ‘every s’ = ‘some s’, since ‘every ⌊s⌋’ = ‘s and ... and s’ and ‘some ⌊s⌋’ = ‘s or ... or s’, and ‘s and ... and s’ = ‘s or ... or s’ = ‘s’. Here we have let “‘x’ = ‘y’” abbreviate “the terms ‘x’ and ‘y’ are fully synonymous – share the same expressive, significative and denotative relations, and can, therefore, be intersubstituted for one another wherever they are used.” Treating phrasal conjunctions like those in 1, 3, 5 and 6 as universally quantified terms of the form ‘⌊a, b…⌋’ allows us to incorporate them into the traditional theory of referring expressions and permits an easy account of the validity of such inferences as the one from 1 to 1.3 (‘Socrates is Greek’). Such inferences are enthymematic. Massey justifiably cautioned against what he termed “the enthymematic ploy” (Massey 1976). The ploy consists of taking the corresponding conditional of an inference (that is taken to be intuitively but not fully formally valid) as a suppressed premise. The corresponding conditional of an inference is simply a conditional statement that has the conjunction of the inference’s premises as its antecedent and the conclusion of the inference as its consequent. Since such a maneuver could be used to render any inference valid, it must be avoided on all occasions. However, this is not a condemnation of enthymemes in general. Inferences such as ‘Sam is a boy. Therefore, Sam is a male’ are valid enthymemes. The suppressed premise (‘Every boy is a male’) is analytic and is not the corresponding conditional. Sentences of the following form are always analytic:

7. Every ⌊a, b…⌋ is ⌊a, b, … n …⌋

Using them as the suppressed premises of enthymemes will be logically harmless. Consider now the inference from 1 to 1.3. The entire inference, with the suppressed premise made explicit, has the form:

Every ⌊Socrates, Plato⌋ is Greek
Every ⌊Socrates⌋ is ⌊Socrates, Plato⌋
Therefore: Every ⌊Socrates⌋ is Greek

(a Barbara syllogism).
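The Barbara pattern just displayed can be checked mechanically: on the semantics sketched earlier, ‘Every S is P’ amounts to inclusion of denotations, and Barbara is then just transitivity of set inclusion. The following sketch uses invented denotations, and models the half-bracket terms of explicit denotation as plain sets:

```python
# TFL models 'Every S is P' as inclusion of denotations; the Barbara
# enthymeme above is then transitivity of set inclusion. The denotations
# assigned here are illustrative assumptions.

def every(s, p):
    """'Every S is P': the denotation of S is included in that of P."""
    return s <= p

greek    = {"Socrates", "Plato", "Aristotle"}
pair     = {"Socrates", "Plato"}      # the explicit-denotation term for Socrates and Plato
socrates = {"Socrates"}

premise    = every(pair, greek)       # Every [Socrates, Plato] is Greek
suppressed = every(socrates, pair)    # Every [Socrates] is [Socrates, Plato] (analytic, form 7)
conclusion = every(socrates, greek)   # Every [Socrates] is Greek

# Transitivity of <= guarantees the conclusion whenever both premises hold.
print(premise, suppressed, conclusion)  # True True True
```

The suppressed premise is “harmless” in exactly the sense the text gives: a singleton is always included in any explicit-denotation term that lists its member, so premises of form 7 hold in every model.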


By contrast, the inference from 2 to 2.3 (‘Socrates weighs 150 kilos’) is invalid. If two inferences have the same natural language surface form, yet one is formally valid and the other invalid, then they must differ in logical form. In particular, we need to look for a difference in logical form between the phrasal conjunction ‘Socrates and Plato’ as it appears in 1 and as it appears in 2. The latter would not be formulated as ‘Every ⌊Socrates, Plato⌋ weighs 150 kilos’. We are clearly not, in normal contexts, making a distributive reference to both Socrates and Plato in 2. And the same holds for the phrasal conjunction in 4.

7.2 Peter, Paul and Mary, the Beatles, and Other Roadside Attractions

There is science, logic, reason; there is thought verified by experience. And then there is California.
Edward Albee

We saw earlier that the negation of a singular term is not itself a singular term – it is a general term. A similar thing happens when we conjoin or disjoin singular terms. The process of coining a term of explicit denotation from one or more singular terms yields a new general term. Signs of conjunction and disjunction (e.g., ‘and’ and ‘or’) then play the role of logical quantifiers where the new general terms are logical subject terms or logical object terms. But what about the apparent phrasal conjunctions in 2 and 4? Whatever role ‘Socrates and Plato’ is playing in 2, it is not being used to explicitly denote Socrates and Plato. If it denotes at all, what does it denote? We can arbitrarily form a new general term of explicit denotation from any list of singular terms. As well, from any such list of singular terms, we can arbitrarily form a new term that is itself singular and denotes the things denoted by those constituent singular terms collectively. Let ‘⌈a, b …⌉’ be a singular term denoting a, b, etc. – collectively. Let’s call such terms team terms. Bands, orchestras, co-ops, duets, trios, clubs, juries, companies, military units, etc., can often be taken as teams. Often, we denote teams by names (‘Jefferson Airplane’, ‘Amazon.com’) or definite descriptions (‘the O.J. Simpson jury’, ‘the band that played at Lucy and Ricky’s wedding’, ‘the Joneses next door’, ‘the Brontës’). Sometimes, when convenient, we simply denote a team by listing its members (‘Russell and Whitehead’, ‘Charlotte, Emily, and Anne’). Notice that ‘Peter, Paul and Mary’ makes do with the list of members as the name of the trio. General terms of explicit denotation can sometimes denote just one thing; team terms never denote more than one thing. They are always singular terms. They exhibit
their team members but do not denote them. They denote one thing – the team. Moreover, as 7 above shows, the denotation of a term is included in the denotation of any general term that denotes at least the denotation of that term. But, while 7 is analytic, 8 is not.

8. ⌈a, b …⌉ is ⌈a, b … n …⌉

It is important to notice that the members of some teams can vary. Other teams have fixed members. The team (trio) consisting of Peter, Paul and Mary consists of just those three; adding a member, deleting a member, or changing a member would result in a new trio. The Beatles consisted of John, Paul, George and Ringo – that’s it. Most sports teams are not fixed in terms of their members. The Chicago Cubs remains the Chicago Cubs through numerous roster changes. It might be argued that teams with fixed membership are nothing more than sets, since sets are rigidly determined by their members (i.e., they are fixed, invariable in membership). This would be a mistake, however. Sets are always abstract objects; teams need not be abstract. Consider the team consisting of Tom, Dick and Harry (denoted by the singular term ‘⌈Tom, Dick, Harry⌉’). The team is far from abstract. It carried the piano upstairs. The set consisting of the three men ({Tom, Dick, Harry}), being nothing but an abstract object, could hardly carry anything, much less a piano, anywhere. We know that every ⌊a, b…⌋ is F if and only if a is F and b is F and ...; but it is not the case that ⌈a, b …⌉ is F if and only if a is F and b is F and ... . Nevertheless, for singular ‘s’, ‘s’ = ‘⌈s⌉’. Indeed, for singular ‘s’, ‘s’ = ‘⌈s⌉’ = ‘⌊s⌋’. It might be noted, in passing, that while ‘s’ = ‘⌈s⌉’ = ‘⌊s⌋’, none of these singular terms is fully synonymous with ‘{s}’, a singular term denoting the unit set of s (i.e., the set with just s as a member).
We can think of ‘⌊…⌋’ as a function on one or more singular terms that forms a general term, and ‘⌈…⌉’ as a function on one or more singular terms that forms a new singular term (just as ‘{...}’ does). By treating some phrasal conjunctions (such as those in 2 and 4) as team-denoting terms, which are always singular and can be given wild quantity, phrasal conjunctions are fully incorporated into the traditional account. The inference from 2 to 2.3 is seen to be formally invalid since its formal validity (as an enthymeme) would require a suppressed premise that does not hold. Thus:

⌈Socrates, Plato⌉ weigh 150 kilos
⌈Socrates⌉ is ⌈Socrates, Plato⌉
Therefore: ⌈Socrates⌉ weighs 150 kilos
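Distinguishing the two kinds of compound by semantic type makes the blocked inference visible in a toy model. In this sketch (invented representation, not TFL’s notation), an explicit-denotation term is a set of individuals, over which predication distributes, while a team term is a single frozen object whose members are exhibited but not denoted:

```python
# Distributive vs collective compounds of singular terms (illustrative).
# An explicit-denotation term denotes its members severally; a team term
# denotes ONE thing, the team itself.

team = frozenset({"Socrates", "Plato"})   # team term: one object with two members

weighs_150 = {team}                       # the TEAM weighs 150 kilos
greek      = {"Socrates", "Plato"}        # each individual is Greek

def holds_of(predicate, subject):
    """A predicate holds of a subject iff the subject is in its extension."""
    return subject in predicate

# The team satisfies the collective predicate...
print(holds_of(weighs_150, team))         # True
# ...but 'Socrates' is not the team, so 'Socrates weighs 150 kilos'
# does not follow:
print(holds_of(weighs_150, "Socrates"))   # False
# Distributive predication over the explicit-denotation term succeeds:
print(holds_of(greek, "Socrates"))        # True
```

The suppressed premise ‘⌈Socrates⌉ is ⌈Socrates, Plato⌉’ fails for just this reason: the one-member team is not identical to the two-member team.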


Phrasal conjunctions are either singular team terms or general terms of explicit denotation. Each such term, when used as a referring expression, is implicitly or explicitly quantified. For general terms of explicit denotation, conjunction (‘and’ in English) is the sign of universal quantity; disjunction (‘or’) is the sign of particular quantity. For team terms, when used as logical subject terms or logical object terms, the implicit quantity is wild. In any case, the presence of singular terms and of phrasal conjunctions of singular terms presents no serious challenge to the traditional notion that all reference is achieved by (possibly implicitly) quantified terms. In this respect, the old theory has a scope of application that seems to exceed that of the new one. But today’s logicians have hardly given up the attempt, and have been clever and resourceful in building the logical machinery required for MPL to deal with the phenomenon of compound singular terms. The most promising of these are called Plural Quantification Logics – PQL.

7.3 Where We Get Together

Plurality should not be posited unnecessarily.
Ockham

I know I am among civilized men because they are fighting so savagely.
Voltaire

We have seen how the term logician might deal with both the syntax and semantics of grammatical compounds of singular terms. Now I want to set that aside temporarily while we head off to the place where discussions of such matters are held among logicians and philosophers of logic and language today. Things might seem a bit complex and pedantic for a while, so bear with me. We will soon be back on our more familiar trail. Pre-Fregean logicians generally assumed that whatever reasoning we ordinary folks might engage in is carried out in the medium of our natural language. Frege and most of those who take the road he first envisioned hold a decidedly negative, even jaundiced, view of the logical powers of natural language (thus the urge to build an artificial, “logically perfected” language not only for mathematics but for reasoning in general). Like the traditional logician, the modern term logician takes a more positive, sanguine view of such powers. Hanoch Ben-Yami is a philosopher of logic and language who shares this more optimistic view of the logic of natural language. While advancing this notion, he has, in passing, responded to those who now seek to supplement the standard version
of MPL with formal systems that extend the range of analysis of that logic to compounded singular terms, so-called plural referring expressions. The results are Plural Quantification Logics (PQL). Ben-Yami (Ben-Yami 2009) provides an impressive critique of attempts by a number of logicians who advance PQL (see Boolos 1984, Rayo 2002, Yi 2005-2006, McKay 2006, Oliver and Smiley 2006 and 2013, Rayo 2007, Linnebo 2007, Smiley and Oliver 2008). These formal systems distinguish plural quantification from singular quantification, and that distinction is a particular target for Ben-Yami. He makes it clear that such a distinction is not a feature of natural language, and he concludes “that Plural Quantification Logic has failed in its attempt to represent the logic and semantics of plural constructions in natural language” (Ben-Yami 2009, 224). He then goes on to offer a brief account of his own formal system (a fuller version is Ben-Yami 2004), one that eschews the distinction between plural and singular quantification, accounts for the semantics of plural referential expressions, offers a viable analysis of inferences involving plurals, and remains close to the syntax and semantics of natural language. My logical sympathies are with Ben-Yami. However, I think TFL is a better bet for achieving these goals. We will return to Ben-Yami down the road. Oliver and Smiley ask how we can use plural terms to denote at all (Oliver and Smiley 2008). They begin with the distinction between distributed and collective denotation. A plural term distributively denotes a thing just in case that thing is among the entire set of things the term is true of. Assuming that what a term is true of is what it denotes, which Oliver and Smiley do (Oliver and Smiley 2008, 24), a plural term distributively denotes every thing in its denotation as well as every member of every subset of its denotation.
‘Logician’ distributively denotes every logician, and thus, every British logician, stupid logician, logician who is a mathematician, Aristotle, Quine, the authors of Principia Mathematica, etc. A plural term “is collective if it is not distributive” (Oliver and Smiley 2008, 22). A plural term collectively denotes the things it is true of only “jointly without denoting any one of them separately” (Oliver and Smiley 2008, 22). The main aim here is to reject those accounts of plural denotation that require all such denotation to be either distributive or collective. Their view is that no such choice is required since the two notions of denotation are mutually definable. Their analyses of the accounts to be rejected are clear and effective, but their presentation of their own view is less so. This is because they fail to offer adequate explanations (via their definitions) of what they mean by distributive and collective denotation. They first offer definitions (Oliver and Smiley 2008, 23) that amount to the following:
a distributively denotes b if and only if b is/are among a
a collectively denotes b if and only if b is/are a

They note that when they write ‘a denotes b’ the a is being mentioned rather than used, so we could put it in quotes. (Note: a term is mentioned when it is used to denote itself, as in ‘Long is a short word’, which could be written as ‘“Long” is a short word’). But what about the second a in each definition? Surely these are meant to be used rather than mentioned. Let’s try an example. ‘Logician’ distributively denotes Quine if and only if Quine is among logician. Does this make sense? It could make sense if we take the final ‘logician’ here to be short for ‘things “logician” is true of’. What now of collective denotation? According to Oliver and Smiley, a list such as ‘Charlotte, Emily and Anne’, when used to denote collectively, simply denotes Charlotte, Emily and Anne. “End of story” (Oliver and Smiley 2008, 23). In such a case, it does not denote (in any sense) Charlotte alone, Emily alone, or Anne alone. It denotes them jointly. It appears that when a term collectively denotes it denotes just one thing. Charlotte, Emily and Anne are joined together when collective denotation is in play. So let’s try an example according to the second definition. ‘Charlotte, Emily and Anne’ collectively denotes Charlotte, Emily and Anne if and only if Charlotte, Emily and Anne are Charlotte, Emily and Anne. Maybe. Oliver and Smiley then go on (Oliver and Smiley 2008, 25) to define each type of denotation in terms of the other. In doing so they rely on the difference between ‘being a’ and ‘being among a’.
a collectively denotes b if and only if b is/are the things that a denotes distributively
a distributively denotes b if and only if b is/are among the things that a denotes collectively

Earlier (Smiley and Oliver 2006) they spelled out the relation of being among, symbolized by ≻:

When b is a plural term the ≻ in a ≻ b will naturally be read as is/are among, or equivalently is one of/are a number of. When b stands for a single thing, however, a ≻ b can only be understood as an identity. The best English reading of ≻ is therefore disjunctive: is/are or is/are among, as the case may be. (quoted in Ben-Yami 2009, 219-220)
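As a rough illustration (my own toy model in Python, not Oliver and Smiley's formalism), the two denotation relations and their mutual definability can be sketched by treating "the things a term is true of" as a set:

```python
# Toy extension of the plural term 'Charlotte, Emily and Anne'.
BRONTES = {"Charlotte", "Emily", "Anne"}

def as_set(b):
    """Treat one thing or several things uniformly as a set."""
    return b if isinstance(b, (set, frozenset)) else {b}

def denotes_distributively(extension, b):
    """b is/are among the things the term is true of (each separately)."""
    return as_set(b) <= extension

def denotes_collectively(extension, b):
    """b is/are, taken jointly, exactly the things the term is true of."""
    return as_set(b) == extension

assert denotes_distributively(BRONTES, "Emily")             # each sister separately
assert denotes_collectively(BRONTES, {"Charlotte", "Emily", "Anne"})
assert not denotes_collectively(BRONTES, "Emily")           # not Emily alone

# Mutual definability: collective denotation recovered from distributive.
def collective_from_distributive(extension, b):
    # b is/are the things that the term denotes distributively
    return as_set(b) == {x for x in extension
                         if denotes_distributively(extension, x)}

assert collective_from_distributive(BRONTES, {"Charlotte", "Emily", "Anne"})
```

On this toy reading the circle the text worries about is visible: both relations bottom out in the very same set-membership facts.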

Ben-Yami constructed good arguments casting doubt on the idea that ≻, however it is to be read, is even a relation at all. At any rate, a circle seems to be closing here. We need to break out and start again.


Oliver and Smiley are right to notice the use of lists to denote things. As we’ve seen, we do use terms such as ‘Charlotte, Emily and Anne’, ‘Tom, Dick and Harry’, ‘Russell and Whitehead’, ‘Snow White and the seven dwarfs’. In addition to such conjunctive constructions we use such disjunctive terms as ‘Russell or Whitehead’ and ‘The moon or New York City’. When considering the logic of compounds of singular terms used as logical subjects or logical objects, it’s always good to remind ourselves of the following points:
(1) All logical subject and object expressions have an explicit or implicit logical quantity;
(2) The reference of any such expression is determined jointly by the denotation of the constituent term or terms and the quantifier;
(3) These constituent terms are sometimes singular and sometimes plural (conjunctions or disjunctions of singular terms);
(4) The quantity of compound logical subjects or objects is indicated by a functor for conjunction or disjunction (e.g., ‘and’ or ‘or’);
(5) Singular subject and singular object expressions have wild logical quantity;
(6) Some compounds of singular terms are themselves singular terms denoting their constituents collectively (viz., when they are team terms); others are general terms denoting their constituents distributively.

Let’s get back to Ben-Yami. As I said earlier, he offers a system that is meant to rival standard formal logics (Ben-Yami 2004), and their plural quantification extensions (Ben-Yami 2009), in both simplicity and naturalness (not to mention inference power). I believe that his system does this; but I also believe that TFL does it as well – but even more simply and naturally. Consider a simple valid inference from Yi (Yi 2005-2006, 460) that Ben-Yami uses to illustrate how his (Ben-Yami’s) deductive system deals with plural terms that are conjunctions of singulars (Ben-Yami 2009, 224ff):

1. Venus and Serena won a U.S. Open doubles title
2. Venus and Serena are tennis players
3. So, some tennis player won a U.S. Open title

The standard predicate calculus can shed little light on such inferences. Some logic of plurals is required. Ben-Yami’s system does this. But the cost is that he must make use of complex rules for eliminating universals (Ben-Yami 2009, 227), introducing particulars (Ben-Yami 2009, 228), and Leibniz’s Law (Ben-Yami 2009, 229). These rules amount to permissions to substitute one expression for another under certain circumstances. By contrast, the logic of terms, supplemented with the account of the conjunctions and disjunctions of singulars I offered earlier, can easily account for such inferences in a simpler and more natural way.


We begin with a quasi-formalization of 1 and 2 (using * as a mark of wild quantity):

1.1 *⌈Venus, Serena⌉ won a U.S. Open title        premise
2.1 Every ⌊Venus, Serena⌋ is a tennis player        premise

We next introduce the following axiom: If every ⌊a, b … n⌋ is F, then every ⌈a, b … n−1⌉ is F. In other words, if all the individuals denoted by a term have a given property, then any team composed of members of any subset of those individuals has that property. In particular:

3.1 *⌈Venus, Serena⌉ is a tennis player        from 2.1 and the axiom
4.1 Some tennis player won a U.S. Open title        from 1.1 and 3.1 (a Darapti)

At this stage one might ask: Which tennis player? There are two answers – and either one will do. One makes reference to a single team of tennis players (viz., the team made up of Venus and Serena); the other makes reference to all the members of that team (viz., Venus and Serena).
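The little derivation can also be checked mechanically. Here is a minimal Python sketch (my own encoding; representing a team as a frozenset of its members is an assumption of the sketch, not the book's machinery):

```python
# Model (an assumption of this sketch): a team term denotes one thing,
# a frozenset of its members; a distributive list denotes each member.
team = frozenset({"Venus", "Serena"})      # *⌈Venus, Serena⌉: the single team

tennis_players = {"Venus", "Serena"}       # 2.1: each sister is a tennis player
title_winners = {team}                     # 1.1: the team won a U.S. Open title

# The axiom: if every member of the list is F, then a team of those
# members is F as well.
if all(member in tennis_players for member in team):
    tennis_players.add(team)               # 3.1: the team is a tennis player

# 4.1 (the Darapti-style step): some tennis player won a U.S. Open title.
some_player_won = any(player in title_winners for player in tennis_players)
assert some_player_won                     # the witness is the team itself
```

The "which tennis player?" question gets the same two answers as in the text: the team (the frozenset) or, distributively, its members.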

8 Into the Metaphysical Bogs

Thus, ontology is part of metaphysics, and in fact it seems to be about half of all metaphysics.
Thomas Hofweber

8.1 Ontology In … Ontology Out

To be is to be the value of a variable.
Quine

The problems of existence are ... problems whose solutions are provided by logic.
Quine

Logic loses its authority and becomes inept, if it tries to disclose existence.
Santayana

Ontology should drive formal logic, not the reverse, in my opinion.
E.J. Lowe

Our trail now takes us perilously close to an area fraught with philosophical dangers – the Metaphysical Bogs. Metaphysics, especially ontology, has had its ups and downs over the centuries. At times the trail followed by logicians has led them close to and even into the bogs. Formal logic itself has been taken as wearing various guises. It was a tool, an organon, for use in carrying out work in the various sciences according to Aristotle. It was a kind of deep psychology of perfectly rational beings for Kant. It was a description of the structure of knowing for Mill. Many post-Fregean logicians took formal logic to be, among other things such as the foundation of mathematics, a guide to ontology, to what there is. Philosophers with an interest in logic have asked about the provenance of logic: Is it somehow in our heads or is it out there – in the world? Is it in any way dependent on us, on how we talk or think? Does it aim to account for how we actually reason or how we ought to reason? Is it more than just a tool, an organon?

Sommers calls ontology “the science of categories” (Sommers 1963, 351). The traditional view of ontology (“first philosophy”) was that it proceeds in three steps. First, the ontologist determines what the types (categories) of being are – how things are grouped and sorted. Next, the ontologist determines the relations that hold among those categories. Finally, the ontologist offers an account of how those relations determine the structure of categories of things – what one might call the “ontological shape of reality”. Along the way, of course, the ontologist must find the best source of clues for what the categories are and how they are related.

The Platonist, for example, takes clues from ordinary grammar. The combining of general terms into sentences seems to reflect how pairs of Forms are combined in reality, the world of Being (see, for example, Ackrill 1957 and Kahn 1972). Moreover, how these Forms seem to stand in inclusion or dependence relations offers clues about how the Forms constitute a hierarchy with the Form of the Good at the top. By contrast, ontologists in the Aristotelian tradition, still taking their clues from natural grammar, claim that general terms of different kinds reflect different categories of being. Moreover, since natural language sentences seem to require a subject and a predicate, subject terms reflect the category of things about which something is predicated. Such things are in the category of Substance (primary substance when singular, secondary substance when general). Things in other categories (Quality, Quantity, Relation, etc.) are reflected in the general terms that are predicated. Substance is ontologically prior to the other categories because the relations of being said of or being in that constitute the relations of predication between subjects and predicates are dependence relations. Whatever is either said of or in something is ontologically dependent on it. Things that are in no way dependent on others are ontologically prior. What there is, in the ontologically most basic sense, are substances. For some Aristotelians (including the Aristotle of Categories), primary substances are ontologically foundational; for other Aristotelians (including the Aristotle of the middle books of Metaphysics), secondary substances are foundational. At any rate, they generally held that the result is a hierarchically structured ontology.
While Aristotle himself seemed reluctant to rest his “first philosophy” on the logical insights found in Prior Analytics, modern logicians have often been eager to ground their ontologies on their logic – especially the syntax of their formal logical language, more specifically the structure of quantification. One reason for this approach is a shift in the conception of the proper aim of ontology. Traditionally seen as a systematic account of categories and their structures, it is now seen as an account of existence, of what exists or what speakers or theories say exists. The contrast between ontology as aiming at categorial structure and ontology as aiming at existence is substantial. Certainly not all contemporary ontological theories take existence as their primary concern (for a survey of the field see Westerhoff 2005). In fact, as we will soon see, even though most logician-ontologists see it that way, not all do.

In the 20th and early 21st centuries, armed with their new logic, positivist philosophers, most especially logical positivists, argued that the classical empiricists were right to see all sensible statements as either analytic, necessary, and a priori or as synthetic, contingent, and a posteriori. Senseless sentences were held to be without sense because they were neither, and this because they were unverifiable. Their conclusion was that metaphysical claims, since they are among the statements that can, in principle, never be verified, are without sense, absurd, meaningless. As it happens, the logical positivists’ commitment to this “principle of verification” came back to bite them (the principle seems not to satisfy itself). Yet anti-metaphysics wasn’t quite dead. Eventually, Rudolf Carnap attempted to breathe some life back into some parts of this anti-metaphysics by allowing that some apparently metaphysical statements concerning existence are sensible as long as they are made within, relative to, a specifiable “framework” or theory; they are senseless when made outside any framework. Thus, it makes sense to claim that an even prime number exists, when the framework is the set of natural numbers, but not when the claim is made outside of such a specifiable framework (Carnap 1965). This Carnapian reprieve did not last long for most logicians. It still relied on the notion that sensible statements are exclusively and exhaustively either analytic or synthetic, and the possibility of even making such a distinction was challenged (fairly effectively) by Quine (Quine 1953a; see also Quine 1976b).

One would expect modern logicians to avoid the bogs, to be reluctant to take on any metaphysical baggage. Yet ontological insights are seen as valuable by-products of their logic. If ontological insights can be found in formal logic, where are they? Logicians in the Quinean tradition believe that the syntax of MPL is the best site for mining ontological nuggets.
To be more precise, the claim is that the syntax of the formal language does not provide any ontology per se, but it does provide insight into the implicit ontological commitments of language users. To be even more precise, such commitments are due to the choice of constituents of the universe of discourse – those things which can serve as the value of the variables bound by the quantifiers found in the logical formulations of statements made by ordinary language users. In Quine’s famous bon mot: “To be is to be the value of a variable.” Of course, being, existence – that’s the tricky part. How can one know what there is? What exists? Scientists, crime detectives, and people of an empiricist bent would advise one to look, explore, investigate, and, if necessary, use tools that help extend our perceptual powers. But, as countless philosophers have reminded us (and children eventually learn on their own), looks can be deceiving. Our senses are fallible. Perhaps the best way to proceed is to retreat to a more tractable problem: What do we think, believe there is? What are our ontological commitments? Where would we look for answers to such questions? Just here is where Quine’s take on quantifiers, essential parts of the syntax of MPL, offers a ready answer. How did he come to this remarkable solution to this old metaphysical problem?

In the 18th century, Hume suggested in his Treatise (I.ii.6) that existence is not a property of any individual things that we might say exist. Soon after, Kant, in his first Critique (A598/B626-A600/B628), spelled out more fully and clearly that, while we can predicate ‘exists’ of an individual subject, we provide no information about that subject since existence, the property we aim to ascribe to it, is not a real property. Existence (and nonexistence) is not a property that any individual thing has or lacks. This was the Hume-Kant lesson about existence. It was a negative lesson that most philosophers have learned. Nevertheless, it left room for the following: If existence is not a property of individuals, could it, instead, be a property of something else? When Frege built his new formal language, the syntax of that logical language provided him with an answer: existence is not a property of objects – it is a property of concepts (Frege 1953, §53). A sentence such as ‘Some thing is a mermaid’ is parsed as ‘There is at least one thing that is a mermaid’, which is to be understood as ‘At least one object “falls under” the concept mermaid; the concept mermaid is not empty’. The sentence is not about mermaids (or any object at all); it is about mermaid, a concept (and, as we’ve seen, concepts are not objects for Frege). Existence is a property only of concepts. Russell was an early promoter of Frege’s new logic and he too had taken the negative Hume-Kant lesson to heart. But, unlike Frege, he did not go on to argue that existence must be a property of concepts. Instead, he argued (Russell 1918, lecture V) that it is a property of “propositional functions,” open formulas with an unbound variable, such as ‘Mx’.
Binding that variable with a quantifier (viz., ‘∃xMx’) yields a true sentence. Eventually, Quine spelled out the positive thesis concerning existence by simply cutting to the chase: “Existence is what the existential quantifier expresses” (Quine 1953, 13). So there it is. The existential quantifier is the syntactical device designed to express existence (or at least what a speaker is ontologically committed to by using a statement that can be formulated using that quantifier). Yet as philosophers (e.g., Priest 2008, Hintikka and Vilkko 2006) have recently made clear, the effect of the Frege-Russell-Quine program was to replace the old particular quantifier with the new, quite different, existential quantifier. Priest’s title summarizes the situation quite neatly: “The Closing of the Mind: How the Particular Quantifier Became Existentially Loaded Behind Our Backs.” As we saw on our Logic Mountain Range hike, the traditional particular quantifier was merely a fragment of an Aristotelian logical copula. It was never construed as a device either for binding variables (pronouns) or for expressing existence (see Englebretsen 1990, Klima 2001, Klima 2005). The move from particular to existential was barely noticed because our focus had been on the Frege-Russell-Quine thesis concerning existence rather than on their shift to the existential quantifier. The logical syntax of the new logic had provided grounds for the claim that logical syntax could guide us through any ontological bog. This was done not on the basis of any convincing argument, but rather on the unannounced replacement of the particular by the existential. It is a mere illusion, “the ‘philosophical mirage’ of ontological commitment through quantification” (Klima 2005, 11).

Modern logicians who draw ontological claims from the syntax of their formal language are like stage magicians pulling rabbits from hats. The hats aren’t really empty – and neither is the logic. It’s been constructed with some ontological bits. Pulling ontological conclusions from such a logic looks impressive, magical; the audience is awed. Unfortunately, some logicians are also awed. They seem unaware of how their hat was constructed in the first place, with its hidden compartments, secret flaps, etc. Not only does the logical syntax of MPL provide a solution to the problem of what exists, viz., what a speaker is ontologically committed to, by means of the existential quantifier, but the very nature of the things that constitute the values (denotations) of the variables being quantified is shown to be empty. Things, objects in the universe of discourse, the possible denotations of the variables simply have no nature; they are bare particulars. As we’ve already seen, this idea was bequeathed by Frege’s fundamental distinction between concepts and objects. Objects are propertyless. The world is a world of such objects, things waiting to be characterized, characterized by concepts waiting to be applied to those objects.
When ‘Every book is valuable’ is read as ‘Every thing is such that if it is a book then it is valuable’, those pronouns (variables) must be seen as making reference not to books or valuables or any sort of things – only things, unsorted, uncharacterized, will do (for much more on this see Oderberg 2005a). For the modern logician, then, formal logic is formal ontology. It may not be much ontology, and it may not be interesting ontology, but it’s all the ontology we can get. This idea that any ontological insights can be provided simply by understanding the principles that ground the grammar of an artificially constructed formal language such as MPL yields a “remarkable ontology” (Lowe 2012, 347). It has been effectively and memorably skewered by Barry Smith. I can do no better than to quote him at length.

A dark force haunts much of what is most admirable in the philosophy of the last one hundred years. It consists, briefly put, in the doctrine to the effect that one can arrive at a correct ontology by paying attention to certain superficial (syntactic) features of first-order logic as conceived by Frege and Russell. More specifically, it is a doctrine to the effect that the key to the ontological structure of reality is captured syntactically in the ‘Fa’ (or, in more sophisticated versions, in the ‘Rab’) of first-order logic, where ‘F’ stands for what is general in reality and ‘a’ for what is individual. Hence “fantology”. Because predicate logic has exactly two syntactically different kinds of referring expressions – ‘F’, ‘G’, ‘R’, etc., and ‘a’, ‘b’, ‘c’, etc. – so reality must consist of exactly two correspondingly different kinds of entity: the general (properties, concepts) and the particular (things, objects), the relation between these two kinds of entity being revealed in the predicate-argument structure of atomic formulas of first-order logic. Fantology is a twentieth-century variant of linguistic Kantianism, or in other words of the doctrine that the structure of language (here: of a particular logical language) is the key to the structure of reality. Classical fantologists include Frege, Russell and the Wittgenstein of the Tractatus, yet the work of almost all twentieth-century analytical philosophers bears traces of fantological influence, though this influence is of course more notable in some circles (for instance among the logical positivists in Vienna) than in others. Where the early fantologists argued explicitly that first-order predicate logic mirrors reality, present-day philosophers are marked by fantology only tacitly, through their use of predicate logic and of ways of thinking associated therewith. (Smith 2005, 153)

Perhaps, after all, formal logic (or at least logical syntax) is not much of a guide to things metaphysical. Indeed, one might well think that logic, whatever its primary function or functions might be, ought not volunteer to take on any metaphysical tasks. For “the demand to be a clear and adequate representation of one’s metaphysics may well impossibly add to the burdens placed on the ‘purposes’ of logic” (Dipert 1995, 56, n. 15). So, how can one acquire insight into ontology – or at least our ontological commitments? It certainly does seem reasonable to assume that such commitments are revealed by how we think about the world. But how we think is revealed by how we behave – particularly how we use our language, how we speak, write, etc., by the statements we make. You say there are ghosts and I deny this. We clearly have different ontologies. Most of the time, however, our ontological commitments are not revealed so overtly. If the modern logicians are wrong to counsel logical syntax as the appropriate guide, then we should look elsewhere. Objects have properties and terms are predicated of one another. So what? Having a language adequate for talking about what there is need not require that that language embed its ontology syntactically. Semantics could be charged with reflecting ontological structures, but that task should be outside the “job description” of syntax. Our walk through parts of the Semantic Forest showed us how we might use lessons learned there to make our way through the Metaphysical Bogs. In thus proceeding, we should notice that the logical syntax of TFL yields virtually no guidance here. For example, ‘Some logicians are rational’ has the logical form ‘+L+R’, which gives no clues about objects, properties, truth, etc. But the semantic theory that accompanies TFL holds that it’s the denotation of the quantified term and the signification of the qualified term that should hold our attention. The syntax of TFL is ontologically neutral. The ontological commitments of users of ordinary language in quotidian circumstances are pre-logical. They enter language only at the semantic stage, when decisions concerning the relevant, appropriate universe of discourse, the significations, and the denotations of terms are made. (For more on this, though drawing slightly different lessons, see Crane forthcoming.)
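For a side-by-side look at the contrast just mentioned, here is ‘Some logicians are rational’ in the two notations (the MPL rendering is the standard textbook one; the glosses are mine):

```latex
\begin{align*}
\text{MPL:} &\quad \exists x\,(Lx \wedge Rx)
  &&\text{a quantifier ranging over bare objects of the domain}\\
\text{TFL:} &\quad +L+R
  &&\text{signed terms only: no variables, no existential load}
\end{align*}
```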

8.2 What Were We Thinking?

Logic is the anatomy of thought.
Locke

The sole end of logic is to explain the principles and operations of our reasoning faculty.
Hume

How did it come to be that logic which, at least in the views of some people 2300 years ago, was supposed to deal with evaluation of argumentation in natural languages, had done a lot of extremely interesting and important things, but now this?
Y. Bar-Hillel

Logic may not really be an empirical cognitive science of reasoning and interaction (facts are strong medicine ...): but its inspiration for new theory building certainly derives from observations about natural logic and actual human behavior.
J. van Benthem

Is there a cognitively veridical logic that illuminates the process of how we mentally arrive at our everyday deductive judgments?
Fred Sommers

According to Frege, logic is objective, universal, completely independent of us, of how we actually reason, of how we evolved to be able to reason at all. This is so because a genuine formal logic is to be seen as concerned with Thoughts (propositions expressed by statements) rather than thoughts. The latter are subjective, private items of interest to psychology but not to logic. The former are objective and public. We can express (or hide) our private thoughts, but, when we share them, we share only public (shareable) Thoughts. These are what stand in logical relations with one another; they are premises and conclusions of deductive arguments. Of course, expressing ourselves (our propositions) in a natural language can be tricky – best to do it in an artificial language specially designed for logical purposes, a logically correct, logically perfect language with none of the flaws of natural languages (ambiguity, vagueness, etc.). This is particularly so when we are engaged in high-level theorizing of the sort that typically goes on in scientific discourse.

In spite of this official line, the fact is that when we reason in everyday life we do so in the medium of our natural language (this is so even for logicians). It is hard to gainsay the fact that the results of carrying out the standard modern logicians’ program of building a formal logic free of the taint of psychologism, the idea that logic should be concerned not with propositions (Frege’s Thoughts) but with our private subjective thoughts, are impressive and generally positive (see Crane 2014). But what is one to do if he or she believes that there are, nonetheless, good reasons to assess the ways we actually do reason (usually correctly) in ordinary situations, using our native language? Traditional logicians tended to see as one of their major tasks the provision of an account of how ordinary humans actually reason deductively using only the intrinsic logical powers of their everyday languages. Yet the temptation to formulate a logic of natural language strikes most modern logicians as singularly atavistic, even pointless, given that they have provided a formal system that accounts for how we ought to reason rather than how we do reason. As Frege said, the laws of logic “prescribe universally the way one ought to think if one is to think at all” (Frege 1903, xv). Perhaps. Still, it is possible to carry out a program of building a formal logic of deduction that begins with the recognition that we do actually reason (usually accurately) using the medium of our natural language.
Such a logic goes on to construct a simple formal, symbolic language (with a syntax not excessively remote from that of natural language) that models the logically salient features of our natural language and identifies the principles by which we naturally reason, and is descriptive of how we do so. Naturally, the obvious and immediate objection to such claims is that we often don’t reason very well. We contradict ourselves, fail to notice mistakes in reasoning, fail to draw correct conclusions, are often blind to logical equivalences and inequivalences, and so forth. However, the fact is that, as long as the rational task at hand is not too complex or too extensive, we do generally reason well – and we acquire that ability at an early age (in the absence of any special instruction). Sommers has summarized this view clearly and succinctly:

It is obvious that [children] do not reason in the logical language of MPL. Indeed, the net effect of insisting that the logical language of MPL is “canonical” is to distance formal logic from the process of actual reasoning. In other words, the canonization of MPL has led logicians away from facing the problem of how to account for the fact that we intuitively reason so well. (Sommers 2005, 217-218)
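Sommers' alternative trades quantifiers and bound variables for plus and minus signs on terms, and validity checking becomes ordinary algebra. The following Python sketch is a simplified toy (the full system also constrains the number of particular premises and handles relationals and singulars; the encoding and function names are mine):

```python
def statement(quantity, subject, quality, predicate):
    """Encode a categorical statement as signed term charges:
    quantity '-' = every / '+' = some; quality '+' = is / '-' = isn't.
    E.g. statement('-', 'S', '+', 'P') is 'Every S is P', i.e. -S+P."""
    sign = {'+': 1, '-': -1}
    return {subject: sign[quantity], predicate: sign[quality]}

def algebraic_sum(statements):
    """Add the charges term by term; middle terms cancel to zero."""
    total = {}
    for s in statements:
        for term, charge in s.items():
            total[term] = total.get(term, 0) + charge
    return {term: charge for term, charge in total.items() if charge != 0}

def premises_yield(premises, conclusion):
    """Valid (on this simplified test) iff the premises sum to the conclusion."""
    return algebraic_sum(premises) == conclusion

# Barbara: -M+P, -S+M |- -S+P (the middle term M cancels out)
assert premises_yield(
    [statement('-', 'M', '+', 'P'), statement('-', 'S', '+', 'M')],
    statement('-', 'S', '+', 'P'))

# An invalid form fails: -M+P, -M+S does not sum to -S+P
assert not premises_yield(
    [statement('-', 'M', '+', 'P'), statement('-', 'M', '+', 'S')],
    statement('-', 'S', '+', 'P'))
```

The point of the sketch is only that the bookkeeping stays close to the surface of the English sentences: no variables are introduced and nothing needs to be "bound".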


An obvious conjecture is that ordinary reasoners, untrained in the fine points of manipulating quantifiers and bound variables, unconsciously exploit the logical powers of their natural language embodied in the positively and negatively charged characters of simple words like ‘all’, ‘some’, ‘not’, ‘if’, ‘and’, etc. “To fulfill its traditional mission of exposing the ‘Laws of Thought’ Logic must directly address the cognitive puzzle of how untutored human beings reason by exploiting the charged +/− character of the natural language particles” (Sommers 2008, 7).

We suppose that “man is a rational animal”. But if this is so, if we are, by nature, rational, then why do we so often appear to fail to reason correctly? Reason may be honored in word, and we may well characterize the most important parts of our mental life as rational, but the fact is that a very large part of our mental life has little to do with reasoning, reckoning, deducing, calculating, etc. Our emotional life is essentially nonrational. There are times when we do aim toward rationality, when we deduce, conclude, infer, yet fail. These are times when we are more than nonrational – we are irrational. To say that we are rational animals is to say that we have the capacity, the competence, to reason correctly, to be logical. It is not to say that we in fact always act rationally. Our rational performance doesn’t always match our rational competence. This gap between our rational performance and our rational competence has been the subject of a large number of systematic studies over the past several decades by cognitive psychologists. They have investigated how we reason deductively, cataloguing the kinds of errors that are commonly made and speculating on the causes of such errors.
It seems that we tend to make more errors when too much information is to be dealt with, or when negation is involved; more errors dealing with particular quantity than with universal quantity; and so forth. Memory limits and emotional interference seem to beget errors. We are often distracted from attention to form by our temptation to focus on the material element (the information involved). Our linguistic failures (errors of grammar or processing) sometimes lead to errors of reasoning. (See, for example, Braine 1978, Cheng and Holyoak 1989, Cosmides 1989, Crain and Khlentzos 2008, Henle 1962, Johnson-Laird 1983, Johnson-Laird and Byrne 1991, Macnamara 1986, Osherson 1975, Pelletier, Elio and Hanson 2008, Rips 1994, Wetherick 1989.) As one would expect, there is not always universal agreement among such a large and diverse group of thinkers, especially concerning matters of interpretation of empirical results or even understanding the nature of our reasoning capacity and its relation to logic. For example, there is disagreement about whether there is a “logic of thought” or “mental logic” on the one hand or rather no such logic at all.

Johnson-Laird has held that “There can be reasoning without logic. More surprisingly, perhaps, there can be valid reasoning without logic” (Johnson-Laird 1983, 40). If this is so, then the principles that distinguish valid from invalid reasoning must not be derived from the nature of our mentality; they must be independent of us. “Valid inferences were made long before the invention of logic; and they can be made without relying, consciously or unconsciously, on rules of inference” (Johnson-Laird 1983, 126). In his studies of how subjects deal with syllogisms, Johnson-Laird argues that such reasoning is simply a matter of constructing “mental models” of premises and inspecting them to see if the conclusion is already modeled. But as one investigator has remarked following his own studies, “I have found no one who admits to employing Johnson-Laird’s type of mental model in the solution of syllogisms” (Wetherick 1989, 123).

In sharp contrast with Johnson-Laird’s understanding of our reasoning capacity, John Macnamara argued that Johnson-Laird’s dismissal of a mental logic is radically mistaken (Macnamara 1986, 45-48). A person must already have an innate logical ability to even learn any external formal system of logic. There is in us a “basic” logical ability that is unlearned, intuitive.

[F]or the purposes of everyday reasoning, with which psychology is most concerned, competence had to be confined to a basic logic, simple enough to be related to every day thought yet rich enough to support the towering logical structures that abound today. It also became evident that we could learn more about basic logic through the study of natural-language expressions, rather than of formal-language ones ... (Macnamara 1986, ix)

Our basic logical competence differs from our linguistic competence. Both can be improved by instruction, but the former, in contrast with the latter, is unlearned. Children are born able to reason; they are not born able to read. “[I]t is simply not the case that preschool children are alogical or nonlogical in the sense that the majority of them are illiterate” (Macnamara 1986, 2). He wrote that “It is nonsense to think that basic logical skills can be learned” (Macnamara 1986, 28). Basic logic is contrasted with those “towering logical structures” built by logicians. Those logics go far beyond our unlearned basic logic, but they nonetheless depend upon it. One has to have a basic logical ability even to learn formal logic; “the student must bring to the formal study of logic a set of fundamental logical notions and principles. ... Logicians ... do their work by consulting their own intuitions ... [they] build on a basis that they bring to the study of logic, a basis that is common to all adults at least” (Macnamara 1986, 5-7).


But might not the more sophisticated formal logic built by logicians at least be of use to the cognitive psychologist as a tool to shine a light back on the basis of such a logic, revealing something about the nature of basic logic? Macnamara didn’t seem to hold much hope that MPL would be appropriate for this task. He said, “Increasingly it is becoming apparent that the first-order predicate logic ... is not a good guide to the logic of ordinary language and ordinary thinking” (Macnamara 1986, 31). In spite of this, he argued extensively that the “language of thought” in which our basic logic is understood is replete with notions that correspond perfectly with the logical particles (formatives) beloved of logicians: the notions of set-membership (Macnamara 1986, 74), truth-functions (Macnamara 1986, 107ff), quantifiers (Macnamara 1986, 169), and, perhaps, proper set-inclusion (Macnamara 1986, 162). Even the truth values and basic principles of logic are constituents of our basic logic, for “neither the concepts of truth and falsity nor the fundamental principles of logic are learned” (Macnamara 1986, 109). It should be kept in mind here that, while these notions are taken to be unlearned elements of our basic logic, expressed in our language of thought, the words we use for them in our public discourse are, like any other expressions in our ordinary language, learned. The idea that basic logical competence is innate is often referred to as “logical nativism” and has been defended, in the face of opposition from the heirs of Frege’s anti-psychologism, not only by Macnamara but by a growing number of cognitivist psychologists. Here are two examples:

The innateness of basic logical abilities is not a bitter pill, however. The abilities are exactly the sort of things one might expect to be innate, if anything is, given the central role of reasoning in cognition and learning. ... the presence of innate logical abilities doesn’t imply that people never make mistakes in deduction, or that they are incapable of improving their inference skills, any more than the presence of innate grammatical abilities implies that people never make grammatical errors, or that they can’t improve their grammar through learning. (Rips 1994, 375)

At present, we see no plausible alternative to logical nativism. Empirical evidence from child language (including 2-year-old children) and cross-linguistic research (from typologically different languages) supports logical nativism, and several a priori arguments provide additional grounding. (Crain and Khlentzos 2008, 53)

So, if the human rational capacity is unlearned, innate, why? Surely such a unique ability must have had some evolutionary value for our remote ancestors. Rationality must have made them more fit for their biological niche than competing species. It would seem that logic is in our genes. We are hardwired for ratiocination. William S. Cooper has taken a radical stand concerning this


position, writing, “The laws of logic are not independent of biology but implicit in the very evolutionary processes that enforce them. The process determines the law” (Cooper 2001, 2). It’s not that logic is exactly hardwired in our genes; it’s that logic is nothing more than an inevitable result of evolution. He calls this the “Reducibility Thesis: Logic is reducible to evolutionary theory” (Cooper 2001, 2). We evolved, along with all other species, according to discernible laws of evolution. The laws of formal logic (here Cooper has in mind only MPL) depend for their force on the more basic laws of evolutionary biology. The question of whether or not the laws of evolutionary biology are themselves subject to any logical laws doesn’t seem to arise. One thing Cooper wants to avoid is the notion that the laws of logic are in some way laws of language (Cooper 2001, 195-196). By contrast, Derek Bickerton has argued that human language is the very foundation of our capacity for abstract thinking (presumably including reasoning): “it wasn’t a ‘highly developed brain’ that gave us language ... and abstract thought, but language that gave us abstract thought and a highly developed brain” (Bickerton 2009, 5). “I believe that language came first and enabled human thinking” (Bickerton 2009, 191). The conventional view among biologists, anthropologists, and linguists is that Homo sapiens could not have developed complex language until they had developed large, complex brains. Bickerton turns this view on its head. He contrasts human language with animal communication systems (ACSs). While both depend on the use of discrete symbols (sounds, marks, calls, etc.), ACSs don’t combine symbols to form more complex symbols – human languages (even very primitive ones, creoles, and pidgins) do (Bickerton 2009, 43ff). Most importantly, the primitive symbols (words) of human languages combine with syntactic regularity. “Words combine as separate units – they never blend. They’re atoms, not mudballs” (Bickerton 2009, 45, see also 229-231). Moreover, one must not be tempted to think that human language is simply the result of an evolved and elaborated ACS.

If ACSs formed a ladder to language, the species closest to us should have the most languagelike ACSs. But they don’t, and the reason they don’t is because ACSs are not failed attempts at language. They’re not crude and misguided attempts to do what we do. They are autonomous systems that exist to serve the adaptive needs of each species that has one. (Bickerton 2009, 57)

Just as humans developed the abilities to construct tools, engage in fishing, hunting, and planting because such activities paid off from an evolutionary point of view, they developed language over a long period of time for the same reason. Children may normally acquire full language ability very early, but our species did not (Bickerton 2009, 185-187). What humans acquired when they


first acquired language was an ability to communicate complex information. What that led to – eventually – was an ability to engage in abstract thinking.

Certainly, language is now the means by which we structure the world of thought, but it would never have gotten off the ground, never developed into what it is today, and certainly never have raised thought to a new power if it hadn’t first entered the real world in the tangible form of communication. ... only external events can shape internal events, because only external events are visible to natural selection. (Bickerton 2009, 185)

If Bickerton’s account of the origins of human language is correct – that it resulted from the interplay between our innate capacity and the contingencies of the outer world in which we lived – then it seems reasonable to take our rational capacity to have come along somehow with our language. Philosophers and, one hopes, logicians have a concern with how we in fact reason in everyday situations. Recently Vanessa Lehan-Streisel, in the course of defending a version of psychologism in logic, has stressed the point that, because reasoning in such situations is always carried out in the medium of natural language, philosophers and logicians must recognize that “the justification of good inference [is] necessarily based in the meaning of logical terms as they are used in natural language” (Lehan-Streisel 2012, 579). “[W]e need also to have some idea of how people come to know or to use logical terms and inferences. [R]egardless of what we believe the metaphysical underpinnings of logic to be, we can only explain the use of logic in natural language by studying this use” (Lehan-Streisel 2012, 580). She is right to highlight the importance of how ordinary people, untrained in any more sophisticated system of formal logic, make inferences based on the logical terms of their natural language (expressions representing logical concepts such as quantifiers and truth-functions). We’ve already seen during our brief journey that TFL exploits the charged (positive/negative) character of natural language formatives. It seems that our reasoning capacity rests on our innate linguistic capacity. It borrows from and exploits the oppositional nature of salient particles for the formal elements of logic. As Sommers has said, “the suggestion that we reason by cancellation of elements that have opposite signs is a plausible candidate for a theoretical description of the deductive process” (Sommers 1976a, 614). It “will illuminate the actual process of reasoning” (Sommers 1978, 42).
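The cancellation Sommers describes can be given a toy computational form. In his notation ‘every X is Y’ is written −X+Y, and a valid first-figure syllogism then reduces to adding the premises and cancelling the oppositely signed occurrences of the middle term. The representation and the `conclude` function below are an illustrative sketch of mine, not Sommers’ own algorithm, and bare cancellation does not by itself validate arbitrary premise sets; it only conveys the arithmetic flavor of the idea.

```python
# A toy illustration of "reasoning by cancellation of elements that have
# opposite signs". 'every X is Y' is rendered as [(-1, 'X'), (+1, 'Y')].
from collections import Counter

def conclude(premises):
    """Pool the signed terms of all premises, cancel each +T against a -T,
    and return whatever terms survive (the candidate conclusion)."""
    pool = Counter()
    for premise in premises:
        for sign, term in premise:
            pool[(sign, term)] += 1
    for sign, term in list(pool):
        while pool[(sign, term)] > 0 and pool[(-sign, term)] > 0:
            pool[(sign, term)] -= 1
            pool[(-sign, term)] -= 1
    return sorted((term, sign) for (sign, term), n in pool.items() if n > 0)

# 'every snake is a reptile' (-S+R) plus 'every reptile is cold-blooded' (-R+C):
# the middle term R cancels, leaving -S+C, 'every snake is cold-blooded'.
print(conclude([[(-1, 'S'), (+1, 'R')], [(-1, 'R'), (+1, 'C')]]))
```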
This would mean that rationality is an emergent property of natural language. A “cognitively veridical logic” (an “empirically informed logic” in Lehan-Streisel’s words, 2012, 584) that recognizes this could be empirically tested. This conjecture can be more fully formulated as:

Sommers’ Empirical Conjecture


If untutored humans, including pre-adolescent children, are able to intuitively (and for the most part unconsciously) reckon deductively using only their native language, then:
1. There must be a relatively simple, usable algorithm for deductive reasoning.
2. Such an algorithmic ability must be either innate or exceptionally simple to acquire.
3. It must somehow be embedded in or closely dependent on certain natural language features.
4. It should be possible for scientists (e.g., cognitive psychologists) to investigate this phenomenon empirically in order eventually to confirm or disconfirm the conjecture.
5. The +/− reckoning that characterizes TFL seems a suitable first candidate for such a cognitively veridical logic.

It’s now time, as we proceed to our next stop, to let Sommers once again speak for himself.

9 Ratiocination: An Empirical Account∗ by Fred Sommers

Abstract: Modern thinkers regard logic as a purely formal discipline like number theory, and not to be confused with any empirical discipline such as cognitive psychology, which may seek to characterize how people actually reason. Opposed to this is the traditional view that even a formal logic can be cognitively veridical – descriptive of procedures people actually follow in arriving at their deductive judgments (logic as Laws of Thought). In a cognitively veridical logic, any formal proof that a deductive judgment, intuitively arrived at, is valid should ideally conform to the method the reasoning subject has used to arrive at that judgment. More specifically, it should reveal the actual reckoning process that the reasoning subject more or less consciously carries out when they make a deductive inference. That the common logical words used in everyday reasoning – words such as ‘and’, ‘if’, ‘some’, ‘is’, ‘not’ and ‘all’ – have fixed positive and negative charges has escaped the notice of modern logic. Here we show how, by unconsciously recognizing ‘not’ and ‘all’ as ‘minus-words’, while recognizing ‘and’, ‘some’ and ‘is’ as ‘plus-words’, a child can intuitively reckon, for example, ‘not (−) all (−) dogs are (+) friendly’ as equivalent to ‘some (+) dogs aren’t (−) friendly’: −(−D+F) = +D−F.

9.1 Introduction

As soon as one learns that all colts are horses, one is in no doubt that anyone riding a colt is riding a horse. And when, after a week into the baseball season, we read that no major league team has won all of its games, we automatically reckon that every team has lost some game. Having just been informed that all reptiles are cold-blooded, the child who already knows that snakes are reptiles is in a position to conclude that snakes are cold-blooded. These are commonplace examples of the kind of deductive judgments people intuitively make many times a day, deriving conclusions from premises, spotting inconsistencies, noticing that one sentence is logically equivalent to

|| ∗ This chapter first appeared in Ratio, 21 (2008) pp. 115-133, and appears here with the permission of Fred Sommers, David Oderberg, editor of Ratio, and John Wiley & Sons.


another. But as with most things we do fairly well (walking, singing on key, etc.) we don’t know much about how we do it. It is reasonable to expect that when neuroscience emerges from its infancy we will get discriminate information about what happens in the brain as we move from ‘every colt is a horse’ to ‘everyone who rides a colt rides a horse’ or when we judge that −(−x+y) = +x−y, or judge that ‘not all dogs are friendly’ “says the same thing” as ‘some dogs aren’t friendly’. Meantime we properly make do with less basic characterizations of ratiocination. For example, in arithmetic we explain that we get from −(−x+y) to +x−y by distributing the external minus sign of ‘−(−x+y)’ inward, changing ‘−x’ to ‘+x’ and ‘+y’ to ‘−y’. Rules like this are learned and inculcated and people consciously apply them in their mathematical reckonings. The same kind of explanation, however, is not available for deductive judgments involving ordinary English sentences. Most people are untutored in logic and in any case we are not, as children, given rules to use in getting from something like ‘all snakes are reptiles’ to ‘whoever feeds a snake feeds a reptile’. Ask a student of beginners’ algebra why she finds ‘−(−x+y)’ equal to ‘+x−y’ and she’ll tell you about the rule she uses to get from the left expression to the right. Ask the same student why she finds it obvious that ‘not every dog is friendly’ “says the same thing” as ‘some dogs aren’t friendly’ and she says ‘it just is obvious’. The phenomenon of discursive ratiocination thus challenges us to give empirically correct accounts of our intuitive, i.e. unconscious, mode of reckoning. For we do not find a veridical account of how we reckon by looking in textbooks of logic.
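The distribution rule just described is mechanical enough to be stated in a couple of lines of code. The sketch below is mine; representing a signed expression as a list of (sign, symbol) pairs is an assumed convention, not anything in the text.

```python
# An external minus sign distributes inward, flipping the sign of every
# term inside the parentheses: -(-x + y) becomes +x - y.
def distribute(outer_sign, terms):
    """Apply an outer +1/-1 sign to a list of (sign, symbol) pairs."""
    return [(outer_sign * sign, symbol) for sign, symbol in terms]

inner = [(-1, 'x'), (+1, 'y')]   # (-x + y)
print(distribute(-1, inner))     # -(-x + y): prints [(1, 'x'), (-1, 'y')], i.e. +x - y
```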

9.2 Unnatural Argumentation

The sentences that figure in our everyday ratiocinations contain no bound variables, and no hypothetical or conjunctive clauses. But a textbook proof that (1) ‘not all dogs are friendly’ is logically equivalent to (2) ‘some dogs aren’t friendly’ starts out by “translating” (1) and (2) into the standard quantifier-variable notation, regimenting them as

(1*) not: for every x, if x is a dog, then x is friendly.
(2*) for some x, x is a dog and not: x is friendly.


and proceeds, by applying rules of propositional and predicate logic, to show in about six steps that (1*) and (2*) are equivalent.1 The proof does formally justify our intuitive judgment that (1) and (2) are equivalent. It is, however, not intended to be descriptive of how anyone actually reasons, and no one who reads the proof ever reacts to it by saying ‘Aha!, so that’s why I find it so obvious that saying “not all dogs are friendly” is like saying “some dogs aren’t friendly”.’ The same applies to other intuitive deductive judgments. Most teenagers who come across the sentence ‘all colts are horses but there is someone who is riding a colt and not riding a horse’ would immediately judge it to be inconsistent and false. What would account for their judgment? Here again standard logic casts no light. Taking about twelve to fifteen carefully chosen steps to derive a contradiction of the form ‘p and not p’ or ‘some X is not an X’, a logic text will present a formal proof that

(3) For every x, if x is a colt then x is a horse.

and

(4) For some x there is a y such that y is a colt and x rides y, and for every z, if z is a horse then x does not ride z.2

are jointly inconsistent. But this logician’s proof that

(5) Every colt is a horse.

and

(6) Someone riding a colt is not riding a horse.

|| 1 The proof (each way) takes 3 steps:
(1) ~(∀x)(Dx⊃Fx)
(2) (∃x)~(Dx⊃Fx)
(3) (∃x)(Dx&~Fx)
and back from (3) to (1).
2 In symbolic notation:
(3F) (∀x)(Cx⊃Hx)
(4F) (∃x)(∃y)((Cy&Rxy)&(∀z)(Hz⊃~Rxz)).


cannot both be true does not describe or explain, nor does it claim to describe or explain, how we intuitively and instantly arrive at that judgment in a fraction of a second. The proof justifies our intuitive judgment, but it still leaves us in want of an account of how anyone in real life instantly judges that the conjunction of (5) and (6) is ‘obviously inconsistent’. How do people, untutored in logic, instantly and correctly arrive at the subjective certainty that (5) and (6) are jointly inconsistent? This ratiocinative judgment cannot be a leap of faith: there has to be some definite procedure or algorithm that we all unconsciously apply in arriving at it. The procedure is clearly worth exposing. For it promises to provide a novel, and even formal justification of the judgment of inconsistency as well as a faithful description of how we actually reach that judgment. We already know the ratiocinative procedure can’t be anything like a textbook proof. For it is vastly more efficient. Unlike a textbook proof, natural ratiocination reaches a verdict of inconsistency almost on sight. Not only do we take no time to translate (5) and (6) into the idioms of modern predicate logic, but there seems to be neither time nor need to apply any of the methods we are taught in logic courses in order to derive a contradiction from the conjunction of (5) and (6). Plainly, something very much simpler is going on. Since the natural method of ratiocination is clearly more efficient than the conventional methods of modern predicate logic, I shall take some space to say why I think modern logicians, preoccupied with formal justifications of deductive reasoning, have not bothered to look for, expose, and explain how we actually do it.3 The general lack of interest in actual ratiocination is largely taken for granted and rarely arouses comment. 
But Yehoshua Bar-Hillel noted it with surprise and disapproval in 1969, asking, ‘How did it come to be that logic, which, at least in the views of some people 2300 years ago, was supposed to deal with argumentations in natural languages, has done a lot of important things, but not this?’4

|| 3 Logic used to be called the science of the ‘laws of thought’. Logicians no longer call it that, perhaps because they believe that it cannot be regarded as the science that characterizes our actual ratiocinations.
4 Bar-Hillel, 1969.
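One can also confirm the equivalence of (1*) and (2*) semantically rather than by derivation: over any finite domain, ~(∀x)(Dx⊃Fx) and (∃x)(Dx&~Fx) agree in every interpretation of D and F. The brute-force check below is my own illustration, not the textbook proof the footnote cites.

```python
# Exhaustively verify that ~(Ax)(Dx -> Fx) and (Ex)(Dx & ~Fx) have the same
# truth value under every interpretation of D and F on a 3-element domain.
from itertools import product

domain = range(3)
for D_bits, F_bits in product(product([False, True], repeat=3), repeat=2):
    D, F = dict(zip(domain, D_bits)), dict(zip(domain, F_bits))
    not_all = not all((not D[x]) or F[x] for x in domain)   # ~(Ax)(Dx -> Fx)
    some_not = any(D[x] and not F[x] for x in domain)       # (Ex)(Dx & ~Fx)
    assert not_all == some_not
print("equivalent in all 64 interpretations")
```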


9.3 Why the General Inattention to Natural Argumentation

Before there was quantification theory and modern predicate logic (MPL), there was Aristotelian (traditional) term logic. Traditional term logic (TTL) was popular. Its logical syntax was that of the natural language in which people think and reason; it did not translate sentences into the formulas of an artificial language. But its inference power was unacceptably limited. It was especially weak when it came to handling ‘multiply general sentences’, sentences that predicate relational terms of more than one subject. TTL could not, for example, produce a satisfactory formal proof that ‘some boy loves every girl’ entails ‘every girl is loved by some boy’ or that ‘every horse is an animal’ entails ‘every head of a horse is a head of an animal’. Mathematically minded logicians like Gottlob Frege, Charles Peirce and Bertrand Russell developed modern predicate logic (MPL) to deal with this and other inadequacies of traditional term logic. When the quantifier-variable language of MPL became ‘canonical’, MPL supplanted TTL and the teaching of logic was revolutionized. On the one hand we had in TTL a natural, variable-free logic that was unable to cope with a wide range of inferences. On the other hand we now have in MPL an inferentially powerful replacement whose logical language of quantifiers and bound variables is not the natural language in which we actually do our thinking.5 That the sentences of natural language lack a ‘proper’ logical syntax has counted heavily against them. Frege himself was only the first among many who came to regard the unreconstructed sentences of natural language with suspicion, characterizing natural language itself as logically misleading and a poor vehicle for logical reckoning. Alfred Tarski’s work on truth contributed to the suspicion that natural language was logically defective, and to the popularity of working with formalized, constructed languages. W. V.
Quine, who probably did more than anyone to consolidate the revolution in logic, calls attention to the superior inference powers of MPL while acknowledging that its artificial language imposes ‘Procrustean’ constraints: ‘[A]ll of austere science submits

|| 5 For a modern system of term logic – called “Term Functor Logic” (TFL) – that is inferentially as powerful as MPL, see Sommers (1982), especially chapter 9, which presents TFL as a formal rival to MPL. In the present paper I focus on actual ratiocination, advancing the thesis that term functor logic is the driving mechanism accounting for the way we arrive at our intuitive, everyday deductive judgments. See also Sommers and Englebretsen 2000.


pliantly to the Procrustean bed of predicate logic. Regimentation to fit it serves not only to facilitate logical inference, but to attest to conceptual clarity.’6 That sentences regimented in accordance with the canons of predicate logic have a conceptual clarity lacking in the variable-free originals is one of the most attractive features of the quantifier-variable notation. A canonical regimentation of a sentence like ‘some ceilings are leaking’ is truth-conditionally explicit. It tells you exactly what the world must be like for that sentence to be true: there must be a thing x such that x is a ceiling and x is leaking. The ontological explicitness of the quantifier/bound-variable syntax is an undoubted, if somewhat prolix, virtue; nevertheless it is not the syntax of the sentences that figure in our deductive reasoning.7 Nor has it been demonstrated that using a logical language that is not the one we actually use in our everyday deductive thinking facilitates logical inference. Michael Dummett attributes Frege’s ‘disrespect for natural language’ to his ‘discovery of quantification’,8 a discovery that gave modern logic much more inference power than traditional Aristotelian logic. Even a simple variable-free sentence like ‘all colts are horses’ must be regimented and recast as a quantified conditional of form ‘if x is F, then x is G’, before we can hope to show that it entails ‘anyone who rides a colt rides a horse.’ Inference power was the prize but canonical translation was the price. And it was willingly paid. For, as Dummett says, ‘modern logic stands in contrast to all the great logical systems of the past ... in being able to give an account of sentences involving multiple generality, an account which depends upon the mechanism of quantifiers and bound variables.’9 This view of modern logic is widely accepted, and not only among logicians.
Noam Chomsky is keenly aware of the need to offer an empirical explanation for the extraordinary linguistic competence of native speakers, including their competence in the use of natural language to reason deductively. Since we routinely and competently reason even with multiply-general but variable-free sentences like ‘every rider of a colt is a rider of a horse’ and ‘some boy loves every girl’, we would expect Chomsky to reject the view that the quantifier/variable notation is essential to any comprehensive account of deductive reasoning. Instead, we find him suggesting that the sentences that figure in our

|| 6 Quine 1987, 158.
7 The virtues and vices of this kind of ontological clarity are thoroughly discussed in chapter 10 of Sommers 1982.
8 Dummett, 1981, 19ff.
9 Ibid., xxxi.


everyday reasoning may not be what they appear to be: ‘The familiar quantifier-variable notation would in some sense be more natural for humans than a variable-free notation for logic.’10 At one point he says that ‘[t]here is some empirical evidence that it [the brain] uses quantifier-variable rather than quantifier-free notation.’11 In the present early stage of its development, neuroscience is nowhere near the point of being able to make assertions about brain activity specifying the syntax of the sentences that figure in our deductive judgments. Nor is there any empirical basis for Chomsky’s astonishing assertion that a quantifier/bound-variable syntax is ‘more natural for humans’ than a variable-free syntax; that suggestion, like the one about the brain, merely attests to Chomsky’s unquestioning adherence to the conventional doctrine that the sentences we reason with are quantifier/variable in form – an all too typical example of doctrinal orthodoxy trumping common sense. If – as they plainly appear to be – the sentences that figure in our natural argumentations are variable-free, then natural argumentation must work without the aid of the quantifiers and the bound variables that modern logicians conventionally introduce in their proofs. The task, then, is to expose how everyday argumentation with the variable-free sentences of natural language actually works. Bar-Hillel asks why modern logic, which ‘has done a lot of [other] important things’, has neglected natural argumentation. Part of the answer has to do with ‘the important things’ Bar-Hillel has in mind, including the use of the quantifier/variable notation, which made possible the development of predicate logic as a calculus. Such achievements are routinely invoked in explaining to students of logic why the sentences of natural language must be canonically regimented to render them useful for logical reckoning. But regimentation into quantifier/variable notation is precisely what is not done in actual ratiocination. This being so, it is understandable that modern logic would have nothing to do with ratiocination or any other, more public, mode of reasoning with unregimented sentences of natural language. The low opinion of natural language, the dogmatic (and empirically implausible) assumption that quantifiers and bound variables are essentially involved in everyday deductive reasoning with multiply general sentences, and the consequent inattention to all forms of natural argumentation have helped to perpetuate the general ignorance of how logically untutored human beings reason naturally as well as they do. To divest oneself of these attitudes

|| 10 Chomsky 1980, 165.
11 Chomsky 1981, 34.


and views frees one to examine ratiocination in an empirical spirit, unconstrained by any assumption that the sentences in it have a form quite different from the form they appear to have, or the belief that the rules by which we move from premises to conclusions must be rather like the rules by which we construct proofs in predicate logic. Traveling light has well-known advantages. In the present instance, a fresh and unencumbered, empirical enquiry into the nature of ratiocination is rewarded by the pleasing discovery of a mode of reckoning with the variable-free sentences of natural language that can be formalized as a pure logical system12 but that gives every indication of being in actual use by the ratiocinating subject.

9.4 How We Do It

Any satisfactory report of what is going on in a ratiocination must take into consideration the fact that we intuitively reason with variable-free sentences of our native language, that we do so with extraordinary celerity, efficiency and confidence, and that, mostly, our reasoning is valid. In looking for a psychologically veridical and logically sound account of how we do these things, we learn a lot about logic itself. Logicians traditionally distinguish between the words of a sentence that determine its logical form and the words which carry its matter. The logical form of ‘not all dogs are friendly’ is ‘not all X are Y’, whose formative elements are ‘not’, ‘all’ and ‘are’. The form of the logically equivalent sentence, ‘some dogs aren’t friendly’, is ‘some X aren’t Y’; its formative elements are the words ‘some’ and ‘aren’t’. In both sentences, the material elements are the terms ‘dogs’ and ‘friendly’. In the quantifier/variable language of predicate logic, the ‘canonical’ form of ‘not all dogs are friendly’ is ‘not: for every x, if x is ϕ then x is ψ’; the form of the logically equivalent sentence ‘some dogs aren’t friendly’ is ‘there is an x such that x is ϕ and not: x is ψ’. In symbolic notation, the formatives involved are ‘~’, ‘∀’, ‘∃’, ‘⊃’, ‘&’, and the bound variable ‘x’. The material elements in both sentences are the terms ‘dogs’ and ‘friendly’. In a natural ratiocination we reckon with terms and formatives in ways that allow us to transform a sentence of one form into a logically equivalent sentence of a different form, or we may reckon with the terms and the formatives of several sentences taken as premises to a conclusion that follows from them. The

|| 12 Sommers 1982.


logic of ratiocination is a logic of terms and natural formative elements.13 By contrast, predicate logic is a logic of predicates, quantifiers, bound variables, and propositional connectives.

9.4.1 The charged character of the natural formatives

Consider again the difference between a mathematical ratiocination like the one that takes us from ‘−(−x+y)’ to ‘+x−y’, and a piece of discursive ratiocination such as the move we make transforming ‘not all dogs are friendly’ into ‘some dogs aren’t friendly’. Although only the first move is effected by the conscious application of a rule that has been learned, the ratiocinative move that gets us from one sentence to the other is strikingly similar. Just as the algebraic transformation is effected by distributing the external minus sign of ‘−(−x+y)’ inside, changing ‘−x’ to ‘+x’ and ‘+y’ to ‘−y’, so the discursive transformation is effected by distributing the external ‘not’ of ‘not all dogs are friendly’ inside, changing ‘all dogs’ to ‘some dogs’ and ‘are friendly’ to ‘aren’t friendly’. This suggests that the discursive ratiocination is also algebraic. In searching for clues to how people intuitively reason, I focused on the natural formative words we use in everyday deductive reasoning – words such as ‘non’, ‘some’, ‘every’, ‘and’, and ‘if’. Studying hundreds of unregimented sentences that figured in natural argumentation, I discovered that the natural formatives we commonly reckon with are positively or negatively charged and that we reckon with their charges as we reckon with the opposing ‘+’ and ‘−’ functors of elementary algebra. Some logical words function and are treated by us as positive or ‘plus’ functors; others function and are treated by us as negative or ‘minus’ functors. The following is a partial but representative list of logical words and particles that specifies their distinct oppositional characters.

‘some’ (‘a’), ‘is’ (‘was’, ‘are’, etc.), and ‘and’ are positive formatives.
‘every’ (‘all’, …), ‘not’ (‘no’, ‘non-’, …), and ‘aren’t’ are negative formatives.

|| 13 The general form of a sentence taking part in a ratiocination is ‘yes/not: some/every ϕ/non-ϕ is/isn’t ψ/non-ψ’. A term may be a compound, formed from one or more terms and a formative (e.g. ‘non-citizen’, ‘farmer and citizen’), or relational, consisting of a relational term and one or more non-relational terms and formatives (e.g. ‘rides a colt’, ‘sold some pesticides to every farmer’).

156 | Ratiocination: An Empirical Account by Fred Sommers

This list can be enlarged to include copulas like ‘were’, ‘will be’, etc., negative particles like ‘-less’ (as in ‘colorless’) and ‘un-’ (as in ‘unwise’), and the propositional connective ‘if’ (which, unlike ‘and’, is negative) (see sec. 7.2 below).

In algebra proper, we reckon quite consciously with only the two positive and negative operators in the ways we have been taught to do. In reasoning with sentences of natural language, we unconsciously reckon with the positive or negative charges of logical particles like ‘some’, ‘all’, ‘non’, ‘are’, and ‘aren’t’. In both modes of reasoning there are, for purposes of deductive reckoning, only two oppositional reckoning operators, though this is obscured in the case of discursive reasoning, where the logical formatives we reckon with give the appearance of playing a variety of logistical roles.

In answer to the question of how we intuitively reason deductively with sentences of our native language, I propose the empirical hypothesis that in our everyday deductive reckonings, we focus on and reckon with the charges of formative elements of the kind represented in the above list.14 I shall call this the cognitive hypothesis. If the hypothesis is correct, we intuitively judge two sentences of the form ‘some X is Y’ and ‘some Y is X’ to be equivalent because we read both ‘some’ and ‘are’ as plus-words. In effect, our judgment that ‘some women are senators’ says what ‘some senators are women’ says is essentially a reckoning that +W+S = +S+W. In the same way, a sound-minded child reckons with ‘not’ and ‘all’ as negative formatives and ‘are’ as a positive formative, reading ‘not all dogs are friendly’ thus:

Not: (all Dogs are Friendly)
 −   (− Dogs + Friendly)

Coming upon ‘not all dogs are friendly’, the child may instantly reckon it equivalent to ‘some dogs aren’t friendly’:

−(−Dogs + Friendly) ≡ +Dogs − Friendly
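The distribution just illustrated can be put algorithmically. The following is a minimal sketch of my own (not Sommers’s notation), assuming a transcribed categorical sentence is represented as a pair of signed terms, with ‘+’ marking ‘some’/‘is’ and ‘−’ marking ‘every’/‘isn’t’; an external ‘not’ flips both signs, just as an algebraic minus sign distributes over a parenthesis.

```python
# Hypothetical sketch of +/- reckoning: a transcribed sentence is a pair
# ((quantity, subject), (quality, predicate)), where '+' stands for
# 'some'/'is' and '-' for 'every'/"isn't".

FLIP = {'+': '-', '-': '+'}

def distribute_not(sentence):
    """Distribute an external 'not' inside, flipping both signs."""
    (q, subj), (c, pred) = sentence
    return ((FLIP[q], subj), (FLIP[c], pred))

# 'not (all dogs are friendly)': the inner sentence is -Dogs + Friendly
inner = (('-', 'Dogs'), ('+', 'Friendly'))
result = distribute_not(inner)
# result: (('+', 'Dogs'), ('-', 'Friendly')), i.e. "some dogs aren't friendly"
```

Applying `distribute_not` twice returns the original sentence, mirroring the fact that double negation cancels in the +/− algebra.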

|| 14 Formative words like ‘all’, ‘some’, ‘not’ and ‘and’ are natural logical constants. That each such constant has either a plus character or a minus character is a metalogical truth. Thus ‘some A is B ≡ some B is A’, which we transcribe as ‘+A+B ≡ +B+A’, is an axiom of logic (the Logical Law of Commutation) in just the way that ‘a+b = b+a’ is an axiom of mathematics. And just as ‘−(a+b) = −a−b’ is a theorem of algebra, so ‘No A are B ≡ all A aren’t B’ is a truth of logic represented in TFL as ‘−(A+B) ≡ −A−B’.


The same child will reckon ‘not a creature was stirring’ equivalent to ‘every creature wasn’t stirring’:

−(+Creature + Stirring) ≡ −Creature − Stirring

And when, having learned that all snakes are reptiles, a child learns that all reptiles are cold-blooded, he may add the two propositions and conclude that all snakes are cold-blooded:

[−Snakes + Reptiles] + [−Reptiles + Cold-blooded] ⟹ −Snakes + Cold-blooded

9.4.2 What This Suggests About Our Cognitive Development

Children reason deductively well before they learn any school algebra. If the +/− account of deductive reasoning is psychologically veridical, it throws important light on early cognitive development. After a child achieves some command of her native language, we must attribute to her an operational sensitivity to the specific oppositional values of the key logical formatives. She begins to reckon with ‘no’ as if it were a minus-sign, with ‘and’ as a plus-sign, with ‘some’ as a plus-sign, with ‘every’ as a minus-sign, with ‘is’ as a plus-sign, and so on. Little of this is conscious: she may be aware that ‘no’ and ‘not’ are negative elements and vaguely aware that ‘and’ and ‘is’ are plus-like, but her logical aptness is more a practical knowing how than a theoretical knowing that.

Coming upon the sentence ‘some dogs are unfriendly’ she will mentally represent it as ‘+dogs+(−friendly)’ and then reckon it equivalent to ‘+dogs−friendly’, in effect inferring ‘some dogs aren’t friendly’ from ‘some dogs are unfriendly’. Coming upon ‘not all dogs are friendly’ she will mentally reckon with it as ‘−(−dogs+friendly)’, which she may then find equivalent to ‘+dogs−friendly’, or ‘some dogs aren’t friendly’. None of this know-how is an application of conscious knowledge. For example, at no point in her intellectual development is the child aware that she is reckoning with ‘some’ and ‘every’ as formatives opposed to each other as ‘+’ and ‘−’, a lack of awareness that continues into adulthood (even after she becomes a teacher of logic).

The general ability to intuitively make deductive judgments by exploiting the charged character of the natural logical constants seems to me to be an endowment of our intelligent species and not something each human child learns anew. That young children are endowed with this Meno-like know-how would help to explain why – as Plato famously noted – they later find elements of geometry and algebra already familiar when they actually come to learn them.

9.5 Pace Frege and Pace Chomsky

That the natural formative elements are charged in a +/− way is an important metalogical truth that has escaped the notice of conventional logicians.15 That everyday ratiocination proceeds by unconsciously taking advantage of the charged character of the formatives is an empirical hypothesis in cognitive psychology. If it is correct, ratiocination abstracts from the distinctive meanings of logical formatives such as ‘not’, ‘all’ and ‘if’, reckoning with all three as minus-words. Similarly, it abstracts from the distinctive meanings of ‘and’, ‘some’ and ‘is’, reckoning with all three as plus-words. In effect, for purposes of reckoning, the number of logical formatives is reduced to two, thereby rendering actual ratiocination remarkably simple and efficient.

What would human beings have been like if the species had not hit on the method of deductive reasoning that exploits the charged character of the logical formatives? Suppose instead that people reckoned as conventional logicians do, by scrupulously giving negative formative elements like ‘all’, ‘not’ and ‘if’ distinctive roles to play, and doing the same with positive formatives like ‘some’ and ‘and’. Deductive reasoning would have been far too cumbersome and time-consuming to have evolved as a natural human activity for children and adults, and we would probably not have become the rational animals we are.16

As matters stand, intuitively judging ‘not all dogs are friendly’ as algebraically equivalent to ‘some dogs aren’t friendly’ by reckoning that

−(−Dog+Friendly) = +Dog−Friendly

is child’s play, and eight-year-old children unconsciously do it all the time. By contrast, reckoning that ‘Not: for every x, if x is a dog then x is friendly’ says the

|| 15 See Sommers (1970), Sommers (1982), and Sommers (1990). 16 In conventional modern logic, the inference from ‘not all dogs are friendly’ to ‘some dogs aren’t friendly’ involves five formatives: ‘~’, ‘∀x’, ‘∃x’, ‘⊃’, and ‘&’. In older term logic, the formatives are ‘non’, ‘all’, ‘are’, ‘some’ and ‘aren’t’; in the +/− system of term functor logic, we reckon with only two.


same thing as ‘There is an x such that x is a dog and not: x is friendly’ takes a bit of doing even for an adult.

The logic that, I propose, is actually at work in ratiocination I call Term Functor Logic (TFL).17 I think that a revival of classical term logic in the form of Term Functor Logic would restore the subject of logic to its proper classical status as the discipline concerned with the way rational beings reason when they reason correctly (logic as the study of the ‘Laws of Thought’).

9.6 The Virtues of Term Functor Logic

Both MPL and TFL have adequate inference power.18 But inference power is only one factor to consider when judging the merits of a logical system. Two other, less noted, factors are:

(1) deductive efficiency: measured in terms of speed and brevity, the proof procedures of TFL are very much more efficient than those of MPL;
(2) naturalness: measured by how close the syntax of its logical language is to the syntax of the sentences of the language in which we actually reason, the sentences of the +/− logical language of TFL are natural, whereas the sentences of the quantifier-variable logical language of MPL are artificial.

When modern predicate logic replaced traditional term logic in the universities, the subject of logic was immediately rendered too difficult to be taught before college age, and the older, more natural, term logic lost its place in the lower school curriculum. It has not been replaced, and the educational loss is inestimable. Even young children are quite capable of learning logic at an early age, but the logic they learn must comport with the way they themselves reason. Teaching logic in the lower schools by teaching the +/− version of term logic – inferentially powerful, natural, and very teachable to high school students – is the logical thing to do.

|| 17 For a lucid informal characterization of Term Functor Logic, see George Englebretsen’s preface to Sommers and Englebretsen (2000). An example of a logical principle in TFL is the law of equivalence: two propositions are logically equivalent if and only if (i) they are algebraically equal and (ii) they have the same logical quantity (both being universal or both being particular). For example, ‘some billionaires are honest’ [+(+B+H)] and ‘all non-billionaires are honest’ [+(−(−B)+H)], though algebraically equal, are not logically equivalent. 18 For some discrepancies see Sommers 1990.


9.7 An Anticipated Confirmation

If it is correct that in everyday reasoning people unconsciously exploit the charged character of the logical formatives, then in the not too distant future (neuroscience having sufficiently advanced) we should expect to find that the same brain process that is responsible for our judgment that −(−x+y) = +x−y is responsible for our judgment that ‘not every X is Y’ ≡ ‘some X isn’t Y’. By becoming aware of the charged character of the natural formatives, we are on the way to appreciating why people who have never studied a page of formal logic can intuitively reckon with English sentences with the same celerity and sureness with which fourteen-year-olds reckon with elementary algebraic expressions. Here, at random, are two typical ways we treat common logical forms:

(1) ‘No X is Y’ is the vernacular abbreviation of ‘not: some X is Y’, and we so transcribe these equivalent forms: −(X+Y) ≡ −(+X+Y). Thus, ‘no creature was stirring’ is equivalent to ‘not a creature was stirring’.

(2) Relational ‘multi-general’ sentences are naturally represented in the term calculus and easily reckoned with. For example, TFL transcribes the form ‘every X is R to some Y’ as ‘−X+(R+Y)’. Thus ‘no team has won every game’ transcribes as ‘−(T+(W−G))’ and we reckon it equivalent to ‘every team has failed to win some game’: −(T+(W−G)) ≡ −T−(W−G) ≡ −T+((−W)+G). (Note that TFL doesn’t show that ‘no team has won every game’ is equivalent to ‘every team has lost some game’: when a game is called off because of darkness, the teams merely ‘fail to win’.)

9.7.1 Regimentation in TFL

Sentences that figure in our ratiocinations often contain logical words that do not belong on a list of oppositional formatives. An example is the word ‘only’, which is neither a ‘plus-word’ nor a ‘minus-word’. The form ‘only X is Y’ is a vernacular abbreviation of ‘no non-X is Y’; in reckoning with a sentence like ‘only citizens are voters’ we must first paraphrase it as ‘no non-citizens are voters’, a sentence that can be transcribed and reckoned with. Regimented as ‘no non-citizens are voters’, we may proceed to show that ‘only citizens are voters’ is


equivalent to ‘all non-citizens are non-voters’ and to ‘all voters are citizens’: −((−C)+V) ≡ −(−C)+(−V) ≡ −V+C. According to this account, we arrive at the judgment that ‘only citizens are voters’ is equivalent to ‘all voters are citizens’ only after having first paraphrased ‘only citizens are voters’ as ‘no non-citizens are voters’. This deductive judgment should therefore take slightly more time than a more direct judgment of equivalence like the one between ‘not all dogs are friendly’ and ‘some dogs aren’t friendly’ (where no paraphrase is necessary). A prediction like this may be testable, and it suggests a general project of devising various cognitive experiments to test the empirical hypothesis comprehensively.
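The equivalence the regimentation delivers can also be checked model-theoretically. Here is a small verification sketch of my own (not from the text): over every pair of subsets C, V of a tiny universe, ‘no non-C is V’ holds in exactly the models in which ‘all V are C’ holds.

```python
from itertools import chain, combinations

universe = {1, 2, 3}

def subsets(s):
    """All subsets of s, as sets."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# 'no non-C is V': nothing outside C is a V, i.e. (universe - C) & V is empty.
# 'all V are C': V is a subset of C.
agree = all(
    (not ((universe - C) & V)) == (V <= C)
    for C in subsets(universe)
    for V in subsets(universe)
)
# agree is True: the two regimented forms are satisfied in exactly the same models
```

A three-element universe suffices here only as an illustration; the equivalence itself is the set-theoretic triviality that V ∩ (complement of C) is empty just in case V ⊆ C.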

9.7.2 Inferences from two or more premises

The conjunctive particle ‘and’ is a plus-word. In a ratiocination that involves more than one premise, a valid inference is made by adding the premises and canceling their middle terms, leaving the extremes for the conclusion. Any classically syllogistic argument has n monadic propositions (n−1 premises and the conclusion) and n recurrent terms. A standard syllogism has 3 propositions and 3 recurrent terms. A hoary textbook example is:

A1
        All humans are mortal          −H+M
        All Greeks are human           −G+H
Hence   All Greeks are mortal          −G+M

Here is another typical example of a valid syllogism:

A2
        Every farmer is a citizen      −F+C
        Some farmer is a philosopher   +F+P
Hence   Some citizen is a philosopher  +C+P

Both A1 and A2 are valid syllogisms. Note that in both, the sum of the transcribed premises is equal to the conclusion. (A syllogism is valid if and only if it satisfies the following two conditions: (i) its conclusion is equal to the sum of its premises, and (ii) either it contains only universal sentences (e.g., A1) or it contains exactly two particular propositions, one being the conclusion (e.g., A2).)19

|| 19 For explanations of this and other principles of TFL, see Sommers and Englebretsen 2000, especially chapters 3, 4, and 5.
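The two validity conditions can be made mechanical. What follows is a sketch of my own encoding (the representation and helper names are illustrative assumptions, not Sommers’s): each proposition is a pair of a quantity sign (‘+’ for particular, ‘-’ for universal) and a term-to-charge map; condition (i) is checked by summing charges so that middle terms cancel, condition (ii) by counting the particular propositions.

```python
from collections import Counter

def charge_sum(props):
    """Net charge of each term after adding propositions; zeroed terms cancel."""
    total = Counter()
    for _quantity, terms in props:
        total.update(terms)
    return {t: v for t, v in total.items() if v != 0}

def valid_syllogism(premises, conclusion):
    """(i) the conclusion equals the sum of the premises (middles cancel);
    (ii) all propositions are universal, or exactly two are particular,
    one of them being the conclusion."""
    algebra_ok = charge_sum(premises) == charge_sum([conclusion])
    particulars = [p for p in premises + [conclusion] if p[0] == '+']
    quantity_ok = not particulars or (len(particulars) == 2 and conclusion in particulars)
    return algebra_ok and quantity_ok

# A1: -H+M, -G+H, hence -G+M (all universal)
a1 = valid_syllogism([('-', {'H': -1, 'M': 1}), ('-', {'G': -1, 'H': 1})],
                     ('-', {'G': -1, 'M': 1}))
# A2: -F+C, +F+P, hence +C+P (two particulars, one the conclusion)
a2 = valid_syllogism([('-', {'F': -1, 'C': 1}), ('+', {'F': 1, 'P': 1})],
                     ('+', {'C': 1, 'P': 1}))
```

Run against A1 and A2 above, both come out valid; an undistributed-middle fallacy such as −H+M, −G+M ∴ −G+H fails condition (i), since the M terms do not cancel.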


The empirical hypothesis also illuminates propositional reasoning. ‘Not’ is a minus-word and ‘and’ is a plus-word, so ‘not (p and not-q)’ transcribes as ‘−(p+(−q))’. ‘If p then q’ is defined as equivalent to ‘not both p and not-q’. This fixes ‘if’ as a minus-word and ‘then’ as a plus-word:

If p then q ≡def. not: p and not-q
−p+q ≡def. −(p+(−q))

Basic inference patterns like modus ponens, modus tollens and the hypothetical syllogism are +/− transparent:

Modus Ponens        Modus Tollens       Hypothetical Syllogism
−p+q                −p+q                −p+q
p                   −q                  −q+r
∴ q                 ∴ −p                ∴ −p+r
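The reading of ‘if’ as a minus-word can be checked against ordinary truth tables. A brief sketch of my own (illustrative, not from the text): ‘not (p and not q)’, the defining form behind ‘−p+q’, agrees with material implication in every valuation, and whenever it holds together with p, q holds, which is just modus ponens.

```python
from itertools import product

def tfl_if(p, q):
    """'if p then q' read as not (p and not q), per the -p+q transcription."""
    return not (p and not q)

# The definition matches material implication in all four valuations...
definition_ok = all(tfl_if(p, q) == ((not p) or q)
                    for p, q in product((False, True), repeat=2))

# ...and modus ponens is truth-preserving: wherever -p+q and p hold, q holds.
modus_ponens_ok = all(q for p, q in product((False, True), repeat=2)
                      if tfl_if(p, q) and p)
```

The same brute-force check extends to modus tollens and the hypothetical syllogism, since each involves only two or three propositional letters.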

9.7.3 The colt/horse inference

We have yet to see how the empirical hypothesis illuminates a ratiocination like the one causing instant awareness that ‘someone who is riding a colt isn’t riding a horse’ can’t possibly be true, since every colt is a horse. We do intuitively and instantly judge that (A) ‘Every colt is a horse’ and (B) ‘Some rider of a colt is not a rider of a horse’ are jointly inconsistent. What makes us think so? This ratiocinative judgment is explained by the glaring self-contradiction that instantly results from conjoining (B) to (A): viz., that some rider of a horse isn’t a rider of a horse:

(A) Every colt is a horse                           −C+H
(B) Some rider of a colt isn’t a rider of a horse   +(R+C)−(R+H)
_________________________________________________
(C) Some rider of a horse isn’t a rider of a horse  +(R+H)−(R+H)

Of course, most people just learn that (A) is true; they don’t come across a conjunction of (A) and (B). To avoid self-contradiction, a rational person who accepts (A) will deny any proposition of the form ‘someone who Rs a colt doesn’t R a horse’. To deny (B) [⟹ −(+(R+C)−(R+H))] is tantamount to affirming ‘every rider of a colt is a rider of a horse’ [⟹ −(R+C)+(R+H)]. That is why a ratiocinating subject who has accepted (A) as a premise will move to the conclusion that every rider of a colt is a rider of a horse.

An alternative account of how ratiocination gets to ‘every rider of a colt is a rider of a horse’ from ‘every colt is a horse’ assumes that we make use of the tautological premise (T), to which we add (A):

(T) Everyone riding a colt is riding a colt     −(R+C)+(R+C)
(A) Every colt is a horse                       −C+H
(R) Everyone riding a colt is riding a horse    −(R+C)+(R+H)

Which of these two routes an average teenager takes in inferring ‘everyone riding a colt is riding a horse’ from ‘every colt is a horse’ is for cognitive psychologists to determine. Either way gets the teenager from premise to conclusion very quickly (though the first does so indirectly). And either way accomplishes two things at once: (1) It tells a story that explains why someone who knows that every colt is a horse intuitively concludes that anyone riding a colt is riding a horse. And (2) it formally validates the intuition, by providing a proof in TFL that ‘every rider of a colt is a rider of a horse’ is true if ‘every colt is a horse’ is true.

9.8 Conclusion: Learning about Logic by Attending to Ratiocination

In the effort to explain what goes on in a natural argumentation such as a ratiocination (where logical reckoning is private and moves with the speed of thought), we discover that all the basic natural logical constants have oppositional charges, a metalogical fact that makes it possible even for children to reckon intuitively and efficiently with sentences of their native language in much the way they are later explicitly taught to reckon with the +/− functors of elementary algebra. Term Functor Logic is thus revealed as the natural logic of our everyday ratiocinations. It is how our deductive intuitions work.

Of course, intuition is not infallible; we often reckon wrongly. One may intuitively judge ‘some males aren’t senators’ [+M−S] to be equivalent to ‘some senators aren’t males’ [+S−M]; the algebraic inequality of the two transcriptions reveals that they are not equivalent.


Although I firmly believe that cognitive science will confirm the claim that deductive ratiocination treats the natural formatives as oppositionally charged operators, I do not claim that this is anything more than an empirical hypothesis that is very probably true. It is conceivable that a developed neuroscience might someday show it to be wrong or misleading, directing us to a quite different account of ratiocination, of which, in the present state of our understanding of the brain, we can have no conception. Meantime, the account on offer is clear, it is comprehensive (covering the variegated range of common deductive judgments), it explains the celerity of our everyday reasoning, and it is, for now at least, without serious rivals.

10 Back to Logic Lodge, Base Camp

Desist your logification right now ...!
David Mitchell

Aristotle was the first to systematically investigate the principles that govern how we reason deductively. For the next several centuries, other logicians attempted to modify or extend that system because they recognized certain of its limitations. Eventually, Frege scrapped the system altogether and formulated a completely different one. The old logic was quickly abandoned for the new because the latter was seen to be so much more powerful than the former and, in effect, incorporated the old as a proper part.

Here’s what Fred Sommers did. Less than a century after Frege’s remarkable innovation, in a period when the modern predicate logic Frege had built had become the standard and nearly universally accepted system of formal logic, Sommers demonstrated that the modifications and extensions of the old logic that so many had attempted could all be accomplished by the simple recognition of the charged character of the natural language particles that carry the burden of logical form. This insight allowed him to formulate a revamped version of the old logic (what Oderberg calls the “old new logic”) that has at least all the expressive and inferential power of the “official” system of formal logic but is nonetheless both more natural and simpler.

So now, with some help and guidance from Fred Sommers himself, we’ve come to the end of our brief trek over part of the Logic Trail. It’s a trail that wends its way up hills and down into valleys, over peaks and across bogs. The country we’ve explored is vast, far more extensive than Aristotle, or even Frege, realized. And though we have traveled a long way through this country and have seen a number of interesting places, there is still much more to see and examine. Even on our own explorations we paused only briefly at our various stopping places. Each of them has further delights and puzzles to offer on future tours (and, of course, there are many other, often better, guides than I).

References

One must never miss an opportunity of quoting things by others which are always more interesting than those one thinks up oneself.
Proust

Ackrill, J.L., 1957. “Plato and the Copula: Sophist 251-9,” Journal of Hellenic Studies, 77: 1-6.
Ackrill, J.L. (ed, transl), 1963. Aristotle’s Categories and De Interpretatione, Oxford: Clarendon Press.
Alvarez, E. and M. Correia, 2012. “Syllogistic with Indefinite Terms,” History and Philosophy of Logic, 4: 297-306.
Aristotle, 1962. Aristotle: The Categories, On Interpretation, Prior Analytics, H.P. Cooke and H. Tredennick (transl), Cambridge, MA: Harvard University Press.
Aristotle, 1963. Aristotle’s Categories and De Interpretatione, J.L. Ackrill (ed, transl), Oxford: Clarendon Press.
Aristotle, 1989. Aristotle: Prior Analytics, R. Smith (transl), Indianapolis: Hackett.
Austin, J.L., 1950. “Truth,” Proceedings of the Aristotelian Society, Supp. 24.
Bar-Hillel, Y., 1969. “Formal and Natural Languages: A Symposium,” J.F. Staal (ed), Foundations of Language, 5: 256-284.
Beall, J.C. (ed), 2007. Revenge of the Liar: New Essays on the Paradox, Oxford: Oxford University Press.
Ben-Yami, H., 2004. Logic & Natural Language: On Plural Reference and Its Semantic and Logical Significance, Aldershot: Ashgate.
Ben-Yami, H., 2009. “Plural Quantification Logic: A Critical Appraisal,” The Review of Symbolic Logic, 2: 208-232.
Bickerton, D., 2009. Adam’s Tongue: How Humans Made Language, How Language Made Humans, NY: Hill and Wang.
Boethius, 1877-80. Commentarii in Librum Aristotelis, C. Meiser (ed), Vienna and Leipzig: Tempsky/Freitag.
Boole, G., 1952. The Laws of Thought, La Salle, IL: Open Court.
Boole, G., 1952a. Studies in Logic and Probability, La Salle, IL: Open Court.
Boolos, G., 1989. “To be is to be the value of a variable (or to be some value of some variables),” in Logic, Logic, and Logic, Cambridge, MA: Harvard University Press.
Böttner, M. and W. Thümmel (eds), 2000. Variable-free Semantics, Osnabrück: Secolo Verlag.
Braine, M., 1978. “On the Relation Between the Natural Logic of Reasoning and Standard Logic,” Psychological Review, 85: 1-21.
Brown, S.R., 2012. The Last Viking: The Life of Roald Amundsen, Cambridge, MA: Da Capo Press.
Cappelen, H. and J. Hawthorne, 2009. Relativism and Monadic Truth, Oxford: Oxford University Press.
Carnap, R., 1956. “Empiricism, Semantics, and Ontology,” Meaning and Necessity, Chicago: University of Chicago Press.
Carroll, Lewis, 1895. “What the Tortoise Said to Achilles,” Mind, n.s., 4: 278-280.
Cheng, P. and K. Holyoak, 1989. “On the Natural Selection of Reasoning Theories,” Cognition, 33: 285-313.
Chomsky, N., 1980. Rules and Representations, Oxford: Blackwell.


Chomsky, N., 1981. Lectures on Government and Binding, Dordrecht and Cinnaminson, NJ: Foris Publications.
Clark, D.S., 1983. “Negating the Subject,” Philosophical Studies, 43: 349-353.
Cooper, W.S., 2001. The Evolution of Reason: Logic as a Branch of Biology, Cambridge: Cambridge University Press.
Corcoran, J., 1974. “Aristotle’s Natural Deduction System,” Ancient Logic and its Modern Interpretations, J. Corcoran (ed), Dordrecht: Kluwer.
Corcoran, J. and S. Wood, 1980. “Boole’s Criteria for Validity and Invalidity,” Notre Dame Journal of Formal Logic, 21: 609-638.
Cosmides, L., 1989. “The Logic of Social Exchange: Has Natural Selection Shaped How Humans Reason? Studies with the Wason Selection Task,” Cognition, 31: 187-276.
Crain, S. and D. Khlentzos, 2008. “Is Logic Innate?” Biolinguistics, 2: 24-65.
Crane, T., 2014. Aspects of Psychologism, Cambridge, MA: Harvard University Press.
Crane, T., forthcoming. “Existence and Quantification Reconsidered,” Aristotelian Metaphysics, T. Tahko (ed), Cambridge: Cambridge University Press.
Davidson, D., 1980. Essays on Actions and Events, Oxford: Clarendon Press.
De Morgan, A., 1926. Formal Logic, La Salle, IL: Open Court.
De Morgan, A., 1966. “On the Syllogism, I-VI,” De Morgan: On the Syllogism and Other Writings, P. Heath (ed), London: Routledge & Kegan Paul.
Dipert, R., 1995. “Peirce’s Underestimated Place in the History of Logic: A Response to Quine,” Peirce and Contemporary Thought, K.L. Ketner (ed), NY: Fordham University Press, pp. 32-58.
Dummett, M., 1981. Frege: Philosophy of Language, Cambridge, MA: Harvard University Press.
Dutilh Novaes, C., 2008. “A Comparative Taxonomy of Medieval and Modern Approaches to Liar Sentences,” History and Philosophy of Logic, 29: 227-261.
Englebretsen, G., 1972. “Vacuousity,” Mind, 81: 273-275.
Englebretsen, G., 1976. “The Square of Opposition,” Notre Dame Journal of Formal Logic, 17: 531-541.
Englebretsen, G., 1980. “Singular Terms and the Syllogistic,” The New Scholasticism, 54: 68-74.
Englebretsen, G., 1980a. “Denotation and Reference,” Philosophical Studies (Ire.), 27: 229-236.
Englebretsen, G., 1981. “Do We Need Relative Identity?” Notre Dame Journal of Formal Logic, 23: 91-93.
Englebretsen, G., 1984. “Quadratum Auctum,” Logique et Analyse, 107: 309-325.
Englebretsen, G., 1985. “Defending Distribution,” Dialogos, 15: 157-159.
Englebretsen, G., 1985a. “Geach on Logical Syntax,” The New Scholasticism, 59: 177-184.
Englebretsen, G., 1985b. “On the Proper Treatment of Negative Names,” Journal of Critical Analysis, 8: 109-115.
Englebretsen, G., 1985c. “Negative Names,” Philosophia, 15: 133-136.
Englebretsen, G., 1986. “Singular/General,” Notre Dame Journal of Formal Logic, 27: 104-107.
Englebretsen, G., 1986a. “Czezowski on Wild Quantity,” Notre Dame Journal of Formal Logic, 27: 62-65.
Englebretsen, G., 1987. “Truth and Existence,” The New Syllogistic, G. Englebretsen (ed), New York: Peter Lang.
Englebretsen, G., 1987a. “Logical Polarity,” The New Syllogistic, G. Englebretsen (ed), New York: Peter Lang.


Englebretsen, G., 1987b. “Subjects,” Studia Leibnitiana, 19: 85-90.
Englebretsen, G., 1988. “A Note on Leibniz’s Wild Quantity Thesis,” Studia Leibnitiana, 20: 87-89.
Englebretsen, G., 1990. “A Note on Copulae and Qualifiers,” Linguistic Analysis, 20: 82-86.
Englebretsen, G., 1996. Something to Reckon With: The Logic of Terms, Ottawa: University of Ottawa Press.
Englebretsen, G., 2005. “Trees, Terms, and Truth: The Philosophy of Fred Sommers,” in Oderberg 2005, pp. 24-48.
Englebretsen, G., 2006. Bare Facts and Naked Truths: A New Correspondence Theory of Truth, Aldershot: Ashgate.
Englebretsen, G., 2008. “Winnowing Out the Liar...and Others,” The Reasoner, 2: 3-4.
Englebretsen, G., 2010. “How to Use a Valid Derivers License,” The Reasoner, 4: 54-55.
Englebretsen, G., 2010a. “Making Sense of Truth-Makers,” Topoi, 29: 147-151.
Englebretsen, G., 2012. Robust Reality: An Essay in Formal Ontology, Frankfurt: Ontos.
Englebretsen, G., forthcoming. “La Quadrature du Carré,” Soyons Logiques, F. Schang, A. Moktefi, and A. Moretti (eds), London: College Publications.
Englebretsen, G., forthcoming a. “What Did Carroll Think the Tortoise Said to Achilles?” The Carrollian.
Englebretsen, G. and C. Sayward, 2011. Philosophical Logic: An Introduction to Advanced Topics, London and N.Y.: Continuum.
Frederick, D., 2013. “Singular Terms, Predicates and the Spurious ‘Is’ of Identity,” Dialectica, 67: 325-343.
Frege, G., 1903. Grundgesetze der Arithmetik, vol. 2, Jena: Pohl.
Frege, G., 1953. Foundations of Arithmetic, 2nd ed., J.L. Austin (transl, ed), Oxford: Blackwell.
Frege, G., 1964. The Basic Laws of Arithmetic, M. Furth (transl, ed), Berkeley and Los Angeles: University of California Press.
Frege, G., 1967. “The Thought,” Philosophical Logic, P.F. Strawson (ed), Oxford: Oxford University Press.
Frege, G., 1970. “On Sense and Reference,” Translations from the Philosophical Writings of Gottlob Frege, 2nd edition, P. Geach and M. Black (eds), Oxford: Blackwell.
Frege, G., 1970a. “On Concept and Object,” Translations from the Philosophical Writings of Gottlob Frege, 2nd edition, P. Geach and M. Black (eds), Oxford: Blackwell.
Frege, G., 1970b. “Begriffsschrift,” Translations from the Philosophical Writings of Gottlob Frege, 2nd edition, P. Geach and M. Black (eds), Oxford: Blackwell.
Frege, G., 1979. Gottlob Frege: Posthumous Writings, H. Hermes, et al. (eds), Oxford: Oxford University Press.
Frege, G., 1997. The Frege Reader, M. Beaney (ed), Oxford: Blackwell.
Friedman, W.H., 1978. “Uncertainties over Distribution Dispelled,” Notre Dame Journal of Formal Logic, 19: 653-662.
Geach, P.T., 1962. Reference and Generality, Ithaca, NY: Cornell University Press.
Geach, P.T., 1969. “Should Traditional Grammar be Ended or Mended?” Educational Review, 22: 18-25.
Geach, P.T., 1972. Logic Matters, Oxford: Blackwell.
Geach, P.T., 1975. “Names and Identity,” Mind and Language, S. Guttenplan (ed), Oxford: Oxford University Press.
Geach, P.T., 1976. “Distribution and Suppositio,” Mind, 85: 432-435.
Gleitman, L., 1965. “Coordinating Conjunction in English,” Language, 51: 260-293.


Goldstein, L., 2006. “Fibonacci, Yablo and the Cassationist Approach to Paradox,” Mind, 115: 867-890 (all references are to the online pdf version: http://kar.kent.ac.uk/9134/1/Fibonacci,_Yablo_and_Cassasionist).
Goldstein, L. and A. Blum, 2008. “When is a Statement Not a Statement? – When It’s a Liar,” The Reasoner, 2: 4-6.
Grimm, R.H., 1966. “Names and Predicables,” Analysis, 26: 138-146.
Hale, R., 1979. “Strawson, Geach, and Dummett on Singular Terms and Predicates,” Synthese, 42: 275-295.
Heintz, J., 1984. Subjects and Predicables, The Hague: Mouton.
Henle, M., 1962. “On the Relation between Logic and Thinking,” Psychological Review, 69: 366-378.
Herzberger, H., 1970. “Paradoxes of Grounding in Semantics,” Journal of Philosophy, 17: 145-167.
Hintikka, J. and R. Vilkko, 2006. “Existence and Predication from Aristotle to Frege,” Philosophy and Phenomenological Research, 73: 359-377.
Hobbes, T., 1655. Elementorum Philosophiae Prima, De Corpore, London: Andrew Crook.
Hobbes, T., 1839. The English Works of Thomas Hobbes, vol. I, W. Molesworth (ed), London: Kessinger.
Hodges, W., 2009. “Traditional Logic, Modern Logic and Natural Language,” Journal of Philosophical Logic, 38: 589-606.
Hofweber, T., 2005. “A Puzzle about Ontology,” Noûs, 39: 256-283.
Horn, L.R., 1989. A Natural History of Negation, Chicago: University of Chicago Press; expanded and revised edition, 2001, Stanford: Center for the Study of Language and Information.
Johnson-Laird, P.N., 1983. Mental Models, Cambridge, MA: Harvard University Press.
Johnson-Laird, P.N. and R.J. Byrne, 1991. Deduction, Hillsdale, NJ: Lawrence Erlbaum.
Kahn, C.H., 1972. “On the Terminology for Copula and Existence,” Islamic Philosophy and the Classical Tradition, S.M. Stern, A. Hourani and V. Brown (eds), Columbia, SC: University of South Carolina Press, pp. 146-149.
Katz, B.D. and A.P. Martinich, 1976. “The Distribution of Terms,” Notre Dame Journal of Formal Logic, 17: 279-283.
Klima, G., 2001. “Existence and Reference in Medieval Logic,” New Essays in Free Logic, A. Heike and E. Morscher (eds), Dordrecht: Kluwer.
Klima, G., 2005. “Quine, Wyman, and Buridan: Three Approaches to Ontological Commitment,” Korean Journal of Logic, 8: 1-11.
Kripke, S., 1972. Naming and Necessity, Cambridge, MA: Harvard University Press.
Kripke, S., 1975. “Outline of a Theory of Truth,” Journal of Philosophy, 72: 690-716. Reprinted in Martin 1984 (all quotes are from this reprinted version).
Ladd-Franklin, C., 1883. “On the Algebra of Logic,” Studies in Logic, by Members of the Johns Hopkins University, Boston: Little, Brown & Co., pp. 17-71.
Lehan-Streisel, V., 2012. “Why Philosophy Needs Logical Psychologism,” Dialogue, 51: 575-586.
Leibniz, G.W., 1966. “A Paper on ‘Some Logical Difficulties’,” Leibniz: Logical Papers, G.H.R. Parkinson (ed), Oxford: Clarendon Press, pp. 115-121.
Leibniz, G.W., 1966a. “Of the Art of Combination,” Leibniz: Logical Papers, G.H.R. Parkinson (ed), Oxford: Clarendon Press, pp. 1-11.
Linnebo, Ø., 2003. “Plural Quantification Exposed,” Noûs, 37: 71-92.


Linsky, B. and J. King-Farlow, 1984. “John Heintz’s ‘Subjects and Predicables’,” Philosophical Inquiry, 6: 49-56.
Lowe, E.J., 2012. “Categorical Predication,” Ratio, 25: 369-386.
Macnamara, J., 1986. A Border Dispute: The Place of Logic in Psychology, Cambridge, MA: MIT Press.
Makinson, D., 1969. “Remarks on the Concept of Distribution in Traditional Logic,” Noûs, 3: 103-108.
Martin, J.N., 2013. “Distributive Terms and the Port Royal Logic,” History and Philosophy of Logic, 34: 133-154.
Massey, G., 1976. “Tom, Dick and Harry and All the King’s Men,” American Philosophical Quarterly, 13: 89-107.
McKay, T.J., 2006. Plural Predication, Oxford: Clarendon Press.
Nemirow, R.L., 1979. “No Argument Against Ramsey,” Analysis, 39: 201-209.
Noah, A., 1980. “Predicate-Functors and the Limits of Decidability in Logic,” Notre Dame Journal of Formal Logic, 21: 701-707.
Noah, A., 1982. “Quine’s Version of Term Logic and its Relation to TFL,” Appendix E of Sommers 1982.
Noah, A., 1987. “The Two Term Theory of Predication,” in Englebretsen 1987.
Oderberg, D. (ed), 2005. The Old New Logic: Essays on the Philosophy of Fred Sommers, Cambridge, MA: MIT Press.
Oderberg, D., 2005a. “Predicate Logic and Bare Particulars,” in Oderberg 2005.
Oliver, A. and T. Smiley, 2006. “A Modest Logic of Plurals,” Journal of Philosophical Logic, 35: 317-348.
Oliver, A. and T. Smiley, 2008. “Is Plural Denotation Collective?” Analysis, 68: 22-33.
Oliver, A. and T. Smiley, 2013. Plural Logic, Oxford: Oxford University Press.
Osherson, D., 1975. “Logic and Models of Logical Thinking,” Reasoning, R. Falmagne (ed), Hillsdale, NJ: Lawrence Erlbaum, pp. 17-46.
Papazian, M., 2012. “Chrysippus Confronts the Liar: The Case for Stoic Cassationism,” History and Philosophy of Logic, 33: 197-214.
Parsons, T., 2006. “The Traditional Square of Opposition,” Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/square/.
Parsons, T., 2006a. “The Doctrine of Distribution,” History and Philosophy of Logic, 27: 59-74.
Parsons, T., 2013. “The Power of Medieval Logic,” in Later Medieval Metaphysics: Ontology, Language, and Logic, C. Bolyard and R. Keele (eds), NY: Fordham University Press.
Pelletier, F.J., R. Elio, and P. Hanson, 2008. “Is Logic All in Our Heads?” Studia Logica, 86: 1-86.
Pérez-Ilzarbe, P. and M. Cerezo, 2012. “Truth and Bivalence in Aristotle: An Investigation into the Structure of Saying,” http://www.academia.edu/616585/Truth_and_Bivalence_in_Aristotle._An_Investigation_into_the_Structure_of_Saying.
Priest, G., 2008. “The Closing of the Mind: How the Particular Quantifier Became Existentially Loaded Behind Our Backs,” Review of Symbolic Logic, 1: 42-55.
Quine, W.V., 1936. “Concepts of Negative Degree,” Proceedings of the National Academy of Sciences, 22: 40-45.
Quine, W.V., 1936a. “Toward a Calculus of Concepts,” Journal of Symbolic Logic, 1: 2-25.
Quine, W.V., 1937. “Logic Based on Inclusion and Abstraction,” Journal of Symbolic Logic, 2: 145-152.
Quine, W.V., 1953. “On What There Is,” From a Logical Point of View, Cambridge, MA: Harvard University Press.


Quine, W.V., 1953a. “Two Dogmas of Empiricism,” From a Logical Point of View, Cambridge, MA: Harvard University Press.
Quine, W.V., 1959. “Eliminating Variables Without Applying Functions to Functions,” Journal of Symbolic Logic, 24: 324-325.
Quine, W.V., 1960. “Variables Explained Away,” Proceedings of the American Philosophical Society, 104: 343-347.
Quine, W.V., 1961. From a Logical Point of View, 2nd edition, NY: Harper & Row.
Quine, W.V., 1962. Mathematical Logic, rev. edition, Cambridge, MA: Harvard University Press.
Quine, W.V., 1966. “Existence and Quantification,” Ontological Relativity and Other Essays, NY: Columbia University Press.
Quine, W.V., 1970. Philosophy of Logic, Englewood Cliffs, NJ: Prentice-Hall.
Quine, W.V., 1971. “Predicate Functor Logic,” Proceedings of the Second Scandinavian Logic Symposium, J. Fenstad (ed), Amsterdam: North-Holland, pp. 309-315.
Quine, W.V., 1976. “The Variable,” The Ways of Paradox and Other Essays, revised and enlarged edition, Cambridge, MA: Harvard University Press, pp. 272-282.
Quine, W.V., 1976a. “Algebraic Logic and Predicate Functors,” The Ways of Paradox and Other Essays, revised and enlarged edition, Cambridge, MA: Harvard University Press, pp. 283-307.
Quine, W.V., 1976b. “On Carnap’s Views on Ontology,” The Ways of Paradox and Other Essays, revised and enlarged edition, Cambridge, MA: Harvard University Press, pp. 203-211.
Quine, W.V., 1981. “Predicate Functors Revisited,” Journal of Symbolic Logic, 46: 649-652.
Quine, W.V., 1981a. “Predicates, Terms, and Classes,” Theories and Things, Cambridge, MA: Harvard University Press, pp. 164-172.
Quine, W.V., 1987. Quiddities, Cambridge, MA: Harvard University Press.
Ramsey, F., 1925. “Universals,” Mind, 34: 401-417.
Rayo, A., 2002. “Words and Objects,” Noûs, 36: 436-464.
Rayo, A., 2007. “Plurals,” Philosophy Compass, 2: 411-427.
Rearden, M., 1984. “The Distribution of Terms,” Modern Schoolman, 61: 187-195.
Rips, L., 1994. The Psychology of Proof, Cambridge, MA: MIT Press.
Rooij, R. van, 2012. “The Propositional and Relational Syllogistic,” Logique et Analyse, 55: 86-101.
Russell, B., 1918. The Philosophy of Logical Atomism, reprinted in Logic and Knowledge, R.C. Marsh (ed), London: Allen & Unwin, 1956.
Ryle, G., 1938. “Categories,” Proceedings of the Aristotelian Society, 38: 189-206. Reprinted in Logic and Language, Second Series, Oxford: Blackwell, 1959.
Ryle, G., 1949. The Concept of Mind, London: Hutchinson & Co.
Ryle, G., 1950. “If, So and Because,” Philosophical Analysis, M. Black (ed), Ithaca, NY: Cornell University Press.
Ryle, G., 1951-52. “Heterologicality,” Analysis, 11: 61-69. Reprinted in MacDonald 1954, pp. 45-53 (all quotations are from the reprinted version).
Ryle, G., 1954. Dilemmas, Cambridge: Cambridge University Press.
Smith, Barry, 1990. “Characteristica Universalis,” Language, Truth and Ontology, K. Mulligan (ed), Dordrecht: Kluwer, pp. 50-81.
Smith, Barry, 2005. “Against Fantology,” Experience and Analysis, M. Reicher and J. Marek (eds), Vienna: ÖBV&HPT.
Sommers, F., 1959. “The Ordinary Language Tree,” Mind, 68: 160-185.


Sommers, F., 1963. “Types and Ontology,” Philosophical Review, 72: 327-363. Reprinted in Philosophical Logic, P.F. Strawson (ed), Oxford: Oxford University Press, 1967, and in Philosophy of Logic, D. Jacquette (ed), Oxford: Blackwell, 2002.
Sommers, F., 1965. “Truth-Value Gaps: A Reply to Mr. Odegard,” Analysis, 25: 66-69.
Sommers, F., 1965a. “Predicability,” Philosophy in America, M. Black (ed), Ithaca: Cornell University Press.
Sommers, F., 1967. “On a Fregean Dogma,” Problems in the Philosophy of Mathematics, I. Lakatos (ed), Amsterdam: North-Holland.
Sommers, F., 1969. “On Concepts of Truth in Natural Languages,” Review of Metaphysics, 23: 259-286.
Sommers, F., 1969a. “Do We Need Identity?” Journal of Philosophy, 66: 499-504.
Sommers, F., 1970. “The Calculus of Terms,” Mind, 79: 1-39.
Sommers, F., 1975. “Distribution Matters,” Mind, 84: 27-46.
Sommers, F., 1975a. “Leibniz’s Program for the Development of Logic,” Essays in Memory of Imre Lakatos, R.S. Cohen, et al. (eds), Dordrecht: D. Reidel, pp. 589-615.
Sommers, F., 1976. “Frege or Leibniz?” Studies on Frege, III, M. Schirn (ed), Stuttgart: Frommann Holzboog.
Sommers, F., 1978. “The Grammar of Thought,” Journal of Social and Biological Structures, 1: 39-51.
Sommers, F., 1982. The Logic of Natural Language, Oxford: Clarendon Press.
Sommers, F., 1990. “Predication in the Logic of Terms,” Notre Dame Journal of Formal Logic, 31: 106-126.
Sommers, F., 1993. “The World, the Facts, and Primary Logic,” Notre Dame Journal of Formal Logic, 34: 169-182.
Sommers, F., 1994. “Commenting,” talk presented at the University of Ottawa, 28 October 1994.
Sommers, F., 1994a. “Naturalism and Realism,” Midwest Studies in Philosophy, 19: 22-38.
Sommers, F., 1996. “Existence and Correspondence to Facts,” Formal Ontology, R. Poli and P. Simons (eds), Dordrecht: Kluwer.
Sommers, F., 1997. “Putnam’s Born-Again Realism,” Journal of Philosophy, 94: 453-471.
Sommers, F., 2005. “Comments and Replies,” The Old New Logic: Essays on the Philosophy of Fred Sommers, Cambridge, MA: MIT Press, pp. 211-231.
Sommers, F., 2008. “Reasoning: How We’re Doing It,” The Reasoner, 2: 5-7.
Sommers, F., 2008a. “Ratiocination: An Empirical Account,” Ratio, 21: 115-133.
Sommers, F. and G. Englebretsen, 2000. An Invitation to Formal Reasoning, Aldershot: Ashgate.
Speranza, J.L. and L.R. Horn, 2012. “History of Negation,” History of Logic, Vol. II: Logic: A History of its Central Concepts, D. Gabbay, F. Pelletier, J. Woods (eds), Amsterdam: Elsevier.
Srzednicki, J., 1966. “It is True,” Mind, 75: 385-395.
Strawson, P.F., 1950. “On Referring,” Mind, 59: 320-344.
Strawson, P.F., 1952. Introduction to Logical Theory, London: Methuen.
Strawson, P.F., 1957. “Logical Subjects and Physical Objects,” Philosophy and Phenomenological Research, 17: 441-457.
Strawson, P.F., 1959. Individuals, London: Methuen.
Strawson, P.F., 1961. “Singular Terms and Predication,” in P.F. Strawson, Logico-Linguistic Papers, London: Methuen.


Strawson, P.F., 1970. “The Asymmetry of Subjects and Predicates,” Language, Belief, and Metaphysics, H.E. Kiefer and M. Munitz (eds), Albany: State University of New York.
Strawson, P.F., 1974. Subjects and Predicates in Logic and Grammar, London: Methuen.
Tarski, A., 1956. “The Concept of Truth in Formalized Languages,” in Logic, Semantics, Metamathematics, J. Corcoran (ed), J. Woodger (tran), Indianapolis, IN: Hackett, pp. 152-278.
Wesoły, M., 2012. “Restoring Aristotle’s Lost Diagrams of the Syllogistic Figures,” Peitho, 3: 83-114. Online at http://peitho.amu.edu.pl/.
Westerhoff, J., 2005. Ontological Categories, Oxford: Oxford University Press.
Wetherick, N.E., 1989. “Psychology and Syllogistic Reasoning,” Philosophical Psychology, 2: 111-124.
Whitehead, A.N. and B. Russell, 1910. Principia Mathematica, vol. 1, Cambridge: Cambridge University Press.
Williamson, C., 1971. “Traditional Logic as a Logic of Distribution Values,” Logique et Analyse, 14: 729-746.
Wilson, F., 1987. “The Distribution of Terms: A Defense of the Traditional Doctrine,” Notre Dame Journal of Formal Logic, 28: 439-454.
Yi, B., 2005-2006. “The Logic and Meaning of Plurals,” Journal of Philosophical Logic, 34: 459-506 and 35: 239-288.
Zemach, E.M., 1981. “Names and Particulars,” Philosophia, 10: 217-223.
Zemach, E.M., 1985. “On Negative Names,” Philosophia, 15: 139-138.

Index

Abbott, E. A., 81, 90
Abelard, P., 6, 44, 45, 60
absurdity, 114–116, 119, 121, 135
Ackrill, J. L., 83, 134
active/passive voice, 52, 73
adicity, 66, 73
affirmation/denial, 33, 41, 42, 45, 51, 52
Albee, E., 125
algebraic addition/multiplication, 47
algorithm, 48, 53, 55, 146, 150
Alvarez, E., 69, 88
amalgamation, 62, 66
ambiguity, 34, 35, 52, 90
Amundsen, R., 1, 2, 7
analytic/synthetic, 124, 126, 135
anaphoric, 15–17, 21, 26, 29–30
– background/antecedent/requirement, 15, 17, 19, 21, 22, 26, 27, 29, 30, 96, 97
animal communication systems (ACS), 144
Aristotle, VII, 4, 7, 8, 15, 18, 33, 35, 37–46, 48–50, 56, 59, 60, 70–72, 74, 76, 81, 83, 87–89, 96, 99, 100, 112, 133, 134
Aristotle’s proviso, 88
Arnauld, A., 37
Asymmetry Thesis, 112–114, 118–121
Austin, J.L., 13
bare/propertyless/uncharacterized/unsorted, 99, 104, 137
Bar-Hillel, Y., 139, 150, 153
basic logical ability/competence, 142, 143
Beall, J.C., 3
Bedeutung, 51, 61, 100
Begriffsschrift, 108
being, 133–135
– among/one of, 129
– said of vs. in, 134
– vs. saying, 104
Benthem, J. van, 139
Ben-Yami, H., 127–130
Bickerton, D., 144, 145

binding (of variables/pronouns), 52, 65, 78, 88, 136, 137
Boethius, 83
Bohr, N., 1
Boole, G., 36, 44, 47, 48, 74, 75, 99, 101, 108
Boolos, G., 128
Borges, J., 99
Böttner, M., 78
Bradwardine, T., 13
Brahe, T., 38
Braine, M., 141
Brecht, B., 91
Brown, S.R., 1
Byrne, R.J., 141
cancellation, 59, 62, 145
Cappelen, H., 105
Carnap, R., 135
Carroll, Lewis (C. Dodgson), VII, 75, 76, 94
cassationism/cassationist, 12–14
categorical sentence/statement/proposition, 39, 45–47, 53
categorically (categorially) distinct/different, 4, 5
category, 133, 134
category mistake, 11, 12, 15, 89, 96
Cerezo, M., 97
characteristica universalis, 108
charge (positive/negative, plus/minus, +/−), 34–36, 41, 45, 62, 63, 70, 141, 145, 147, 155, 156–158, 160, 163, 164
– suppressed/implicit, 62–64
Cheng, P., 141
Chomsky, N., 152, 153, 158
Christie, A., 99
Chrysippus, 2, 12–13, 39, 96
Clark, D.S., 120
cognitive development, 157
Coke, Z., 33
comment/commenting, 6–11, 13–31
– self, 6–11, 13, 19, 21, 29
computation, 36, 42

176 | Index

concept (creation), 101, 103
concept (term sense), 4, 5, 103–105, 110
concept (unexpressed), 103
conclusion, 38, 42, 43, 48–49, 55, 56, 62, 71–77, 140, 142, 147, 154, 161, 163
connective, 155, 156
connotation, 61
constituent (of a domain/totality/universe of discourse), 103–107, 110, 112, 117
content of a statement, 100
contradiction, 35, 42, 82–85, 88, 90, 96, 114, 149, 150, 162
– self-contradiction, 162
contrary/contrariety, 35–37, 41, 42, 83, 86, 96
– subcontrary, 83, 86, 95
conversion, 43, 93
Cooper, W.S., 144
Copernicus, N., 38
copula, 36, 40–42, 44, 45, 49, 51, 58, 60, 62, 64–67, 69, 71, 112, 113, 118, 137, 156
– logical, 40–42, 44, 45, 49, 51, 60, 64, 65, 112, 137
– split/unsplit, 44–46, 58–60, 62, 64, 65, 67, 69
Corcoran, J., 74, 75
Correia, M., 69, 88
correspondence
– concept-property, 103, 104
– proposition-fact, 106, 107
– theory of truth, 107
corresponding conditional, 124
Cosmides, L., 141
Crain, S., 141, 143
Crane, T., 139, 140
Darwin, C., 37, 75
Davidson, D., 52, 53
De Morgan, A., 36, 47, 48, 71
deductive judgment, 147–149, 153, 157, 161, 164
defined/undefined logical form, 86, 88, 90, 91, 93, 95, 96
definite description, 15, 103, 111, 125
demonstrative, 46

denote/denotation, 61, 64, 104–107, 109, 111, 117–131, 137, 139
– distributive/collective, 125, 128–130
– explicit/implicit, 122, 123, 125, 127
– joint, 129
– of a domain, 117, 119, 120
– of a statement, 105, 107, 109, 117
– of an object, 104–107, 109, 111, 117, 119, 125, 126, 128
dictum de omni, 60–62, 64, 69–72, 74–77
Dipert, R., 138
disanalogies between statement and term logic, 67, 68, 97
distributed/undistributed, 46, 61–64, 69–72, 74–77, 117
domain/universe of discourse, 47, 97, 99, 104–107, 109, 110, 112, 117, 119, 120, 122, 135, 137, 139
Doyle, A.C., 110
Dummett, M., 152
Dutilh Novaes, C., 2, 13, 14
elimination, 48, 75
Elio, R., 141
Englebretsen, G., 7, 34, 45, 69, 76, 81, 89, 94, 101, 107, 113, 118, 120, 137, 151, 159, 161
enthymematic ploy, 124
enthymeme, 56, 76, 88, 124, 126
equivalence, 63, 65, 72, 84, 85, 93, 149, 156, 157, 159–163
– algebraic, 72, 158
– logical, 43, 72, 85, 93, 147, 148, 154, 159
– relation, 65
evolution/evolutionary biology/natural selection, 139, 144, 145, 158
existence, 89, 90, 106, 110, 133–137, 144
– as a property of concepts, 110, 136
– as a property of propositional functions, 136
– not a real property, 110, 136
existential claim, 88, 106
existential import, 88
express/expression, 100, 102–107, 109–114, 116, 118, 120
– formal, 55–57, 59, 60, 67, 70


– material/nonmaterial, 39, 40, 46, 51, 53, 55, 60, 63, 154
– of a concept, 103–105
– of a proposition, 5–7, 10–14, 16–19, 23, 24, 28, 31, 96, 97, 106, 107, 113, 139
– of a thought, 5
expressive/inexpressive, 17, 19–22, 27–30, 88, 96
fact, 17, 22, 107, 113
falling under a concept, 100, 136
fantology, 138
first philosophy, 133, 134
form
– general, 41, 44, 45, 62, 63, 69, 155
– logical, 39, 40, 47, 50–53, 94, 110, 111, 120, 122, 123, 125, 154, 160
formal features, 56, 57, 59, 60
– associative, 56, 57, 59
– reflexive, 57, 59, 60, 65, 67
– symmetric, 56, 57, 59, 65
– transitive, 57, 59, 60, 65, 67
formative, 33, 35, 45, 58, 143, 145, 154–158, 160, 163
Frederick, D., 64
Frege, G., VIII, 35, 36, 44, 48–52, 55, 63, 64, 67, 69, 70, 73, 78, 99–104, 107–113, 118, 127, 136–140, 143, 151, 152, 158
Fregean Dogma, 111, 112, 115, 116, 121
Frege-Russell-Quine thesis/program, 136, 137
Friedman, W. H., 69
function (sentential), 55
function expression, 50, 51, 55, 99, 100, 109, 113, 114, 118
functor, 39, 53, 130, 155, 163
– binary, 34, 39, 56, 58–60, 62, 79, 130
– split/unsplit, 88, 89
– term, 55, 70
– unary, 34, 35, 39, 56, 59, 62, 79, 91
future contingent, 96
gap/gapped/gapless expression, 50–52, 63, 64, 66, 99, 100, 112
Geach, P.T., 41, 69, 113, 118

Gleitman, L., 121, 122
Gödel, K., 50
Goethe, J., 74
Goldstein, L., 4, 6, 8, 14
grammar, 38–40, 50, 52, 109, 134, 137, 141, 143
Griffith, L., 1, IX
Grimm, R.H., 113
grounded/ungrounded, 9, 13, 21, 24, 30
Hale, R., 113
Hanson, P., 141
Hawthorne, J., 105
Heaviside, O., 33
Heintz, J., 113
Henle, M., 141
Herzberger, H., 9
Hintikka, J., 136
Hobbes, T., 36, 45, 55
Hodges, W., 69
Hofweber, T., 133
Holyoak, K., 141
Horn, L.R., 33, 34, 78
Hume, D., 109, 110, 136, 139
Hume-Kant point/question/lesson, 136
Huxley, A., 121
idea, 99–103, 108
identity, 55, 56, 64, 65, 102, 118
– logic of, 56
– statement, 65, 118
– theory, 65, 118
incompatibility, 114, 115
– group/range, 114
indirect quotation, 3, 4, 102
individual, 123, 131, 136
insolubilia, 2, 13
‘is’, 118
– of copula, 118
– of identity, 118
– of predication, 118


Jevons, W.S., 47, 70
Johnson, W.E., 47
Johnson-Laird, P.N., 141, 142
Kahn, C.H., 134
Kant, I., 109, 110, 133, 136
Katz, B.D., 69
Kearney, A., IX
Kepler, J., 38, 75
Keynes, J.N., 47
Khlentzos, D., 141, 143
King-Farlow, J., 113
Klein, C., 121
Klima, G., 137
Kripke, S., 4, 9, 17–21, 24, 27, 121
Ladd-Franklin, C., 75
Lancelot, C., 47
language, 144, 145
– creoles, pidgins, 144
– formal/logical/artificial, 35, 39, 48–52, 56–59, 62, 66, 67, 69, 70, 72, 78, 79, 81, 82, 108, 109, 112, 134–138, 140–142, 151, 152, 159
– natural/ordinary/native/everyday, 2, 4, 20, 25, 30, 31, 33, 35, 38, 39, 46, 48, 50–53, 55–60, 62, 64, 68, 70, 78–81, 87, 89, 109, 112, 119, 120, 123, 125, 127, 128, 134, 135, 139–143, 145, 146, 150–154, 156, 157, 163
– of thought, 143
Law of Bivalence, 3, 11, 12, 95, 96
Law of Equivalence, 159
Law of Excluded Middle (LEM), 83, 85, 88, 89, 91, 96
Law of Incompatibility (LIC), 86, 90
Law of Noncontradiction (LNC), 82, 85, 88, 90, 95
Law of Quantified Opposition (LQO), 86, 91
Law of Subcontrariety (LSC), 86, 88–91, 95
laws of logic, 47, 90, 140, 144
laws of thought, 47, 141, 147, 150, 159
Lehan-Streisel, V., 145, 146
Leibniz, G.W., 36, 38, 44, 45, 47, 64, 66, 68, 71, 94, 108
Leibniz’s Law, 102, 130

lexicon/vocabulary, 38, 51, 53, 55, 63, 64, 109, 111, 116
license, 74–77
Linnebo, Ø., 128
Linsky, B., 113
Locke, J., 139
logic/logicians
– algebraic, 36, 47, 48, 62, 71, 75, 100, 104, 108
– Aristotelian, 49, 134, 137, 151, 152
– basic, 56, 66, 142, 143
– cognitively veridical, 139, 146, 147, 154, 157
– empiricism, 135
– first-order, 55, 138, 143
– formal, 36–39, 49–50, 52, 81, 86, 100, 108, 109, 114, 128, 130, 133, 135, 137–145, 147, 160
– logically perfect, 127
– medieval/scholastic, 36, 39, 41, 44–46, 48, 49, 58, 60, 61, 70, 71, 75
– mental/of a thought, 142
– Modern Predicate Logic (MPL), 2, 34–36, 48–50, 53, 55, 56, 63, 65, 66, 70, 72–75, 78, 79, 86–88, 99, 108–111, 113–116, 118, 119, 121, 122, 127, 128, 130, 133–138, 140, 141, 143, 144, 149–155, 158, 159
– natural, 139, 151, 159
– of natural language, 118, 127, 140, 143
– of relationals, 47
– old/new
– Plural Quantification Logic (PQL), 127, 128, 130
– Port Royal, 47
– positivism, 135, 138
– Predicate Functor Algebra (PFA), 78, 79, 100
– propositional/sentential/statement/truth-functional, 46, 48, 55, 56, 64, 66–68, 149, 162
– statement logic as a special branch of term logic, 67–69, 97
– Stoic, 2, 12–13, 35, 39, 46, 48, 56
– syllogistic, 39, 40, 42–50, 56


– term, 39, 40, 44, 46, 47, 48, 53, 56, 63–75, 78–80, 103, 107, 109, 110, 116, 119, 120, 127, 130, 151, 158, 159
– Term Functor Logic (TFL), 53, 56, 73, 74, 78, 79, 109, 117, 118, 120, 121, 128, 130, 139, 145, 146, 151, 156, 158–161, 163
– traditional, 3, 33, 39, 41, 44–48, 50, 53, 56, 62, 64–67, 70–72, 93, 100, 113, 117, 127, 140, 151, 158, 159
– Traditional Term Logic (TTL), 151
– variable-free, 151
Logical Law of Commutation, 156
logicism, 50, 108
Lowe, E.J., 133, 137
Macnamara, J., 141–143
Makinson, D., 69
Martin, J.N.
Martinich, A.P., 69
Massey, G., 121, 124
mathematics, 100, 108, 114, 156
McKay, T.J., 128
McMillan, H., IX
mental models, 142
Mercer, J., 33
metaphysics, 133
Mill, J., 133
Mitchell, D.
mode of presentation, 100, 102
Morgenbesser, S., 95
name (proper), 46, 52, 55, 63, 64, 66, 72, 99–103, 107, 109–112, 118, 121, 125
namely rider, 10, 14, 16–18, 23, 25, 29–31
nativism, 143
naturalness, 130, 131
negation, 29, 33–37, 41, 42, 45, 47, 48, 57–59, 62, 63, 67, 72, 75, 83–86, 88–92, 106, 111, 147, 155–158
– logical, 33, 35, 42, 90, 111, 113, 114, 120
– sentential/statement denial, 33, 35, 42, 45, 57, 58, 62, 63, 72, 75, 91, 113, 114, 119
– term, 33, 36, 37, 41, 42, 47, 57, 58, 59, 62, 86, 111, 112, 114–116, 119, 120
Nemirow, R.L., 113

neuroscience, 148, 153, 160, 164
Newton, I., 38, 75
Nicole, P., 47
Noah, A., 78
nominalization, 19, 22–25, 30
nonsense/senseless, 4, 7, 10, 12, 15, 89, 90, 94, 102, 115, 135, 142
notation
– quantifier-free, 153
– quantifier-variable, 148, 151–154, 159
– variable-free, 153
numerical subscripts, 66, 71
object, 50
– impossible, 115, 116, 119
object/concept (Frege), 51, 99–103, 107, 109–111, 113, 115, 116, 136–138
oblique cases, 102
Ockham, W., 44, 127
Oderberg, D., VII, 78, 81, 137
Oliver, A., 128–130
ontological status, 4
ontology/ontological, 78, 99, 101, 107–110, 133–139, 152
– commitment, 78, 135–138, 152
– dependence, 134
opposition, 35, 36, 155–157, 160, 163
Osherson, D., 141
Papazian, M., 2, 12, 13
paradox, 1–14
– Liar, 1–15, 17–20, 28, 31
– Strengthened Liar (Liar’s Revenge), 3, 13
Parsons, T., 69, 71, 93
Pascal, B., VII
Peirce, C.S., 47, 151
Pelletier, F.J., 141
Pérez-Ilzarbe, P., 97
phrasal conjunctions/disjunctions, 121–127
Plato, 40, 50, 103, 111, 112, 158
Platonic Form, 134
Platonism/Platonists, 108, 134
Poincaré, H., 89
power
– expressive
– inferential, 130, 151, 152, 159


predication, 49
premise, 38, 42, 43, 48–49, 55, 56, 62, 70–77, 88, 93, 140, 142, 147, 154, 161, 163
– hidden/suppressed, 88, 124, 126
– major, 43
– middle, 43
– minor, 43
– tautologous, 163
presence/absence, 106
presupposition, 87, 96, 102, 103
Priest, G., 136
principle of verification, 135
Prior, A.N., 118
pronoun, 46, 52, 55, 63, 65, 78, 99, 111, 112, 137
proof, 114, 118
– direct/indirect, 114
– theory, 39, 46, 47, 109
property/characteristic, 3, 6, 12, 103–107, 110, 115–117, 119, 136, 138
– constitutive, 106, 107, 110, 117
– having/lacking, 106, 107, 110, 115, 119, 136
– of objects, 103, 104, 116, 136
– positive/negative, 106
– un-had, 103
proposition, 4–14, 16–31, 105–107, 113
– expressed/unexpressed, 139
propositional attitudes, 101, 102, 105, 106
propositional depth, 7–11, 13, 23–31
– determinate/indeterminate, 7–11, 13, 25–31, 97
Propositional Depth Requirement, 7, 8, 10, 25, 27–29
propositional nesting/embedding, 22–26, 30
protosentence, 17
Proust, M., 167
psychologism/anti-psychologism, 47, 100, 108, 140, 143, 145
psychology/psychologists, 47, 49, 108, 133, 139, 142, 143
– cognitive, 139, 141, 143, 146, 147, 158, 161, 163
Putnam, H., 78

quality/qualifier, 41, 44–46, 58, 59, 61–64, 86–88, 116–118
quantification, 134, 137, 151, 152
– of singular terms, 64
– plural/singular, 116, 128, 130
quantity/quantifier, 41, 44–46, 51, 52, 56, 58–65, 68, 69, 71, 78, 79, 84–88, 93, 94, 99, 109, 114, 116–119, 125, 130, 135, 136, 141, 143, 145, 152, 153, 155
– implicit/hidden/tacit, 46, 64, 117, 118, 127, 130
– particular/existential, 45, 46, 51, 61–64, 69, 83, 85, 88, 94, 117, 123, 127, 130, 136, 137, 141, 159, 161
– universal, 45, 46, 51, 61, 62, 64, 69, 84, 88, 93, 114, 117, 123, 124, 127, 130, 141, 159, 161
– wild/arbitrary, 64, 65, 68, 69, 71, 94, 109, 116, 118, 124, 126, 127, 130, 131
Quine, W.V., 15, 18, 20, 21, 35, 72, 73, 78, 79, 99, 100, 110, 111, 118, 133, 135, 136, 151, 152
Ramsey, F., 113
rational competence vs. rational performance, 141–145, 152
Rayo, A., 128
Rearden, M., 69
reason/reasoning/ratiocination, 36, 46, 47, 49, 50, 52, 58, 59, 108, 109, 114, 127, 133, 139–165
– actual/ordinary/everyday, 52, 81, 108, 109, 133, 139–141, 147, 148, 150–153, 155, 156, 158, 160, 163, 164
– intuitive, 141, 146–148, 150, 154–158, 160, 162, 163
– natural, 150, 154
– pure, 109
– syllogistic, 49
reductio ad absurdum, 114
reduction/perfection of imperfect syllogisms, 44
reference, 7, 10, 15–31, 61, 123, 127, 130, 131, 137
– back, 17, 26
– burden of, 78, 99, 111
– determinate, 96, 97


– direct/indirect, 102, 103
– distributive/undistributive, 123–125
– elliptical, 23, 25
– object of, 100, 102, 107, 108, 111
– of a statement, 101
– plural, 128
– self, 19, 21, 28, 29
reference/referents, 41, 51, 52
referring expression, 10, 15, 16, 19, 22, 23, 25, 30, 127, 138
– singular, 99
Reichenbach, H., 97
Rips, L., 141, 143
Rooij, R. van, 70
rule of inference, 60, 65, 70, 71, 74, 76, 109, 148, 149, 154, 155
– Conjunctive Addition, 64
– Existential Generalization, 64, 72
– Hypothetical Syllogism, 162
– immediate, 71
– mediate, 70, 71
– Modus Ponens, 72, 162
– Modus Tollens, 162
– syllogistic, 74
Russell, B., 1, 15, 23, 35, 50, 64, 87, 110, 113, 136, 138, 151
Ryle, G., IX, 1, 2, 15, 17–19, 29–30, 33, 39, 53, 76, 89, 95

– statement making, 15–17, 20, 22, 27– 30, 96, 97 sentence/statement, 139 – atomic/molecular, 55, 64, 138 – compound, 65–71, 121 – multiply general, 151–153, 160 – relational, 47, 65–67, 70, 71, 73, 160 – self-referential, 5, 6, 11, 13 – singular, 15, 64, 65, 68, 70, 71, 94 – variable-free, 152–154 Shakespeare, W., 1, 81 signify/signification, 104–107, 117, 139 – of a property, 104–107, 117 Sinn, 51, 61 Skolem, T., 49 Smiley, T., 37, 128–130 Smith, B., 138 Sommers' Empirical Conjecture / cognitive hypothesis, 146, 156, 158, 161, 162, 164 Sommers, F.,VIII, IX, 5–8, 10, 14, 15–31, 64, 67, 69, 70, 81, 86, 89, 94, 95, 99, 107, 108, 111, 133, 139–141, 145–165 Speranza, J.L., 33, 78 square of opposition, 81–95, 97 – Aristotelian, 83 – hexagonal, 91, 94, 95 – modern, 87, 88 – normal, 85, 86, 90, 91, 95 – primary, 84, 89 – propositional, 98 – singular hexagon, 94 – singular line of opposition, 95 – singular line of propositional opposition, 97, 98 – squared, 92, 93, 97 – traditional, 81–84 Srzednicki, J., 6 statement, 4–17, 20, 22, 24–30 – meaningful/meaningless, 20, 25, 27, 135 – sensible/senseless, 135 Strawson, P.F., 15–17, 19, 26, 29, 30, 35, 87, 93, 95, 103, 113–116, 118, 119, 121 subalternation, 83, 88, 93 subject – definite referring, 15, 23


subject/predicate distinction, 44, 49 substance, 134 – primary, 134 – secondary, 134 substitution, 62, 70–72, 74–77, 130 Suppes, P., 55 supposition, 61 – theories of, 61 syllogism, 40, 42–45, 47, 55, 71, 72, 75, 76, 124, 142, 161, 162 – perfect/imperfect, 42–44, 47, 72, 76 syllogistic figure/form, 43 symmetric – formal features, 67, 71 syntax, 36, 39, 40, 44–46, 48–52, 109, 110, 113, 118, 127, 128, 134–40, 152, 153, 159 – Aristotle's theory of, 35, 44, 49 – binary theory of, 40, 48–50, 112, 113, 118 – Frege’s theory of, 51, 52, 55 – quantifier/bound variable, 152, 153 – quaternary theory of, 45 – scholastic theory of, 44 – ternary theory of, 40, 44, 64, 79, 112 – theory of logical, 39, 46, 50, 108, 109, 112, 113, 118, 122, 151 – traditional theory of, 44 – variable-free, 153 Tarski, A., 2, 17, 20, 151 tautology, 59, 60, 71, 88, 163 team membership, 125, 126, 128, 131 – fixed/variable, 126 term, 4, 5, 12, 55 – complex, 33, 34, 39, 45, 89, 105, 107, 118 – compound, 56–59, 89, 121, 155 – conjoined, 47 – disjoined, 47 – elementary, 56 – general, 63, 79, 94, 99, 100, 109, 111– 114, 116, 118–123, 125–127, 130, 134 – major, 43, 56, 62 – mass, 63 – middle, 43, 48, 56, 62, 75, 161 – minor, 43

– plural, 128–130 – privative, 41 – propositional, 22–25 – qualified, 41, 116–118, 123, 139 – quantified, 41, 116, 117, 119, 123, 124, 127, 130, 139 – relational, 109, 116–118, 151, 155 – sentential, 34, 96, 105, 106, 117, 118 – simple, 33, 34, 39 – singular, 63–65, 68, 69, 71, 94, 97, 100, 109–128, 130 – singular (compound), 102, 103, 114, 116, 120–123, 127, 128, 130 – singular (conjoined), 111, 112, 115, 116, 120, 121, 125, 130 – singular (disjoined), 111, 112, 115, 116, 120, 125, 130 – singular (negated), 111, 112, 114–116, 119, 120 – singular (predicated), 112, 113, 118, 120 – singular (qualified), 118 – singular (quantified), 116, 124 – team, 125–127, 130, 131 the True/False (Fregean), 67, 101–103, 107, 108 third realm, 101, 102, 108 Thought (Fregean), 100–102, 108, 139, 140 thought/thinking, 4, 5, 12, 101, 139, 140, 143–145 – expressed/unexpressed, 4–14 – grasping, 5, 101, 108 Thümmel, W., 78 totality, 104–106 – fixed-variable, 104, 105 truth claim, 5–7, 11, 68, 105, 117 truth conditions, 72, 73 truth function, 143, 145 truth value, 3, 88, 89, 91, 93, 96, 101, 102, 107, 108, 143 – bearers of, 4, 12, 13, 18, 96, 101, 105 – gap, 3, 12, 91 – glut, 3, 12 – underdetermined, 89, 90, 94, 97 truth/falsity, 2–7, 9–12, 17–21, 24, 27, 30, 67, 68, 101, 105, 107, 143


unit set, 126
unity (sentential, statement, propositional), 40, 51, 112
used vs. mentioned, 129
vacuous/nonvacuous, 15, 17–21, 27–29, 88, 93, 94, 96, 102–105, 107
– denotatively, 104, 107
– expressively, 14
– significatively, 104, 107
vagueness, 35, 52
valence, 63, 72
valid/invalid, 39, 42, 43, 48, 55, 60, 61, 71–73, 75–77, 114, 124–126, 161
validity, 72
variable, 65, 66, 75, 78, 79, 137
– bound/free individual, 78, 79, 99, 111, 135, 141, 148, 154, 155
– value of a, 133, 135, 137
Venn, J., 47
Vilkko, R., 136
Voltaire (F.-M. Arouet), 127
Wallace, A., 38
Wesoły, M., 43
Westerhoff, J., 135
Wetherick, N.E., 141, 142
Whitehead, A.N., 35
Williamson, C., 69
Wilson, F., 69
winnow/winnowing, 20, 22, 27, 30
Wittgenstein, L., 1, 11, 138
Wood, S., 74, 75
world, 68, 72, 102, 104, 105, 107, 117, 138
– in vs. of, 107
– possible/imagined, 105, 107
– the real/actual, 68, 104, 105, 107, 117, 122, 123
Yi, B., 128, 130
Zemach, E.M., 113, 120