Palgrave Studies in Classical Liberalism
Series Editors David F. Hardwick, Department of Pathology and Laboratory Medicine, The University of British Columbia, Vancouver, BC, Canada Leslie Marsh, Department of Pathology and Laboratory Medicine, The University of British Columbia, Vancouver, BC, Canada
This series offers a forum to writers concerned that the central presuppositions of the liberal tradition have been severely corroded, neglected, or misappropriated by overly rationalistic and constructivist approaches. The hardest-won achievement of the liberal tradition has been the wrestling of epistemic independence from overwhelming concentrations of power, monopolies and capricious zealotries. The very precondition of knowledge is the exploitation of the epistemic virtues accorded by society’s situated and distributed manifold of spontaneous orders, the DNA of the modern civil condition. With the confluence of interest in situated and distributed liberalism emanating from the Scottish tradition, Austrian and behavioral economics, non-Cartesian philosophy and moral psychology, the editors are soliciting proposals that speak to this multidisciplinary constituency. Sole or joint authorship submissions are welcome as are edited collections, broadly theoretical or topical in nature.
Walter B. Weimer
Epistemology of the Human Sciences Restoring an Evolutionary Approach to Biology, Economics, Psychology and Philosophy
Walter B. Weimer Washington, PA, USA
ISSN 2662-6470 ISSN 2662-6489 (electronic)
Palgrave Studies in Classical Liberalism
ISBN 978-3-031-17172-7 ISBN 978-3-031-17173-4 (eBook)
https://doi.org/10.1007/978-3-031-17173-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: Pattadis Walarput/Alamy Stock Photo

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This book is dedicated to all who fight scientism in science and society
Praise for Epistemology of the Human Sciences
“Weimer is a polymath. His writings range over disparate domains including induction, psychology, epistemology, economics, and mensuration theory. His views have proven to be not only trenchant but prescient. For example, Donald Hoffman’s position regarding “The Case Against Reality” and the constructivist nature of perception were presaged by Weimer over forty-five years ago. Similarly, those confronting the replication crisis in today’s psychotherapy research would do well to take seriously his admonitions regarding measurement theory. This volume should be essential reading for anyone involved in or concerned about the nature of the sciences.” —Neil P. Young, Ph.D. Clinical and experimental psychologist

“Minds/brains are complex systems within complex systems (living organisms) within complex systems (human societies) within complex systems (ecosystems). Consequently, knowing the mind is infinitely more challenging than knowing the objects studied by the physical sciences. Weimer’s book rises to the challenge, thoroughly reviewing the strengths and shortcomings of both famous and forgotten thinkers such as Bühler,
Hayek, Popper, and von Neumann to identify key issues for an evolutionary epistemology: consciousness, duality, determination, description, explanation, mensuration, semiotics, and rationality. The result is a guidebook that points the human sciences in the right direction.” —John A. Johnson, Professor Emeritus of Psychology, Penn State University “Having researched and written on the neglected problems surrounding measurement and experimentation in the social sciences, I am encouraged to find those topics highlighted and emphasized as of central importance in this book on epistemology. Social scientists need to realize their fields cannot simply borrow the tools and techniques of physical science without understanding the limitations and differences involved.” —Günter Trendler, Industrial Services Project Manager, Ludwigshafen a. R., Germany
Epigraph Source Acknowledgements
Chapter 3: S. Siegel (1956) Nonparametric Statistics for the Behavioral Sciences, Page 22.
Chapter 4: A. Islami and G. Longo (2017) Marriages of Mathematics and Physics: A Challenge for Biology, Page 13.
Chapter 5: A: C. D. Broad (1933/1949) The “Nature” of a Continuant, 1949, Page 476. B: A. Rand (1982/1984) The Stimulus and the Response: A Critique of B. F. Skinner, Page 186.
Chapter 6: I. Lakatos (1976) Proofs and Refutations: The Logic of Mathematical Discovery, Page 142.
Chapter 7: C. G. Hempel (1952) Fundamentals of Concept Formation in Empirical Science, Page 74.
Chapter 8: W. Block (2020) Personal Communication; also Letter to the Editor, The Wall Street Journal.
Chapter 9: Robin Williams (1979) Vinyl Album (Cover) Title: Reality: What a Concept. Casablanca NBLP 7162 © Casablanca Record and Filmworks © Playboy Publications, Inc. Published by Little Andrew Publications, Inc.
Chapter 10: A: H. Weyl (1949) Philosophy of Mathematics and Natural Science, Page 300. B: K. R. Popper (1977) Part 1, in The Self and Its Brain, Page 70. C: V. Vanberg (2004) Austrian Economics, Evolutionary Psychology and Methodological Dualism: Subjectivism Reconsidered, Page 4. D: F. A. Hayek (1952) The Sensory Order, Page 121.
Chapter 11: F. A. Hayek (2017) Within Systems and About Systems, Page 2.
Chapter 12: H. Weyl (1949) Philosophy of Mathematics and Natural Science, Pages 215–216.
Chapter 13: H. H. Pattee (1981) Symbol-Structure Complementarity in Biological Evolution, Page 118. F. A. Hayek (1978) New Studies in Philosophy, Politics, Economics and History of Ideas, Page 43.
Chapter 14: J. Bronowski (1978) The Origins of Knowledge and Imagination, Pages 105–106. H. H. Pattee (2001) The Physics of Symbols: Bridging the Epistemic Cut, Page 13.
Chapter 15: F. A. Hayek (1973/2012) Law, Legislation and Liberty, Vol. 1: Rules and Order, Page 33. Justice L. Brandeis (1927) Dissenting Opinion, Olmstead v. United States, 277 U.S. 479 (1927).
Chapter 16: A. McIntyre (1979) Rear Jacket Cover, R. Rorty, Philosophy and the Mirror of Nature.
Chapter 17: R. B. Gregg (1984) Symbolic Inducement and Knowing: A Study in the Foundations of Rhetoric, Page 136. T. S. Kuhn (1970) The Structure of Scientific Revolutions, Page 200.
Chapter 18: W. W. Bartley III, The Retreat to Commitment, Page XXVI.
Contents

1 Preface
  References

2 Understanding, Explaining, and Knowing
  The Nature of Understanding
  From Axiomatics to Hypothetico-Deductive Method
  Learning and the Limited Role of Experience
  Where Does the Illusion of Certainty Come From?
  Mathematics and Other Notational Forms of Linguistic Precision
  How Does Meaning Relate to Understanding?
  The Use of Mathematics in the Social and Physical Domains
  Measurement
  Understanding and Knowledge Are Functional Concepts Not Subject to Natural Law Determinism
  Pitfalls and Promises of Ambiguity and Ignorance
  A Bucket or a Searchlight?
  References

Part I Knowledge as Classification, Judgment, and Mensuration

3 Problems of Mensuration and Experimentation
  Physics and the Cat
  Another Fundamental Problem: Experimental Science Requires Classical Level Apparatus
  Historical Excursus: The Nature and Role of Experiment in Classical Science
  Change Is Inevitably Scale-Dependent, and Theoretically Specified
  References

4 Problems of Measurement and Meaning in Biology
  The State-of-the-Art (Isn’t the Best Science)
  Probability Absolutes and Absolute Probabilities
  Replicability Is Scale Dependent
  What is an Organism?
  Phenomenalistic Physics is Incompatible with the Facts of Biology and the Nature of Epistemology
  References

5 Psychology Cannot Quantify Its Research, Do Experiments, or Be Based on Behaviorism
  A: Psychology Has Neither Ratio Measurement Nor Experimentation
  The Psychology of Robots Has Nothing to Do with the Psychology of Subjects
  No One Has Ever Discovered a Natural Law in Psychology
  Social Science Is Just Fine with Demonstration Studies
  B: Epistemic Fads and Fallacies Underlying Behaviorism
  The Failure of Phenomenalism
  Excursus: Consciousness Alone Is Not the Issue
  The Spell of Ernst Mach
  The Haunted Universe Doctrine of Behaviorism
  Control at All Costs
  References

6 Taking the Measure of Functional Things
  The Role of Statistical Inference in Contemporary Physics
  How Shall We Study Co-occurrence Relationships?
  In Defense of Miss Fisbee
  References

7 Statistics Without Measurement
  Nonparametric Statistical Procedures Work with Nominal, Ordinal, and Some Interval Data
  Generalizability, Robustness, and Similar Issues
  Back to the Drawing Board, at Least for a While
  Testing a Theory in Psychology is Paradoxical for Those Who Do not Understand Problems of Scaling and Mensuration
  Back to History for a Moment
  References

8 Economic Calculation of Value Is Not Measurement, Not Apriori, and Its Study Is Not Experimental
  Austrian “Subjectivism” Begins with the Impossibility of “Physical” Mensuration
  Behavioral Economics Is Just Applied Social Psychology
  What Has Been Called “Experimental Economics” Is Actually Constrained Demonstration Studies
  This Is Your Problem as a Consumer of “Scientific” Knowledge
  Scaling Procedures Crucially Influence the Progress of Science
  Probability Theories Help Nothing Here
  Human Action Is Not Given Apriori
  Productive Novelty Cannot Occur in an Apriori System
  Creativity Is Tied to Ambiguity
  References

Part II What can be Known, and What is Real

9 Structural Realism and Theoretical Reference
  Structural Realism and Our Knowledge of the Non-mental World
  Acquaintance and Description
  From Phenomenalism to Structural Realism
  Science and Structure
  From Structure to Intrinsic Properties
  Science and the Search for Structural Descriptions
  Acquaintance Is Not Knowledge
  References

10 The Mental and Physical Still Pose Insuperable Problems
  A: The Classic Problems
  Sentience and Qualia
  The Problem of Functionality Again
  B: Consciousness, Objectivity, and the Pseudo Problem of Subjectivity
  Our Individual Consciousness Can Never Be Causal Within Our Own Bodies
  Consciousness Does Not Exist in Time
  Consequences of the Fact That Acquaintance Is Not Knowledge
  The Traditional Problem of Objectivity Is Backwards
  Excursus: The Chicken and Egg of Subjectivity and Objectivity
  C: Clarifications of False Starts and Important Issues
  Austrian Subjectivism Is a Misnomer and Often a Red Herring
  Awareness of Our Own Internal Milieu Is “Silent Consciousness” of Epistemic Importance?
  Excursus: Chance, Constraint, Choice, Control, Contingency
  Rate-Independent Formal Concepts Are Not Objects of the Laws of Nature
  D: Knowledge Depends Upon the Functional Choices of Nervous Systems
  Boundary Conditions Harness the Laws of Nature
  Initial Conditions and Boundary Conditions
  Information Structures Are Constraints, but Not Just Boundary Conditions
  Physical Information (Differences or Bits) Does Not Explain Meaning
  Functionality Is Fundamentally Ambiguous Until Its Derivational History Is Specified
  Old Wine in Better Bottles
  References

Part III There are Inescapable Dualisms

11 Complementarity in Science, Life, and Knowledge
  Observers and the Observed
  Subjects Make Choices
  Life Began with Functional Instruction
  Symbols and Meanings Are Rate-Independent
  Physicality Can Only Be Disambiguated—And Hence Understood—By Concomitant Functional Analysis
  Physics Is Only a Beginning
  Context Sensitivity and Ambiguity
  Emergence Beyond Physicality
  Semiotic Closure as Self-Constraint: Agency as a Matter of Internal Determination
  References

12 Complementarities of Physicality and Functionality Yield Unavoidable Dualisms
  Downward Causation
  If Laws Do Not Cause Emergence, What Enables It?
  Evolution and the Competitive Basis of Cooperation
  Epistemology Originated In and Is Shaped by Selection Pressure in Open Systems
  Adaptive Systems in Learning and Cognition
  Economic Orders Are Not Agents and Do Not Have Expectations
  Recapitulation: Adaptive Behavior Shows Apparent Teleology Does Not Violate Causality
  The Laws of Nature Are Not the Same as the Rules of Behavior
  Another Recapitulation: The Physical Sciences Also Require a Duality of Descriptions
  References

Part IV Complexity and Ambiguity

13 Understanding Complex Phenomena
  Explanation of the Principle
  A Precise But Unspecifiable Definition of High Complexity
  Limits of Explanation: Complexity and Explanation of the Principle
  The Superior Power of Negative Rules of Order
  Negative Rules of Order Constrain the Social Cosmos
  Science Is Constrained by Negative Rules of Order
  Negative Rules of Order in Society
  Excursus: The Context of Scientific Inquiry
  Excursus: Notes on the Methodology of Scientific Research
  References

14 The Resolution of Surface and Deep Structure Ambiguity
  The Inevitable Ambiguity of Behavior
  Deep Structure Ambiguity Is Fundamentally Different from Surface Structure Ambiguity
  Why Is Behavior in Linear Strings?
  Excursus: Ambiguity and Dimensionality
  Dimensionality of the Mind
  Surface Structures, Deep Structures, and the Ambiguity of Dimensionality
  References

Part V The Corruption of Knowledge: Politics and the Deflection of Science

15 Political Prescription of Behavior Ignores Epistemic Constraints
  Progressivism and the Philosophy of Rationalist Constructivism
  Liberalism and the Division of Labor and Knowledge
  The Data Relevant to Political Theory Is Economic, Psychological and Sociological
  Science Is No Longer a Spontaneously Organized Endeavor
  The Moral: The Constructivist Desire to Make Everything Subject to Explicit or “Rational” Control Cannot Work
  Evolved Social Institutions Are Indispensable Knowledge Structures
  Sociology Has Lost Sight of Earlier Insights
  References

Part VI Appendix: The Abject Failure of Traditional Philosophy to Understand Epistemology

16 Induction Is an Insuperable Problem for Traditional Philosophy
  Is There a Foundation to Knowledge?
  From Certainty to Near Certainty or Probability
  The Retreat to Conventionalism in Sophisticated Neo-Justificationism
  Hermeneutics and the New Pragmatism
  Realism Is Explanatory, Instrumentalism Is Exculpatory
  References

17 Rhetoric and Logic in Inference and Expectation
  The Functions of Language
  Criticism Is Argument, Not Deduction
  Theories Are Arguments, and Have Modal Force
  Adjunctive Reasoning in Inference
  Science Is a Rhetorical Transaction
  References

18 Rationality in an Evolutionary Epistemology
  Comprehensive Views of Rationality
  Critical Rationalism Starts with the Failure of Comprehensive Rationality
  Comprehensively Critical Rationalism
  Rationality Is Action in Accordance with Reason
  Rationality Does not Directly Relate to Truth or Falsity
  Action in Accordance with Reason Is a Matter of Evolution within the Spontaneous Social Order
  Rationality and Its Relativity
  Rationality Is Neither Instantly Determined Nor Explicit
  Like the Market Order, Rationality Is a Means, not an End
  Comprehensively Critical Rationality is Rhetorical (and so Is All Knowledge Claiming)
  Rationality in the Complex Social Cosmos
  The Ecology of Rationality
  Science and Our Knowledge Must be Both Personal and Autonomous
  Rationality and The “New” Confusion About Planning in Society
  References

References

Name Index

Subject Index

List of Tables

Table 4.1 Commonly discussed measurement scales in social domains, with brief defining relations, appropriate statistics, and type of test (Siegel, 1956)
Table 7.1 Common scale types, permissible transformations, domains, arbitrary parameters, and meaningful comparisons (after Houle et al., 2011)
Table 11.1 Differences between physicality (inexorable laws or physical boundary conditions) and, on the other side of an epistemic/cybernetic cut, the functional realm of choice control
Table 13.1 Minimum complexity for the understanding of science: two types of activity and three levels of analysis (after Weimer, 1979)
Table 17.1 Truth tables for common propositional forms compared to the adjunctive conditional form
1 Preface
A truism of the biological and social studies is that topics such as “the methodology of scientific research” or “epistemology” or “philosophy of science” are to be met with a groan. Students avoid such courses until forced to take them at the last minute, and most professors and researchers don’t want to “waste their time” either teaching them or studying the issues they present. The faculty assume they are wasting their time because the students won’t learn or remember anything, and the students hate mathematics and don’t want to memorize more formulas (which, unfortunately, is almost all that such courses involve) just to pass another course. Such “high falutin’” issues are regarded either as detached entirely from day-to-day research, or as involving mere rituals one must go through to look “scientific,” which means they are only something to pay lip service to in order to publish in prestigious journals. The usual attitude is that we are doing just fine on our own, happily adding our well-designed research to a “significant” body of knowledge that is steadily accumulating, and our students are doing just fine doing what we tell them to do. So writing a book on methodology and epistemology is to be avoided, exceeded in its avoidance only by the onerous tasks of reading such a book, or teaching its contents. If you must do it,
be sure you just tell us we are doing fine, and only update our entrenched approaches, don’t point out that “musty old literature” has continually shown that what we are doing is of little empirical or theoretical value, because we don’t believe that ancient history. The answer to that line of response is “No, we have not done well at all.” That “ostrich approach” (head buried in the sand at the mention of something frightening) has an appeal, but also a very high cost. Nagging issues invariably intrude—how can we be contributing to scientific knowledge if we don’t even know what knowledge is, or how it is acquired? How can we follow any scientific “method” if we have no knowledge of what science is, or no understanding of what adequate methodology involves? How can we evaluate whether prestigious professor X’s work and research program is better than professor Y’s? How do we know our field is “scientific” (which invariably means “empirically based”) in any sense? What if the “practical” advice we provide to everyday citizens in the world is wrong, or more often, just useless? Not really knowledge at all? What makes you think that what you are doing evades the well-articulated criticisms of the past and present? Such issues are present because of one simple fact: epistemology puts unavoidable constraints upon ontology. The nature of epistemology—how and what we can know—puts limits, which can never be exceeded, upon speculation about ontology—our theories of what exists or is real. It tells us what we can consider as actual knowledge claims, and what, beyond those claims, remains unknown and in many cases simply unknowable, and in all honesty must be labeled as just metaphysical speculation. And not surprisingly, when epistemological issues are presented outside the usual boring framework of mathematical and statistical procedures one is forced to use in order to get published, both students and faculty are quite willing to learn about them. Is it possible to present epistemology to both researchers and students in a fashion that they will both learn from and also find palatable? Hopefully yes. I believe it can be done, and this volume is an attempt to do so. The alternatives to traditional statistical memorization texts or philosophical discussions of probability, induction and knowledge “justification,” and so on can be far more interesting, and far more alive, as
are the issues in the sciences themselves. Indeed, they often are issues in the sciences themselves. Examined from the standpoint of an evolutionary approach to epistemology (and the nature of the empirical differences between what is involved in scientific knowledge claims in the physical sciences in comparison to the “soft” social and life sciences), the answers are devastating to the ostrich approach noted above, to traditional and contemporary philosophy (especially the philosophy of social science) and the methodology primers based upon them. Knowledge is not what traditional accounts say it must be, and there is no such thing as “the” scientific method to acquire it. Commonly used sophisticated research techniques are often incapable of delivering either real measures (or real experiments) or meaningful results or actual “knowledge” at all. The social domains (especially) study entities of essential complexity in spontaneously arisen orders of functional phenomena in a very different manner than how we study the “simple” and always identical phenomena which the physical domains postulate as their objects. These approaches are complementary (both are necessary, neither reduces to the other) rather than either-or. A final preliminary point. Please note at the outset one thing that this book is not. Epistemology should not be confused with, nor is it synonymous with, either the philosophy of science or the philosophy of the social sciences. Epistemology has to do with the theory of knowledge and its acquisition. It is concerned with the nature of science only to the extent that science exemplifies the nature of and acquisition of knowledge. The philosophy of science is broader, dealing with other topics that are common to scientific endeavors, and discusses the nature of knowledge only to the extent that it is important for topics such as the nature of explanation, the “logic” of science, the issue of reductionism, recent hot button issues such as sex and race in science, and other topics. For that, there are many introductory texts available, such as Godfrey-Smith (2003) or Risjord (2014). This book does not compete with them, and it does not care what contemporary philosophers say except with regard to the nature of knowledge.1 In fact, actual epistemology is more and more the domain of psychologists and scientists (ranging from physicists and biologists to psychologists, sociologists, and economists) and less
and less the province of traditional justificationist and nonevolutionary philosophy. Many of the core findings overviewed in subsequent chapters originated in the work of biologists, economists, mathematicians, and psychologists rather than philosophers of science. Chapter organization and main themes. After an introductory chapter, we begin in Part I by looking at human knowledge as the result of nervous system classification, which is always judgmental (or value-laden), and focus upon the fundamental activity of measurement. Chapters 2 through 8 below introduce epistemic problems of mensuration—of correctly (or meaningfully) assigning numbers to data in a domain—which are far different in the complex subjects than in physics, where it is easy and “natural” by comparison. The necessity of epistemic constraints upon knowledge—such as the necessity of a duality of descriptions of physical (rate-dependent or dynamical) and functional (rate-independent, intentional or “teleological” or meaningful) nature in all sciences—requires fundamental changes in our conception of how knowledge is acquired. Traditional philosophical accounts of knowledge and its acquisition are incorrect and outmoded. Knowledge is not “justified true belief” gained by inductive “logic” from a given factual basis as its source. As Xenophanes said 2500 years ago, no one has ever known certain truth, and “even if by chance he or she were to utter the final truth, they would not know it: For all is but a woven web of guesses.” The biology and psychology of inference and expectation—which supplies our cognitive apparatus and thus our knowledge—cannot be modeled upon the conception of individuals as mindless machines (like robots) fully and completely described by “social physics” and molecular biology. Better accounts have long been available, and we need to discard the still all too prevalent revivifications of earlier inadequate views and utilize the better ones. Measurement and its role provide a case in point. Mensuration in physical theory is easy because it deals only with identical and unchanging objects, never with always-differing living subjects. Objects such as atoms, alpha particles, quarks, etc., are all identical, and any one of them can substitute for any other one in our experiments and theory. But those objects could never be known to exist
as such unless there were subjects (of conceptual activity) who, as agents, study the physical domain and construct theories about it. Those theories transcend the purely physical domain which they are about—they are epistemic products of human conceptualization. All knowledge depends upon the existence of the functional or semiotic (as well as pragmatic) domain. Conceptualization can transcend and harness the physical realm: it is not a deterministic byproduct of purely physical processes (or just another physical process). Our choices harness physicality and produce genuine novelty—new behavior and new knowledge, even new existent things. Traditional determinism exists only in the rate-independent realm of human conception—it is a theoretical idea, a way of conceiving things, not a fact of “external” reality. That novelty (from our choices) can even reshape the physical universe, since it (our thought) can guide our actions in doing so—as in our building of extensions of our senses for knowing our environment, and machines and artifacts that terraform our planet and change our econiche. We are co-creators of our econiche. Part II examines the nature of our knowledge of both ourselves and the non-mental realm (including our own bodies in the latter category) as disclosed by structural realism. Epistemology is the theory of the nature of knowledge as well as of its acquisition. Here, we must trace the history of the gradual refinement of realism, the thesis that there is a real world external to our senses which is causally responsible for what we can come to know, to indicate why the world of appearance (naive or direct realism and the doctrine of phenomenalism) is not reality. All we can know is a matter of the relations between our appearances, never any ultimate or intrinsic properties of that reality. This separation of the knower from the known is explored throughout the book in many aspects and ramifications. In traditional philosophy, these are problems of the relation of mind and body—stemming from the Cartesian separation of a mental “substance” from the physical world (originally intended to stand outside reality to judge or assess it). Descartes created the ontological dualism of the mental and the physical. Examination of epistemology refutes such a speculation. What we can know—the nature of epistemology—constrains what ontological theories are possible. Part II reformulates the distinctions between knower
and known as dualisms in epistemology, not ontology. We can reformulate the separations of traditional mind–body problems by distinguishing the physical from the functional domains of existence, and explore how fundamentally emergent phenomena characterize living systems even though they are embodied in “physical” forms. In place of the usual mind–body problems, we find that epistemology requires a context of constraints consisting of the complementary employment of dualisms (opposed or irreducible alternatives). Without specifying accounts from the viewpoint of these dual perspectives, we are unable to explain life or the evolution of the knowledge it has produced. Part III explores inescapable dualisms and complementarities in science, life, and the nature of knowledge. Epistemology is one of the life sciences. That is a striking claim to traditional approaches, used to regarding it as an ahistorical “rational reconstruction” of codified propositions. The theory of knowledge can only be understood as an endeavor carried on by living subjects of conceptual activity. All life sciences require an evolutionary and historical account, because no two subjects (and subjects came into existence only with the beginning of life) are ever identical fundamental objects (as are postulated in physics). Throughout history, thinkers have argued about whether only one theory of knowledge is necessary for the different scientific endeavors, or many. Because the physical sciences were traditionally assumed to be more advanced than others, it was also assumed that they are paradigm exemplars of both what knowledge is (and should be) and how it is achieved. Many argued that there is only one ideal type of knowledge, exemplified by physics and its hard science cognates, and that nonphysical domains (whether dealing with biology, higher mammals as individuals [as in psychology], or social phenomena as products of living groups) have not made as much progress because they failed to adopt the mathematics and controlled methods of inquiry that have worked so well for physics (as examined in Part I). Part III explores why physics alone is only a beginning. Problems of agency, meaning, and evolution, such as what is an elementary concept (like specifying what is an organism), transcend the laws of nature disclosed by physics. Agents are functional phenomena exhibiting self-determination by internal constraint systems, not physical ones determined by external constraints. There is genuine
emergence in the biological as well as the conceptual realms, involving such things as the semantic closure of physical and symbolic components, the inability of laws of nature to explain speciation, the role of downward causation in adaptation, the anticipatory structure of all organismic cognition, the role of ambiguity and ignorance, why rules of behavior are not laws, and more. Part IV looks at domains of essential complexity, arguing that there are fundamental differences in the complexity of the subject matters involved in the human science domains, which therefore require a different sort of explanatory framework to understand their subject matter. The domains of complex phenomena can neither be studied nor be explained in the same manner as the simple domains. First, the domains in question cannot be experimental (as discussed in previous Parts). All human sciences (including the biological, psychological, and social) are indeed empirical domains subject to “scientific” inquiry that, because of their inherent complexity, can never have the sorts of control or measurement theory scaled mathematical underpinnings found in the “hard” sciences. They are thus empirical but not experimental (in the sense of physics). We do demonstration studies to show that general patterns of behavior are found, and cannot expect ratio-scaled point predictions in essential complexity. It turns out that the rules of behavior most capable of guiding the indefinitely extended domains of behaviors are negative in character, since an attempt to specify a potentially infinite list of “positive” actions to be performed cannot be held in any living memory. Evolution has chosen to guide organisms by negative prohibitions to general classes of behaviors. Learning is negative in character in an uncertain and changing world. We learn what mistakes to avoid instead of what specific behaviors we must produce. Ambiguity and lack of understanding surround us at all times in the real world. That is our evolutionary existential predicament. The only resolution to ambiguity requires the provision of more context. In the environment, that resolution is provided by action—by sampling more—walking toward and around an unfamiliar object to see what it is, listening to more of a speaker’s words and “going back over” their context to see which of several alternatives was intended, and so forth. In short, behavior—ours or others’—can only be disambiguated and “understood” by supplying
the derivational history behind the surface structure linear strings of which it is constituted. That approach is the future of psychology and all the other complex human sciences. Part V explores a problem for scientific inquiry that has become cancerous in the era of “big science” and big government. That problem is not the acquisition of knowledge but rather the suppression of knowledge and its acquisition (and dissenting views) by factors external to the practice of science itself. When research is commissioned and directed by other social forces rather than the intellectual curiosity of researchers themselves—by religion, by political expediency, by momentary “correctness” of opinion—the unwillingness to disagree with powerful funding agencies and fear of loss of job security and one’s place in the research community (being canceled) will force results to be determined by those external demands. The problems posed by free inquiry into a domain will be supplanted by research tailor-made to agree with what politicians, funding administrators, special interest groups, or vocal popular opinions or “feelings” demand, instead of by facts and theories resulting from unhampered research. The technology of politics (there is no “science” thereof independent of social psychology, anthropology, sociology, and economics) provides almost nothing except more and more egregious examples of this. Progressivist researchers and funding sources have become cultural Marxist to such an extent that unbiased research (or research that does not support its momentary correctness) is all but impossible to find, and harder still to evaluate. Part VI concludes with a survey of problems in epistemology as it is found in traditional Western philosophy. These views all define knowledge as justified true belief, while the history of philosophy is that of the gradual abandonment of this conception, first in a classic sense, then in the last two centuries as a “neo” version based upon probability, and finally with the “positive” thinkers (who believe genuine knowledge is possible) being overcome by the skeptics who, for one or another reason, give up on the possibility of knowledge and adopt one or another form of conventionalism or instrumentalism instead. Here, we will find the majority of references to currently popular positions, primarily in critical discussion, because they exemplify one or another of these inadequate views. When understood from a non-justificational evolutionary
epistemology, they can be put into perspective and in many instances reinterpreted very differently. Speaking of understanding, that is the first issue to address in Chapter 2.

∗ ∗ ∗

This volume stems from over 50 years of study and interest in epistemology, philosophy of science, and the methodology of scientific research. It owes much to discussions with, and the work of, the late professors Wilfrid Sellars, Paul Meehl, Herbert Feigl, David Bohm, Grover Maxwell, Thomas S. Kuhn, William Bartley III, Donald T. Campbell, Sir Karl Popper, Gerard Radnitzky, as well as professors Robert Shaw, Howard Pattee, John Anthony Johnson, William N. Butos, James Wible, Doctors Neil P. Young, Günter Trendler, and Leslie Marsh (for wanting it in his series on classical liberalism and helping it get there), and many more, but with special thanks to the late professor Friedrich A. Hayek, who, as another marginal man on the border of many disciplines, understood most of the problems addressed in this volume before the rest of us even knew that there were problems.
Note

1. Rosenberg on the philosophy of social science. A comprehensive introductory text by Rosenberg (2016) provides a contrast to what is intended in this book. Like this volume, Rosenberg argues for a historical approach, describing the persistent issues, but in “the new vocabulary” of the social fields for each new edition. I certainly agree that we face “old wine in new bottles, but just as intoxicating as ever” (see p. x) in stating and criticizing these issues. At that point, however, we diverge. Despite characterizing the “one central problem” of the social sciences as “what sort of knowledge they can or should seek,” there is no discussion of the issues this book addresses. Written from the standpoint of a philosopher, Rosenberg’s volume regards epistemology from that standpoint alone rather than as one of the life sciences. As such, it offers no treatment of the evolutionary theory of society stemming from the eighteenth-century Scottish
moralist philosophers (historically the first social “scientists,” and also not noted by Rosenberg) through the nineteenth-century continental liberal theorists to the twentieth-century Austrian economists, or the theory of spontaneously organized complexity for the biological and psychological individual, as well as for the social order as the result of action but not design stemming from Hayek and the evolutionary epistemologists discussed in Weimer (2022a, 2022b). Nor is there any discussion of the conceptual connection between that evolutionary approach to epistemology and the social philosophy of classical liberalism. Thus one searches in vain through Rosenberg’s Chapters 7 and 8 (entitled “Social psychology and the construction of society” and “European philosophy of social science”) for anything comparable to the discussions in this book, or the discussions of the fundamental issues of mensuration found in the first seven chapters. Nor is there any understanding of the fact that rationality is not exhausted by any explicit or fully conscious theories thereof. The approach of this volume is that epistemology, as one of the life sciences, is informed primarily from biology and the social domains (such as psychology and economics) rather than from traditional justificationist philosophy of the sort criticized in the appendix chapters. That is why central positions discussed here stem from the philosophy of physics, “origin of life” research, individual and social psychology, economics, and similar areas. In a nutshell, despite large overlap on several topics, the focus of this volume simply is not upon the traditional or “received view” philosophical positions on social science epistemology. In that regard, this volume seeks to replace that traditional viewpoint with a more adequate one to guide future inquiry. Evolutionary epistemology is not traditional philosophy—it points toward its replacement by nonjustificational philosophy, and informs it with recent scientific problems and potential solutions as replacements for traditional ones.
References

Godfrey-Smith, P. (2003). Theory and Reality: An Introduction to the Philosophy of Science. University of Chicago Press.
Risjord, M. (2014). Philosophy of Social Science: A Contemporary Introduction. Routledge.
Rosenberg, A. (2016). Philosophy of Social Science. Westview Press.
Weimer, W. B. (2022a). Retrieving Liberalism from Rationalist Constructivism: History and its Betrayal (Vol. I). Palgrave Macmillan.
Weimer, W. B. (2022b). Retrieving Liberalism from Rationalist Constructivism: Basics of a Liberal Psychological, Social and Moral Order (Vol. II). Palgrave Macmillan.
2 Understanding, Explaining, and Knowing
Any fool can know. The point is to understand.
Commonly attributed to Albert Einstein
What is knowledge, and how is it used? Knowledge is the result of our theoretical understanding of our selves and our econiche—our life and the universe we inhabit. We use our knowledge claims to understand ourselves and our world. Knowledge is functional rather than physical—it functions to aid survival. But what is the “understanding” for which we use knowledge?
The Nature of Understanding

Einstein was claimed to have said that any fool can know—the point is rather to understand. Regardless of its source, we need to explore this idea. In both science and ordinary reasoning, as well as logic and mathematics, understanding is the same thing: we reason by classification and analogy to determine identity of apparent disparates through a basic statement of equivalence, literally an equation of the terms. In all cases,
this equation is symbolized as an equals sign: X equals Y (or X = Y). In ordinary language, this means that something (X) is equal or equivalent to something else (Y). All argumentative claiming, which is what both commonsense and scientific knowledge claims are, is of this form. We are fundamentally analog devices, making analogies—arguing this is like that—in the process of understanding. In all theories, it is a specification of how that which we wish to account for (to explain) is identical to (or equal to, or can be taken to be, represented as, etc.) something else, which is a description that is on the other side of the “equals” sign. Symmetry between descriptions based upon identity is what constitutes human understanding. Explanations in science and ordinary discourse are rhetorical—they are arguments claiming that equality (which means identity in some specifiable form) obtains between two disparate terms or propositional structures: In ordinary language, we are saying “This is equal to that,” or “this isa that,” to use a common “logical” term. We are creatures who are in the business of making equations to relate disparate things, all the way from the most basic judgments of similarity or disparity in nervous system activity through thing-kind identifications of classes to scientific theories. All knowledge claims are arguments. They are arguments in two essential respects. First, they are arguments for the equivalence of what is on each side of the equals sign. We explain by saying: “This” means “that.” And this conceptual equivalence structure entails that knowledge, as part of a subject’s conceptual scheme, can never be identical to that which is known, and in consequence, entails that the knower cannot be the same as that which is known. A second essential sense of argument is that theories and knowledge claims are themselves arguments. We argue—or affirm and attempt to persuade others—that “since my theory is true, the world (of the theory’s data domain) is necessarily the way it is.” Our “logic” is not the traditional material implication or “if—then” reasoning of classical logic, but more the adjunctive implication of the old Stoic logicians, who addressed propositions (not classes), and whose implication statement is “since—necessarily.” We are in the business of argumentative claiming, and our
theories are always arguments. In the classic distinction of logic, dialectic, and rhetoric, science is a series of argumentative rhetorical transactions, not just a matter of “logic.” (Whether it is ever dialectical is an empirical issue.) While we are not all mathematicians (or logicians), we all constantly do what they do for a living—make equations between knowns (or, what is the same thing, taken-for-granteds) and unknowns. Thus thing-kind identification, or the problem of stimulus equivalence as it came to be known in psychology, is the fundamental problem at the heart of all human activity and inquiry. Everything we do is basically a matter of judgment—of searching for identities and noting disparities. This leads immediately and inexorably to problems of functionality, and meaning and its manifestations, because (physical) stimuli are equivalent (or are dissimilar) within our conception only because they mean the same (or mean dissimilar) thing(s) to a living (in the functional-semantic realm) subject. We judge things to have the same or different meanings. Understanding is a basic psychological process. It is our attempt to adapt to an ever-changing world. What can we learn in determining identities? If we postulate that, to use a classic example, F = ma, do we learn anything? No. What we do in making such an equivalence statement is literally just that—we postulate something as a starting point. As a knowledge claim, such a postulation may then be seen to occupy a definite place in our conceptual schemes and to entail that certain results must obtain in the universe if our postulation is true or near to the truth. The laws of nature (as our best knowledge claims in physics) have to be tied down to empirical reality to have any empirical meaning. Are the deductive consequences of such schemes instances of learning? Not if learning is taken in its usual interpretation as learning-from-experience. The empirical realm has not yet played any role in determining the correctness of the conjectured postulate or claim. We have to look at the world to see if the postulated and deduced results fit with observed reality. Thus empirical reality enters later, after the fact from an explanatory theory, in the testing of a knowledge claim.
From Axiomatics to Hypothetico-Deductive Method

With the advent of non-Euclidean geometries in the nineteenth century, physics dropped the pretense (left over from infatuation with Euclid) that laws of nature were somehow justified or true because they were derivations from “true” premises (axioms). Theorists realized that they had simply been postulating (conjecturing, guessing, whatever) what was necessary to have the “laws” work. Those postulations were called theories of the realms involved. Before that era, explicit empirical theories did not play a central role in explanations. Heinrich Hertz is usually credited with beginning this new approach. Boltzmann (in a lecture in 1899) summarized Hertz’s change from axiomatics: Hertz noted that “Especially in the field of physics our conviction of the correctness of a general theory ultimately does not rest on its derivation by means of the Euclidean method, but rather on the fact that the theory leads us to correct inferences about appearances in all the cases then known.… Experience, after all, remains the sole judge of the usefulness of the theory…” (1960, p. 248). Hertz proposed that theories were thought “pictures” of a domain (a view further fleshed out in the mid-twentieth century by, among others, Hanson, 1958, 1970; Bohm, 1965; and Kuhn, 1970, 1977). Boltzmann noted that our task is only to construct what he called “inner representation-pictures”:

Proceeding in this way, we do not as yet take possible experiential facts into consideration, but merely make the effort to develop our thought-pictures with as much clarity as possible and to draw from them all possible consequences. Only subsequently, after the entire exposition of the picture has been completed, do we check its agreement with experiential facts…. We shall call this method deductive representation. (ibid., p. 249)
Thus was born the famous hypothetico-deductive “method” of science that has since dominated positivist and empiricist accounts. From that time on, so-called hard science abandoned the axiomatic method
(although pockets of social science have attempted to retain it). This H-D account of explanation provides only “the logic of the finished science report” and deliberately ignores entirely what became called the context of discovery (where learning occurs), leaving discovery and thus the growth of knowledge to “mere” psychology or sociology (which is where David Hume, in the Treatise in 1739, and Adam Ferguson in his Essay in 1767, had already left it).
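The schema just described is simple enough to be put in a few lines of code. What follows is a minimal sketch of the hypothetico-deductive pattern, not anything from the text itself: the postulate F = ma stands in for any conjectured theory, and the “observations” and tolerance are invented for illustration.

```python
# A minimal sketch of the hypothetico-deductive schema:
# 1. postulate a theory; 2. deduce what must be observed if it holds;
# 3. only afterwards check the deduced picture against experience.

def predicted_force(mass_kg: float, accel_ms2: float) -> float:
    """Deduction from the postulated (never proven) theory F = m * a."""
    return mass_kg * accel_ms2

# Hypothetical observations: (mass, acceleration, measured force).
observations = [(2.0, 3.0, 6.1), (1.5, 4.0, 5.9), (3.0, 2.0, 6.4)]

TOLERANCE = 0.5  # mismatch allowed before a result counts as an error

# Experience enters only here, after the deduction is complete.
errors = [(m, a, f) for (m, a, f) in observations
          if abs(predicted_force(m, a) - f) > TOLERANCE]

if errors:
    print("Conjecture refuted (or data suspect) at:", errors)
else:
    print("Conjecture survives this test; it remains a guess.")
```

Note the asymmetry the sketch encodes: agreement leaves the conjecture standing but unproven, while a single error beyond tolerance indicts the whole picture (or the data), which is exactly the point developed in the next section.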
Learning and the Limited Role of Experience

First consider how we learn (not what). What is the role of experience, or of empirical results, in the practice of science? The answer can be stated quite succinctly: very crucial and very limited. The evolutionary approach to epistemology (Campbell, 1974b, 1988; Popper, 1963; Weimer, 1979) teaches us that all learning is a matter of trial and error elimination. Our hypotheses can never be shown to be completely correct (any more than a species can be perfectly adapted to an econiche), since any interesting hypothesis has an infinitude of data that is relevant to its assessment. We can never accumulate, let alone assess, an infinite amount of data. But finding any instances of errors, results that are incompatible with the hypothesis, always suffices to show that the whole picture cannot be correct. As thinkers since Duhem (in 1914/1954) have noted, something must be discarded because of the inconsistency—either the hypothesis is false, the data incorrectly collected or interpreted, or some combination thereof. Advances in understanding consist in clarifying abstract conceptual frameworks (which are by definition transempirical) and creating and assimilating new data to emerging conceptual-theoretical structures. Empirical data are relevant to these processes, but can never totally resolve issues in them. Conceptual issues can be changed (sometimes clarified, sometimes confused) by data, and data can be changed by conceptual frameworks (because facts are always relative to theories), but no resolution in either direction is ever final. Experience in its role in the creation of data is relevant but never decisive. We infer to facts (the
theoretical entities we call “data”) rather than from them. All factual attribution is relative to a theoretical (conceptual) framework. As Goethe said so aptly centuries ago, were the eye not already attuned to see it, the sun could not be seen by it. If our nervous systems did not already operate with a theory to specify what constituted a fact, we could have no facts at all. Facts are not neutral bits to be mindlessly picked up and thrown into a bucket. They are as conjectural as our other theories. We are co-creators of our knowledge of the “external” world.
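Trial and error elimination can be caricatured in the same spirit. The candidate rules and “observations” below are invented for the example; the point is only the asymmetry between elimination and survival.

```python
# Toy picture of learning as trial and error elimination: candidate
# conjectures are winnowed by errors; the survivors are never verified.

candidates = {
    "add_2":  lambda x: x + 2,
    "double": lambda x: x * 2,
    "square": lambda x: x ** 2,
}

# Hypothetical observations of an unknown process: (input, output).
trials = [(1, 2), (2, 4), (3, 6)]

surviving = dict(candidates)
for x, observed in trials:
    for name, rule in list(surviving.items()):
        if rule(x) != observed:  # one incompatible result eliminates
            del surviving[name]

# "double" survives these trials, but the very next observation could
# still eliminate it: survival is not proof.
print(sorted(surviving))  # ['double']
```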
Where Does the Illusion of Certainty Come From?

Against the very fallible and uncertain factual “basis” of our knowledge, there is a sense of certainty in our conceptual structures. We know (or rather, feel we know) that there are eternal verities (e.g., 2 + 2 = 4) and theoretical truths (e.g., in the kinetic theory, molecules in a state of rapid agitation must of necessity be hot—that statement represents an analytic truth, a definitional fusion of the concepts, a statement of equivalence between the terms). We feel that things must necessarily be related, and since that is so, things are necessarily internally related. Why is this so? Probably the answer is in the overextension of the theory of human linguistic conceptual structures—which are syntactic and semantic definitional fusions or statements of equivalency—from epistemology into ontology, which is to say, from a theory of the acquisition of knowledge into a thesis about the nature of reality itself. It is true that syntactic systems, if internally consistent, yield “deductions” that are “eternal verities” in the sense that they are valid without time limit within that system. Such truths are true ex vi terminorum, literally in virtue of the definition of the terms involved, and not because of any content that we attach to them via a separate empirical or semantic component or identification. The doctrine of internal or semantic necessity overextends definitional fusions from conceptual thought into the “outside realm,” the nature of the dynamical universe itself. For this overextension, there is no empirical evidence at all. The necessitarian position, tenable in empty
syntactic systems like logic or mathematics, is untenable when extended to reality—to empirical domains.
Mathematics and Other Notational Forms of Linguistic Precision

Mathematical expressions in science are not knowledge or sources of knowledge—they are just notational shorthand devices to express identity relationships. Thus mathematics is propaedeutic to science rather than definitive of its essence. Human understanding is about explicating conjectured identity relationships, and that may or may not involve mathematics. A brief excursus with an apocryphal story concerning a clash between Denis Diderot and Leonhard Euler about a putative “mathematical proof” for the existence of God illustrates this, as well as the perennial clash between the “hard” heads who attempt to mathematize everything and the “muddle” heads who do not. The fictional confrontation several centuries ago at the Russian Imperial Court had Euler putting on the board the following formula, which was intended to have the explanatory power of a mathematical proof. This showy procedure mirrors the procedure of mathematical “proof” as an explanation (or explanatory structure). It begins with producing an equation and then claims to “deduce” certain or true consequences from it:

(a + b)^n / n = X

After writing this, Euler solemnly intoned to Diderot “Donc Dieu existe; répondez!”. Let us see what this empty syntactic formula could possibly mean, to determine whether we need safe conduct tickets back to France (as Diderot was alleged to demand from the Russian Court) or not. In plain English what Euler said is:

Some unknown quantity, called X, is equivalent to some other quantity which is determined by adding the two quantities denoted by a and b,
which are then multiplied by their sum for n number of times, and then the resultant total is divided by the same number n. Thus God exists. Now reply.
Put in this fashion, no one could possibly construe this pronouncement as a “proof ” (which has to “deduce” a conclusion) of anything at all. All this or any other mathematical formula does is provide a shorthand notation for how we are to manipulate specified quantities in order to obtain their equivalence to something unknown, usually designated by “X.” All of mathematics is a matter of putting empty symbols that we agree to have represent something on one or the other side of an equals sign. What the symbols stand for is a shorthand notation for quantities and operations to be performed upon quantities (or on other symbols standing for quantities). The meaning of those symbols in the real world is totally outside the symbols themselves. All of mathematics specifies what is on each side of the undefined primitive symbol =, understood as what is expressed in natural language English as the word “equals.” When we “solve” a problem in math, we specify that the unknown (the x) is equivalent to the symbol manipulation procedure denoted on the other side of the = sign, and plugging in certain quantities to find a solution specifies a particular determination as that result. Mathematics is about performing operations on quantities or variables standing for quantities to produce equivalences. Nothing more, nothing less. Mathematical formulae have no meaning whatsoever—math is about the manipulation of contentless symbols. Syntactic structures are totally meaningless in themselves. We have to add semantic content to the syntax to have anything meaningful. So where is the meaning in mathematical concepts and formulae? The answer is always the same: outside of the symbols themselves, in a theory of what the symbols stand for. Mathematics is a set of symbol manipulation systems according to syntactic structuring devices, or the rules of symbol manipulation (this is why a mindless algorithmic machine can do mathematics). Whether “pure” or applied, mathematics is constituted of nothing more nor less than syntactic rules of determination of equivalences. The study of pure mathematics uncovers systems of syntactic structures that specify certain forms of outcome when the rules of symbol
manipulation are applied. Applied mathematics utilizes the empirically empty systems of pure mathematics after independently having postulated their relevance in the description of given empirical domains. Within the realm of the pure or un-interpreted syntactic structures (or calculi), there is no meaning attached to the merely nominal symbols, and all that is of concern is the consistency of the structures. Once the consistency of the structures is established, results of symbolic manipulation (if correctly carried out) are certain and thus inviolate. Within all applied mathematical systems, “applied” by (non-mathematical) theoretical identification with (idealized) empirical phenomena, there is no certainty whatever even though the empty symbol system is consistent and thus internally coherent and “certain.” Mathematics is nothing but symbol manipulation systems defined by their syntactic rules. There is never a semantic component in pure mathematics—all meaning comes from our interpreting in some outside theoretical framework what those empty symbols represent.
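As a purely illustrative sketch (Python, using the sympy symbolic-algebra library; this example is added here and nothing in the argument depends on it), a “mindless algorithmic machine” can do mathematics as follows: the program rearranges Euler’s empty formula by syntactic rules alone, with no grasp of what a, b, n, or X might mean.

```python
# A machine manipulating contentless symbols: it "solves" Euler's
# formula for X by applying syntactic rules alone. Nothing here
# knows (or needs to know) what the symbols stand for.
import sympy as sp

a, b, n, X = sp.symbols('a b n X')

# The apocryphal formula: (a + b)^n / n = X
equation = sp.Eq((a + b)**n / n, X)

# Purely rule-governed rearrangement: isolate X.
print(sp.solve(equation, X))   # [(a + b)**n/n]

# The same machinery expands (a + b)^2 by syntactic rules, with
# no semantic content anywhere in the process.
print(sp.expand((a + b)**2))   # a**2 + 2*a*b + b**2
```

Whatever meaning the printed result has must be supplied from outside the system, by a theory of what a, b, n, and X stand for.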
How Does Meaning Relate to Understanding?

Human understanding is inherently a matter of relations. The primitive form of relation we employ is judgment of equality—the equals sign in an equation. Equations relate meanings on one side of an equals sign to meanings on the other side. “Equals” is a relation (a relational structure). As such, it is a purely structural or syntactic concept, with no intrinsic meaning beyond the structuring it provides. To say that X equals Y (or, with negation, that X does not equal Y) does not put any particular meaning in the “equals”; it only relates the meanings found in specifying X and in specifying Y. So understanding is not meaning. Understanding is only tangentially related to meaning because it states that some given meaning is equal to another given meaning. This applies in both mathematics and natural language (or any aspect of semantic cognition). Meaning is not a relation—in itself it is a predication. All extant “theories of meaning” tend to ignore this (for example, meaning as “use,” or meaning as neural activity, or meaning as XYZ). They are in fact theories (or statements) of the relationships in which meanings participate. What
meaning is as a predication—which is what an actual theory of meaning should provide—appears to be beyond our reach except from an evolutionary perspective. Given our present knowledge, meaning appears as a brute fact; it is something that we take for granted as referring to our perceived existential predicament, and we let it go at that. One should not confuse “what meaning is” with the issue of how it arose. Here we have a more adequate picture: it appears to be a concomitant to classification in neural activity, as when an organism must make a rapid classification—a value judgment—as to whether a stimulus pattern is threatening or harmful or not threatening and not harmful. The rapid reptilian “fight or flight”—or the even earlier faint response (still used to good effect by the opossum or hognose snake)—appears to be the source of our basic meanings. Mammalian meaning arose in and is based in the primitive judgments of what are traditionally called “passions” and emotions, not in cold, reflective cognition, which is a relational reworking and refining of that initial broad-based snap-judgment classification in the CNS. What we call reason is a refinement of judgmental passion, not a different, somehow superior, kind of judgment.
The Use of Mathematics in the Social and Physical Domains

Looking at an article in a physics journal usually reveals a bewildering welter of equations and little explanatory or theoretical text. Looking at an article in psychology usually reveals dense text and rarely if ever any equation. This is indicative of the differing subject matter of the social and physical domains, and reflects a corresponding difference in the nature and role of mathematics in such disciplines. In the physical domain, the “subjects” are all identical objects in a class, having no unique properties whatever. Electrons and photons, for instance, are all identical instances of their class and are completely interchangeable. In the social domain, all subjects are different, having vastly different knowledge and experience and skill and evolutionary or developmental history. No two human beings (or any living animals) are the same, no matter how carefully they are matched on some dimension by the most meticulous
“experimenter.” This fundamental and unavoidable difference between the “subjects” of the social domains and the objects of the physical domains forces us to acknowledge equally fundamental and unavoidable differences in the nature and role of measurement (and thus the use of mathematics) in these domains.1 Physicists make models describing patterns of interaction among conceptually (and, so far as we can tell, empirically) identical objects. Mathematical formulae (usually algebraic) describe abstract patterns when we do not assume, or are unable to possess, particular information about the specific entities actually involved. A familiar example is thermodynamics, where we know absolutely nothing about the movement or magnitude of any individual molecule in a cloud of gas. Here the so-called law of large numbers enables us to make statistical statements that allow predictions of probabilities with a high degree of accuracy, but these predictions can never be about or even involve any single individual entity (e.g., a molecule of gas) in any way. Physics makes predictions of ensembles only, never of a single fundamental “object.” Physics is a matter of idealizations about identical members of totally homogeneous classes.
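A toy simulation can exhibit this asymmetry (Python with numpy; the distribution and numbers are invented for illustration and carry no physical claim): the ensemble average of a million “molecules” is predictable to high accuracy, while any single molecule remains wholly unpredictable.

```python
# Law of large numbers, toy version: ensemble statistics of a "gas"
# are stable and predictable; individual molecules are not.
import numpy as np

rng = np.random.default_rng(seed=0)

# Speeds for a million "molecules" (arbitrary units and an arbitrary
# distribution -- the point does not depend on the physics).
speeds = rng.exponential(scale=500.0, size=1_000_000)

print(f"ensemble mean speed: {speeds.mean():.1f}")   # very close to 500.0
print(f"one molecule's speed: {speeds[0]:.1f}")      # essentially arbitrary

# A rerun with a different seed barely moves the ensemble mean,
# but the individual value changes completely.
speeds2 = np.random.default_rng(seed=1).exponential(scale=500.0, size=1_000_000)
print(f"new ensemble mean:   {speeds2.mean():.1f}")
print(f"new first molecule:  {speeds2[0]:.1f}")
```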
Measurement

Measurement is always and indispensably an act of classification (and thus inherently a judgment, an assignment of meaning to the flux of non-mental phenomena). Physicists and social scientists assign numbers as “measurements” of things, and assume that there is no problem in doing so. However, the fields are fundamentally different here. Physicists classify by measuring definite magnitudes which have been shown to have powerful generalizability and important mathematical properties in virtue of having been made according to specified scaling-theory criteria. Social scientists cannot blindly follow this approach because their attachment of “numbers” to living entities does not meet the criteria of scaling and measurement found in physical domains. Consider this:
The true reason why the physical sciences must rely on measurements is that it has been recognized that things which appear alike to our senses frequently do not behave in the same manner, and that sometimes things which appear alike to us behave very differently if examined. The physicist ... was often compelled to substitute for the classification of different objects which our senses provide to us a different classification which was based solely on the relations of objective things toward each other. Now this is really what measurement amounts to: a classification of objects according to the manner in which they act on other objects. But to explain human action all that is relevant is how the things appear to human beings, to acting men. This depends on whether men regard two things as the same or different kinds of things, not what they really are, unknown to them. (Hayek, 1983, pp. 23–24)
In social domains such as economics or psychology, one cannot expect to find permanent (or even relatively constant) relations between aggregates or averages. Our populations are never large enough for the law of large numbers to apply—and thus we can never actually ascertain ratio-scale-based probabilities. The social domain does not actually deal with true mass phenomena, nor with identical objects. It deals with ambiguity. Every living individual subject is different in some crucial aspect that no experimenter can ever hope to “measure” (or often even know about, let alone “control” for). This ensures that there are inherent and fundamental limitations to our possible knowledge and hence to our predictive ability. This cannot be overcome by making finer or more precise measurements. Perfectly fine measurement would merely confirm what we already know—that every subject is unique in detail with respect to every other subject. Hence the differences that cause our lack of knowledge cannot be “measured” away. All we can hope for is to find generalized or abstract patterns of regularity, and we will never be able to predict particulars in any social domain. This is a fundamental conceptual difference between the complex functional and the relatively simple physical domains that science studies. This difference in what our knowledge consists of in these very different domains of physicality and functionality will ramify throughout the discussion that follows. In all cases, there will be only a bare minimum of symbolic equations.
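As a small illustration of why the “numbers” attached to living subjects do not automatically support arithmetic (Python; the scores are invented for this sketch), consider what happens to group means when merely ordinal codes are relabeled in a way that preserves every ordering judgment:

```python
# Means are not meaningful on ordinal data: a monotone relabeling
# of the codes (which preserves every ordering judgment) can flip
# which group has the higher "average."
import statistics

group_a = [1, 5, 5]   # invented ordinal ratings, e.g. 1 = lowest
group_b = [4, 4, 4]

print(statistics.mean(group_a), statistics.mean(group_b))  # ~3.67 vs 4.0

# Relabel the top category: 5 -> 10. Order is fully preserved
# (1 < 2 < 3 < 4 < 10), so no ordinal information has changed.
relabel = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}
a2 = [relabel[x] for x in group_a]
b2 = [relabel[x] for x in group_b]

print(statistics.mean(a2), statistics.mean(b2))  # 7.0 vs 4.0 -- reversed
```

Since nothing ordinal has changed, yet the comparison of “averages” reverses, the numbers attached to the subjects did not have the arithmetic properties the computation assumed.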
There is never anything in any of the equations of any science that cannot be said completely and correctly, with no loss of semantic content, in the natural language(s) available to our common sense reasoning. Mathematics provides only a shorthand formulation, literally a shortcut, to be used as an aid in understanding. All of science can in principle be done in a natural language with adequately determinate semantics and concepts to substitute for the mathematical functions and variables. We use math because it is so economical as a shorthand approach to working out the logical consequences of relationships that would often be impossibly long and cumbersome to state in natural language. In no case is there any extra or hidden meaning in the mathematics beyond the syntactic consequences of the symbol manipulations.
Understanding and Knowledge Are Functional Concepts Not Subject to Natural Law Determinism

The domain of our understanding is within the conceptual realm of pragmatics. Pragmatics deals with functions and meanings in their contexts. Functional specification is not and cannot ever be physical. The laws of nature that physics can disclose do not apply to the functional domain. But while we cannot become Superman (or Superwoman!) and leap over tall buildings, because we cannot violate those natural laws, our functional specifications and concepts are not determined by the laws. So long as we do not violate natural law constraints, we are free to do (and to think and theorize) whatever and however we want. Thought and theory are constrained by the laws of nature, but they are not determined by them. There is genuine freedom, true novelty, and unpredictable creativity in human conceptualization and behavior. Creativity or productivity is rule-governed, but it is not directly determined by natural laws or deducible from prior events. Laws are inviolate and unbreakable. Rules of conduct (which apply to thought and indeed to all behavior) can be broken and changed, corrected and often improved, even refuted. Understanding how this can be so is one of the most important things that
an evolutionary approach to epistemology can provide. We must realize that understanding and explanation in an evolutionary framework are not every-where, every-when deterministic. Finite organisms in a universe of infinite variety and complexity face immense problems in surviving. To survive requires adaptation to the ambiguous flux, the local environment (the econiche) in which an organism is found. How does this occur? Aside from religion (supernatural determinism or predestination), only one hypothesis has been advanced and explored: that evolution through time has enabled adaptation to an econiche for those populations who survive. We no longer debate this (it is taken as a truism) and now debate only what possible “mechanisms” or factors are involved in the process of evolution. Factors to consider are the structure of the universe and its necessary (law-like) characteristics, the enormous welter of “frozen accidents” constituting the boundary and initial physical conditions, the evolved structures within the organism and their necessary characteristics, the interactions of populations of organisms, and the econiche constructed by the species. Thus theories of evolution must focus on the context of constraints provided by the natural environment (usually regarded as the domain of the “physical” sciences), constraints provided by the evolved structures of the organisms and their capabilities (the biological and psychological domains), and group structure constraints on populations (for humans, cultural or sociological factors). What is human knowledge, and what is an explanation? What is understanding? We should consider proposed answers for how well they address issues from a coherent evolutionary point of view, and how testable the theories they generate can be. Traditional philosophy (as noted in the Appendix chapters), based on the idea that knowledge is justified true belief and the justificationist metatheory of inference and rationality (Weimer, 1979), is simply false and serves no purpose except to provide a series of puzzles and paradoxes for “academics” to publish about in their quest for tenure, promotion, and self-aggrandizement. In terms of explanatory adequacy for basic concepts such as understanding, explanation, knowledge, learning, inference, and dozens more, justificationist accounts are bankrupt, as several epistemologists have argued,
and can easily and completely be replaced by evolutionary and nonjustificational conceptions of rationality and the conduct of inquiry (Bartley, 1984; Campbell, 1974a, 1974b, 1988; Weimer, 1979). Such a shift requires re-examination of the entire range of concepts in traditional epistemology. We begin that task by posing the initial problems differently, from an evolutionary perspective rather than a static, aprioristic, or deductive one. The fundamental problem both of existence on this planet (survival) and of epistemology (knowledge and its acquisition) is to determine meaning, or, what is the same thing, to resolve or minimize ambiguity. Organisms are faced with an infinitely complex environment and must solve the problem of stimulus equivalence to survive. This is an example of the difference between boundary conditions and physical theory: there is an infinitude of complexity not subject to physical regularity or determined by “law,” as opposed to the determinate laws that classify the recurrent regularities in the flux of existence (arising from the—always functional—determination of equivalence and hence from classification by a subject). Without recognizing sameness of stimulation (situations), the ability to classify instances as the same as others, there is no possibility of adaptation or learning or knowledge. In an environment in which nothing exists except ambiguity, identifiable objects simply do not exist. No life would survive in a world of total novelty and nonrecurrence. The first constraint on survival is thus the requirement of identifying instances of particulars as such. Classification of thing-kinds according to principles of similarity or relatedness that transcend the particulars themselves is necessary for adaptation and for learning. Two lines of inquiry in several domains must be pursued to see how classification of thing-kinds has arisen. First is the manner in which organismic adaptation, and its extension to the individual’s experience in learning, can occur. This is the domain of evolutionary biology, psychology, and other social sciences. The second is the domain of evolutionary epistemology and (primarily) philosophical analysis. These areas must specify what our knowledge can be, and how our knowledge can occur in the first place, i.e., what the constraints on human knowledge and its acquisition actually consist in, to show how we can have any knowledge at
all, and how it relates to our regulatory ideals of truth, consistency, predictability, and so on. Classification—which is judgmental—becomes knowledge when embedded in a conceptual system capable of surviving. This makes our world meaningful. The fundamental problem organisms face, the resolution of ambiguity, becomes the problem of meaning and its manifestations in the natural and functional orders.
Pitfalls and Promises of Ambiguity and Ignorance

The fundamental problem of the human intellectual existential predicament is that of ambiguity. We are creatures who are in the business of trying to learn how to live within an infinite sea of ambiguity and are trying to detect its meaningful content, literally to make sense out of it. So we are trying to eliminate ambiguity (by both commonsense and scientific thinking) in order to find what is intelligible in the universe. With respect to the problem of understanding the universe, we hope for simple declarative sentences (the 1950s television detective’s “just the facts, ma’am”) that state what there is and, from the functional perspective, why it is that way. We want simple equivalence relationships (ideally a single term on each side of our = sign) in which we denote what is an unknown as an exemplification of something else that is a known (or taken for granted). Humans have almost invariably used a simple strategy to try to do this: differentiation into parts. We take things (whatever that term means or refers to) out of their context in a whole (whatever that means) in the hope that by studying them in isolation and taking them apart in finer- and finer-grained detail, we can eliminate ambiguity and say “this is an instance of that.” This is the method of analysis into basic components: we rip things apart in order to see what they are composed of. It is often described by noting that science gets most of its information by this process of reductionism, exploring the details, then the details of the details, until all the smallest bits of the structure, or the smallest parts of the mechanism, are laid out. Only then can knowledge be extended to encompass the whole
system, or the history of the system. But this procedure does not explain; instead it increases ambiguity. This approach of taking things apart into their simplest available bits has led to enormous strides in compartmentalizing (really systematizing or “deductively unifying,” or “finding the laws of,” etc.) some of our environment. But it does not reduce the amount of ambiguity we faced initially. The very success of “explanation” increases that ambiguity. How can it be that (successfully) attempting to reduce ambiguity has actually increased it? Understanding how and why this is so is one of the most important things I can convey to you. There must be a supplement to this approach (especially when studying the living organism), looking back over the historical development, to see why the pieces fitted together this way. Begin by understanding that a program of synthesis and knowing the whole context has to complement any program of differential analysis. We cannot successfully study anything in total “isolation,” because everything is connected to something else (especially at the quantum level: see Bohm, 1976), no matter how good our job of isolation is. We always make a decision to take something as an isolated focus of our attention. When we do so, we make a closed system that cuts off the effects of what is outside it and fossilizes what we are systematizing and differentiating. This is an aspect of the problem of agency—knowledge presupposes (perhaps better, is only created by) subjects who acquire knowledge by this process, and in so doing the process separates the knower from the objects and phenomena known. Doing so cuts things out (makes an epistemic cut, as Pattee, 2012, noted) from the open universe and forms a separated closed system. In that artificially closed system, the input of new information, the possibility of novel discovery, is rendered impossible. But science always attempts to picture the universe as a closed system within a fixed and finalized formalism. Unfortunately, discovery always breaks through the “closed walls” of the system, and then we must quickly try to close the holes back up. Biologist Jacob Bronowski (1978) stated this clearly:

You would like yours, of course, to be the last discovery but alas—or, I prefer to say, happily—it is not so. It is in the nature of all symbolic
systems that they can only remain closed so long as you attempt to say nothing with them which was not already contained in all the experimental work that you had done.… What distinguishes science is that it is a systematic attempt to establish closed systems one after another. But all fundamental scientific discovery opens the system again. The symbolism of the language is found to be richer than had been supposed. New connections are discovered. The symbolism has to be broadened. (p. 108)
Everything novel in common sense or scientific progress creates new contexts in which what is isolated for study is no longer fixed according to the rules of the old context, and in which the meanings and terms of the new context and its processes are different from the old. The growth of knowledge is only possible in open contexts, never in fixed ones. Fixed contexts explore what is already (or potentially) known or available to be known. In contrast, there is essential ambiguity and ignorance in all open human conception. In the context of scientific praxis, this essential tension—between tradition (exploring the fixed context) and innovation (breaking out of that context into the open)—is what Thomas Kuhn (1970, 1977) argued leads from normal science puzzle solving within a “closed” tradition to scientific revolutions and reconceptualizations. Bronowski emphasized that this essential underdetermination, between open and closed, tradition and innovation, is why we are not digital computers, and why the information-processing or computation metaphor has been and always will be so totally inadequate as a model of human cognition: “The part of the world that we can inspect and analyze is always finite. We always have to say the rest of the world does not influence this part, and it is never true. We merely make a temporary invention which covers that part of the world accessible to us at the moment” (ibid., p. 96). Thus we leave whatever we examine shrouded in ambiguity and uncertainty. This is why we are not fixed or algorithmically programmed digital computing machines. Our sentences have to try to minimize the ambiguity that was left hanging in the previous sentences. Bronowski noted this: we want what we say

to be absolutely inherent in the relation between the symbolism of language (that is, an exact symbolism) and the brain processes that it stands for. It is not possible to get rid of ambiguity in our statements,
because that would press symbolism beyond its capabilities. And it is not possible to get rid of ambiguity because the number of responses that the brain could make never has a sharp edge because the thing is not a digital machine. (ibid., p. 106)
Ambiguity is an ultimate nemesis of the human existential predicament: we strive to eliminate it at all costs, yet we cannot live in an uncertain world (or learn anything at all) without it. We must work with it. Exactly the same situation holds for our ignorance. The more our knowledge grows, the greater our total ignorance becomes. We cannot simply abolish our ignorance any more than we can definitively isolate a single object upon which to perform conclusive experimentation. Warren Weaver (1959, p. 3) put this seemingly paradoxical result beautifully, referring to science as working in a forest of ignorance to clear out a circle of knowledge: when that circle becomes larger and larger, the circumference of contact with ignorance also gets longer and longer. Science learns more and more, but there is an ultimate sense in which there is no gain, for the volume of the appreciated but not understood keeps getting larger. As he said, we keep, in science, getting a more and more sophisticated view of our essential ignorance. We must learn to live with ambiguity and ignorance. That is our existential situation as evolved and evolving creatures in an uncertain universe. It could not be otherwise for finite creatures in an infinite flux of events in what we have come to call the space–time manifold. The circle of ignorance is always around the relatively small amount we have come to know. If we are lucky, our knowledge will grow, and with it the circle of ignorance will expand. Our task in coming to know an ambiguous reality can never be to determine what is absolutely or certainly true (the mistaken ideal of knowledge as justified true belief). All we can hope to do is to eliminate as many incorrect conjectures about the universe and ourselves as possible. Knowledge grows in a context that depends upon weeding out errors, which means finding inconsistencies between our theories and our observations, testing our conjectures and finding that they are incorrect because they are inconsistent, rather than coming to any certain or final conclusion. Our knowledge is a product of evolution, exactly analogous to the species of life that have evolved on our planet. And just as
a species that seems to be adapted as well as possible to its environment, and has survived far longer and more successfully than other species, can go extinct very quickly as a result of a change in its environment which its adaptation had not foreseen, our most secure and seemingly “certain” knowledge, in the form of our best corroborated theories, can be overthrown and shown not to be correct when its claims are applied to newly discovered domains, or when old areas are reinterpreted. The evolutionary approach to epistemology shows the superior power of negative rules of order in this attempt to eliminate error in comparison to the classic justificationist quest to achieve perfect adaptation and certain, eternal or timeless truth (as discussed in the Appendix chapters).
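Weaver’s circle image above can even be given a rough geometry (this gloss is added here and is not Weaver’s): picture knowledge as a cleared disk of area \(A = \pi r^{2}\). Its boundary of contact with the surrounding forest of ignorance is then

\[
C = 2\pi r = 2\sqrt{\pi A},
\]

which increases without bound as \(A\) grows. Every growth in knowledge necessarily lengthens the frontier along which we meet what we do not know.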
A Bucket or a Searchlight?

We can fruitfully compare two metaphors for how knowledge is acquired. They exemplify differences between traditional justificationist philosophy and what is required by a non-justificational and evolutionary epistemology. The traditional account, modeled upon the ideal of geometry, proposed that what we do is collect knowledge (according to some procedure called inductive logic or probabilistic confirmation theory); that knowledge, once collected, is then certified to be genuine or true or certain by some test procedure, and after certification is ready to be dumped into an ever-larger bucket of accumulated knowledge. This model views the acquisition of knowledge as analogous to what a child does with a toy bucket and shovel when taken to the beach—scoop up treasured “bits” and place them in the bucket for safekeeping. According to this model the bits, whatever they are, then constitute the increasing sum of our knowledge. Against this “knowledge is collected and certified forever” approach stands an entirely different conception, stemming from the study of evolution. It proposes that knowledge acquisition is very much like shining a searchlight around in the dark. When we move the searchlight it discloses, as far out as its beam goes, things that are there but were not visible in the dark. This model (exemplified in the writings of Karl Popper—see his 1963, 1972) has no bucket to substitute for cognition (or its artifacts such as books), and says that sometimes, if
we are able to construct better flashlights or searchlights, we see that what we thought was there is actually something else, and that often there is much behind it that we had not seen at all. Furthermore, if we use one of the most crucial parts of our visual system, our legs, and move around, we will see that the landscape in which we move changes, thereby disclosing new aspects of the environment. What was once thought to be there need no longer be there at all when we are in a different position, or perhaps it was something entirely different from what we first thought. The searchlight metaphor emphasizes the fact that the acquisition of knowledge is an active or constructive process and an exhibition of a skill (essentially motoric action rather than passive sensory reception—remember your legs are an integral part of visual perception), and that our theories and the facts which we propose to be relevant to them are our constructions and change over time. Indeed, it proposes that our theories are themselves often in conflict. The only way in which we can attempt to assess them is to allow them to come into conflict, and let those that are best able to survive continue to be explored and tentatively accepted. When they survive, they are no more certified or certain than anything else that is subject to evolution. Survival alone does not certify any sort of ultimate tenability or truth, any more than our evolution and survival to this point indicates that we are always going to be here as some sort of ultimate species. Fortunately, we have evolved far enough that we can allow our theories to die in our stead. Better to learn from a theory that dies than to die ourselves. That is our Lamarckian advance over purely Darwinian evolutionary processes which would require our death when we failed.
Note

1. One might attempt to circumvent this by arguing that, say, astronomy is a “hard” or “physical” science that deals with individual, unique entities. After all, the planet Neptune is unique, as are all the others in our solar system, yet astrophysics deals with them in terms of every-where, every-when laws of nature and the specification of initial conditions, just as
physics does with Galileo rolling balls down an inclined plane. So, the objection concludes, the distinction between unique subjects and identical objects is wrong. What gives this game away is the fact that the planets are incorrectly given proper names. Proper names denote unique and “nonrepeating” entities. There are no proper names in physics (see Chapter 9 below). The different balls rolling down an inclined plane in Galileo’s experiments were not named individually because there was nothing to differentiate them from any others with the same physically measurable properties. A ball of XYZ specification is identical to another one of the same exact specifications made centuries later, and even by different methods. When studied by science, “Neptune” qua planet is identical to any other planet with the same specifications in another star system. Such entities are not living agents capable of self-construction and movement, nor are they dependent upon unique individual histories. Their behavior is totally passive, determined by the constraints of physical laws. Living entities are, in addition to laws, constrained by higher-order rules of determination, as later Parts discuss in detail.
References

Bartley, W. W., III (1984). The Retreat to Commitment. Open Court.
Bohm, D. (1965). The Special Theory of Relativity. W. A. Benjamin Inc.
Bohm, D. (1976). Fragmentation and Wholeness. The Van Leer Jerusalem Foundation.
Bronowski, J. (1978). The Origins of Knowledge and Imagination. Yale University Press.
Campbell, D. T. (1974a). “Downward Causation” in Hierarchically Organized Biological Systems. In F. J. Ayala and T. Dobzhansky (Eds.), Studies in the Philosophy of Biology. Macmillan & Company.
Campbell, D. T. (1974b). Evolutionary Epistemology. In P. A. Schilpp (Ed.), The Philosophy of Karl Popper (pp. 413–463). Open Court.
Campbell, D. T. (1988). Methodology and Epistemology for Social Science. The University of Chicago Press.
Hanson, N. R. (1958). Patterns of Discovery. Cambridge University Press.
Hanson, N. R. (1970). A Picture Theory of Theory Meaning. In M. Radner & S. Winokur (Eds.), Minnesota Studies in the Philosophy of Science, IV (pp. 131–141). University of Minnesota Press.
Hayek, F. A. (1983). Knowledge, Evolution and Society. Adam Smith Institute.
Hertz, H. (1960). Two Systems of Mechanics. In A. Danto and S. Morgenbesser (Eds.), Philosophy of Science (pp. 349–365). Meridian Press. Originally in H. Hertz, The Principles of Mechanics (1894). Translated by D. E. Jones and J. T. Walley. Dover, 1956.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (rev. ed.). University of Chicago Press.
Kuhn, T. S. (1977). The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press.
Popper, K. R. (1963/2014). Conjectures and Refutations. Harper & Row. Now Routledge Classics.
Weaver, W. (1959). A Scientist Ponders Faith. Saturday Review, XLII (1), 3.
Weimer, W. B. (1979). Notes on the Methodology of Scientific Research. Erlbaum Associates.
Part I Knowledge as Classification, Judgment, and Mensuration
Knowledge is produced by the activity of the nervous system. Since it is always active, producing patterns of activity, the problem of knowledge begins with determining when a new pattern of activity is significantly different from the ongoing background level. To make such a determination, the nervous system must classify and evaluate new patterns against that ongoing background level. The fundamental activity of the nervous system is to classify patterns, and in so doing evaluate them against the memory of the nervous system. The detection of novelty is simultaneously classification, mensuration, and judgment. Any epistemic activity inevitably involves all three. In order to classify, the system must make a judgment. In order to judge, it must measure whether the new activity is the same or different. Higher organisms are based upon simultaneous classification, judgment, and measurement. This is the basis of meaning. Both science and common sense are built upon these three fundamental activities. In science, the problems of measurement are of paramount significance. If we do not measure—and measure correctly—we cannot know. Measurement—and experimentation as a means of constraining nature in order to make accurate and reliable measures of the same things—is fundamental to the generation of scientific knowledge. Thus Part I examines fundamental problems of mensuration and
experimentation as they occur in both the human sciences and the physical sciences. We shall see that the problems of measurement are both similar and different in these disparate domains—that we cannot blindly ape the procedures that have worked relatively well in physics, and that neither experimentation nor ratio scaling of measures can be obtained automatically. Chapters 3 through 8 detail the limits and constraints upon knowledge in the sciences of the living—from biology through psychology to the social domains such as economics. We will not find laws of nature and deductions of particulars in the behavior of living organisms, but rather rules, and explanations that are general principles of order rather than deductive entailments of particular behaviors or data points. The human sciences rely upon demonstrations rather than experiments, and the nature of our knowledge of these intrinsically functional realms is usually quite different from our knowledge in the “hard” or physical sciences.
3 Problems of Mensuration and Experimentation
If a researcher collects data made up of numerical scores and then manipulates the scores by, say, adding and dividing (which are necessary operations in finding means and standard deviations), he is assuming that the structure of his measurement is isomorphic to that numerical structure known as arithmetic. That is, he is assuming that he has attained a high level of measurement.

Sidney Siegel
The universe is ambiguous. It is only human conceptualization that renders it relatively intelligible and unambiguous to us. That intelligibility and clarity is the result of the mammalian nervous system operating according to abstract classificatory rules that create order. Without that order, we could do nothing at all—we would be unable to classify, and thus could not have developed our primitive fight or flight response capability, or anything more advanced. The farthest back (or down) that we can trace the operation of these rules is to the functioning of the orienting response in detecting a deviation from an ongoing pattern of activity in the CNS. This detection, functionally both the first measurement and simultaneously the first assignment of meaning performed
by the nervous system, requires two opposed responses: orientation to novelty, and recognizing no change in ongoing activity (habituation or nonresponding). This creates a fundamental distinction between no change, which we designate as the background activity or patterns of ongoing neural activity in the living system, and any deviation from that background. This is an inherently relational and negative rule—deviation exists only relative to a background expectation, not as a positive occurrence in itself. The nervous system must know of that deviation from baseline expectation for any possible function it can perform. This “knowledge” is contained in its first functional activity: mensuration. By comparing input to expectation, the nervous system performs a measurement. This measurement is a one-to-many mapping that creates the first equivalence class. The orienting response measures novelty (more properly, the class of all novel patterns) by declaring its presence. In this process, the concrete physical gives way to the abstract functional: the “physical” response simultaneously gives rise to meaning and knowledge in the conceptual realm, and both arise in unison with the act of mensuration. At this point, the subject (at least in the form of a proto-agency) differentiates from the object. This dynamical “physical” process (separating a fixed and finite “measure” from a dynamical and indefinitely fluid flux) freezes out a meaning that is timeless and rate-independent, in the realm of conception or cognition (and it is also inherently uncertain and by definition incomplete, or subject to error). Thus, one major task is to trace problems involved in mensuration and experimentation in different disciplines, to see how they arise and then constrain possible theory construction in those fields, and to see the nature of the knowledge that can be achieved within those fields given the constraints of and upon measurement. In so doing, we can see the gulf between the functional and the physical, as well as the gulf between generic objects and individual subjects, as they unfold within the fields. We begin by recapping the “measurement problem” (always present, only recently of real importance) in modern physics. Next is the issue of what experiments actually are, and what they can do to help create knowledge. Then we can discuss similarities and differences in “hard science” physics on the one hand, and biology, psychology, economics, and finally philosophy and even political activity, on the other.
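A deliberately crude sketch (Python; the signal, the update rule, and the parameters are all invented for illustration and are not a model of actual neural dynamics) renders this first mensuration as comparison of input against a running background expectation: large deviations trigger orientation, repeated exposure habituates, and a return to the old level is itself novel again.

```python
# A caricature of orienting vs. habituation: "measurement" as the
# comparison of input against an ongoing background expectation.
def orienting_stream(inputs, alpha=0.5, tolerance=1.0):
    """Yield 'ORIENT' when input deviates from expectation, else 'ignore'.

    alpha:     how quickly the background expectation tracks the input
    tolerance: how large a deviation counts as novelty
    (Both parameters are invented for this illustration.)
    """
    expectation = inputs[0]  # take the first input as the background level
    for x in inputs:
        deviation = abs(x - expectation)
        yield "ORIENT" if deviation > tolerance else "ignore"
        # The expectation drifts toward recent input: habituation.
        expectation = (1 - alpha) * expectation + alpha * x

signal = [5, 5, 5, 9, 9, 9, 9, 9, 5]  # a step change, then a return
print(list(orienting_stream(signal)))
# ['ignore', 'ignore', 'ignore', 'ORIENT', 'ORIENT',
#  'ignore', 'ignore', 'ignore', 'ORIENT']
```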
Physics and the Cat

Erwin Schrödinger did not like the consequences of the phenomenalistic interpretation of the quantum level that most researchers had unquestioningly adopted from Bohr and Heisenberg. This view held that the act of observation by the researcher removed the probabilistic superposition of possible states of the quantum situation (the only information available in the wave function) and froze out or determined a particular result (“collapsing the wave function” into a definite state) when an observation (which is conceptually a measurement) occurs. This view asserts that before the act of observation measures (and therefore records or permanently determines or fixes) an “objective” result, one cannot assert anything meaningful about the exact state of the quantum system. All that is available is the wave function, a higher order abstraction, which is a linear superposition of the probabilistic distribution of all possible states of the system. But when a measurement is made, it somehow freezes out a deterministic answer, and that “frozen” answer somehow determines the future state of the quantum system. How is this possible if all the information that is available from quantum mechanics is a probabilistic wave function instead of an exact state specification? The quantum problem of measurement is to specify how the wave function of probabilities is “collapsed” into a single measured value. How does the quantum statistical realm interact with (or slip into) our molar or classical and seemingly deterministic conception of reality when provided with a measurement? This is usually discussed today as the problem of quantum “decoherence.” Against this background,1 Schrödinger proposed his cat thought experiment to show an apparently intolerable ambiguity—one cannot ascertain whether the cat, as part of a quantum ensemble sealed in a box with his proposed experimental apparatus, is dead or alive until one makes a measurement on the system by looking inside. This seemingly paradoxical result was extended by Wigner in 1961 with his “Wigner’s friend” thought experiment, and later by Wheeler (1978) and Miller & Wheeler (1984) with the extension of the “friend” argument to the “delayed choice” situation in a modified two-slit experimental
paradigm. The result is always the same: quantum coherence or essential statistical indetermination gives way or collapses into decoherence (which is molar-level determination) after classical-level measurement occurs, but not before it. It is as if a classical or molar-level observer determines a quantum state description. No one has the faintest idea how or why this could happen. All we can presently do is restate the problem more clearly—as what happens when a subject of conceptual activity assigns meaning, by the measuring process of observation, to what then becomes a “result” or an observation.
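In the standard textbook notation (supplied here for orientation; the argument does not depend on the formalism), the sealed box is described by a superposition such as

\[
|\psi\rangle \;=\; \alpha\,|\text{alive}\rangle \;+\; \beta\,|\text{dead}\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]

and the measurement problem is precisely that observation yields “alive” with probability \(|\alpha|^{2}\) or “dead” with probability \(|\beta|^{2}\), while nothing in the formalism says how, when, or why the superposition gives way to one definite outcome.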
Another Fundamental Problem: Experimental Science Requires Classical Level Apparatus

This problem (or is it another aspect of the same problem?) does not deal with the individual who observes but with the machines and apparatus that constrain ambiguous reality into an “observable” situation. What makes physicists so successful in gaining knowledge? How do these boundary conditions wrest secrets about fundamental regularities from reluctant nature? The answer is that physicists do experiments to help discern the underlying regularity in the welter of chaotic events. Determining causal regularity—the theoretical “glue” that our theories use to explain how events are related—always involves experiments, which always set up some apparatus to render disturbing factors as small as possible. Experiments manipulate or control factors (causes), and observers can thereby begin to control for disturbances that originate in the immense complexity of the universe. Our ideal is to control for random disturbances by reducing their occurrence to the point that some statistical approach (usually based on an “error” distribution such as a Gaussian curve) can handle them. We just assume that random disturbances are adequately dealt with by a theory of error. James Clerk Maxwell (1890/2011) summarized this approach to using experimental apparatus:
In designing an experiment, the agents and phenomena to be studied are marked off from all others and regarded as the Field of Investigation. All agents and phenomena not included within this field are called Disturbing Agents, and their effects Disturbances; and the experiment must be so arranged that the effects of these disturbing agents on the phenomena to be investigated shall be as small as possible. (p. 505)
To consider a particular practical application, Maxwell added: in experiments where we endeavor to detect or to measure a force by observing the motion which it produces in a movable body, we regard Friction as a disturbing agent, and we arrange the experiment so that the motion to be observed may be impeded as little as possible by friction. (ibid., p. 506)
Experimentation, which always involves human artifacts, called apparatus, to constrain the extraneous, switches us from being Aristotelian or passive observers of nature to active manipulators who gain knowledge by changing an ensemble consisting of an observer, the experimental situation and its apparatus, and the phenomenon being studied. This requires moving from “just looking” into the horse’s mouth to count teeth (the empirical approach of Aristotle) to constraining and manipulating the situation (Archimedes making levers to study the principle of leverage). There are at least two kinds of disturbances to account for. Random ones can (at least ideally) be dealt with by error theory. Others are systemic, and turn out to be part of the problem to be explained (or an equally real problem), requiring later, separate experimentation to tease them out. The importance of this will be noted later. Experimentation has refined our knowledge by active participation in the determination of experimental outcomes. Indeed, experimentation can be defined as active participation in the determination of outcomes. About the limit of activity for the Aristotelian observer was opening the horse’s mouth to count its teeth. We have subsequently built apparatus that greatly extend the power of our senses (and also can do far more than just open horses’ mouths). These machines have extended and refined our senses-as-vicars (in religious parlance a vicar is one who ventures far “out” to hear the word of God, and then brings that “truth” back to the
faithful flock) into realms we literally never dreamed of, because before their construction we could not “sense” in those realms at all. Until our apparatus extended our senses (our instrumentation becomes our vicars), we had no awareness of these realms. Our procedure in constructing machines for observation follows exactly the same procedure Maxwell just described. For example, our measuring meters and gauges are made as frictionless as possible by using the smallest possible means of moving a needle (again, the needle itself made as frictionless as possible in a jeweled movement) for a meter reading to occur. Newer digital electronic instruments are even more precise because they disturb even less. This process worked beautifully up until the limit of uncertainty imposed by the Heisenberg relations. Here, when we reach the quantum level, a new problem arises: classical machines and apparatus are absolutely necessary for the conduct of any experimentation. But we can never construct classical apparatus to control disturbances at the quantum level without them simultaneously constituting further such disturbances in themselves, and in their application and our interpretation of the results. No possible experimentation controls for quantum-level disturbance, nor is there any possible measurement in a quantum realm of coherence in which, as Bohm (1976) noted, everything is connected to everything else. Experiment and measurement require principled separations between entities or events—they freeze a finite measure out of the infinite coherence. Since that state of separation does not exist at the level of quantum coherence, neither experiments nor measurements can exist there. No matter what else their differences, all theories of measurement available to us are classical in this sense. All our experimental determinations, even when we record only the apparent quantum effects, remain classical. In this important sense, all our knowledge of quantum or any other phenomena remains classical. Any quantum phenomena themselves are far removed from our vicarious senses and the multipliers of them we have constructed.
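For reference, the Heisenberg relation that imposes this limit is, in standard notation,

\[
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2},
\]

so no refinement of classical apparatus, however frictionless, can push the joint precision of position and momentum below this bound.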
Historical Excursus: The Nature and Role of Experiment in Classical Science

Why are we so “hung up” on experimentation? For Aristotle and the Greek empiricists, science was empirical, but not yet experimental. Do you want to know how many teeth a horse has? Don’t waste time with some a priori rationalist theory divorced from fact; just open a horse’s mouth and observe how many teeth there are. So science was empirical and observational. But was it experimental? Not yet. The modern view of requiring experimentation stems only from the era of Galileo, who realized that passively observing nature was not enough—in order to minimize the effects of accidental factors (such as an old horse with some teeth lost over time) and other boundary conditions (to use Newton’s later distinction), we have to actively construct artificial constraints. All experimental apparatus, these artificial constraints, are designed to minimize extraneous factors so that we can ascertain the laws of nature without worrying about, or being stymied by, the indefinite flux of momentary variability that obscures law-like regularity. Galileo (and all subsequent physicists) actively intervened in the course of events presented by nature with artificial or constructed experimental situations to constrain nature to its essential regularity by minimizing the effects (literally holding them at bay) of accidental irregularity. The Galilean experiment is an operational specification of the distinction between what Newton later called laws, boundary conditions, and initial conditions. Experimental design always tries to constrain boundary conditions in order to get at laws more easily and directly, without any disturbances, and thus in turn minimizes the influence of initial conditions as much as possible. It is only extremely rarely, in what is literally the fortuitous accident, that nature arranges the situation such that the interfering factors are really minimal, so that we can see a “law of nature” without constructing an experimental apparatus. The most famous such case is planetary motion, which can be discovered simply by looking (if you can properly construe what it is that you’re looking at).
Change Is Inevitably Scale-Dependent, and Theoretically Specified

There is a problem in epistemology that is so obvious that it is amazing that it is not studied or even mentioned. It is the problem of change—specifying what would constitute a change in a domain and, as a result of being a change, require an explanation. Theories in all domains require specification of two things: what is a change (within the field in question), and what scale (measurement scale) provides appropriate measures of change? There is a famous quotation from Novalis that states that when examined from afar, the whale is the biggest of the fishes, but when examined up close, the whale is no fish at all. This is a problem of scaling—if whales are considered (scaled as) fish, then they are the largest. If, on the other hand, one is not concerned with size (in the traditional sense of length), then other considerations militate against putting fish and whales on the same scale for any comparison purposes. All science faces the problem of selecting the correct scaling procedure for the domain in question, and the entities within it. And the determination of what if anything constitutes a change is entirely dependent upon the scale that one employs. To take a contemporary politically charged issue (climate change) as an example, it is not at all clear a priori how to scale the issues of anthropogenic factors in climate change. It is a triviality that Earth’s climate has changed every single day since the planet formed. But there is no theoretical motivation for discussing climate change in terms of the immense number of days in the history of the planet or the history of humanity. Our usual time scale for such considerations would be to use years (or perhaps decades or centuries) to note historical comparisons. But the scale choice—the preferred time interval to be used as the unit of analysis—determines what the data are and how the data are interpreted. Those who believe that human beings have caused climate change focus on the “hockey stick” graph with a relatively flat period (the handle) followed by a short and very rapid rise at the end (the head). Those who believe that human beings are terrible, and are causing disaster, scale the graph as short as possible, so that the handle covers a few hundred years at most and the head upturn covers a few decades at most. Those who are more moderate will look back over human history
to the ice ages, and comment on a relatively flat handle with a proportionally much smaller uptick. Those who take a longer view still, such as the 200 million years of mammalian evolution, will see a wavy stick whose handle goes up and down in several places, and which does not even really have an upturned head at all. From this latter perspective, the recent warming of the globe is only beginning to bring the average temperature back up to where it has been many times in the past. One cannot possibly begin to answer the scientific questions of whether or not there are anthropogenic factors in climate changes without determining which scale it is appropriate to use. In the interim, politics, not science, reigns. To take another example, consider the issue of whether or not science is a cumulative record pointing upward to the present or a series of relatively cumulative periods punctuated by small revolutionary changes. The same point can be made: when one looks at the history of the growth of human knowledge over the long term, the graphical representation approximates a straight line “up” to the present, with only small changes apparent in some places. But from a shorter-term perspective (say, from Galileo’s time to the present), the change is up and down and often even discontinuous, as when new fields of inquiry emerge, or significant revolutions or reformulations happen. If one were to look at the history of physics from the year 1900 to 1930, one would see two significant “revolutions”: the special and general theories of relativity (due to Albert Einstein), and quantum mechanics. So if one is asked the question “Have there been revolutions in scientific history?” it is not possible to give an answer unless one specifies the timescale involved. Revolutions are always scale-dependent. Similarly, the issue of whether or not the evolution of species is slow and relatively continuous (e.g., Pinker) or occasionally saltatory or punctuated (e.g., Gould) cannot be answered without clearly delimiting the timescale in which the evolutionary events occurred. Any domain with an evolutionary history—even that of the earth (geology)—faces this fundamental problem of scaling: we cannot tell what change is or whether or not it has occurred in the absence of a theoretically motivated choice of scales. Indeed, we will see in Chapter 4 that we cannot even
define what an organism is without taking into account the historical dimension of its dynamic existence.
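The scale-dependence of “change” is easy to exhibit on synthetic data (a sketch in Python with numpy; the series is invented and makes no claim about actual climate records): one and the same record yields a steep warming trend or a slight cooling trend depending solely on the window over which the trend is computed.

```python
# One invented "temperature" record, two verdicts: the sign of the
# trend depends entirely on the window chosen for analysis.
import numpy as np

t = np.arange(2000.0)  # 2000 "years" of synthetic data

# A long slow decline followed by a brief sharp rise at the end.
series = np.where(t <= 1950, -0.001 * t,
                  -0.001 * 1950 + 0.05 * (t - 1950))

def slope(window):
    """Least-squares trend over the last `window` points."""
    return np.polyfit(t[-window:], series[-window:], deg=1)[0]

print(f"last 50 years: {slope(50):+.4f} per year")    # +0.0500 (steep rise)
print(f"whole record:  {slope(2000):+.4f} per year")  # slightly negative
```

Neither slope is “the” trend of the series; each is an artifact of a scale choice that must be theoretically motivated before the data can mean anything.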
Note

1. The specter of phenomenalism has haunted epistemology since Berkeley, and became especially pernicious in physics at the end of the nineteenth century in the philosophy of Ernst Mach. Niels Bohr took up Mach’s pre-evolutionary view as an essential aspect of the so-called Copenhagen interpretation of quantum phenomena. While Mach and Bohr made great contributions to science, both retarded progress in their fields as a result of the power, prestige, and pervasiveness of their views, and their very dominant personalities. Mach’s phenomenalistic approach to sensory physiology was refuted by Wundt and his student Külpe, Bühler and his heirs (Hayek, Popper, Lorenz), the Gestalt psychologists, critical realists in philosophy, neo-Kantians such as Cassirer, and many others. But because of Mach’s personality and dominance, his views, even his opposition to atomism, continued to be cited long after the atomic theory was well-established by Planck, Einstein, and even Bohr in the research program of the Copenhagen school. Only the early twentieth-century logical positivists ever held to the extreme parsimony of his “economical” approach to theories and the paucity of entities it allows. Nevertheless, Mach’s currency extended over 50 years more than the actual refutations of his views should have allowed. Bohr was an even more venerated personality, and even his critics were enthralled by the man. Consider Popper, a lifelong opponent of virtually everything Bohr said. As Popper (1982) told us, when “I first had the great opportunity, in 1936, of talking to Bohr, he impressed me as the most wonderful person I had ever met or would be likely to meet. He was everything a great and good man could be. And he was irresistible (p. 9).” So Popper left his presence feeling that he had to be wrong about quantum mechanics, even though he could not say that he now understood it as Bohr did. With this much veneration from a vocal opponent, one can imagine what the cardinals and bishops of the Copenhagen Church must have felt about their God. Even acts of sheer stupidity on Bohr’s part were treated as done by one who walks on water. Consider George Gamow’s (1966) portrayal of this incident. One evening Gamow was in Copenhagen with Bohr and his
wife, and another young physicist named Casimir, at a university function dinner. Afterward:

At that late hour the streets of the city were empty.... On the way home we passed a bank building with walls of large cement blocks. At the corner of the building the crevices between the courses of blocks were deep enough to give a toehold to a good alpinist. Casimir, an expert climber, scrambled up almost to the third floor. When Caz came down, Bohr, inexperienced as he was, went up to match the deed. When he was hanging precariously on the second-floor level, and Fru Bohr, Casimir and I were anxiously watching his progress, two Copenhagen policemen approached from behind with their hands on their gun holsters. One of them looked up and told the other: "Oh, this is only Professor Bohr!" And they went quietly off to hunt for more dangerous bank robbers. (p. 57)

Little wonder, then, that the Copenhagen framework view, stating that quantum theory is complete (the end of the road for physics) and that statistical physics is to be explained by an inherent subjective lack of knowledge on the part of observers, has held sway for so long. Clearly anyone who disagreed must have been an "old guard" defender of outmoded (and of course, therefore, unscientific) ideas. Thus popularizations of quantum "theory" in the second half of the twentieth century (e.g., Gribbin, 1984) portrayed realists and objectivists such as Einstein, Schrödinger, Bohm, Bell, and others who dared to resist as "old fossils" who had not been able to make the leap to the new quantum paradigm, as proponents of an outmoded and pre-Machian philosophy of science. This was the stage upon which logical positivism, instrumentalism, operationalism, and the repudiation of metaphysics came to dominate philosophy and physics, and black-box behaviorism replaced substantive views in psychology, even among those who, like Tolman, acknowledged the cognitive capacity of psychological subjects. The pernicious influence of the subjectivist and phenomenalistic views of Mach and Bohr retarded progress in science and in philosophy for several generations' worth of research. In this effect they join others; Young and Weimer (2021) examined important examples from other fields in more detail. The extent to which Bohr and others (especially the early Heisenberg) moved the scientific quest away from the nineteenth-century perspective back into eighteenth-century speculative metaphysics is amazing. Not only did they repudiate realism and endorse phenomenalism, but
also they proposed that Berkeley's mentalism, now without "God in the quad," was the only tenable epistemic stance for science, and that it simultaneously determined the only ontological one. The Copenhagen interpretation not only abandons the quest to understand external reality, but it also proposes that nothing exists at all unless it is observed or "measured." They went back to the introductory philosophy course question "How do I know the room exists if I shut my eyes?" and answered confidently "It doesn't." Not surprisingly, Bohr vacillated considerably when attempting to deal with new developments in quantum physics. Even Russell Hanson (1961), normally quite sympathetic to Copenhagen, described Bohr's philosophy as ambiguous, holding incompatible positions, by repeating this time-worn joke:

Baseball umpire A: "I calls 'em as I sees 'em."
Baseball umpire B: "I calls 'em as they are."
Baseball umpire C: "Until I calls 'em, they just aren't."

Bohr and Copenhagen took a step off Mach's shoulders back to portraying scientific researchers as "gods in the quad." By putting the intentional or teleological functional concept of measurement into the center of the quantum stage, it was necessary to conclude that, if measurement provides our only source of knowledge of what is "real," then nothing exists that is not measured. It is from this perspective that the measurement problem became so central: it went from the problem of not having exact knowledge of a particle's situation (as in Heisenberg uncertainty) to the thesis that the particle is not known to exist at all (and therefore does not exist) except through an act of measurement. So the experimenter was elevated to the position of God in the quad proposed by Berkeley, the ultimate arbiter not only of knowledge but of existence. Hence, we are all gods, constantly "collapsing the state vector" in Copenhagen theory. This is the framework from which Bohr directed his students: "Don't think about it, just shut up and calculate the quantum corrections to classical results." That was all there was to be done in the science of physics.
References

Bohm, D. (1976). Fragmentation and Wholeness. The Van Leer Jerusalem Foundation.
Gamow, G. (1966). Thirty Years That Shook Physics. Doubleday & Company, Inc.
Gribbin, J. (1984). In Search of Schrödinger's Cat: Quantum Physics and Reality. Bantam Books.
Hanson, N. R. (1961). Comments on Feyerabend's "Niels Bohr's Interpretation of the Quantum Theory" or Die Feierabendglocke für Copenhagen? In H. Feigl & G. Maxwell (Eds.), Current Issues in the Philosophy of Science (pp. 390–398). Holt, Rinehart and Winston.
Maxwell, J. C. (1890/2011). The Scientific Papers of James Clerk Maxwell. Reprinted by Cambridge University Press. https://doi.org/10.1017/CBO9780511698095
Miller, W. A., & Wheeler, J. A. (1984). Delayed-Choice Experiments and Bohr's Elementary Quantum Phenomenon. In S. Kamefuchi et al. (Eds.), Proceedings of the International Symposium on Foundations of Quantum Mechanics in the Light of New Technology, Tokyo, 1983 (pp. 140–151).
Popper, K. R. (1982). Unended Quest: An Intellectual Autobiography. Open Court.
Wheeler, J. A. (1978). The "Past" and the "Delayed-Choice" Experiment. In A. R. Marlow (Ed.), Mathematical Foundations of Quantum Theory (pp. 9–48). Academic Press. https://doi.org/10.1016/B978-0-12-473250-6.50006-6
Young, N., & Weimer, W. (2021). The Constraining Influence of the Revolutionary on the Growth of the Field. Axiomathes. https://doi.org/10.1007/s10516-021-09584-1
4 Problems of Measurement and Meaning in Biology
Scientific observations, of course, are not the contemplation of “empirical truth” nor “empirically real” (whatever this may mean), as they include the observer’s choice of what and how to observe and measure, where to set the contours and how to quantify the objects of analysis. A. Islami and G. Longo
Our utilization of and familiarity with measurement and experimentation arose in the classical approach to physical science. Physicists were "measuring" objects thousands of years ago, with elementary arithmetic and geometry (inclined planes, triangles, and circles); they recorded the flooding of the Nile, and established all the familiar trappings of today's elementary school presentations of physics. The nature of measurement was largely intuitive, and the idea of requiring a theory of mensuration never seemed to arise (surprising, considering the Greeks' intense scrutiny of the nature of logic and the validity of inference). When attention turned, as with Aristotle, to the nature of the biological realm, it was only natural to borrow the conception of measurement from physics (in fact, it was the only conception available) and apply it to biology. So we applied, with no critical thought at all, the math of
ordinary number theory and basic arithmetic to living things. To this day, most biologists do not study or utilize the results of the theory of measurement that has developed over the last hundred or so years; they just "measure" (i.e., assign numbers to) whatever interests them by intuitive reference to physical units, without thought about the nature of scales of measurement or whether the numbers and mathematical procedures can actually apply to the things in question. In comparison, psychology has been much more aware of the problems posed by attempting to measure the functional-domain concepts it uses, and discussion of measurement in biology, in the rare instances that it occurs, usually borrows extensively from developments in psychology (as has economics). We need to look at some issues that have arisen in these two "nonphysical" domains with regard to scaling and measurement. From the standpoint of epistemology, concerned with what we can know in these fields, the results are disheartening.
The State-of-the-Art (Isn't the Best Science)

In all fields measurement theory has to do with the assignment of numbers to conceptual entities (theoretically specified entities such as traits [psychology], clades or organisms [biology], prices and profits [economics], quarks [physics]) so that the relations between the numbers reflect empirical relations observed to obtain among those conceptual entities or attributes. When what is true about the relations of the numbers is true about the theoretical attributes, the conclusions we draw about the numbers mirror meaningful conclusions about reality (the conceptual entities of the theory then have an empirical dimension—they are "real"). So the issue is how to assign numbers to what we stipulate to be real, so that manipulations of those numbers will be "real" (mirrored) in actual empirical results: we attempt to determine what mathematical procedures can correctly be performed on the numbers in order for them to be meaningful numbers. At issue is what we can legitimately conclude from an "experimental result." Legitimate conclusions can come only from correctly measured data (so that the data can be meaningful when we assess our theories). As Houle et al. (2011) noted, measurement
theory provides a language for discussing a critical aspect of scientific practice that is frequently done quite poorly. Measurement theory concerns the relationship between measurements and reality, and its goal is to ensure that inferences about measurements reflect the underlying reality our research is meant to represent. In biology, these authors note, the connection between concepts and measurements is often lost during the measurement process. While quantitative measurements flourish in biology, they are often used or manipulated in ways that render conclusions drawn from them meaningless (see Houle et al., 2011, especially pp. 4–5). The key is the issue of the relationship between measurement and reality. Numerical assignment to (measurement of) functional concepts or variables hinges upon the type of scale employed. Scale differences denote the type of information provided when numbers function at the different scale levels. So, by definition, only correctly scaled and assigned numbers can provide unambiguous information about reality. Various scales have been proposed, the most famous being the often-cited Stevens (1946) specification of nominal, ordinal, interval, and ratio. Others have proposed more differentiations, for instance up to 10 in Chrisman (1998): nominal, gradation, ordinal, interval, log-interval, extensive ratio, cyclical ratio, derived ratio, counts, and absolute. We can make the points necessary for this discussion by considering only the more familiar limited range of NOIR (nominal, ordinal, interval, and ratio) and the absolute scales. Consider Table 4.1. The type of numerical scale can be characterized by three properties: (1) the permissible transformations that preserve the relationships among measurements made on that scale; (2) the number of arbitrary parameters that must be employed to establish the number system; and (3) the domain of applicability to which the numbers or numerical concepts can apply. Thus the common scale of physics, the ratio scale, allows multiplication by a positive constant, ranges over all positive numbers, has one arbitrary parameter, and meaningfully permits designations of order, difference, and ratio. The most common social science scale, the ordinal, allows any order-preserving transformation of the numbers and provides information only on order and equivalence; the interval scale allows affine transformations (multiplication by a positive constant plus an added constant, hence two arbitrary parameters, with a range over the real numbers) and adds information about the equality of differences.
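These three properties can be made concrete in a few lines. The sketch below (illustrative values only; the transformations shown are the standard permissible ones for each scale type) shows why ratio claims survive only ratio scaling, while order claims survive all of NOIR:

```python
import numpy as np

# Five objects "measured" on some attribute (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])

# Permissible transformations, per standard measurement theory:
ratio_view    = 2.5 * x          # ratio: similarity, x -> a*x (one parameter, a > 0)
interval_view = 1.8 * x + 32.0   # interval: affine, x -> a*x + b (two parameters)
ordinal_view  = x ** 3           # ordinal: any strictly increasing function

# A claim is meaningful only if it survives the scale's transformations.
# Ratios survive the ratio scale's transformations...
print(x[4] / x[1], ratio_view[4] / ratio_view[1])        # 5.0 and 5.0
# ...but not the interval scale's:
print(x[4] / x[1], interval_view[4] / interval_view[1])  # 5.0 and ~1.4
# Order survives all three:
print(np.all(np.diff(ordinal_view) > 0))                 # True
```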
Table 4.1 Commonly discussed measurement scales in social domains, with brief defining relations, appropriate statistics, and type of test (Siegel, 1956)

Nominal
  Defining relations: (1) equivalence.
  Appropriate statistics: mode; frequency; contingency coefficient.
  Appropriate statistical tests: nonparametric tests.

Ordinal
  Defining relations: (1) equivalence; (2) greater than.
  Appropriate statistics: median; percentile; Spearman rank correlation; Kendall rank correlation.
  Appropriate statistical tests: nonparametric tests.

Interval
  Defining relations: (1) equivalence; (2) greater than; (3) known ratio of any two intervals.
  Appropriate statistics: mean; standard deviation; Pearson product-moment correlation; multiple product-moment correlation.
  Appropriate statistical tests: nonparametric and parametric tests.

Ratio
  Defining relations: (1) equivalence; (2) greater than; (3) known ratio of any two intervals; (4) known ratio of any two scale values.
  Appropriate statistics: geometric mean; coefficient of variation.
  Appropriate statistical tests: nonparametric and parametric tests.
Why does any of this matter? Why can't we just use the darling of physics, the ratio scale, especially with an absolute zero point added as necessary, for everything? Because the whole purpose of measurement in science is to learn about the properties of reality, and doing so requires that the scale of measurement be appropriate to the data the research provides. Otherwise the "research" is literally meaningless because of its ambiguity—on a par with the "information" of an oracle who answers "A great army will be destroyed" to a ruler who does not realize that it could be his own army and not, as his vanity assumed, the opposing one. It is absolutely meaningless, for example, to calculate the mean of ordinal-scale data, for which the appropriate measure of central tendency is the median. To calculate a mean requires an interval or ratio scale for the numbers provided by
the research. It is nonsensical, even if all too commonly done, to assume that it is okay to ignore such fundamental scaling constraints and go ahead anyway. Failing to use appropriate statistical procedures always loses information at best and makes results nonsensical at worst. And it does not matter at all that Thurstone (1927) and later Rasch (1980), for example, attempted to "justify" using mean calculations on ordinal data (on the basis of comparative judgment and probability considerations, respectively). As Houle et al. (ibid.) noted, measurement theory puts severe constraints on the statistical manipulations that can correctly be done without the loss of some or all of the meaning available in the data (see ibid., p. 17). To use their example, biologists accept P-values (probability values) as measures of effects, but they commonly neglect to report and consider the units and transformations involved. And the skill of interpreting numbers is neither taught nor practiced in biology courses or graduate training. Usually, the only quantification is window dressing for qualitative arguments. But measurement theory is a description of what meaningful quantification entails, so the upshot is that biologists need to learn what can give measures actual meaning. Their article concludes with the lament that the authors have never come across a course in meaningfulness, or even in measurement theory, in academic training in biology. Instead they find substantial effort being spent on how to quantify research conclusions, but little if any analysis of what those conclusions could actually mean. The theory of measurement and meaning should have its place in scientific education, as a partial counterweight to blind use of statistics (see ibid., p. 29).
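The meaninglessness is easy to exhibit. In the sketch below (hypothetical ratings on a 1-to-5 scale; squaring is just one of the order-preserving relabelings that ordinal data must tolerate), a comparison of group means reverses under a permissible transformation, while the comparison of medians survives:

```python
import numpy as np

# Hypothetical ordinal ratings (1-5) for two groups.
a = np.array([1, 5, 5])
b = np.array([4, 4, 4])

# Ordinal codes carry order only, so any strictly increasing relabeling
# (here x -> x**2) is a permissible transformation of the data.
g = lambda x: x ** 2

print(np.mean(a) < np.mean(b))            # True:  3.67 < 4.0
print(np.mean(g(a)) < np.mean(g(b)))      # False: 17.0 > 16.0 (verdict flips)
print(np.median(a) > np.median(b))        # True:  5 > 4
print(np.median(g(a)) > np.median(g(b)))  # True:  order preserved
```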
Probability Absolutes and Absolute Probabilities It is obvious that biology and the social domains make extensive use of probability statements. But probability requires very powerful scaling to produce actual knowledge—a ratio scale with the further addition of a known absolute zero point for the dimension to which the “probability” is assigned (to range from zero to 1 or certainty). One wonders how many of the populations to which probability statements have been
attributed actually possess what is required to be measured by absolute scales. Is there an absolute scale for sex (any aspect thereof)? Body size? Speed of locomotion? The properties of alleles that are frequently cited? What about corvid birds or amino acid sequences? These questions are not easy to answer. And one must guard against starting out with a purely physical dimension (e.g., length in mm, weight in kilograms) and sliding into functional-domain "conclusions" (e.g., sexual attractiveness or suitability). But probability numbers have been blithely assigned, directly or indirectly, to all the above at one time or another. What was "measured" in so doing? Where is the assurance that the underlying trait or property possessed ratio properties? Since it is rarely clear from a research report what was measured according to what scale, it simply is not possible to determine whether reported results are meaningful or not. From the standpoint of epistemology, such studies are simply in limbo, and are useless until that specification of the relevant information is provided; and such studies simply do not provide it.
Replicability Is Scale Dependent

Replicability depends on measures, and all measurement is scale dependent (as noted in Chapter 3). Methodological studies in biology (as in all the social domains) are reporting a "crisis" in which, as Montévil (2019) states, "systematic attempts to reproduce experiments published in reputable journals fail in the majority of cases (Baker, 2016; Begley & Ellis, 2012)." This issue is especially prominent in medicine and biology when "standardized" samples of organic compounds or specimens of organisms cannot be determined (or simply were not available) to be equivalent in the relevant respects. Biological specimens are not "physical" objects alone—they are not "generic" or object-like—they always depend crucially on their history as subjects. As Montévil (ibid.) noted:

Biologists manipulate objects which are understood theoretically as the result of a history and continue to produce a history: diachronic objects.
With these ideas stemming from the theory of evolution in mind, experimental reproducibility is not a straightforward notion. Biological objects tend spontaneously to vary whereas perfect reproducibility, even statistically, would require fixed physiology and development, at least at an abstract level. (pp. 2–3)
Without thorough specification of the history of the "specimens" involved there is no hope of experimental replicability. This is an inevitable byproduct of the duality of subject and object. Subjects are always dependent on their generational history and subsequent individual development. As Ernst Mayr constantly argued, physics is no longer adequate to explain a domain when one cannot avoid incorporating an historical dimension to explain the differences between the behaviors of seemingly "identical" subjects. There is never a case in which a subject is totally generic, like the nameless molecules in a cloud of gas, or the electrons circling an atomic nucleus. Thus biological measurement requires what Montévil calls commensurability of subjects. Individuals vary to incredible degrees, and reproducible results will not obtain (except by chance) because of that. A study with Charles River lab rats will not necessarily yield compatible results when run "identically" on supposedly "identical" or even "similar" rats supplied by Jackson Laboratory or Taconic Farms. Commensurability is an empirical issue—it cannot be determined any other way. To determine it, we need well-corroborated theories of what organisms are and how their populations can vary. There are two ways to try to minimize this type of problem. The first is to do constraint or "control freak" studies that try to force subjects into situations in which they can behave only as objects. If successful, this would eliminate the effects of individual variability. Psychologists have faced these issues (earlier and more obviously than the biologists) when dealing with human individuals, and a well-known example of the control-is-to-eliminate-individual-differences approach is Skinnerian methodology, where the control characteristics of physics are aped by requiring all "scientific study" of organisms to take place in the rigid, experimenter-defined confines of a Skinner box, with responses defined by the experimenter rather than the subject. The twofold failures of this approach have to do with
the ability of subjects to escape from Skinnerian prisons (discussed in chapters below), and the theoretical inability of behaviorism to address the fact that this is due to their cognitive abilities and behavioral capacities. Biology and medicine may have a better prospect of success with rigidly imposed controls when they are dealing with the more "physical" problems of, e.g., standardizing cell cultures, bacterial strains, and purified organic compounds, all of which serve analogously to chemical reagents and compounds that must be completely purified and packaged into exact standard amounts or potencies in order to be of any experimental use. The problem here is that the suppliers or packagers rarely disclose sufficient information about the requisite history of their "samples" to determine compatibility. In most cases, the packagers have no way of knowing that history for their own product. And living-subject suppliers not only face insuperable "historical determination" problems but fail to tell their customers about them, or about their lack of knowledge thereof.
What Is an Organism?

The second way to deal with immense historical complexity and spontaneous change is to admit it and develop "non-physical science" research methodologies or test procedures that are not totally dependent on measurements requiring ratio and absolute scales. Biological organisms, unlike physical objects, are not described by the physical invariants and symmetries (invariant-preserving transformations) that characterize generically equivalent entities. Organisms exist as, and are characterized by, a context of dynamically changing constraints. The time variable is indispensable and inevitable in the study of living systems. We have to have measures over time that address changes in the physical "invariants" governing the integrity of a biological entity. Information on this history is absolutely necessary in biological and social research. Indeed, as Montévil and Mossio (2020) argue, it is not even possible to identify what constitutes an organism unless one can integrate the time-dependent or historical conception with the time-invariant or relational conception (in our theories) of organismic identity. Organisms are dynamical, ever-changing entities, not static ones.
Longo and his associates (Longo et al., 2012) have gone in a complementary direction to the functional versus physical distinctions developed throughout this book, arguing (as I do throughout) that this evolved, emergent, in-principle unpredictability, characteristic of all life, requires us to abandon the physical science modeling approach stressing inexorable laws of nature in favor of a concept of enablement, in which life has arisen without any "entailing" law having caused its appearance. Life is not inexorable in this sense. This relates to Polanyi's (1969) emphasis upon "life harnessing physicality": all life has an indispensable physical substratum (whose "laws" it cannot violate or suspend), but that substratum has not "caused" life, and it is in turn controlled by the higher-order constraints of living functionality. Both theorists emphasize that that "control" is not anything that can be explained by a notion of physical invariance and inexorability, no matter how complicated or "complete" it may be. No account of purely physical causality can "explain" how life is "enabled" by the web of co-occurrences which surround it. No "law" or laws physically entail the biosphere's evolution. Further, the process of evolution enables the emergence of classes of new organism econiche spaces (emergent, and thus available, empty niches for life to move into and inhabit) without any obvious Darwinian selection "causes" for doing so. These new adjacent "possibles" are the unintended or indirectly caused results of the action of organisms, analogous to Ferguson's (1767) results of action but not design "enabling" the social realm to come into existence from our behavior. As a result, we face a fundamental difference between the kinds of models that we can have in biology (and all other sciences with living subjects) as opposed to those in physics. Due to its complexity, the models available in biological theory are quite impoverished when compared to the phenomena themselves. In an organism literally everything is correlated with almost everything else—de-correlation analyses, such as those done in physics, are either extremely hard or simply impossible to accomplish. Sooner or later researchers will acknowledge that there are "hidden variables" not taken into account in the proposed explanatory model. As Islami and Longo (2017, p. 9) state, "Thus, in contrast to Physics, Models in Biology are always poorer than phenomena, or
more precisely, the powerful correlations between the 'genericity' of the mathematical and of the physical object (and 'specificity' of trajectories) is lost (is actually reversed) in biology, whose objects are specific (historical) and trajectories are 'generic' (possible ones)." Models being "poorer than the phenomena" is characteristic of all areas of essential complexity, as von Neumann (1966) and Hayek (1967) argued for the biological, psychological, and economic realms. But in all cases, replicability issues hinge on the meaningfulness of statements that can be made about the entities involved. This meaning depends upon the scale used in assigning numbers to the empirical systems the study employs. When disparate measurement-theory scales are involved in different subsystems or parts of an overall data set (the usual case in the life sciences), it will not be possible to interpret the situation clearly. Data utilizing nominal or ordinal scaling information cannot, for example, support any conclusion stated in terms of probabilities (which require absolute scaling). The conceptual framework and substantive theory we utilize have to be compatible with the scale (or scaling procedures) we employ in collecting data. The data simply cannot be "analyzed" at all without this compatibility. In many cases, we are forced to "lose information" that could have been provided had we been able to correctly employ a ratio scale. If we are to extract any meaningful information (semantic content relevant to our theory) at all from a complex jumble of data based on varied scales, we must be able to show that the statistical manipulations used match the scaling requirements of the data that have been collected.
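One way to make this compatibility requirement operational is to refuse to compute any statistic the scale does not license. A minimal sketch (the scale-to-statistic mapping follows the Stevens/Siegel classification in Table 4.1; the function and dictionary names are ours, purely illustrative):

```python
# Statistics licensed at each scale level (per Table 4.1; each scale
# inherits what the weaker scales permit).
PERMITTED = {
    "nominal":  {"mode", "frequency"},
    "ordinal":  {"mode", "frequency", "median", "percentile"},
    "interval": {"mode", "frequency", "median", "percentile",
                 "mean", "standard deviation"},
    "ratio":    {"mode", "frequency", "median", "percentile",
                 "mean", "standard deviation",
                 "geometric mean", "coefficient of variation"},
}

def check_statistic(scale: str, statistic: str) -> None:
    """Raise before a statistic is computed on data that cannot support it."""
    if statistic not in PERMITTED[scale]:
        raise ValueError(f"'{statistic}' is not meaningful on {scale}-scale data")

check_statistic("interval", "mean")  # passes silently
check_statistic("ordinal", "mean")   # raises ValueError
```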
Phenomenalistic Physics Is Incompatible with the Facts of Biology and the Nature of Epistemology

In addition to the problems of measurement discussed above, there is another problem that needs to be noted. Physicists, with only a few exceptions such as Schrödinger and von Neumann, never have seemed
to actually study biology (or psychology), and presume there is no need for them ever to do so. The problem here is too obvious: physics is largely irrelevant to these domains, and the philosophy of physics is often in conflict with their results. Occasionally someone will point this out, only to be totally ignored and shunned by the physicists and their philosophers. Consider Mayr:

Each of these volumes [he was discussing] is a philosophy of physics, many physicist-philosophers naïvely assuming that what applies to physics will apply to any branch of science. Unfortunately, many of the generalizations made in such philosophies of physics are irrelevant when applied to biology. More important, many of the generalizations derived from the physical sciences, and made the basis for a philosophy of science, are simply not true for biological phenomena. Finally, many phenomena and findings of the biological sciences have no equivalent in the physical sciences and are therefore omitted from philosophies of science that are based on physics. (1969, p. 197)
What happens when the empirical results of the biological and social sciences offer refutations of physics (in rare cases of substantive theory, more frequently of philosophical interpretations)? Having long ago been crowned Kings of the Intellectual Firmament, the physicists simply ignore them and cease to consider those data domains. A classic case occurred at a conference at which Karl Popper clashed with John Archibald Wheeler (see Bartley, 1982; Medawar & Shelley, 1980). An eyewitness reported that after Wheeler's presentation Popper said, "What you say is contradicted by biology," and the biologists present, including Medawar, applauded. Wheeler, unfazed, ignored them entirely, soon going on to his famous "it from bit" and the participatory anthropic principle in his cosmological speculation. Because he was arguably the most influential theorist in physics in the latter half of the twentieth century, with many students going on to Nobel honors, we need to look at how phenomenalism rendered his views unacceptable, since they are in direct conflict with both evolutionary epistemology and the biological-psychological sciences. If we follow the reasoning from Mach through Bohr, that all our knowledge of (quantum) phenomena comes from subjective effects of
manipulations performed by experimenters, then the it from bit thesis is a confusion of epistemology with ontology. It is a confusion of the issue of how we come to know with the issue of what actually exists. When we look at the acts of experimenters in their observations, it is obvious that the attempt to disambiguate the phenomenal flux is the attempt to find as unambiguous a specification of a "result" as possible. We attempt to clarify the nature of an external reality with the constrained situations we call experiments. But we need to go beyond "just clarifying," because an ambiguous situation always presents incompatible alternatives. The most definitive result we can have is a definite "yes or no" answer to an experimental inquiry. That recorded interpretation of events is the maximally unambiguous information that an experiment can communicate to an observer. That explains why Wheeler (in 1982) would state as incontrovertible fact that no elementary quantum phenomenon is a phenomenon until it is a recorded phenomenon. Then the slide into Shannon communication theory, and its concept of yes-or-no bits, is seamless and seemingly inevitable: what the results are is "bits" of "information." So physics is about our phenomenal presentations, and its results, about "bits" of information, are all that we can know. From this the further slide into ontological phenomenalism is equally inevitable: there is nothing more to existence than bits. So this is the Emperor Phenomenalism resplendent in glorious informational clothing: It from bit. Otherwise put, every it—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes or no questions, binary choices…, bits.… It from bit symbolizes the idea that every item of the physical world has at bottom—at a very deep bottom, in most instances—an immaterial source and explanation; that what we call reality arises in the last analysis from the positing of yes–no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe. Giving us its as bits, the quantum presents us with physics as information (Wheeler, 1989, pp. 310–313).
Wheeler saw no rock-bottom of existence, no "firm foundation" that was independent of our knowledge. He argued for no structure, no plan of organization, no framework of ideas underlaid by another structure or level of ideas, underlaid by yet another level, by yet another, ad infinitum. There is no alternative evident but a loop: a loop such that physics gives rise to observer-participancy, observer-participancy gives rise to information, and information gives rise to physics. So he abandoned continuum-based physics in favor of information-based physics (see ibid., pp. 313–314, 315). It follows that there could not possibly be anything objective to which other commonly used concepts, such as the concept of probability, correspond. "Probabilities exist 'out there' no more than do space or time or the position of the atomic electron. Probability, like time, is a concept invented by humans, and humans have to bear the responsibility for the obscurities that attend it. … Probability in the sense of frequency has no meaning as applied to the spontaneous fission of the particular plutonium nucleus that triggered the November 1, 1952 H-bomb blast" (ibid., pp. 317–318). Probability thus is nothing but personal and subjective, as in the Bayesian approach. (Wheeler used the objectivist conception of probability as inapplicable to single cases [something is or is not, not "is probable"] to argue against objective probability anywhere—to argue against objectivist conceptions on the basis of an objectivist conception.) Such an obvious commission of what has rightly been called the "operationalist fallacy" of confusing the evidence with that which is evidenced is completely refuted by the arguments noted in Chapter 3 above in discussing classic phenomenalism. There is no acceptable motivation to leap from a discussion of the epistemic predicament of how our knowledge is gained to any conclusion about the nature of reality. Epistemology can constrain ontological speculation, but it can never delimit in advance what does or does not exist. How we know is one thing; what it is that is known is quite another. The ontological thesis of phenomenalism cannot be supported by an analysis of the nature of epistemology. Wheeler's position (as with many of his contemporaries and students) fails because of another problem: meaning. Communication theory can never address meaning. Wheeler thought that communication theory
provides an account of meaning. This was his idea: "What is an observer-participant? One who operates an observing device and participates in the making of meaning, meaning in the sense [of] Føllesdal, … Meaning is the joint product of all the evidence that is available to those who communicate" (ibid., p. 319). Thus meaning is a "product" of gathering together Shannon bits. Shannon himself was quite clear that there is no meaning of any kind in communication theory. Shannon's theory was about syntax: the structuring of messages so they would not be overcome by noise and degradation in a communication channel in which a sender who knows a meaning sends a message to a receiver who already has meaning and can thus interpret the "message." Bits are based on differences. But differences are meaningful only to a subject of conceptual activity who already possesses meaning—who has semantics as well as syntax. Whether or not a communication-theory bit has a determinate meaning can only be specified by how it occurs within a theory, which is always going to be cast in terms of the domains of pragmatics and semantics, where syntax only occurs as its representation. To turn Wheeler's argument on its head, bits have no meanings at all until meaning is given to them by conceptual subjects who are already in possession of meaning. Knowledge is found only in the functional domain, in argumentative claiming (see Chapter 17), and that domain determines whether semantic "information" exists or not. Wheeler simply assumed that, when fully developed, some sort of communication theory account would be adequate as a theory of semantic meaning: he celebrated the absence of a clear-cut definition of the term "bit" as elementary unit in the establishment of meaning, and alleged that if and when we learn how to combine bits in fantastically large numbers to obtain what we call existence, we would know better "what we mean" by bit and by existence (see ibid., p. 322). Commenting on this paper, the realist J.-P. Vigier (1989) presented the obvious objection: "The idea (Einstein et al.) of the reality of fields has led (assuming that 'particles' are field singularities) to the only known justification of the geodesic law. To contest it is to make the meaning of dynamical behavior purely observer-dependent, i.e., to kill the reality of the physical world" (ibid., p. 323). Wheeler's published responses never
bothered to address this “small” problem. He simply went on inside his mental world.1
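That communication-theoretic "information" is blind to meaning can itself be shown in a few lines. In this sketch (the sentence is arbitrary, and entropy_bits is our hypothetical helper computing empirical per-symbol entropy), a randomly shuffled string carries exactly the same Shannon information as the meaningful one:

```python
import math
import random
from collections import Counter

def entropy_bits(message: str) -> float:
    """Empirical Shannon entropy per symbol, in bits."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sentence = "no elementary phenomenon is a phenomenon until it is recorded"
gibberish = "".join(random.sample(sentence, len(sentence)))  # same symbols, shuffled

# Identical symbol statistics, hence identical Shannon "information",
# yet only one of the two strings means anything to a reader.
print(math.isclose(entropy_bits(sentence), entropy_bits(gibberish)))  # True
print(gibberish)
```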
Note 1. Wheeler's student Tegmark (2014) took the bit, as a mathematical concept embodying the binary choice of zero or one, as implying that, since all "its arise from bits," there is nothing in the universe except mathematics. This is simply an extension of the phenomenalistic ontological doctrine. Where Mach had sense data as the ultimate and only existents, Bohr had the act of measurement, and Wheeler had bits in some undefined communication theory sense, Tegmark now proposes that bits simply are mathematics. Since this approach simply defines the nature of reality as ultimately mathematical, it is not an empirical proposition at all. As a haunted-universe metaphysical doctrine—one which is seemingly supported by anything at all that happens, and refuted by nothing—it is compatible with all possible observations, and therefore of no use for ongoing science. Taken as a utopian "completed science" thesis, it must address the ultimate dualism of the rate-independent realm (which is the only location in which mathematics is known to exist) not being identifiable, except by postulation, with the rate-dependent realm of physical reality. Thus far no one has succeeded in identifying mathematics, as a purely formal realm whose content is only syntactic structuring, with anything, let alone with all that exists in the empirical-dynamical-physical realm. Indeed, whether it is fruitful to indulge in such purely formal redefinitions remains an empirical question, and thus outside the realm of mathematics entirely. And as Bertrand Russell so trenchantly observed, this sort of postulation has "all the advantages of theft over honest toil." How did it become so easy to substitute metaphysical flights of fancy for serious scientific inquiry? A difficulty with Hertz's move from axiomatics to hypothetico-deductive approaches for the scientific systematization of reality is that it makes it easy to slip into a pure instrumentalism or conventionalism with regard to the nature of the ultimate postulations involved (as Wheeler and Tegmark have done). If one assumes that theories are "nothing but" flights of fancy that are merely human conventions (as proposed by, for example, Duhem (1954) and Poincaré in France, by American pragmatist philosophers influenced
by James and Dewey, and by contemporary hermeneutical approaches to science such as those of Rorty and Hickey), then there is no reason to consider theories as being in any way "attached" to reality. This effortless slide into the idea that theories are "just" calculation devices, coupled with the phenomenalistic view that existence is exhausted by the deliverances of our senses, is what undergirds such "modern" views as Wheeler and his associates have proposed. Since theories are ultimately purely postulational, it is indeed necessary to make a "metaphysical leap" to either some form of idealism or some form of realism, and these theorists have opted for the easy and obvious way into phenomenalistic idealism. Since idealism and realism are, at bottom, metaphysical rather than empirical theses, one can choose between them only on the basis of their support from, and compatibility with, other theoretical frameworks and available empirical domains. Viewed from the standpoint of evolution (undoubtedly the most thoroughly empirically corroborated "theory," far more corroborated than the speculations of cosmology), some form of realism and historical-temporal development through time becomes inescapable. The question then becomes one of formulating an adequate evolutionary account of realism that encompasses both our position as evolved organisms and the nature of our scientific and philosophical theories about reality.
References

Abel, D. L. (2010). Constraints Versus Controls. The Open Cybernetics and Systemics Journal, 4, 14–27.
Baker, M. (2016). 1,500 Scientists Lift the Lid on Reproducibility. Nature, 533, 452–454.
Bartley, W. W. (1982). A Popperian Harvest. In P. Levinson (Ed.), In Pursuit of Truth: Essays in Honour of Karl Popper's 80th Birthday. Humanities Press.
Begley, C. G., & Ellis, L. M. (2012). Raise Standards for Preclinical Cancer Research. Nature, 483, 531–533. https://doi.org/10.1038/483531a
Chrisman, N. R. (1998). Rethinking Levels of Measurement for Cartography. Cartography and Geographic Information Systems, 25(4), 231–242. https://doi.org/10.1559/152304098782383043
Duhem, P. (1914/1954). La Théorie Physique: Son Objet, sa Structure. Translated as The Aim and Structure of Physical Theory. Princeton University Press.
Ferguson, A. (1767). An Essay on the History of Civil Society. Public domain text, available from the Online Library of Liberty, Liberty Fund.
Hayek, F. A. (1967). Studies in Philosophy, Politics and Economics. University of Chicago Press.
Houle, D., Pélabon, C., Wagner, G. P., & Hansen, T. F. (2011). Measurement and Meaning in Biology. The Quarterly Review of Biology, 86(1), 3–34. https://doi.org/10.1086/658408
Islami, A., & Longo, G. (2017). Marriages of Mathematics and Physics: A Challenge for Biology. Progress in Biophysics and Molecular Biology, 131. https://doi.org/10.1016/j.pbiomolbio.2017.09.006
Longo, G., Montévil, M., & Kauffman, S. (2012). No Entailing Laws, but Enablement in the Evolution of the Biosphere. In Genetic and Evolutionary Computation Conference (pp. 1379–1392). ACM. https://doi.org/10.1145/2330784.2330946
Mayr, E. (1969). Discussion: Footnotes on the Philosophy of Biology. Philosophy of Science, 36, 197–202.
Medawar, P., & Shelley, J. (Eds.). (1980). Structure in Science and Art. Excerpta Medica.
Montévil, M. (2019). Measurement in Biology Is Methodized by Theory. Biology and Philosophy, 34(3), 35. https://doi.org/10.1007/s10539-019-9687-x
Montévil, M., & Mossio, M. (2020). The Identity of Organisms in Scientific Practice: Integrating Historical and Relational Conceptions. Frontiers in Physiology, 11, 611. https://doi.org/10.3389/fphys.2020.00611
Polanyi, M. (1969). Knowing and Being. University of Chicago Press.
Rasch, G. (1980). Probabilistic Models for Some Intelligence and Attainment Tests. Nielsen & Lydiche.
Siegel, S. (1956). Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill.
Stevens, S. S. (1946). On the Theory of Scales of Measurement. Science, 103(2684), 677–680.
Tegmark, M. (2014). Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. Knopf.
Thurstone, L. L. (1927). A Law of Comparative Judgment. Psychological Review, 34(4), 273–286.
Vigier, J.-P. (1989). Two Problems (discussion comments on Wheeler, J. A., Information, Physics, Quantum, cited below), p. 323.
von Neumann, J. (1966). Theory of Self-Reproducing Automata (A. W. Burks, Ed.). University of Illinois Press.
Wheeler, J. A. (1989). Information, Physics, Quantum: The Search for Links. In Proceedings of the III International Symposium on Foundations of Quantum Mechanics (pp. 354–358). https://doi.org/10.1201/9780429500450-19
Whewell, W. (1840/2014). The Philosophy of the Inductive Sciences. Cambridge University Press (online publication 2014). https://doi.org/10.1017/CBO9781139644662
5 Psychology Cannot Quantify Its Research, Do Experiments, or Be Based on Behaviorism
Psychology faces three insuperable problems when looked at from the standpoint of epistemology. First, to continue the discussion of measurement, it cannot quantify, and therefore cannot ever perform experiments that measure things in the manner in which physics does. Second, it cannot perform experiments at all—it is not possible to control or constrain living subjects to the extent that what a physicist would call an "experiment" can be performed. Third, it cannot be based upon naive or phenomenalistic epistemology, so behaviorism is ruled out from the beginning. We can deal with mensuration and experimentation as a continuation of the previous chapters. Behaviorism, as a variant of the epistemic thesis of phenomenalism, requires separate discussion. Beyond our scope is the immense research on unraveling the functioning of the nervous system that is accelerating exponentially with better and better technology.1
A: Psychology Has Neither Ratio Measurement Nor Experimentation

Psychology consists mainly of muddle, twaddle, and quacksalving.
Charles Dunbar Broad
Broad was arguing against the then new metaphysical research program called behaviorism. But behaviorism was then just the latest and brashest step in the Cartesian-Comtean "social physics" attempt to make the "soft" science domains into hard physics with absolute scaling, real measures, and actual experimentation. Unfortunately, nothing much has changed in the intervening century, and Broad's caustic appraisal still stands. Psychological description does not entail quantification. What psychological research does is record and arrange events. This is description, but not in itself measurement according to the requirements of any scaling theory. Psychologists make records, and they treat those records of events as though they were measurements. In actuality the field is empirical but not experimental, and it is not now and never will be quantitative in the sense of absolute- and ratio-scale-based physical research. This has been pointed out, and then studiously and carefully ignored, for at least 180 years. The problem is that, despite the efforts of Skinner with intact but shackled organisms, of psychophysicists like Stevens, and of physiologists with sophisticated post-operation "preparations," we can never constrain psychological subjects enough to consider the "disturbances," to use Clerk Maxwell's term, as due to nothing but random error variance. When we attempt to do so we find our results are due to the physical constraints imposed by an "experimenter" and not to the functional choices and intentions of our subjects. We are studying individuals after we have tied (at least) one of their hands behind their backs, and we are assuming that the "results" we record are relevant to how they would act with both hands free. We are doubly cursed in this situation: we cannot successfully apply the Galilean experimental model of constraint to living subjects, and even when we do "water down" our constraint procedures to perform "demonstration" studies, we are unable
to specify what a subject would choose to do if left unconstrained by our artificial situations. This latter point is the matter of internal versus external “validity” of results. It applies equally in hard science experimentation, but in that field it has been universally accepted that it is “valid” to ignore the internal-external issue, and regard “laboratory” results as absolutely identical to those controlling the outside world. That is the distinction between physics as the study of absolutely identical objects and the life sciences as dealing with absolutely unique subjects.
The Psychology of Robots Has Nothing to Do with the Psychology of Subjects This point is worth emphasizing, because it is of crucial importance in understanding the history of American psychology. Years ago the biologist Bertalanffy stigmatized this approach as robot psychology, in which the active nature of the subject was deliberately and completely ignored in favor of studying changes in the stimulus situation (as Skinnerians and the Gibsonians still do). This is how Bertalanffy (1967) summarized the situation from an outside perspective, There is (pp. 116–117): First, the animal experiment as basis for psychological “laws” and presumably understanding human behavior, the oft-quoted brigades of rats working Skinner boxes and other contrivances. The naïve biologist (the present author) is often led to wonder whether investigators using elaborate apparatus and sophisticated statistical techniques never had pets when they were children, or never looked at their animals outside the machines. Has the question been answered whether “tortured rats” or cats in the “surrealistic universe” of Thorndike boxes (Koestler, 1964) permit conclusions as to normal behavior? To what extent are regularities and “laws” so found not only laboratory isolates but straightforward laboratory artifacts? Are classical and operant conditioning, learning nonsense syllables, etc., at all applicable to normal learning processes when “structuring of perceptions,” “meaning,” “understanding the situation,” “symbolic processes” come in; and what theory is necessary adequately to deal with them?
The artificiality of such situations (and the consequent implausibility of determining the actual rules of behavior for subjects, as opposed to objects) has been obvious to critics of behavioristic and phenomenalistic approaches of all stripes (e.g., Howarth, 1954, quoted extensively by Bertalanffy just after the passage cited above). But there has been virtually no recognition of this fact by the stalwart and steadfast practitioners of "social physics" (such as the Skinnerians) in the psychological domain.
No One Has Ever Discovered a Natural Law in Psychology

John Stuart Mill noted this problem in his philosophy-of-science classic, A System of Logic: Ratiocinative and Inductive, in 1843. Mill noted that discovery of laws of nature requires an empirical component or "basis" that is found only in observation or in experiment. Observation requires waiting for the happy accident of someone happening to see lawlike behavior somewhere. Failing this accidental process, we must make an artifact, which is to say, construct an experiment, to constrain the situation in order to discover lawful regularity. Then he clearly stated that if it is not possible to constrain nature in one of these two ways, it is not possible to discover any natural law. As an example of the impossibility of the discovery of laws he specifically mentioned psychology, saying (Mill, 1843) that in every instance in which we see the human mind developing itself, or acting upon other things, we see it surrounded and obscured by an indefinite multitude of unascertainable circumstances, rendering the use of the common experimental methods almost delusive (p. 384 of the 1974 University of Toronto Press edition). Subjects have physiological and learning histories that are constantly growing and changing. Unlike objects, they cannot somehow be "frozen out" of their context in order to be put into a physics-like experiment. Nothing has changed to obviate this conclusion in nearly two centuries. Yet this conclusion has been totally ignored (when it has not been vehemently denied) in all the social "sciences." Perhaps it was ignored initially because psychology did not yet exist as an organized "science" until the 1870s in Germany. But when, 30 years
after Mill, Brentano (1874/1974) and Wundt (1874) got psychology "off the ground" as a serious research endeavor, they both strongly argued that the field could not be experimental. Brentano held that the subject matter of psychology should be the functional domain of higher mental processes, such as language use and ability, concept formation, reasoning, and problem solving, and that it therefore could never be experimentally studied. Wundt, who coined the term experimental psychology, used it to refer only to the lower mental processes of sensation and (neuro-)physiological function, which he held to be amenable (because of their proximity to the physical) to some experimental research. He held that the higher mental processes, which were capable only of empirical study, required a separate field, which he called the Völkerpsychologie. But by historical tradition "scientific psychology" stems from Wundt's establishment of laboratory study alone, according to behaviorist and objectivist histories of the field such as Boring (1950), who initiated the myth that the Völkerpsychologie was an unimportant and somehow failed afterthought compared to the "scientific" physiological psychology. Only 100 years after the fact did I retrieve some of Wundt's actual views on the Völkerpsychologie (Weimer, 1974), as did other competent historians at that time (such as Blumenthal, 1970, 1977) who were influenced by Chomsky's revolution in linguistics. None of what we said then had any impact on the idea that psychology was a firmly established "experimental" science. Brentano (1874/1974) fared little better when he noted that "exact" laws would require a completed physiological-physical basis:

These laws [of psychological phenomena] are not the highest and ultimate laws in the sense in which we would characterize the Law of Gravity and the Law of Inertia as such. This is due to the fact that the mental phenomena to which these laws apply are entirely too dependent upon a variety of physiological conditions of which we have very incomplete knowledge. They are, strictly speaking, empirical laws which would require for their exact explanation an exact analysis of the physiological states…. There are limits which cannot be exceeded in our attempt to explain nature; and, as John Stuart Mill quite rightly states, we run up against one of these limits when the transition from the mental realm to that of physical phenomena is involved. (1874/1974, p. 47)
This agreement with Mill and Wundt was misinterpreted by later researchers as meaning that we just need to know more details about physiology for psychology to become an "exact" science. But we cannot ever control enough of the immense complexity surrounding any psychological subject for us to do actual experimentation. Nor can we satisfy even the basic requirements of measurement theory as it applies in physics. Consider Trendler (2009):

Most psychologists will agree that motivation can be manipulated by means of different amounts of reward (e.g., amount of money)….In order to begin solving the scientific task [of determining whether psychological attributes are quantitative], we would have to test the first condition of quantity: that is, we would have to determine motivation in test subjects in such a way that we obtain equal levels of motivation in the same subject (or in different subjects) over the time of the experiment…. We cannot take for granted that equal levels of an observable necessarily correspond to equal levels of the associated hypothetical construct. That is, equal amounts of reward might not automatically lead to equal levels of motivation. In order to identify equal levels of motivation, we would have to make use, for instance, of the method of concomitant variation, as exemplified by means of Ohm's (1826) experiment. If we assume in addition that there is a causal relation between motivation and the reaction time for some test items, then the result we must ultimately aim at is: if the same amount of reward is applied and if no systematic disturbances interfere, then the resulting reaction time must be equal in value, in the limits of random errors, over experimental replications. Only if this criterion is empirically satisfied can we confidently conclude that equal amounts of reward generate equal levels of motivation. (pp. 590–591)
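Trendler's criterion can at least be stated as a concrete check. The sketch below simulates it (the linear reward-to-reaction-time model and every parameter are illustrative assumptions; with real subjects, the invariance being tested here is exactly what cannot be secured):

```python
import numpy as np

rng = np.random.default_rng(1)

def run_replication(reward: float, n: int = 50) -> np.ndarray:
    """Simulated reaction times (ms) for n trials at one reward level.
    The model and its constants are illustrative assumptions only."""
    return 400.0 - 20.0 * reward + rng.normal(0.0, 15.0, n)

# The criterion: equal reward, absent systematic disturbance, must yield
# equal reaction times, within random error, across replications.
reps = [run_replication(reward=2.0) for _ in range(5)]
means = np.array([r.mean() for r in reps])
sems = [r.std(ddof=1) / np.sqrt(r.size) for r in reps]

print("replication means:", np.round(means, 1))
# Crude check: does the scatter of the means exceed what random error predicts?
print("spread of means:", round(means.std(ddof=1), 2),
      "vs. mean standard error:", round(float(np.mean(sems)), 2))
```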
Clearly, this utopian result is not going to be found. Psychologists do not in fact even try to do these empirical checks (in fact, being methodologically naive, they have no idea that they should do them). While Stevens (1946) and more recent writers such as Krantz et al. (1971) vociferously claimed that they had “settled” the issue of quantification in psychophysical research, only a moment’s reflection on their actual procedures (see Trendler, 2013) shows that they did not even come near the empirical corroboration necessary for such a claim. No one else has
improved upon this situation, and it is obvious that the “consilience of inductions” or “bootstrapping” approaches to construct “validation” cannot address the fundamental issue. But if we are unable to satisfy even the first condition of quantity, we certainly are not going to be able to satisfy the others. So what we call measurement in psychology is never going to “measure up” to what constitutes measurement in physics. Subjects simply are not billiard ball objects, or subject to ratio scaling. Perhaps Wundt believed that the physiological underpinnings of psychological constructs could be experimentally shown to satisfy conditions of measurement theory. But even the most exhaustively “physical” neurophysiology would be of absolutely no help now. It would require sufficiently precise anatomical localization of function to be found for all psychological attributes and across all subjects. While in Wundt’s day that might have been thought to be possible, it is now no longer conceivable. Localization went out the window in 1949, with Moruzzi and Magoun’s discovery of the ARAS (the ascending reticular activating system), the first functional neurophysiological system that is not dependent on localization of anatomical function. The primate brain is dynamically functional rather than anatomically or statically functional: we cannot identify specific levels of magnitude of psychological attributes just by looking at brain activity in individual neurons or anatomical sites.
Social Science Is Just Fine with Demonstration Studies

So where is the field without natural laws and physical science measurements? Where we have always been. Psychologists, for instance, are good at making records, and when they assign “numbers” to records they will continue, however misleadingly and incorrectly, to call them measurements. But we must realize that the field is empirical and demonstrational rather than experimental. What we do in psychological research is set up demonstration situations (call them demonstration “experiments” if it makes you feel better) in which we look for happy accidents, i.e., “clear cases” that make manifest (with minimal constraint
imposed by our apparatus and research situations), what the regularities or rules seem to be. We need to understand and come to grips with Trendler’s (ibid., p. 593) conclusion:

The application of measurement theory, irrespective of whether it is construed as deterministic or probabilistic, is also not relevant to achieving substantial progress in psychology. Other, more suited methods for the domain of psychology must be found. It might therefore be wise to seriously reconsider Johnson’s recommendation: “Those data should be measured which can be measured; those which cannot be measured should be treated otherwise. Much remains to be discovered in scientific methodology about valid treatment and adequate and economic description of non-measurable facts.” (Johnson, 1936, p. 351)
Many competent thinkers have reached this conclusion from different approaches. Non-positivist philosophy of 100 years ago was well aware that there would be no comparable “scientific revolution” for the study of psychological phenomena. Consider, for example, these remarks by C. D. Broad, published in 1925 in his classic The Mind and Its Place in Nature:

Descartes’ greatest achievement was to show that the various causal characteristics of physical things can be connected with each other by correlating them all with characteristic forms of spatio-temporal structure and a few very general and pervasive causal characteristics. No one has succeeded in connecting the various mental “powers” in any analogous way, and that is why psychology at present hardly deserves the name of a “science.” It is, I think, quite certain that psychology will remain in this unsatisfactory state unless and until someone succeeds in doing for it what Galileo, Descartes, and Newton did for physics. (p. 440)
There will never be any “Galilean revolution,” one requiring genuinely quantitative mensuration, in psychology or in any of the other social domains. The situation in the social sciences is as Hayek reminded us: in physics all individual phenomena are regarded as exactly alike and totally interchangeable. One electron is exactly the same as any other electron. But when dealing with social phenomena that is never the case: no two biological or human individuals are ever exactly the same and are
usually completely different in terms of their prior experience and learning history, their values and needs, and in general all the variables we wish to study. We will never be able to measure these differences. That does not mean that serious scientific study of these domains is not possible. But it certainly is not as easy as it is in physics, and we need to study what demonstrations can and cannot do. If we do not, we will remain where C. D. Broad left us back in 1933:

Poor dear Psychology, of course, has never got far beyond the stage of medieval physics, except in its statistical developments, where the labors of the mathematicians have enabled it to spin out the correlation of trivialities into endless refinements. For the rest it is only too obvious that, up to the present, a great deal of Psychology consists mainly of muddle, twaddle, and quacksalving, trying to impose itself as science by the elaborateness of its technical terminology and the confidence of its assertions. (1949, p. 476)
Economics is in exactly the same situation, especially in the macro domain where the “measures” are doubly removed from any grounding in quantification (because the economic and social “wholes” studied are not even empirical phenomena at all). Before detailing that set of problems, let us further examine the extent to which the Comtean social physics model has sent the study of psychological phenomena on a wild goose chase into phenomenalism and scientism.
B: Epistemic Fads and Fallacies Underlying Behaviorism

The book itself [Skinner’s Beyond Freedom and Dignity] is like Boris Karloff’s embodiment of Frankenstein’s monster: a corpse patched together with nuts, bolts, and screws from the junkyard of philosophy.

Ayn Rand
To understand what is wrong with behaviorism, we must first review what is wrong with phenomenalism. Current in physics from the 1880s, due primarily to the influence of Ernst Mach, the doctrine of phenomenalism (sometimes called sensationalism, but properly called presentationalism, standing for what is “presented” to the senses) is an overextension of the doctrine of empiricism, the thesis that all knowledge is a deliverance of our senses.
The Failure of Phenomenalism

The phenomenalist moves from the empiricist assumption in epistemology that knowledge is a deliverance of sense contents (whatever it is that is presented in experience) to the ontological assumption that, since that is so, all that exists is sense contents. The move is from epistemology to ontology: from “all we can ever know is sense contents” to “therefore, all that exists is sense contents.” From that perspective no conceptual entities (e.g., mathematics, postulational theories, and our ideas) are real: they are just convenient fictions to be employed so long as they predict observed outcomes. Because they are not in sense experience, Mach thought that atoms or fields—two of the most important concepts in modern physics—were not real, that they were just convenient calculational fictions. This mistaken extreme form of empiricism leads inevitably to instrumentalism (theories are just instruments, not about “truth”) and pragmatism (which says, “if it works, steal it and use it”), and abandons realism as the quest to understand the real world. It manages, as Herbert Feigl succinctly put it, to throw out the baby with the bathwater. This view cursed the then “new” physics through the pernicious influence of Mach’s philosophy, especially upon Niels Bohr and the Copenhagen “operational” physicists (“don’t think: shut up and calculate”), the then budding philosophy of logical positivism, and it still affects current cosmological speculation. The rise of behaviorism was also a straightforward consequence of phenomenalism: the idea that nothing “exists” for a science of psychology except the overt behavior of organisms.
Excursus: Consciousness Alone Is Not the Issue

Phenomena require symbols in the language of description; they do not require consciousness. Conscious “experience” is not in fact the basis of phenomenal awareness. All “phenomena” are the result of the interaction of the organism with some physical property through the intermediary activity of its sensory and cognitive apparatus. That much of the thesis of presentationalism is acceptable. All organisms that possess nervous systems are “presented” with phenomena, and there is no evidence on which to attribute consciousness to the “lower” ones. Instead, phenomena come into existence when symbolic description—for some functional purpose—occurs. Symbols are always in the (Russellian) language of description (see Chapter 9), never in acquaintance per se. Functionality (or intentionality in higher organisms) requires agency—functions are “for” (and are performed by) subjects of conceptual activity. The symbols themselves are arbitrary physical structures that are interpreted by (or inside of) organisms as standing for or meaning something separate from those physical events or structures themselves. All functional “information” is stored in memory, communicated (to a self or to other selves), and interpreted by symbols (or symbolic structures). Some form of representationalism is thus inescapable. The agent (the knower) is on one side of a conceptual divide, and that which is known (the meaning of the sign or physical structure) is on the other. Phenomenalists lapse immediately into an ontological idealism (the doctrine that only symbols exist) as the alternative to what is to be “represented” in representationalism. Realists propose one or another form of representational realism for the universe of non-mental character. Phenomenalism provides a false theory of the knower and attributes the properties of reality to that false theory. Realism, on the other hand, faces the task of providing a better theory of both the knower and the known.
The Spell of Ernst Mach

Mach himself was a sophisticated theorist who made fundamental contributions to both physics and psychology. One must distinguish Mach’s
actual views from the impact Mach’s views had on science, philosophy, and on other writers. It is the impact of Mach that has been so very deleterious to both physics and psychology. That impact is primarily from his book The Analysis of Sensations, which was an attempt at a “scientific” phenomenalism. But Mach made a fatal error—confusing the evidence with that which it evidences. For Mach, “nature is composed of sensations.” As he said in his daybook, “colors, tones, etc. These are the only realities.” By the Analysis it had become “colors, sounds, spaces, times, … are provisionally the ultimate elements, whose given connection it is our business to investigate. It is precisely in this that the exploration of reality consists” (1959, pp. 29–30). This neatly sums up the idealistic and totally subjectivistic epistemology, with the addition of an ontology of nothing but what is presented to the senses. From this perspective, no theoretical entity (such as, in Mach’s time, atoms) is real, since it is never presented to the senses as a sense content. Scientific theories are just shorthand data summaries: pure intervening variables rather than hypothetical constructs, and the purpose of science is to most economically put together our sense contents, since there is no external reality of which they are representative. On this view, representationalism (realism of some sort) is a metaphysical step away from empirical science, and it is not economical to take that “unnecessary” extra step. Presentationalism (what is given in sense content) is all there is to any science. Anything else is mere speculative metaphysics. Machian phenomenalism completely abandons the scientific quest for explanation of a real world in favor of the very restricted task of the description of sense contents. This “anti-philosophical” empiricism is actually the most anti-scientific of all philosophies, since its aim is to restrict the range and importance of all science. Presentationalism limits the role of science to that of an instrument to link together, in the simplest (defined as the most instrumentally efficient) way, our appearances. As Bartley (1983) put it:

In presentationalism, the subject matter of science is then not in external reality independent of sensation. The subject of science is our sensory perceptions. The collectivity of the sensations is renamed “nature” (thus rendering the account idealist in fact—whatever it may happen to be
called). The game of science is seen not as the description and explanation of that independent external reality but as the efficient computation of perceptions. (p. 840)
Any form of presentationalism is based upon a false theory of the mind. These theorists have assumed that the nature of mental functioning is unproblematic, and that the only thing that is at issue is how we get to knowledge external to the mind. As Aune noted:

For hundreds of years, empiricists have regarded the foundation of empirical knowledge as essentially psychological. Whether they were scientific atomists, phenomenalists, or even, like Berkeley, subjective idealists, they all tended to agree that what is known immediately, intuitively, and with maximum certainty, is for each observer his own psychological states. Not only was a theory of mind largely unnecessary—to know what mind is, you have only to take a closer look—but no purely scientific considerations could ever shake a man’s conception of what he directly experienced. The subject that was intrinsically problematic for these thinkers was the nature of the alleged external world. (1967, p. 264)
But the mind is not a blank slate written upon by unproblematic and “given” sensory experience. It is the activity of the nervous system which constructs its sensory “input” as a result of classifying and reclassifying patterns of activity that have evolved, in conjunction with the evolutionary history of the species, as it has had to adapt to ever-changing external conditions. Were it not for those external conditions (and the changes they continually undergo), the senses would not have developed as they in fact have. The senses are our vicars—they go out beyond the surface of the nervous system and the body to bring back information about what is “out there.” They function analogously to a vicar in Protestant religion—as one who goes out in order to find and receive the “word of God” and then bring it back to the faithful flock. The nervous system (analogous to the “flock”) likewise receives information about “what is out there” from its vicars, the sensory systems that have evolved in a given species. There has been a general progression from close-in perception, such as touch or pressure sensitivity, through smell (which brings in information about a nearby environment) to hearing
and vision, which enable the pickup of information farther and farther away from the nervous system itself. All our sophisticated scientific information gathering devices—such as sonar and microphones for audition, and telescopes and microscopes for vision—have enabled us to learn more and more about what is “out there” by extending our available perceptual systems beyond their original capabilities. Technology creates new vicars for us. We can now “see” the ultraviolet and infrared spectrum and “hear” the vocalizations of bats and porpoises. Perhaps the strongest argument for realism and against idealism and instrumentalism is the fact that the senses, all so different in their mode of operation and in the information they disclose, all point uniformly to a coherent picture of a reality independent of the subject. If sense “data” or experiential contents were all there is in the universe, there would be no reason for them to converge upon a unified view of reality. From an evolutionary point of view, such convergence is exactly what one would expect if organisms have to survive in an ever-changing and often harsh independent reality. What we continually find by utilizing their information is that the deliverances of sensory experience in all modalities provide an incomplete (and thus incorrect) picture of what is external to our nervous systems. They have indicated very clearly that the “phenomena” in awareness, which were taken to be the only class of existents, are neural or mental constructions (indeed highly theoretical constructions) rather than what is “out there.” Phenomenalism provides a false theory of the knower and attributes the properties of external or non-mental reality to that false theory. This is why Hayek (1952), after studying how the nervous system had to be organized in order to function, concluded: “I suddenly realized how a consistent development of Mach’s analysis of perceptual organization made his own concept of sensory elements superfluous and otiose, an idle construction in conflict with most of his acute psychological analysis” (p. vi). The thesis of behaviorism in psychology (and other social domains) is a direct extension of the philosophy of phenomenalism. It likewise has epistemic (usually called “methodological” behaviorism) and ontological (radical behaviorism) variants. What is somewhat amazing is
that the phenomenalistic basis of behaviorism has virtually never been mentioned in the social science literature. But the extension from classical phenomenalism is straightforward. Mach said that all that is presented is the contents of phenomenal acquaintance—and the behaviorists, from Watson on up, interpreted this dictum straightforwardly and literally to mean that all that is “presented” when we look at subjects is overt behavior, never any occult “mind” or mental experience of others that would be internal to or causal of the organism’s overt behavior. All that exists is physical movement and patterns of behavior controlled by observable, physically defined stimulation that is external to and impinges upon the “black box” organism. So the “science” of behavior is just the “economical” correlating of given and unquestioned “physical” behaviors with “given” molar level “causes” such as reinforcement schedules. Under the mantle of the logical positivist conception of science (as exemplified by P. W. Bridgman’s (1927) operationalist interpretation of physics), nothing except naïve or direct “realism” is appropriate for science and its explanations. The behavior of organisms is due to overt or “molar” level physically specifiable stimulation, and the responses observed are equally physically specifiable. Behaviorists sincerely believe that their “direct” realistic approach is the only tenable philosophy and methodology of research for any science, and that any “representational” realism is just woolly metaphysics on a par with Berkeleian idealism. They did not (and do not today) realize that their approach is a word magic variant of exactly that phenomenalistic mentalism that they revile.
The Haunted Universe Doctrine of Behaviorism

As a metaphysical worldview behaviorism is the thesis that all that exists is physically defined (or definable), and that when applied to organisms all that exists is the behavior they exhibit. The strong ontological claim is that nothing mental exists (as a “cause” of behavior)—there is no mind substance or stuff, and the mental realm is “nothing but” subvocal speech (Watson, Skinner) or neural firings (Hull) directly “under” that overt behavior (except that in this case it is then covert and internal,
and that in itself presents an ignored problem, because the lip service is to the “directly observable”). When behaviorists look inside the Cartesian or Leibnizian machine they find only physically specifiable wheels and cogs (now interpreted as the “physical” nerve firings and anatomical loci in the brain, or the movements of muscles). In point of fact, few writers have ever actually held this position, and then only for polemical purposes. In practice, they immediately waffle over to the methodological (epistemic, no longer ontological) position that regardless of whether the mental realm exists it could not (or at least should not) be the subject of any science, which, after all, should be “experimental” (a further term that is invariably never defined past allusions to physics) and billiard ball causal. This vacillation is found in the writings of first generation advocates such as J. B. Watson (1919) and B. F. Skinner (1938, 1957), the now forgotten J. R. Kantor (1958, 1981) (who borrowed the concept of fields from relativity physics and Gestalt psychology), as well as prominent adherents like Clark L. Hull (1943), who, after saying that “ideally” behavior should be defined only as “colorless” physical movements, immediately began talking in terms of functional concepts that include inherently mentalistic referents (as E. C. Tolman’s [1932, 1959] purposive behaviorism—another borrowed “field” theory—had done from the very start). Philosophical (or analytical) behaviorism was defended by the original phenomenalistic logical positivists (such as Carnap in the 1930s) and a mixed bag of “analytical” or “language analyst” philosophers, and then morphed into artificial intelligence via the study of computer programming (as exemplified by Simon, 1969, and his associates). Characteristic of this artificial intelligence-computer simulation approach is the assumption that the organism can be conceived for research purposes as a black box, and that whatever the internal workings of the organism may be, all that we need to look at (or for) is supposedly “objective” programs that exhibit the “objective” behaviors we observe organisms to make. Invariably, the idea is that the organism is a “relatively simple” black box that appears to exhibit complex behaviors only because it is responding or resonating to the complexity of the environment (which is presumed to be the true source of complexity) in relatively direct fashion. Thus Simon told us a man, “viewed as a behaving
system, is quite simple” (1969, p. 25), and similarly J. J. Gibson (1966, 1979) proffered the “affordance structure of the ambient array” as the direct or unmediated determinant of behavior without even mentioning putative “internal” psychological processes such as learning or motivation or memory or cognition, or even the epistemic role of the history of the organism. One should note this latter crucial feature: behaviorism is actually entirely ahistorical. Even when what is observed is due to a “learning history” the account of observed behavior is always in the present (literally, it is about that which is presented). While the account is always considered to be deterministic, little mention and no actual use is made of the prior events that led up to the situation studied, the events that had in the past “caused” the subject to become what in fact is “observed.” Internal states such as hunger are ignored in favor of externalized “schedules of deprivation,” and then those schedules are presumed to be “objective” outside causal agents in themselves.
Control at All Costs

The failure of this approach is its implicit reliance upon the physics model of its subject matter. All “subjects” for behaviorism are actually nothing but identical physical objects—which is to say, essentially identical with respect to any possible scientific study. One rat or one pigeon is identical to any other, and each is experimentally identical to any one college sophomore. Hence, it ought to be possible to approach the extremes of experimental control found in physical theory. The limit of this approach in psychology was reached, as noted, in Skinner’s attempt to externally constrain subjects into physical objects by imprisoning them within the totally delimited confines of the Skinner “box” (shades of Schrödinger’s thought experiment with its sealed box imprisoning the cat). Similarly, neurophysiological “preparations” (note that the term itself serves to remove any sense of living, intact organisms) are restrained in place and subjected to uniform environments (both internally via sedation as well as externally in terms of totally controlled stimulus
input) in order to impose as much physical control and repeatability as possible. But even this much imposed control never succeeds in eliminating all individual differences, or in allowing for experimentation as it is found in the physical sciences. Living subjects have histories, both in their genetic makeup and in their individual experiential history (even “identical” twins are extremely different—starting with the fact that one had to be born first). Here we need to recall the situation in biology, emphasized by Houle et al. (2011) and Montévil (Longo & Montévil, 2017; Montévil, 2019), discussed in Chapter 4 on mensuration and the nature of experimentation. Living subjects are literally never identical because their histories always differ. This brute fact of life (literally a fact of life!) is what prevents the social and psychological domains from ever being studied equivalently to the “simple” (in a descriptive, not a pejorative, sense) physical study of objects. Life is unique in the sense that, as Polanyi (1969) emphasized, living agency harnesses physicality (the physical realm and “laws” of nature) and, as a result, becomes creative or productive of the novel and non-repeated (which behaviorism cannot even allow to exist). At this point, adequate analysis must move from classical determinism to determinate theories and give up the physics model of point prediction in favor of attempting to discern patterns of behavior characteristic of unique subjects who are similar only in following the same abstract rules of order. The functional domain of the living can never be reduced to the physical domain of the inanimate universe. The physical is simply not the functional. Points are not patterns. Subjects are not objects. Meanings are not just Shannon bits. Skinnerian black boxes cannot allow or explain behavior found in unconstrained living subjects.
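The contrast between point prediction and pattern prediction can be made concrete with a toy simulation. The sketch below is my illustration, not anything from the behaviorist literature: it assumes a shared abstract rule (faster responding under larger reward) plus an irreducibly individual history term, both invented for the example.

```python
# A toy illustration of 'patterns, not points': subjects share one abstract
# rule -- more reward, faster response -- but each carries a unique history
# term, so no point prediction holds while the ordinal pattern does.
import random

def response_time(reward, history_bias):
    # Shared abstract rule: response time falls as reward rises ...
    # ... plus an irreducibly individual history component and random noise.
    return 1000 / (1 + reward) + history_bias + random.gauss(0, 5)

random.seed(1)
subjects = [random.uniform(-100, 100) for _ in range(5)]  # unique histories
for bias in subjects:
    low, high = response_time(1, bias), response_time(9, bias)
    # Point values differ across subjects, but the pattern (low > high) recurs.
    print(f"{low:7.1f} ms vs {high:7.1f} ms -> pattern holds: {low > high}")
```

No point value repeats across subjects, yet the qualitative pattern survives every individual history; that is all a determinate (rather than deterministic) theory promises.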
Note

1. Neuropsychology is an essential topic for any discussion of evolutionary psychology, but in this volume it is beyond our scope, and discussion must be limited to the basic necessities of the manner in which the central nervous system functions in an intact higher primate (such as
we happen to be). In terms of topics discussed in this chapter (issues of “experimental” control, and whether or not research on individual “preparations” approximates “experimentation” in the physical domains, are paramount) it is clear that what we are doing, especially in the attempt to tease out “localization of function” in the myriad brainstem-to-cortical pathways, is setting up demonstrations to ourselves of the likely circuitry involved. As noted earlier in this chapter, the issue of anatomical localization is now supplanted by functional or “dynamic” localization of impulse traffic in often shifting patterns involving many anatomical structures. Perhaps of more interest to most readers of this book would be the “applied” field of clinical neuropsychology, where Morgan and Ricker (2018) and Watson and Breedlove (2020) join Koutstaal (2012) as “cognitive” neuropsychology, and Porges (2011) as a study of the ANS (autonomic nervous system), in being especially relevant to issues of meaning and knowing (I wish I could also say they were easily readable). For research following up Hayek’s insight that we must approach the nervous system as an instrument of classification, see Fuster (2003, 2013). Some of this more applied material in relation to stress and the “malaise of modern society” is treated in Chapter 6 of Weimer (2022).
References

Aune, B. (1967). Knowledge, Mind, and Nature. Random House.
Bartley, W. W., III. (1983). The Challenge of Evolutionary Epistemology. In Absolute Values and the Creation of the New World (Vol. II, pp. 835–880). The International Cultural Foundation Press.
Bertalanffy, L. (1967). Robots, Men and Minds. George Braziller.
Blumenthal, A. (1970). Language and Psychology: Historical Aspects of Psycholinguistics. Wiley.
Blumenthal, A. (1977). Wilhelm Wundt and Early American Psychology: A Clash of Two Cultures. Annals of the New York Academy of Sciences, 291(1), 13–20.
Boring, E. G. (1950). A History of Experimental Psychology. Appleton-Century-Crofts.
Brentano, F. (1874/1974). Psychology From an Empirical Standpoint. Routledge.
Bridgman, P. W. (1927). The Logic of Modern Physics. Macmillan.
Broad, C. D. (1949). The “Nature” of a Continuant. In H. Feigl & W. Sellars (Eds.), Readings in Philosophical Analysis. Appleton-Century-Crofts (Originally in Examination of McTaggart’s Philosophy, Vol. 1. Cambridge University Press, 1933).
Fuster, J. M. (2003). Cortex and Mind: Unifying Cognition. Oxford University Press.
Fuster, J. M. (2013). The Neuroscience of Freedom and Creativity. Cambridge University Press.
Gibson, J. J. (1966). The Senses Considered as Perceptual Systems. Houghton Mifflin Company.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin Company.
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Houle, D., Pélabon, C., Wagner, G. P., & Hansen, T. F. (2011). Measurement and Meaning in Biology. The Quarterly Review of Biology, 86(1), 3–34.
Howarth, E. (1954). A Note on the Limitations of Externalism. Australian Journal of Psychology, 6, 76–84.
Hull, C. L. (1943). Principles of Behavior. Appleton-Century-Crofts.
Johnson, H. M. (1936). Pseudo-Mathematics in the Mental and Social Sciences. American Journal of Psychology, 48, 342–351.
Kantor, J. R. (1958). Interbehavioral Psychology. Principia Press.
Kantor, J. R. (1981). Interbehavioral Philosophy. Principia Press.
Koestler, A. (1964). The Act of Creation. Macmillan.
Koutstaal, W. (2012). The Agile Mind. Oxford University Press.
Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of Measurement: Vol. 1. Additive and Polynomial Representations. Academic Press.
Longo, G., & Montévil, M. (2017). From Logic to Biology via Physics: A Survey. Logical Methods in Computer Science, 13(4:21), 1–15.
Mach, E. (1959/2002). The Analysis of Sensations, and the Relation of the Physical to the Psychical. Forgotten Books Reprint Series.
Mill, J. S. (1843/1974). A System of Logic: Ratiocinative and Inductive. University of Toronto Press (the 1974 publication is the current reference text).
Montévil, M. (2019). Measurement in Biology is Methodized by Theory. Biology & Philosophy, 34(3), 1–25. https://doi.org/10.1007/s10539-019-9687-x
Morgan, J. E., & Ricker, J. H. (2018). Textbook of Clinical Neuropsychology. Taylor and Francis.
Ohm, G. S. (1826). Bestimmung des Gesetzes, nach welchem Metalle die Contaktelektricität leiten, nebst einem Entwurfe zu einer Theorie des voltaischen Apparats und des Schweiggerschen Multiplicators [Determination of the Law in Accordance with Which Metals Conduct Contact Electricity, Together with an Outline of a Theory of the Voltaic Apparatus and of Schweigger’s Multiplier]. Journal für Chemie und Physik, 46, 137–166.
Polanyi, M. (1969). Knowing and Being. University of Chicago Press.
Porges, S. W. (2011). The Polyvagal Theory: Neurophysiological Foundations of Emotions, Attachment, Communication, and Self-Regulation. Norton.
Simon, H. (1969/2019). The Sciences of the Artificial (3rd ed.). MIT Press.
Skinner, B. F. (1938/1976). The Behavior of Organisms. Appleton-Century-Crofts (now B. F. Skinner Foundation).
Skinner, B. F. (1957). Verbal Behavior. Appleton-Century-Crofts (now B. F. Skinner Foundation).
Stevens, S. S. (1946). On the Theory of Scales of Measurement. Science, 103(2684), 677–680.
Tolman, E. C. (1932). Purposive Behavior in Animals and Men. Century/Random House UK.
Tolman, E. C. (1959). Principles of Purposive Behavior. In S. Koch (Ed.), Psychology: A Study of a Science (Vol. 2, pp. 92–157). McGraw-Hill.
Trendler, G. (2009). Measurement Theory, Psychology and the Revolution That Cannot Happen. Theory & Psychology, 19(5), 579–599.
Trendler, G. (2013). Measurement in Psychology: A Case of Ignoramus et Ignorabimus? A Rejoinder. Theory & Psychology, 23(5), 591–615. https://doi.org/10.1177/0959354313490451
Watson, J. B. (1919). Psychology From the Standpoint of a Behaviorist. Lippincott.
Watson, N. V., & Breedlove, S. M. (2020). The Mind’s Machine: Foundations of Brain and Behavior. Sinauer Associates (Oxford University Press).
Weimer, W. B. (1974). The History of Psychology and Its Retrieval from Historiography: Part 1. Science Studies, 4, 235–258.
Weimer, W. B. (2022). Retrieving Liberalism from Rationalist Constructivism: Basics of a Liberal Psychological, Social and Moral Order (Vol. II). Palgrave Macmillan.
Wundt, W. (1874). Principles of Physiological Psychology. Engelmann.
6 Taking the Measure of Functional Things
In deductivist style, all propositions are true and all inferences valid. Mathematics is presented as an ever-increasing set of eternal, immutable truths. Counterexamples, refutations, criticisms cannot possibly enter.... Deductivist style hides the struggle, hides the adventure. The whole story vanishes... doomed to oblivion while the end result is exalted into sacred infallibility.

Imre Lakatos
To reemphasize yet again: the actual dynamical universe is uncertain, and, as such, must be ambiguous to our cognition. It is statistical rather than deterministic. The apodictic is not found in rate dependence. The conception of certainty is found only in, and is only meaningful within, the rate-independent realm that is known to exist only in human conceptual thought. So how are we to use mathematics, the paragon exemplar of that rate-independent realm, to help us understand the non-mental realm? How do we undertake the classic quest of “taking the measure” of reality, of using ratio in its original sense of the measure of things? How do numbers relate to reality, and how do numbers relate to human understanding? The epigraph from Lakatos indicates that certainty is
only after the fact, and does not exist in the dynamical realm of empirical phenomena, including the processes of mathematical invention and discovery. Certainty is window dressing after the fact of human inquiry. The functional realm of behavior is dynamic, subject to error and uncertainty, and always dependent upon factors beyond the inexorable laws of nature. Our theories of both the physical and the functional have got to reflect this. The two greatest achievements of the twentieth century with respect to such issues were, first, recognition of the separate but complementary nature of rate independence and rate dependence; and second, recognition that the model of knowledge and rationality based on ancient Greek mathematics as the only ideal form of science, the justificationist metatheory, is wrong from start to finish. We need to abandon the idea that mathematical certainty was or is an ideal form of knowledge (it is applicable only to contentless syntactical systems and never to semantic or pragmatic domains), and stop regarding statistical approaches as based on extant justificationist conceptions of probability and inference. Knowledge acquisition is a never-ending process of assessing conjectures as best we can against all our background knowledge and assumptions. When we do so, it is obvious that statistical analyses of ill-defined populations are never capable of delivering any sort of probable knowledge (or probable truth). All statistical procedures, if they are applicable to a data domain at all, are descriptive of a population rather than being any type of an inferential source of knowledge. Knowledge is conjectures held in check by the winnowing forces of other conjectures (whether theories, laws, or accepted factual propositions), and statistical description of a population can only serve to sharpen our view of the population, never to constitute “definitive” interpretation of a knowledge claim.

Let me repeat: if knowledge is viewed from the evolutionary perspective of tentative and inherently fallible conjectures, then the distinction between inferential and descriptive statistics simply cannot be made. If methodology is divorced from the attempt to certify knowledge claims (as either proven, disproven, or proven to be probable), then all statistics are descriptive. Thus, there is no such thing as an inferential “test” of “significance,” because one can never infer that something is or is not significant by any algorithmic test procedure. The only significance
test in science (or anywhere else) was aptly characterized long ago by the statistician Joseph Berkson as the “interocular traumatic test”: if the data leap out and hit you between the eyes, then the result is significant. If the data do not, then no amount of computation or number of asterisks can “add” something glorified as significance to a test result.

If instead of attempting to test for significance we are concerned to “statistify” the acceptability of potential scientific claims in research practice, then we need to shift focus to what constitutes acceptable behavior on the part of researchers within the (taken for granted) confines of a research community. In that case, the Neyman-Pearson decision theoretic approach (which is purely conventional and can be divorced from the attempt to justify) is worthy of continued development. And if the justificationist overtones are removed, many Bayesian “liberations” and “flexible approaches” can be utilized, especially because the Bayesian approach is already oriented toward description of data rather than toward justification (of a knowledge claim).
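The point that “significance” describes rather than certifies can be seen in a few lines of simulation. The sketch below is my own illustration, with invented numbers: a fixed, trivially small effect passes the conventional z-threshold once the sample is large enough, while the description of the data (an effect of about 0.02 standard deviations) never changes.

```python
# A sketch of why 'significance' is description, not certification: a trivially
# small effect becomes 'significant' once N is large, yet it would never pass
# Berkson's interocular traumatic test.
import math, random

random.seed(0)

def one_sample_z(n, true_effect=0.02):
    # Draw n observations around a tiny true effect (sd = 1), then compute
    # the z statistic against the null hypothesis mu = 0.
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    return mean / (1.0 / math.sqrt(n))

for n in (100, 10_000, 1_000_000):
    z = one_sample_z(n)
    print(f"N={n:>9}: z = {z:6.2f}  'significant': {abs(z) > 1.96}")
# The described effect never changes; only the asterisks do.
```

The asterisks track sample size, not anything Berkson’s interocular test would register.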
The Role of Statistical Inference in Contemporary Physics

To what extent are statistical or other inferential tests utilized in physical theory experiments? Are there any “tests of significance” employed? What is the role of the immense mathematical sophistication found in physics research? Does physics do experiments or demonstrations? We can begin to answer such questions by looking at the actual or “written up” results of studies in physics. Consider what is perhaps the most discussed and cited paradigm study (in Kuhn’s exemplar sense) in physics—the Young two-slit experiment. Clearly this is experimental in Galileo’s sense, because it artifactually constrains a stream of something (photons in a light beam, electrons emitted from some source, etc.) to pass through a barrier to their progress except by going through small slits in the barrier, with their presence on the other side of that barrier recorded on a surface at some distance from that slit arrangement. The purpose of the study is to determine whether the “stream” consists of entities or processes (neutrally, events) that are
particle-like or wave-like in their behavior. This situation is considered, a priori, to “test” whether, say, photons of light are particles or waves. The reasoning is material implication, classical logical if-then: if light is a particle, there will be two separated “clumps” of impacts behind the slits, each representing one or the other slit as a passageway for a particle. If the beam is a wave, then the impacts will show a distribution of peaks and valleys characteristic of waves interfering with one another, as when one drops two rocks into still water. So the logic is binary choice—yes or no, wave pattern or particle pattern. The consistent result is the wave interference pattern. The test of significance employed: the interocular traumatic test of Joe Berkson—the result leaps out and hits you between the eyes—it is either wave-like or particle-like. This is the case no matter what the level of technological sophistication of the study. For example, a paper by Tavabi et al. (2019) reports the results of a modern “precise” experimental version, a Young-Feynman study (now involving the most recent electron-beam microscopy based upon holographic theoretical principles), in nothing but a series of visual images graphing the predicted and actual results for the apparatus manipulations involved. Great mathematical and theoretical sophistication is involved in the description of the situation, and in the construction and implementation of the apparatus, but there is not even a hint of a test of “significance” in assessing the results. Just a basic “Here it is, look for yourself.” So much for the mythology surrounding physics. I have never encountered a physicist who would want to do any sort of statistical procedure to “certify” such results. On the other hand, practitioners in the social domains such as psychology and economics are amazed at how “lax” and (from the standpoint of those steeped in “sophisticated” experimental “design” and data “manipulation” courses) seemingly “unscientific” much physical theory research design actually is. Why this disparity? The issue (or perhaps culprit) is as simple as it is obvious: control. The physicist, whose subjects are nothing but objects which are all totally identical and interchangeable, concentrates on maximum apparatus sophistication to provide a context of constraint that eliminates all possible interpretations save the alternatives the theorist wants to examine. The only issue remaining concerns the robustness of the arrangement—which
is “tested” by exact repeatability of the study. Photons, electrons, etc., are not subjects—they are identical objects. If you have seen a result with one of them, you know what will happen with all of them, provided only that the initial study was well enough constrained to limit the result to the one observed. This is never so for the life sciences. The biological and social domains never have identical objects. They study subjects, who, as agents, are never identical in all respects. All experimental design and statistical testing is an attempt to make subjects as object-like as possible, in order to see if there really are “objective” or rule-governed constraints on their behavior. The social domain is so immensely more difficult to study than the physical that it is not at all surprising that the “Galilean experimental revolution” cannot be expected to occur. Since this inevitable (read: essential) complexity can never be circumvented, it should not be at all surprising that the so-called replication crisis is so widespread in the social domains. It must be realized that this is not solely the result of “incompetent” researchers or those who “cheat” in one form or another because of the all too well-known “publish or perish” situation in academia (and indeed in any competitive market for scarce resource allocation). The failure of exact or “nearly exact” replication happens to the best research and to the best researchers, simply because of the historical component—the inevitable differences—of all subjects. Methodology must admit this, and concentrate on utilizing situations that produce the clearest demonstrations possible. It will certainly not be easy. And it won’t be experimental. One thing we must constantly remain aware of is the extent to which both physics and, say, psychology are, and have always been, really doing “demonstration” studies. What sets the physicist apart is that he or she has available a tremendous arsenal of controls that can constrain a situation down to the point where it is often possible to formulate a simple binary, yes-no question to which the experimental constraints can provide an answer. With respect to the social domain, no such control is either ethical or even possible. Social studies must attempt to discern the rules governing behavior by means that are far less restrictive, literally lacking in constraint, when compared to physical studies. Statistical procedures in such domains function only to check upon the reliability
and robustness of the theorists’ qualitative conclusions. Their import is to aid in the description of the situation involved, not to be “inferential” in and of themselves. Thus, the concept of a test of significance is a contradiction in terms. What these “tests” can in fact do is limited to ascertaining the adequacy and reliability of the descriptive terms or concepts that are being “experimentally” manipulated in a study.
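For readers who want the binary two-slit logic discussed above in concrete form, here is a numerical sketch of the idealized wave prediction. It is my illustration, not anything from Tavabi et al.: the wavelength, slit spacing, and screen distance are invented, and the single-slit envelope is omitted.

```python
# Idealized two-slit interference (far-field): intensity on the screen goes as
# I(x) = cos^2(pi * d * x / (wavelength * L)). A particle prediction would show
# two clumps instead; the verdict is read off by inspection (Berkson's
# interocular traumatic test), not by a significance computation.
import math

wavelength, d, L = 500e-9, 20e-6, 1.0  # light wavelength, slit spacing, screen distance (m)

def wave_intensity(x):
    return math.cos(math.pi * d * x / (wavelength * L)) ** 2

for i in range(-10, 11):
    x = i * 2.5e-3                     # screen position in meters
    bar = "#" * round(40 * wave_intensity(x))
    print(f"{x * 1000:6.1f} mm |{bar}")
# Alternating bright and dark fringes: wave-like, unmistakably so.
```

The pattern either appears or it does not; nothing here would be improved by computing a p-value.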
How Shall We Study Co-occurrence Relationships?

The complex phenomena of life that we are interested in do not allow us to make experiments. Studying the problems of measurement makes it obvious that in such fields we simply cannot measure in the sense in which that term is meaningful in the physical sciences. No matter how sophisticated they appear, the statistical procedures we have available can do no more than describe populations: they can never be inferential in themselves. So that leaves us with a problem that is easy to state and all but impossible to resolve: what is the best manner in which we can study the co-occurrence relationships in the domains of interest? After all, to take the example of comprehending psychological behavior, all of the actual “data” we can find consists of instances of things that co-occur. Our database does not result from any attempt to construct experiments. Instead we are awash with instances of behaviors to which, for better or worse, numbers are being attached. Neglecting the problem of scaling for the number attachments, we often find ourselves asking if we can determine intelligible and meaningful patterns of interaction in the correlations of behaviors. There is, as Cronbach (1957) forced the field to see, a second culture of psychological “research” (one on individual differences) that studies correlations rather than attempting to do “controlled” experiments. Do we have a possibility of explaining things from the study of surface structure correlations—the observed co-occurrences? Or is this, like programs of dispositional analysis (Weimer, 1984, 2021), a pretheoretical anticipation of the later emergence of explanatory theories? And are the statistical procedures of the various correlation coefficients
subject to the same limitations from measurement and scaling theory as the “experimental” approaches? Consider first Cronbach’s classic formulation of the divide between the two “disciplines” within psychology:

While the experimenter is interested only in the variation he himself creates, the correlator finds his interest in the already existing variation between individuals, social groups, and species. ... “Correlational psychology” ... is the study of correlations presented by Nature.... The well-known virtue of the experimental method is that it brings situational variables under tight control. It thus permits rigorous tests of hypotheses and confident statements about causation. The correlational method, for its part, can study what man has not learned to control or can never hope to control. (Cronbach, 1957, pp. 671–672)
Examine these points in more detail. This harks back to the Galilean-Newtonian separation of boundary conditions from the domain of laws in science, i.e., the realm of laws of nature versus the initial conditions of experimentation. It goes back to John Stuart Mill, who pointed out that “waiting” for “natural experiments” takes far too much time and requires luck: one must be at the right time and place to observe the result. The correlational approach denies this entirely, suggesting that “correlations presented by nature” are readily available, at least after the fact, in psychology. It confidently asserts that correlation can study—and obviously “scientifically”—what we cannot hope to control. Is this a middle road between the laws of nature and the boundary and initial conditions? Between experiments and demonstrations? A real way to study individual differences in life and the human realm? While there is every reason to broaden the field of study from a single manipulated treatment effect (the experimenter’s idealized focus) upon a uniform object of study (any given white rat or any college sophomore) to the correlational or co-occurrence search for constructs that have some “validity” as causes underlying the disparate behaviors of disparate subjects—after all, that is what I have argued for throughout in pointing out that subjects are never uniform objects—there are still major problems in the use of “numbers” as measures and in the amount of information
we can correctly get from correlational procedures, and there are problems with respect to the type of “constructs” we can postulate to account for the correlations we find. The issues of measurement we have already addressed in sections above. The issues of construct “validation” would take volumes, but we can point out some representative ones here. This was Cronbach’s claim:

The correlational psychologist discovered long ago that no observed criterion is truly valid and that simultaneous consideration of many criteria is needed for a satisfactory evaluation of performance. The same principle applies to experimentation.… Theoretical progress is obstructed when one restricts himself to a single measure of response (Miller, 1957). Where there is only one dependent variable it is pointless to introduce intervening variables or constructs. Where there are many response variables, however, it is mandatory to subsume them under constructs, since otherwise we must have a separate set of laws for every measure of outcome. (ibid., p. 677)
Cronbach argued that experimenters have no way of classifying and integrating results from different tasks (or reinforcement procedures), and that “We depend wholly on the creative flair of the theorist to collate the experiments and to invent constructs which might describe particular situations, reinforcements, or injunctions in terms of more fundamental variables” (ibid., p. 678). He held that correlational techniques can be employed to determine what factors are actually involved in the constructs, exactly as Paul Meehl (1954) had argued for the superiority of actuarial data over clinical intuition in determining clinical treatments and outcomes. The argument is that these procedures can let us see more adequately what is actually involved in the data domains: “Factor analysis, by substituting formal for intuitive methods, has been of great help in locating constructs with which to summarize observations about abilities. It is reasonable to expect that multivariate treatment of response measures would have comparable value in experimental psychology” (ibid., pp. 677–678). If this could be shown to be the case, we would have a (relatively) algorithmic procedure for determining common constructs in otherwise disparate and unrelated co-occurrence relations. This would give us a procedure for achieving
“sausage machine science.” But at an obvious cost: it can never suggest hypothetical constructs, only supply pure intervening variables or “data summaries.” An algorithm can summarize, but it cannot think. For that reason it can never disclose a hypothesized “real” underlying entity with surplus theoretical meaning and independent existence apart from the data, one that is capable of being a causal factor in the behaviors in question. This is both the strongest and the weakest point of such an approach. While it could, qua algorithmic procedure, effectively “bunch” disparate facts together, it has never been possible to substitute a machine for a scientist. It would be useful to actual science only in “data mining” a domain to pick out things that empirically do in fact co-occur and are, on that basis alone, worthy of continued scrutiny. But it could never create an explanation of phenomena and it could never lead to any scientific revolution in our thinking. It always takes the insight of a researcher—an agent—to determine what is worth studying (or collecting into bunches) in the first place, and that is dependent upon possession of a theory, albeit often no more than a rudimentary “hunch” or guess, that transcends any algorithmic calculation formula. Likewise, it is not an explanation of “banana” that this one here in my hand is one of a similar bunch of things. An adequate explanatory theory is always required to move from mere intervening variable data summaries to constructs with surplus meaning that are relevant to the explanation of the phenomena.
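The distinction can be made concrete. In the sketch below (my example, with simulated data, not Cronbach’s or Meehl’s), a principal component dutifully “bunches” four correlated scores; whether that weighted sum names a real construct is a question the arithmetic cannot even pose.

```python
# A sketch of the point that an algorithmic summary is an intervening variable,
# not a hypothetical construct: a principal component 'bunches' correlated
# measures, but nothing in the arithmetic says whether any real underlying
# entity produced them.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=500)                     # pretend-unknown common cause
scores = np.column_stack([latent + rng.normal(scale=0.5, size=500)
                          for _ in range(4)])     # four correlated 'test' scores

centered = scores - scores.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
first_component = centered @ eigvecs[:, -1]       # the algorithmic 'factor'

print(abs(np.corrcoef(first_component, latent)[0, 1]))
# High correlation -- but the algorithm delivers only a weighted sum of the
# data; calling it 'general ability' (or anything else) is a theoretical act
# the formula cannot perform.
```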
In Defense of Miss Fisbee

Let us continue for a moment on the theme that one must always have an adequate explanatory theory rather than relying upon daisy picking according to some statistical or algorithmic procedure. At the beginning of a famous American Psychological Association presidential address, Meehl (1956) recounted the following anecdote about an introductory student and Meehl’s colleague, the Skinnerian “hard science” radical behaviorist Kenneth MacCorquodale. It presents an all too familiar occurrence in the social domain:
MacCorquodale was grading a young lady’s elementary laboratory report on an experiment which involved a correlation problem. At the end of an otherwise flawless report, this bobbysoxer had written “The correlation was seventy-five, with a standard error of ten, which is significant. However, I do not think these variables are related.” MacCorquodale wrote a large red “fail” and added a note: “Dear Miss Fisbee: the correlation coefficient was designed expressly to relieve you of all responsibility for deciding whether these two variables are related” (Meehl, 1956, p. 263).
Concerned as he so often was with the question of “When should we use our heads instead of the formula?”, Meehl contrasted this anecdote with one in which a counseling psychologist (in this case the famous “dustbowl empiricist” Donald G. Patterson) had told a busy young man that he could have his secretary take the test for him, since Patterson really didn’t care whose “data” it was, as long as he had “quantitative data” to evaluate. (Talk about making subjects into identical objects: here it is.) Such an exceptionally bleak choice—between the devil of Miss Fisbee actually “thinking” for herself and the deep blue sea of “any (even fake) quantitative data at all will do”—was all too common during the period when I was trained eons ago (at Minnesota, by Meehl among others). Consider the framework in which this forced choice dichotomy arose. As hardheaded scientists (i.e., ones who assumed that the “social physics” model was the only scientific approach to psychology), these authors were concerned with how to put numbers on psychological “data,” and have those numbers mean something, i.e., have them actually reflect psychological concepts. They were looking for facts in the psychological domain without any acknowledgment that one cannot have a fact without a theory to tell one what constitutes a fact. They were attempting to do science according to both a presupposed and thoroughly inculcated normal science puzzle solving tradition (in this case behaviorism in one or another variant), and according to then current logical empiricist canons of inductive inference procedure, in which one accumulated facts and then fitted them into law-like generalizations. They were using correlation procedures to gather facts. When a sufficient quantity of facts had accumulated it was assumed that “laws of behavior” would emerge
when one inspected them. Thus, science was an algorithmic procedure so long as one had the right facts. The only excuse one would ever have for “using one's head” was in determining which of the welter of facts were psychological and which should be discarded as not “facts” at all. These theorists, who made the initial distinction between hypothetical constructs and intervening variables, treated everything from a methodological perspective as though it were a pure intervening variable. There was no possibility of surplus meaning, since the statistical procedures exhausted what was involved in bunching together (or correlating) that to which numbers had been assigned. So the disparaged “bobbysoxer” was not allowed to think, but they, as professors, were. They, the anointed experts, had to decide when to use their heads instead of the formula. It was not conceivable that Miss Fisbee might have had a new theory according to which those variables, no matter what the correlation coefficient in this experiment was, were not related in a theoretically motivated fashion. It was inconceivable to MacCorquodale that he should first have asked Miss Fisbee what theory she held that determined those variables were not related, and have failed her only if that theory was untenable on other well-corroborated grounds. While MacCorquodale had an excuse for this attitude—having been trained in Skinnerian scientism—Meehl had less of one. As a professed libertarian, he should have remembered the attitude of an often quoted classical liberal and “conservative” thinker, Edmund Burke. It was Burke who told us centuries ago that he had never seen any plan which could not be “mended” by the observations of those who were “much inferior in understanding to the person who took the lead in the business.” The next time you encounter a Miss Fisbee, I would recommend examining her position before dismissing it due to any alleged superiority of some statistical technique. She may be the next Einstein. Correlation techniques work, if at all, only in a presupposed normal science framework, not in revolutionary periods when the facts change. The coefficient of correlation is certainly an algorithmic assessment procedure, but it was not in fact designed as a substitute for thinking. Use of a statistical correlation procedure does not in fact “relieve one of all responsibility” for making decisions. It relieves one of no responsibility at all. This should have been obvious from the first necessary
responsibility—your theoretical decision or choice—of which statistical procedure to employ in a given situation. Science is functional, not physical. There is no purely physical concept of measurement. Measurement always depends upon a choice of what counts as a measurement, and upon the choice of what (and when) to measure. Algorithms can never turn a result into a “measure,” or into something meaningful. Good, and thus meaningful, science depends upon the adequacy of your choices in determining what questions to ask and how to ask them. There is no choice here at all—you always have to use your head first. The formula is never a substitute for theory, only something to use in the service of theory.
References

Cronbach, L. J. (1957). The Two Disciplines of Scientific Psychology. American Psychologist, 12(11), 671–684. https://doi.org/10.1037/h0043943
Meehl, P. E. (1954). Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. University of Minnesota Press. https://doi.org/10.1037/11281-000
Meehl, P. E. (1956). Wanted—A Good Cookbook. American Psychologist, 11(6), 263–272. https://doi.org/10.1037/h0044164
Miller, N. E. (1957). Experiments on Motivation; Studies Combining Psychological, Physiological, and Pharmacological Techniques. Science, 126, 1271–1278.
Tavabi, A. H., Boothroyd, C. B., Yucelen, E., Gazzadi, G. C., Dunin-Borkowski, R. E., & Pozzi, G. (2019). The Young-Feynman Controlled Double-Slit Electron Interference Experiment. Scientific Reports, 9, 10458. https://doi.org/10.1038/s41598-019-43323-2
Weimer, W. B. (1984). Limitations of the Dispositional Analysis of Behavior. In J. R. Royce & L. P. Mos (Eds.), Annals of Theoretical Psychology (Vol. 1, pp. 161–198). Plenum Press.
Weimer, W. B. (2021). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century—Part 1. Cosmos + Taxis, 9(11+12), 1–29.
7 Statistics Without Measurement
The principles governing the measurement of time and temperature and similarly of all the other magnitudes referred to in physical theory represent complex and never definitive modifications of initial “operational” criteria; modifications which are determined by the objective of obtaining a theoretical system that is formally simple and has great predictive and explanatory power: Here, as elsewhere in empirical science, concept formation and theory formation go hand in hand. Carl G. Hempel
Hempel's (1952) classic monograph on concept formation, in the passage just quoted, acknowledged that measurement is a theoretical endeavor, analogous to the rest of science. As such, it behooves us to ask what science would look like without measurement as it is used in the hard sciences. If we realize that, strictly speaking, there is no actual quantification that can be empirically demonstrated in the social domains, we can go on to ask what role statistical procedures could and should play in a domain without actual quantitative measurement. Is there any role for statistics if we are limited to what is usually (following Stevens, 1951) called nominal and ordinal scaling rather than the physical domains' interval, ratio and absolute
measures? How much information can we gain about either the individual subject or a population of (almost certainly incompletely defined) subjects? What can we do without multiplication and division, when limited to (at best) addition and subtraction? And by the way, what is “information” anyway? To begin to answer, consider some problems of scaling (as mentioned in Chapters 3 and 4, the theory of assigning numbers to “data”), when exemplified in the description of individual differences. Traditional textbook accounts start with the distinction between qualitative variation, differences in kind, and quantitative differences, differences in terms of frequency, amount, or degree (see Ghiselli, 1964). This is actually the distinction between physical and functional. Qualitative (functional) description is termed classification, and quantitative (physical) description is termed measurement, where measurement means the use of numbers that can be manipulated in some way to provide further information about the individuals and their differences. What we have indicated above is that the generic conception of “measurement” is incomplete and ambiguous, and that it is crucial to subdivide quantitative variables into ranked, providing an ordering, and scalar, providing information about frequency, degree, or amount. Ranking does not imply equality of intervals or increments of something, but at best more or less. Scalar variables are either ratio, with known equal intervals and a definite (rather than arbitrary) point of origin; or absolute, in which an absolute zero and unit are both known; or the much weaker interval scale, in which there are just equal intervals between the numbers and an arbitrary point of origin. The problem for psychological scaling is that no psychological variable has a ratio scale (arguments to the contrary by Stevens and the psychophysicists being incorrect because the requirements of quantity cannot be ascertained), and the best we can find is an occasional interval variable, and even then there are usually many qualifications and caveats. Because of these problems, most psychological scaling “information” is actually ordinal. This in turn makes most description, of traits for example, very hard to “measure,” and the problem of construct validity extremely difficult to tackle.
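The notion that a comparison is “meaningful” only relative to a scale's permissible transformations can be made concrete. Here is a minimal illustrative sketch (in Python with NumPy and SciPy, a choice of convenience; the data and variable names are invented): a rank statistic survives any monotone rescaling of ordinal data, while a statistic that presupposes interval information does not.

```python
# Minimal sketch (Python; NumPy/SciPy assumed installed; data invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)        # raw "scores" of unknown scale type
y = x + rng.normal(0, 5, 200)

# Any strictly increasing function is a permissible transformation of
# ordinal data; apply one to the same scores.
x_t = np.exp(x / 10.0)

# A rank (ordinal) statistic is unchanged by the transformation...
print(stats.spearmanr(x, y)[0], stats.spearmanr(x_t, y)[0])   # identical

# ...but a statistic presupposing interval information is not, so any
# conclusion resting on it is not "meaningful" for merely ordinal data.
print(stats.pearsonr(x, y)[0], stats.pearsonr(x_t, y)[0])     # they differ
```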
Nonparametric Statistical Procedures Work with Nominal, Ordinal, and Some Interval Data

Absolutely any statistical test is usable with a ratio scale with a known zero. But there are no real ratio scales in the social or behavioral domains. That is the message of many, as summarized by Trendler above. Thus, it is not actually correct to use parametric statistical procedures, which presuppose genuine quantitative measurement and the applicability of multiplication and division. Textbooks point this out, and then immediately forget all about it by saying, “Most researchers are willing to assume….” That leaves nominal, ordinal, and interval. Data “measured” by nominal and ordinal methods must use nonparametric procedures for statistical analysis—it is never permissible to use parametric tests, no matter how many sugarcoated assumptions are made (or better, prayed for). For interval data, parametric tests can be applied if and only if the relevant assumptions can be satisfied. Table 7.1 summarizes some major aspects of the situation. Back up for a moment to cover some basic definitions. A nonparametric test does not specify conditions about the parameters of the population from which a sample has been obtained (or “drawn”) by some “measure.” Such tests are parameter free, or parameter non-specific. The assumptions underlying such procedures are thus weaker than those of parametric tests. This weakness of assumptions, combined with the advantage of applying to nominal and ordinal scales, is why they are of crucial importance to the social domains. You are not cheating by violating underlying assumptions that cannot ever be fulfilled, as you are when incorrectly applying standard parametric procedures. A parametric statistic requires strong and definite assumptions about the parameters of the population from which a given sample has been drawn. These requirements (about the parameters of the population) for correct usage include: that the individual observations be independent, i.e., that they do not affect any other possible observation that could be included in the sample; that the underlying distribution is normal; that the populations have the same variance; and that the measurement be on at least an interval scale.
Table 7.1 Common scale types, permissible transformations, domains, arbitrary parameters, and meaningful comparisons (after Houle et al., 2011)

| Scale type   | Permissible transformation            | Domain                | Arbitrary parameters | Meaningful comparisons    | Biological examples                        |
|--------------|---------------------------------------|-----------------------|----------------------|---------------------------|--------------------------------------------|
| Nominal      | Any one-to-one mapping                | Any set of symbols    | Countable            | Equivalence               | Species, genes                             |
| Ordinal      | Any monotonically increasing function | Ordered symbols       | Countable            | Order                     | Social dominance                           |
| Interval     | x → ax + b                            | Real numbers          | 2                    | Order, differences        | Dates, Malthusian fitness                  |
| Log-interval | x → ax^b, a, b > 0                    | Positive real numbers | 2                    | Order, ratios             | Body size                                  |
| Difference   | x → x + a                             | Real numbers          | 1                    | Order, differences        | Log-transformed ratio-scale variables      |
| Ratio        | x → ax                                | Positive real numbers | 1                    | Order, ratios, differences | Length, mass, duration                    |
| Signed ratio | x → ax                                | Real numbers          | 1                    | Order, ratios, differences | Signed asymmetry, intrinsic growth rate (r) |
| Absolute     | None                                  | Defined               | 0                    | Any                       | Probability                                |
These requirements are virtually impossible to satisfy with living subjects because, unlike objects, they are never identical, their evolutionary and individual histories being different. The standard criticism of nonparametric procedures is that they “throw out” information available in the sample, compared to parametric methods. This criticism is untenable. The only real issue is which sort of test actually utilizes (at least some) information from the sample in an appropriate manner. The drawback to blanket use of parametric procedures is that their use incorrectly adds in information that is not actually known to apply to the situation. Siegel (1956) put it aptly: if the measurement is weaker than that of an interval scale, by using parametric tests the researcher would “add information” and thereby create distortions which may be as great and as damaging as those introduced by the “throwing away of information” which occurs when scores are converted to ranks. Moreover, the assumptions which must be made to justify the use of parametric tests usually rest on conjecture and hope, for knowledge about the population parameters is almost invariably lacking (p. 32).
So it is crystal clear that if the “weak sister” actually applies, it is more useful than the “strong” procedure that does not apply at all. And there are genuine advantages to using these weak sister nonparametric procedures. Among them: they can be correctly used with an N as small as 6; they can use samples from different populations; they can deal with ranks as well as with numerical more-or-less data; they can deal with purely classificatory (nominal) data; and probability statements pertaining to nonparametric tests denote exact probabilities regardless of the shape of the population distribution involved. Given that numerous reference sources for nonparametric procedures are readily available (e.g., Wasserman, 2006; Hollander et al., 2013), there is no excuse for not employing them for the many cases of nominal and ordinal scaling that the biological, behavioral, and social domains must face.
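To indicate how little machinery is involved, here is a minimal sketch (in Python, assuming SciPy is installed; the ordinal codes and group names are invented for illustration) of an exact nonparametric comparison of two small independent samples:

```python
# Minimal sketch (Python; SciPy assumed installed; data invented).
from scipy.stats import mannwhitneyu

# Ordinal codes (higher = more severe); N = 6 per group already
# suffices, and with no tied values an exact test is available.
group_a = [3, 5, 7, 9, 11, 12]
group_b = [1, 2, 4, 6, 8, 10]

# method="exact" (recent SciPy) enumerates the sampling distribution of
# the rank statistic, so the p-value holds regardless of the shape of
# the population distribution.
res = mannwhitneyu(group_a, group_b, alternative="two-sided", method="exact")
print(res.statistic, res.pvalue)
```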
Generalizability, Robustness, and Similar Issues

Most practitioners in psychology and other social domains will attempt to ignore the implications of the discussion thus far by utilizing one of two avoidance strategies: first, they will claim that although they would love to use nonparametric statistics, such tests are simply not generalizable to the populations they wish to address; and second, they will claim that none of the arguments against parametric statistics carry any weight because parametric tests are known to be very robust, i.e., applicable even though well-known assumptions are violated. Both are arguments from ignorance, and represent forms of wishful thinking. While it is true that nonparametric statistical results are “less generalizable” than parametric ones, one must ask which of two unpleasant choices is preferable: a correctly utilized nonparametric test that does not have “every-where every-when” generalizability, or an incorrectly utilized parametric test that in fact has no generalizability at all. That in a nutshell ends the “generalizability” issue: restricted generalizability that is legitimate triumphs over unrestricted generalizability that is always illegitimate. The issue of robustness runs afoul of empirical problems that are exactly analogous to satisfying the conditions of quantity. Robustness is an entirely empirical issue—it cannot be decided by any a priori arguments in a pure mathematical statistics textbook. One cannot know that a test procedure is robust without doing an immense amount of empirical determination. Since there is little or no literature supporting the empirical study of robustness for parametric tests in these domains, we are in the same position with the concept of robustness as we are with the conditions of quantity. Adequate empirical research simply has not been done. Since this is so, it is merely wishful thinking to assume that parametric tests are somehow more “robust” and therefore satisfactory to employ instead of nonparametric ones.
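Because robustness is an empirical claim, it can at least be probed empirically. A minimal simulation sketch (Python with NumPy and SciPy assumed; the distribution, sample size, and trial count are arbitrary illustrations) estimates the actual Type I error rate of a t-test when its normality assumption fails:

```python
# Minimal sketch (Python; NumPy/SciPy assumed; all quantities invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 10, 5000
false_positives = 0

for _ in range(trials):
    # Two samples from the SAME heavily skewed population: the null
    # hypothesis is true by construction, so every rejection is an error.
    a = rng.lognormal(0.0, 2.0, n)
    b = rng.lognormal(0.0, 2.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

# If the test were robust here, this would sit near 0.05; whether it
# does can only be settled by running the simulation, not by a priori
# argument from a statistics textbook.
print(false_positives / trials)
```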
Back to the Drawing Board, at Least for a While

Are we ready to embrace Chicken Little, running around shouting that the sky is falling, or do we need to put a new roof on our presently leaky house? We know that we have a leaky roof; we do not know that the sky is falling. We definitely need to go back to the drawing board and learn how to construct better roofs, but we do not need to take up posters that read “The end of the world is here.” It would be lovely if we could frame the problem in terms of a simple binary choice: studies of this sort are acceptable, studies of any other sort must be thrown out or done over again. But we are not so lucky: the problem is that we do not know which studies are meaningful and which are not (or are less than optimally meaningful). Measurement requires meaningful scaling procedures. When we do not have adequate scaling procedures, we cannot employ certain statistical procedures without either losing information or rendering the results unintelligible. We are faced with the task of sifting through the studies we think are most relevant to our fields and particular interests and attempting to determine whether they are meaningful (i.e., whether the scaling and measurement are meaningful for the statistical procedures employed) or not.
Testing a Theory in Psychology Is Paradoxical for Those Who Do Not Understand Problems of Scaling and Mensuration

A much-cited paper by Meehl (1967) proposed that there is a paradox with respect to the testing of theories in psychology versus physics. This is his formulation of how testing a theory in psychology is paradoxical: In the physical sciences, the usual result of an improvement in experimental design, instrumentation, or numerical mass of data, is to increase the difficulty of the “observational hurdle” which the physical theory of interest must successfully surmount; whereas, in psychology and some of the allied behavior sciences, the usual effect of such improvement in
experimental precision is to provide an easier hurdle for the theory to surmount (Meehl, 1967, p. 103; italics in original).
Meehl came to this conclusion by contrasting point prediction in physics with the merely ordinal (rather than ratio) differences common in psychology: “The usual situation in the experimental testing of a physical theory at least involves the prediction of a form of function (with parameters to be fitted); or, more commonly, prediction of a quantitative magnitude (point-value)” (ibid., p. 112). Clearly this is a statement that physics involves ratio scaling. But he does not point out that in psychology neither actual measurement nor ratio scaling is ever available, nor does he explore the actual problems posed by the fact that subjects are never the same and thus cannot be treated as physical objects. He simply says: “In psychology, the result of perfect power (i.e., certain detection of any non-zero difference in the predicted direction) is to yield a prior probability p = 1/2 of getting experimental results compatible with T (the tested theory), because perfect power would mean guaranteed detection of whatever difference exists; and a difference (quasi-)always exists, being in the ‘theoretically expected direction’ half the time if our substantive theories were all of negligible verisimilitude…” (ibid., p. 113).
That the attempt to employ directional null hypothesis testing in psychology would yield this result is obvious if one understands the prior sections on the nature of what actual mensuration requires, and on how “experiments” are not available in the biological-psychological-economic domains. The solution, of course, is to recognize the limitations of Fisherian statistical procedures (especially their underlying logic), and to construct more adequate demonstration studies to replace the misguided idea of hard science experimentation. Not much will change until the hard-science-inductive-inference-by-statistical-procedure conception is abandoned. More recent commentators (e.g., Eronen and Bringmann, 2021) rehash Meehl with no improvement upon (or even understanding of) his initial paper.
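Meehl's p = 1/2 claim is itself easy to check by simulation. A minimal sketch (Python with NumPy assumed; all quantities invented for illustration) models a theory of negligible verisimilitude, whose directional predictions are unrelated to the true effects, under “perfect power”:

```python
# Minimal sketch (Python; NumPy assumed installed; quantities invented).
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000

# Real but theory-irrelevant differences (some difference quasi-always
# exists), and a theory whose directional "prediction" is unrelated to them.
true_effects = rng.normal(0.0, 1.0, trials)
predicted_direction = rng.choice([-1, 1], trials)

# "Perfect power" means any non-zero difference is detected; the
# experiment then "confirms" the theory whenever the detected difference
# happens to lie in the predicted direction.
successes = np.sign(true_effects) == predicted_direction
print(successes.mean())   # converges to ~0.5, as Meehl argued
```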
Back to History for a Moment

Consider two well-known positions as a commentary on the contents of this chapter. First is a remark commonly attributed to the eighteenth-century thinker Edmund Burke (and restated famously by Santayana), now a commonplace aphorism: those who do not know history are doomed to repeat it. That is what I have indicated has happened repeatedly in attempting to force the functional fields, especially psychology, to fit the Procrustean bed of the “social physics” model of what science is, and of how measurement fits within it. Here is the same point from historian of science Thomas Kuhn: “How could history of science fail to be a source of phenomena to which theories about knowledge may legitimately be asked to apply?” (Kuhn, 1970, p. 9). The physical or “hard” sciences made progress early on because of a fortuitous accident: the happenstance that commonly used measures satisfied restrictive (usually ratio) scaling requirements, and thus the mathematics readily available at the time, such as ordinary arithmetic and Euclidean geometry, was applicable to those measurements in meaningful fashion. The measures used did indeed provide an informative mapping or modeling of the theoretical constructs developed by early science into an empirically observed domain. No one thought about this as an accidental coincidence, and indeed it would initially have been difficult to conceive that it could be other than the only way measurement can occur. Centuries later, when gamblers found Bayes with his uniform balls in uniform urns to be helpful in their wagers (on uniform cards or dice), no one had any idea that probability required an even more restrictive scale, a ratio scale with an actual absolute zero point, in order to be meaningfully applied. It is thus not at all surprising that the mensuration procedures that had worked so well with the uniformly indistinguishable objects of physical domains, such as billiard balls, should have been uncritically applied to the infinitely variable subjects of the biological and social domains. After all, there is just one scientific method, right? Even later statistical researchers who recognized that subjects were different, such as R. A. Fisher (Young & Weimer, 2021), honestly felt that their version
of methodology was “the” scientific method (in his case, null hypothesis testing for “probable” significance) that applied everywhere, to all science. When problems arose—obvious disconnections between functionally specified theoretical entities and the empirical domains to which they were to apply—it was at first assumed that indeterminism had reared its ugly head and complicated the Laplacean picture of ideal knowledge as inexorable lawfulness, in favor of “probable” regularities that just needed more powerful statistical procedures (such as tests of the significance of the probability of results) to tease out or unlock the secrets of nature. Thus, as with the transition from phenomenal thermodynamics to statistical mechanics, it was thought that what was required was the development of inductive methodologies (logics) based upon the “laws” of probability and the early study of quantification (as in Whewell, 1840, who, incidentally, also coined the term “scientist”), perhaps aided by the method of multiple working hypotheses (à la Chamberlin, 1890). The Cambridge School work on inductive “logic” was then incorporated into logical positivism in the 1920s (and is noted in the Appendix chapters). But the point is that at first no one, least of all these methodologists, seemed to realize that a theory of mensuration was needed to complement a theory of inference. It was a milestone recognition of the problems when, in 1932, the British Association for the Advancement of Science established a committee (de facto headed by the physicist N. R. Campbell) to investigate the very possibility of genuine (i.e., physics-like) scientific measurement in domains that studied subjects instead of objects. Their conclusion, written by that physicist, said “No.” That is the context in which Smitty Stevens changed the debate by redefining what constituted the theory of measurement into the study of scaling theory, differentiating between ratio and other scales for the average researcher in psychology and (indirectly) all the other social domains. This approach, building on a sentiment already clearly expressed at the time (by, e.g., Johnson, 1936), said that much can be learned without measurement in the physicist's sense. N. R. Campbell (1920, 1928) was indeed correct: psychology does not measure—it cannot satisfy even his first “law,” which requires equivalence between magnitudes (see Trendler, 2009). Subjects are not, and cannot be, constrained like the objects of physics, and one
can never empirically corroborate the requirements of quantity for them. We cannot impose the constraints that physicists do in the inanimate realm upon the “objects” which we study. The life and living sciences—the biological and social domains—are in this sense fundamentally different from the so-called hard sciences, in which what is studied are identical and totally interchangeable objects. That this fundamental difference (that subjects can never be reduced to objects) requires a very different conception of the nature of science, of the methodology of scientific research, and of acceptable procedures of mensuration and statistical inference is undoubtedly one of the most important results of the last hundred years' worth of research in the philosophy of science, in the methodology of scientific research, and in the study of the origin of life. What is amazing is that it is almost always ignored in the fields it should have transformed.
References

Campbell, N. R. (1920). Physics, the Elements. Cambridge University Press.
Campbell, N. R. (1928). An Account of the Principles of Measurement and Calculation. Longmans.
Chamberlin, T. C. (1890). The Method of Multiple Working Hypotheses. Science, 15(366). https://doi.org/10.1126/science.ns-15.366.92
Eronen, M. I., & Bringmann, L. F. (2021). The Theory Crisis in Psychology: How to Move Forward. Perspectives on Psychological Science, 16(4), 779–788. https://doi.org/10.1177/1745691620970586
Ghiselli, E. E. (1964). Theory of Psychological Measurement. McGraw-Hill.
Hempel, C. G. (1952). Fundamentals of Concept Formation in Empirical Science. Foundations of the Unity of Science (Vol. II, No. 7). University of Chicago Press.
Hollander, M., Wolfe, D. A., & Chicken, E. (2013). Nonparametric Statistical Methods. John Wiley & Sons.
Houle, D., Pélabon, C., Wagner, G. P., & Hansen, T. F. (2011). Measurement and Meaning in Biology. The Quarterly Review of Biology, 86(1). https://doi.org/10.1086/658408
Johnson, H. M. (1936). Pseudo-Mathematics in the Mental and Social Sciences. American Journal of Psychology, 48, 342–351.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (Rev. ed.). University of Chicago Press.
Meehl, P. E. (1967). Theory Testing in Psychology and Physics: A Methodological Paradox. Philosophy of Science, 34(2), 103–115.
Siegel, S. (1956). Nonparametric Statistics. McGraw-Hill.
Stevens, S. S. (1951). Handbook of Experimental Psychology. Wiley.
Trendler, G. (2009). Measurement Theory, Psychology and the Revolution That Cannot Happen. Theory and Psychology, 19(5), 579–599.
Wasserman, L. (2006). All of Nonparametric Statistics. Springer Science.
Whewell, W. (1840/2014). The Philosophy of the Inductive Sciences. Cambridge University Press (online publication 2014). https://doi.org/10.1017/CBO9781139644662
Young, N., & Weimer, W. (2021). The Constraining Influence of the Revolutionary on the Growth of the Field. Axiomathes. https://doi.org/10.1007/s10516-021-09584-1
8 Economic Calculation of Value Is Not Measurement, Not Apriori, and Its Study Is Not Experimental
Economists make predictions only to show we have a sense of humor. If we knew the future course of the economy we’d all be rich. We’re not. Walter E. Block
The field of economics has not miraculously escaped the measurement predicament that plagues the biological and other social domains. Despite intense desire and lobbying efforts on the part of “social physicists” from Auguste Comte onward, the field has experienced no more of a “Galilean revolution” of quantification than psychology has. Attempts to quantify economics have a long history—after all, Sir William Petty, founder of econometrics, was a senior fellow in the Royal Society to a young upstart named Isaac Newton. The two fundamental issues have always been, first, the nonphysical nature of functionality, which has never allowed the positivistic attempt to “measure” and do “experiments” upon economic variables to succeed; and second, the unavailability of scaling (ratio and absolute) comparable to that available in domains which deal only with uniform, invariant objects. The economic subject is no more an identical object than the psychological one.
Complicating this, the history of the field shows a series of splits between opposing camps: “Objectivists” who continue to pursue rigorous measurement and actual experimental manipulation, on one hand, and on the other, so-called “subjectivists” who argue that subjects with unique histories and hence unique individual values can never be measured but at best rank ordered. At somewhat cross purposes to this fundamental split is another one, over whether or not the field can ever legitimately move from “micro-” economic theory and systematization to “macro-” economic theory and prediction. Still a third overarching division is superimposed upon the above dichotomies: whether to take a prescriptive approach to policy considerations, based largely on macroeconomic phenomena and the preference for government interventions, or instead to consider the complexity of social and economic organization to be spontaneously organizing and thus not in need of (or equally, not benefiting from) such supervision and intervention. Understanding the requirements of quantification and the nature of experimental constraint is crucial to all these (and other) issues. Here, we concentrate upon the limitations on measurement and experimentation, and leave the superimposed issues for later discussion. Let us begin by noting how one prominent school of economic thought has consistently emphasized the limits of mensuration in the domain.
Austrian “Subjectivism” Begins with the Impossibility of “Physical” Mensuration

Ludwig Mises' (1966) monumental Human Action summarized in a few brief remarks much of what we have concluded in prior sections about what can be measured in human subjects. This is as directly to the point about quantification in economics as one can get. It “must not be confused with the quantitative methods applied in dealing with the problems of the external universe of physical and chemical events. The distinctive mark of economic calculation is that it is neither based upon
nor related to anything which could be characterized as measurement” (p. 209). Mises correctly argued that this is because economic action is purely functional and goal-oriented. When used, numbers and calculations do not refer to measured quantities but only to functional concepts: expected outcomes (exchange ratios) based upon individuals' expectations of future market transactions, which is what all economic action is directed to achieving. “We are not dealing at this point of our investigation with the problem of a ‘quantitative science of economics,’ but with the analysis of the mental processes performed by acting man in applying quantitative distinctions when planning conduct. As action is always directed toward influencing a future state of affairs, economic calculation always deals with the future” (ibid.). Following an earlier approach to scaling theory, Mises distinguished between ordinal and cardinal numbers. Ordinal refers to rank (ordering), denoting greater or lesser than; cardinal refers to quantity (thus usually ratio), if the conditions of quantification are satisfied. Every action can make use of ordinal numbers. For the application of cardinal numbers and for the arithmetical computation based on them, special conditions are required…. Economics is essentially a theory of that scope of action in which calculation is applied or can be applied if certain conditions are realized. No other distinction is of greater significance, both for human life and for the study of human action, than that between calculable action and non-calculable action. (ibid., p. 199)
Economic activity depends upon an individual actor valuing a certain end and being confronted with varying means to achieve (or at least to strive for) those valued ends. All that can be “calculated” are relative orders, or the ranking of one means in comparison to others. Thus values are not measurable by ratio or most interval scaling procedures—they are ordinal only. Economics is about the disparity of values which individuals attach to means to their ends: “People buy and sell only because they appraise the things given up less than those received. Thus the notion of a measurement of value is vain.… Values and valuations are
intensive quantities and not extensive quantities. They are not susceptible to mental grasp by the application of cardinal numbers” (ibid., p. 204). With remarks such as these by Mises, and the systematic reiteration, clarification, and elaboration of these points by Hayek and other prominent Austrians in many publications, one must ask why this fundamental limitation has been ignored by the other “schools” of economists. No doubt one reason has been the failure to separate the obvious truth of the limits of measurement and experimentation from the stigma of “subjectivism” (which, as we will see below, is for the Austrian approach not “subjective” at all, but rather individual and intersubjective, and is properly called individualism—or methodological individualism—as detailed in Chapter 10). Probably a more important reason was Mises' attempt to construct an axiomatic and a priori conceptual framework in which all of economics was alleged to be contained. This confusion of the ancient Greek (Euclidean axiomatic) approach to geometry as an ideal instance of science, when the rest of the scientific universe had long since abandoned axiomatics for postulational and hypothetico-deductive theorizing, made it easy for opponents to view the Austrian approach as hopelessly outdated, as an attempt to go back to an abandoned conception of “Euclidean” science. In such an atmosphere, it was all too easy to assume that statistical sophistication was all that was required to make economics “an experimental science.” But it should be obvious that in the domain of the activity of agents, there is nothing that is constant except changing patterns of behaviors and expectations, and thus there can never be any determination of the constancy required to underlie actual measurement. All that we can hope to achieve is an overview of the historical development of these patterns, and, as Hayek emphasized, we must be content with trying to discern the overall patterns of individual behavior, and abandon the “social physics” attempt to have measured point predictions. One can certainly collect data that are “economic,” but they are the raw material for pattern discernment (and, hopefully, for a theory providing general pattern prediction), not the basis for a priori “laws” of economic behavior. Objects may follow laws, but subjects are instead bound by rules, and rules can be discerned only in patterns of activity, not in point predictions or a priori specification.
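Mises' restriction of value to ordinal comparison can be made concrete. A minimal sketch (plain Python; the goods, numbers, and bundles are invented for illustration) shows that two utility assignments representing the very same preference ranking need not agree once one performs cardinal arithmetic on them:

```python
# Minimal sketch (plain Python; names and numbers invented): two utility
# assignments that encode the SAME ordinal ranking of goods, where u2 is
# a strictly increasing transform of u1.
u1 = {"A": 1, "B": 4, "C": 5}
u2 = {good: 2 ** value for good, value in u1.items()}  # monotone transform

# Both assignments rank the goods identically: A < B < C.
assert sorted(u1, key=u1.get) == sorted(u2, key=u2.get)

bundle_x, bundle_y = ["A", "C"], ["B", "B"]

# But "total value" comparisons flip between the two representations,
# so summing (cardinal arithmetic) extracts information the ordinal
# ranking never contained.
print(sum(u1[g] for g in bundle_x), sum(u1[g] for g in bundle_y))  # 6 < 8
print(sum(u2[g] for g in bundle_x), sum(u2[g] for g in bundle_y))  # 34 > 32
```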
Behavioral Economics Is Just Applied Social Psychology

As such, it is obvious that it is subject to the same lack of corroboration of any of the empirical conditions of quantity, and should thus equally be limited to nominal and ordinal scales, and thus also to nonparametric statistical procedures. In many cases, interval scaling may actually be available, and in such instances parametric procedures would probably be appropriate. But since the field is almost totally based upon behaviorism, it is subject to all the conceptual and empirical refutations of extant behaviorist theories (explored in other locations). Furthermore, it is often subject to the obvious criticism that overzealous application of constraints (analogous to those of the Skinner box) may produce “replicable” results, but only at the expense of failing to be relevant to how human agents behave if left unconstrained. While the addition (and application) of results from psychology to economics should certainly be beneficial (to both disciplines), economists should be careful that they do not adopt an outmoded conception of psychology that denies the concept of agency upon which their entire field is based. They need to understand what Jerry Fodor told the behaviorists back in the 1960s who thought that “simple” experiments based upon reinforcement theory could explain language: what a subject does, and how, with hands tied behind the back (as the constraints of behavioristic theories require) is neither what that subject is likely to do with both hands free, nor the way the subject will decide to do it when free.
What Has Been Called “Experimental Economics” Is Actually Constrained Demonstration Studies

There has been a recent revival in the attempt to do experiments in economics. By placing small groups of subjects in artificially constrained situations, researchers have attempted to delimit how actors choose to
pursue goals in economic situations. Aping the “social physics” model, the Nobel Prize committee has given prizes to researchers in bounded rationality and game theory such as Selten (1988, 1999; Gigerenzer & Selten, 2002), and to “experimentalists” like Smith (1976, 1982, 2003) for constructing simplified and constrained (i.e., laboratory) situations to assess the performance of particular economic “institutions” (actually, laboratory-simplified models of social institutions). These attempts to put economic actors into Skinner box situations in order to impose enough constraint to discern “basic laws” of economic behavior are likewise just applied social-psychological research paradigms, and (when well done) constitute “clear cases” (Joseph Berkson's “hit you between the eyes” situations) or demonstrations of phenomena instead of real experiments. If their authors were simply to acknowledge this fact, it would be wonderful. Then we could perhaps be spared the silliness of attempts to make economics “experimental” and “scientific,” and of the search for “laws” of economic behavior. Until that occurs, the specter of scientism tarnishes all such research, especially when authors approvingly cite quantification and the search for “laws” as a desirable approach to follow. Smith is a case in point. His empirical research (which is interesting and valuable because it shows the falsity of Mises' approach: Smith exhibits cases in which prior “certain” assumptions about human action turn out to be empirically falsified, and thus they cannot be apodictic or axiomatic at all) consists in going into a laboratory and looking at what a sample of people actually do when confronted with an economic task or “decision.” This “going into a laboratory” is inevitably characterized as “experimental”—when it is instead correctly described as an empirical situation demonstrating an outcome. Like psychology, economics is empirical (thus certainly not a priori or axiomatic or apodictic) but not experimental. Lab demonstration is essential—but that alone does not make it experimental (or rational; see Note 1).
This Is Your Problem as a Consumer of “Scientific” Knowledge

Thus far, we have discussed the problems of mensuration from the standpoint of the researcher or would-be experimenter. But in the marketplace of ideas that is the scientific endeavor, as in the economic order, there are both producers and consumers. It is clear that producers, i.e., the researchers, must become more aware (or, for many, at least begin to be aware) of the problems of scaling and measurement, and must adjust their research strategies and designs to accommodate the lack of actual measurement and the unavailability of hard science experimentation. But the problem also belongs to you, the consumer of “human science” research, in the interpretation of the results of studies in the nonphysical domains. In order to determine the extent to which the results are meaningful, you the scientific consumer must also be aware of the limitations and caveats that these sections have discussed. You cannot simply assume that, because a study has been published, the researcher(s) whose work you are reading have based it on an adequate understanding of the nature and limitations of mensuration and experimentation. You must examine all aspects of the study: what questions it attempted to answer, the design of the data collection, the extent to which the data collected are or are not subject to measurement, and the extent to which the constraints of the experiment (the extent of the control of the subjects) have an effect upon the way the subjects would behave if they were acting normally, when unconstrained. Just as we lose measurement information when we do not correctly scale the numbers we employ, we lose information (about how individuals would behave in the natural environment) when we impose too many and too restrictive constraints in order to attempt to do an “experiment.” It is up to the scientific research community to interpret the results of our studies. Just because a well-known “experimenter” says his or her results are wonderful and scientific does not mean that they are. And just as important, we as readers must require more sophistication and understanding on the part of the researchers, and we must require the same of ourselves in order to interpret their studies.
Scaling Procedures Crucially Influence the Progress of Science

The literature of both the hard and soft sciences is constantly filled with laments about the seemingly slow progress of fields other than the traditional hard “physical” sciences. Positivists from Comte and Rousseau through the logical positivists of the 1930s to the “hardheaded” experimentalists in the social studies today have lamented the slow progress of their fields, and have constantly attempted to render them “scientific” on the “social physics” model of aping the methodology of physics. Other sections detail major reasons why this difference occurs, centered around the nature of spontaneously ordered complex phenomena, for which only explanation of the principle and not explanation of the particular can be obtained. To conclude the discussion of mensuration, it is necessary to add a second major difference, seemingly invariably unnoticed by practitioners advocating either hard or soft approaches. This is the fact that the scaling procedures available for the data within a domain impose fundamental limits on the extent to which the deductive unification of the field (to use the terminology of Körner, 1966) can be realized. The idealization of science as the deductive unification of experience upon the basis of the experimental determination of its facts and regularities can apply only to fields in which all the empirical results comply with the requirements of ratio scaling. Physics from the Renaissance onward has been able to take advantage of the happy accident (and it must be stressed that it is indeed just an accident) that the spatial domain is amenable to the forms of mathematical analysis based upon ratio scaling, and that that scaling technique was chosen to study what we usually call the physical realm. Galileo could have rolled 10 million balls down the same number of inclined planes and gotten very little knowledge from his studies had the physical domain been limited to scaling techniques that were only nominal or ordinal. And one should note that this would have prevented him from discovering even adequate explanations of the principles involved, as well as from determining the prediction of particulars. Scaling procedures, which determine the type of information that can be extracted in meaningful fashion from
data, determine the nature and extent of progress in a domain. When we become “empirical” and assign numbers, we have no knowledge at all unless the numbers are used within the framework of proper scaling. No one has done that in the social sciences. Having ratio scales, and the mathematics applicable to them, enables physicists to come back to long dormant issues or problems when new theory or technique makes them amenable to mathematical exploration and solution. Consider the currently hot topic of black holes in cosmology. First appearing in 1916 as singularities (in a strictly mathematical sense) of the general theory of relativity, they have progressed from a mathematical curiosity to the basis of the holographic approach proposed by 't Hooft and Susskind, and to the exotic stuff of current science fiction, along with the “wormholes” of time travel (themselves misinterpretations of the Einstein-Rosen bridge at the subatomic level). The progression from the original singularity (as a limit of the initial equations), through the 1950s notion of collapsed stars that captured and recalled their own light (for which J. A. Wheeler, always on the alert for a catchy turn of phrase, took the term “black hole” from a student), through the discovery of quasars and pulsars, into the Bekenstein bound and then Hawking radiation, and then into an amalgamation with string theory and recent speculations on loop quantum gravity, has been made possible by refined observation techniques providing more and more ratio scaled data from which to derive, with the aid of more and more refined theory, a whole new range of potentially testable observational outcomes. This is a remarkable achievement. Nothing like it is to be found in any social domain, nor will it ever be. Limited to ordinal and interval data, with only a hint of ratio data transformations available from (often questionable) techniques, psychology and social domains such as economics have been limited to using available physical theory and its “facts” as boundaries or constraints upon their theories. Theory in the social-psychological areas, while it ranges over numbers from its data collection procedures, cannot be supplied with the same basis of experimental procedures available to physics. We simply cannot have as deductive consequences of any
psychological or economic theory anything comparable to the point predictions of the ratio-based fields, or laws as inexorable causes.
Probability Theories Help Nothing Here

Probability theory cannot overcome this limitation. The concept of probability, no matter what theory it is based upon, cannot overcome the essential limitation that there can be no deductive unification of experience resulting from inherently probable statements or formulations. Probability “numbers” admit only of pattern descriptions, never point predictions. To see this, one need only consider this time-worn joke:

A man who is terrified of flying discusses his fears with a friend. When asked what he is afraid of, the man replies that he is afraid that there will be a bomb on the plane, and that it will be blown up and he will be killed. After a moment's thought, his friend proposes what she says is a “surefire” solution to this problem. She tells her friend to make his own bomb, and to take it on the plane with him. Her “surefire” reassurance is based upon the fact that the probability of any bomb being on a plane is very small. Therefore, she concludes on the basis of probability theory, the probability of there being two bombs on the same plane is so near zero that nothing will happen to her friend.
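The flaw is easy to state formally. A minimal sketch in standard probability notation (the event names are an illustrative convenience): let B1 be “the man brings a bomb” and B2 be “someone else brings a bomb.” If the two events are independent, then:

```latex
% Independence: his own luggage cannot change anyone else's behavior.
\[ P(B_2 \mid B_1) = P(B_2) \]
% The friend's "surefire" argument instead computes the joint probability
\[ P(B_1 \cap B_2) = P(B_1)\,P(B_2) \approx 0, \]
% which is indeed tiny but irrelevant: once he has brought his own bomb,
% the conditional risk of a second bomb is exactly what it always was.
```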
Anyone who “deduces” that this would make one safe when flying would be better off with other friends. We must pay more attention to scaling techniques and to the role of probability statements before drawing conclusions such as the one found in this joke. As the physicist Max Born cogently noted, there is an absolutely insuperable gulf between a probability, no matter how infinitesimally small, and actual zero. In similar fashion, there is an insuperable gulf between what is meaningful in ratio scaling and what is meaningful in nominal, ordinal, or interval data (to consider only the most frequent types). We cannot bridge an insuperable gulf of either sort, no matter how much we want to or how hard we try. Learn to do something else instead. The social and life sciences have got to come to grips with the limitations on the applicability of ratio scaling in their domains and explore
more adequately what information can be obtained by other means. The “rate of progress” in those areas is not going to increase until we take non-ratio data seriously and learn to get the maximum amount of information from it. Combined with explanations of the principle for complex phenomena, that is really all we can hope for. Functionality cannot be studied in the same way as physicality. Better to have less knowledge because we are constrained to use the correct data analysis techniques for ordinal and some interval data than to have none at all because we have mistakenly used techniques suitable only for ratio scaling procedures.
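What “getting the maximum amount of information” from ordinal data can look like is illustrated by rank-based effect sizes. A minimal sketch (plain Python; the function name and ratings are invented for illustration) computes Cliff's delta, which asks only how often one group outranks the other, presupposing no intervals, means, or ratios:

```python
# Minimal sketch (plain Python; data invented): a purely ordinal effect
# size. Cliff's delta estimates P(x > y) - P(x < y) over all cross-group
# pairs, so only the ordering of the observations is ever used.
def cliffs_delta(xs, ys):
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Hypothetical ordinal ratings from two conditions; a positive value
# means the first group tends to outrank the second.
print(cliffs_delta([2, 3, 3, 4, 5], [1, 2, 2, 3, 3]))   # 0.6
```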
Human Action Is Not Given Apriori

“Epistemology by its very meaning presupposes a separation of the world into the knower and the known, or the controller and the controlled. That is, if we can speak of knowledge about something, then the knowledge representation, the knowledge vehicle, cannot be of the same category as what it is about” (Pattee, 2012, p. 279). There are necessary dualisms forced upon us by epistemology, and that between the knower and the known is definitely fundamental. Recall that Mises said that our knowledge of economic action is a priori, that it is contained in the meanings of the terms we use to describe behavior (in this particular case, as economic action). If this were true, there would be no separation between the knower, as the subject of conceptual activity utilizing these terms, and that which is alleged to be known (the terms utilized), because both would be in the same realm of rate-independent conception, without the possibility of being tied down to an independent reality. Einstein discussed the equivalent problem a century ago in the context of the choice, of crucial importance to physics, of which (conceptual) geometry is the best description of (empirical) reality: How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things?
The answer to this question is, briefly, this: as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. (1953, p. 189)
Mathematics exists in the realm of explicit or conscious reason—the rate-independent realm of conception—and as such is inherently syntactic and not semantic: by itself it is meaningless and devoid of empirical content. Its role lies in the expression, through syntactic structuring, of semantic content. Syntax (which is what mathematics is) is the vehicle by which meaning or semantics can be manifested. But it is not therefore the meaning itself. Human reason, as Einstein clearly saw, is not “able to fathom the properties of real things” without taking experience into account. The way human conception does so is by conjecturing a theory, which must postulate the connection between thought and an independent reality. Praxeology, like pure mathematics, does not refer to reality. Mises, faced with an awareness of the distinction between physical and functional realities, and with the realization that the economic realm was not and could never be physical, was unable to make any headway toward understanding how economics as a functional realm is a higher order constraint (as Polanyi used the term) acting upon physical behavior through the mechanism of downward causation. He thought all functionality was beyond any empirical aspect of science, and thus, seeing no other choice available, Mises chose to regard the functionality of economics as a priori indubitable truth. That choice renders economics a merely conventional instrument, an after-the-fact rationalization, a pragmatic device to restate already available definitional fusions and, very likely, confusions. That choice was a bad one. It is why the “Austrian approach” was not taken seriously as an alternative to the “empirical” (actually, pseudo-empirical) framework of Keynes in the twentieth century. Defenders of the Austrian approach have only recently begun to address the issue, attempting to return to an empirical and naturalistic rather than an aprioristic framework (e.g., Vanberg, 2004), though unfortunately still holding the physical science model of explanation in too high a regard.
Productive Novelty Cannot Occur in an Apriori System

Earlier, we noted that the existence of the mere phenomenon of novel behavior—what the linguist, restricted to language, calls linguistic productivity or creativity—suffices to refute any a priori conception of either knowledge or behavior, because no aprioristic specification could ever list all possible novel instances (indeed, novel sentences, and novel actions, can and do occur despite the fact that they have never occurred before in the history of the human species). This is easily noted in the common linguistics examples. The transformational revolution in linguistics brought to the fore something else that was obvious but never previously discussed in any theoretical framework: the problem of deep structural ambiguity (see Chapter 14). Before that period, only surface ambiguity (which can be disambiguated by parsing a sentence into its surface phrase components) was recognized. Deep structural ambiguity requires one to look back over the derivational history of the utterance in order to disambiguate it. Examples such as:

The shooting of the hunters was terrible.
Praising professors can be platitudinous.
clearly show that meaning in any surface form is ambiguous (indeterminate or underdetermined) in the absence of knowledge of the structural derivation that underlies the intended interpretation. You do not know whether the hunters were shot, in the first example above, or were themselves poor at shooting something else, unless you know how the sentence was intended, which is to say, what the deep structural derivation of the utterance was. You must understand the pragmatic framework and the prior context in which it is embedded in order to know which meaning is intended by the speaker. Contexts determine meaning; it is not determined by isolated individual words or by the equally isolated idea of “concepts” that words somehow instantiate.
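The point that one surface string can carry two distinct derivational histories can be put in data-structure form. A minimal sketch (plain Python; the nested-tuple representation and its labels are an invented convenience, not a serious grammar):

```python
# Minimal sketch (plain Python; representation invented): one surface
# string, two distinct underlying derivations. Nothing in the string
# itself selects between them; only the derivational history does.
surface = "the shooting of the hunters was terrible"

# Reading 1: the hunters were shot ("the hunters" as object of "shoot").
reading_object = ("S", ("NP", "someone"), ("VP", "shoot", ("NP", "the hunters")))

# Reading 2: the hunters shot badly ("the hunters" as subject of "shoot").
reading_subject = ("S", ("NP", "the hunters"), ("VP", "shoot", ("NP", "something")))

# Identical surface form, non-identical deep structures.
assert reading_object != reading_subject
```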
Creativity Is Tied to Ambiguity

Human action, inevitably and simultaneously embedded in a rich network of contexts, is invariably deep structurally ambiguous because of that embedding. Whether the same physical movement is an economic act of, say, “cashing a check,” or a psychological act of “showing hostility toward one's mother,” or a social act of “writing a friendly greeting” is indeterminate until one has knowledge of the derivational history of the behavior involved (see Weimer, 2021, 2022, for fuller discussion). Despite the definitional fusions Mises and his followers use, actions are inherently deep structurally ambiguous with respect to physical behavior, and cannot therefore “definitionally” or “logically” specify or delimit an infinitely extended domain of physical behaviors. The praxeological approach, identifying physical movements with predetermined economic action concepts or definitions, has no mechanism to deal with either novelty or this sort of inevitable ambiguity. It remains a pre-evolutionary and static account of a fundamentally disequilibrated dynamic realm. Since the “Austrians” were the foremost defenders of the dynamical nature of economic activity in the twentieth century, this is as paradoxical as it is unhelpful. Praxeology was a response to the inevitable gulf between the physical realm of determinate “laws of nature” as every-where every-when inexorability, and the intentional-teleological-meaningful rule governed realm of functionality. But it was inadequate to both domains of existence. There are no every-where, every-when laws of human action (a priori or otherwise)—there are only rules of behavior that are probabilistic, corrigible, correctable by empirical study, and in the real world often violated. There are no point predictions or mensurations based on ratio or absolute scaling properties in functional domains, only patterns of (theoretically specified) classes of behaviors or actions. One cannot salvage the “social physics” model for the functional realm by moving to a closed system of a priori, and thus fixed forever, definitional fusions. The creativity and ambiguity of functional action cannot be contained in aprioristic schemes alone. There is genuine novelty and emergence (which would register as mere “error” in measurement) in the purely physical universe, and it occurs equally in the functional domain of action. Our novelty is based upon
the compounding of errors or misrepresentations of "perfect" mensuration and flawless behavioral responding. It is our fallibility that gives us freedom and creativity. We can make this point by an argument parallel to the argument for the impossibility of entailing laws explaining biological evolution and diversity. The biosphere is unpredictable in the sense that it cannot be anticipated: as the biologists discussed in Chapter 4 argued for the origin of species, it is not prestatable where (or equally, what) will constitute an adjacent possible econiche into which (or equally, from which) a new organism will emerge. Organisms co-create their own constantly evolving forms and the evolving econiches in which they exist. Similarly, the adjacent possible economic niche is enabled (but never entailed) by the occurrence of economic novelty and action. An elementary discussion of this is found in Kauffman (2019; see the epilogue). Economic activities—and hence human actions—explode unpredictably into existence all the time. The World Wide Web enabled—but did not cause the existence of—eBay and Amazon as new kinds of retailers, and they in turn created content on the web. This in turn enabled—but did not directly cause—new search services such as Google to come into existence. Goods and services are a context of enablement (but not of billiard-ball causation) for the next invention or novel good to emerge. No a priori deductive systematization can ever entail (directly determine or cause) this emergence as a lawful consequence. Functionality is never causally determined by laws of nature.
Note

1. Many economists regard the field as having "moved on" from the "ancient" issues of this chapter, now looking at mathematical models such as game theory and "rational choice" studies as legitimately experimental and as involving nothing but ratio-scaled numbers. They see the complexity of the subject matter as easily amenable to abstract (mathematical) models, and they disregard measurement and scaling issues entirely. As I was told by one such individual, economists see complexity as something that abstract models are good at reducing, and they will use abstract
models without caring at all about what is "real," so long as the predictions are borne out. Thus has arisen the concern with building a "rational action model" to account for choice behavior in limited-choice games and "economic" challenges set up in the laboratory. There are several problems with this optimistic view. First, there has been no attempt to address the scaling and measurement issues at all. Second, these studies remain "demonstrations" rather than experiments in any hard-science sense, and thus any "significance" of results would be descriptive only, showing that given situations created descriptively separate results. Third is the Jerry Fodor problem: there is no guarantee that the subjects would behave the same way in the "wild," which is to say in the real world, if similar circumstances arose and they were not performing for an experimenter in a college building. But most important is the presumption that "rational" economic behavior is always conscious, and that the motivations and causal nexuses of the behaviors are described correctly in natural-language reports by the subjects (or even the researchers). As we detail in the appendix chapters, that Cartesian model of explicit rationality is untenable. Subjects are never able to fully articulate the tacit processes (motivations, etc.) underlying their actions. Indeed, as the last chapter details, rationality is neither fully explicit nor instantly accessible.
References

Einstein, A. (1953). Geometry and Experience. In H. Feigl & M. Brodbeck (Eds.), Readings in the Philosophy of Science (pp. 189–194). Appleton-Century-Crofts (Originally in A. Einstein, Sidelights on Relativity. New York: E. P. Dutton, 1923, 27–45).
Gigerenzer, G., & Selten, R. (2002). Bounded Rationality: The Adaptive Toolbox. MIT Press.
Kauffman, S. A. (2019). A World Beyond Physics. Oxford University Press.
Körner, S. (1966). Experience and Theory. Humanities Press.
Mises, L. (1966). Human Action (3rd ed.). Contemporary Books (now Liberty Fund).
Pattee, H. H. (2012). Laws, Language and Life. Springer.
Selten, R. (1988). Models of Strategic Rationality. Springer Verlag.
Selten, R. (1999). Game Theory and Economic Behaviour (2 vols.). Edward Elgar Publishing.
Smith, V. (1976). Experimental Economics: Induced Value Theory. American Economic Review, 66(2), 274–279.
Smith, V. (1982). Microeconomic Systems as an Experimental Science. American Economic Review, 72(5), 923–955.
Smith, V. (2003). Constructivist and Ecological Rationality in Economics. American Economic Review, 93(3), 465–508. https://doi.org/10.1257/000282803322156954
Vanberg, V. (2004). Austrian Economics, Evolutionary Psychology and Methodological Dualism: Subjectivism Reconsidered. Freiburger Diskussionspapiere zur Ordnungsökonomik, No. 04/3, Albert-Ludwigs-Universität Freiburg, Institut für Allgemeine Wirtschaftsforschung, Abteilung für Wirtschaftspolitik, Freiburg i. Br.
Weimer, W. B. (2021). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century-Part 1. Cosmos + Taxis, 9(11+12), 1–29.
Weimer, W. B. (2022). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century-Part 2. Cosmos + Taxis (in press).
Part II

What can be Known, and What is Real
We have seen that much research in the social domains is ambiguous at best, and is not known to provide actual knowledge. We have found that one prominent approach to epistemology, phenomenalism, runs into insuperable problems (primarily because it fails to separate the knower from that which is known). In opposition to phenomenalism has been realism—the doctrine that there is a real world independent of our existence and our experience of it—and here the difficulty has been to formulate an adequate representational form to replace the inadequate naïve or direct form of realism. The only formulation that is compatible with the evolutionary approach to epistemology is structural realism, stemming from the distinction between the acquaintance we undergo, our phenomenal awareness, and our knowledge by description of both commonsense and theoretical concepts. Once structural realism is understood, it provides a framework for reformulating older problems that were usually addressed as issues in the relationship of the Cartesian mind and the material body. We shall see that addressing these (and similar) issues from the standpoint of the distinction between physicality and functionality no longer requires the Cartesian separation of mind substance from body, but allows us to focus upon the organism as a
phenomenon of organized complexity in which perception and action—knowing and being—cannot be separated except for conceptual analysis. The dualisms that remain are epistemic rather than ontological.
9 Structural Realism and Theoretical Reference
Reality… What a Concept
Robin Williams (vinyl album title, 1979)
The evolutionary approach to epistemology requires one assumption that is quite obvious: realism. There must be a real econiche from which we as subjects attempt to extract knowledge, and to which we must adapt. Realism is a metaphysical thesis that can never be "justified" or proven true. Nor can scientific propositions—everything is unjustifiable, and the issue is instead how a given position can be criticized. Criticism of scientific propositions is relatively easy, since they can be made to conflict with classes of potential observations: the empirical realm is delimited by the observations that are forbidden to occur if an empirical proposition is true (Popper, 1959), and this essential negative constraint of testability is thus the hallmark of empirical science. But realism, the thesis that there exists an external world independent of human perception, is not testable (falsifiable): the thesis is compatible with any observation whatever. Thus it can be criticized only by arguments from an alternative point of view, and it can in turn criticize other metaphysical doctrines
only by showing that they are incompatible with views consonant with realism that have support on other (such as scientific) grounds. As Bartley (1984) said, “The relationship between realism and observational evidence seems to be the following, indirect one: realism itself is untestable. However, the denial of realism, i.e., idealism, is contradicted by certain well tested laws of science; and these are in turn testable by basic statements. Thus, current scientific results leave hypothetical realism in possession of the field” (p. 199). The strongest support for realism presently available is provided by the data of evolutionary biology, psychology, and the evolutionary worldview in physical domains (itself another metaphysical view). It makes no sense to talk of animal life evolving, i.e., increasing its adaptation to an econiche, unless the existence of a “real” world independent of organisms is presupposed. This is why evolution and realism fit together, and why the issue is to formulate an adequate form of evolutionary realism. We must now examine that issue, first by specifying what can be known of an external world and seeing how our theories can constitute knowledge. We divide evolutionary epistemology into a conceptual or philosophical analysis that attempts to specify what human knowledge is, on one hand, and a more biological and psychological specification of the nature of the knowing organism, on the other. Both of these issues are best examined in historical perspective. This chapter considers the philosophical analysis underpinning epistemology in the framework of realism.
Structural Realism and Our Knowledge of the Non-mental World

Science provides knowledge of nature that is purely structural, and that knowledge is fundamentally different from what we "undergo" in phenomenal experience (our acquaintance). This is the distinction between knowledge as a structural description of the non-mental realm and raw experience, and it can be elaborated in "evolutionary" fashion, in terms of a historical overview of our conceptions of knowledge of the external world.
Begin with the concept of “powers” in the retreat from naïve realism. The seventeenth century saw the rise of science in a sufficiently modern sense that its classic achievements are now part and parcel of common sense. The men (and unfortunately, very few women) who made these achievements were realists: they took the existence of an external world for granted, in the sense that they believed there were “real” objects causally related to our perceptions. Ontologically, there was a world external to the senses; epistemologically, the question of what we can actually know of that external world became the pressing issue. One notion that gained currency at that time was the primary-secondary quality distinction. Gradually, it was realized that some qualities of objects and events that appeared at first to reside within those objects and events themselves actually existed only in the perceiver. One could be tickled by a feather, for instance, but the feather per se does not possess any “tickling property.” Or one could be burned by a candle, but the flame, although it has the “power” to cause burning in the observer, does not possess the sensation of burning as an intrinsic property. Such sensations were then located within the observer rather than the objects themselves. The primary or intrinsic qualities were thought to reflect the objects themselves, but sensations were thought of as secondary qualities, and were judged to be properties only of the perceiver. Galileo Galilei made the distinction with particular clarity in the sixteen hundreds: Whenever I conceive of any material or corporal substance, I am necessarily constrained to conceive of that substance as bounded and as possessing this or that shape, as large or small in relationship to some other body, as in this or that place during this or that time, as in motion or at rest, as in contact or not in contact with some other body, as being one, many, or few—and by no stretch of imagination can I conceive of any corporal body apart from these conditions. But I do not at all feel myself compelled to conceive of bodies as necessarily conjoined with such further conditions as being red or white, bitter or sweet, having sound or being mute, or possessing a pleasant or unpleasant fragrance…. These tastes, odors, colors, etc., so far as their objective existence is concerned, are nothing but mere names for something which resides exclusively in
our sensitive body (corpo sensitivo), so that if the perceiving creatures were removed, all of these qualities would be annihilated and abolished from existence. (1960, pp. 27–28)
While Galileo was silent as to the intrinsic properties of external objects, John Locke seems to have believed that our ideas of primary qualities exactly resemble the actual primary qualities inherent in objects, and that objects possess (in addition to primary qualities) powers which cause us to have ideas representing their secondary qualities (see Aaron, 1937). Locke thought he knew the primary qualities which existed in the objects themselves; in addition to the ontological relocation of secondary qualities from objects to perceivers, his brand of direct or naïve realism held that we can know the primary qualities of objects as such. Berkeley's attack on Locke, which does not deny the ontological separation of primary and secondary qualities, is strictly epistemic: he asked Locke how he knows the primary qualities, and concluded that one cannot know them as such. The primary qualities of objects are no different from the secondary qualities in the sense that they cannot be shown to exist in the objects themselves. The only things that one can know directly are the deliverances of one's own senses. This is Berkeley's idealistic "empiricism," an epistemic subjectivism, and it is independent of his infamous ontological idealism, with its denial of the existence of matter. The next step in the historical development of the epistemological retreat from naïve realism (the doctrine that objects possess the qualities of our perceptions) was taken by Kant. While Berkeley was an "idealist" in ontology, denying the existence of matter (as anything other than an idea), Kant founded idealism as an epistemology. Historically, the key slogan of the epistemological idealist has been "the world is a construction in the mind of man," meaning that human knowledge of the external world is a reflection of the way our heads work rather than a direct reflection of any external reality. But note that idealism, which is species-relative, is not in itself subjectivism: subjectivism is the extreme form of relativism which says all knowledge is relative to the particular perceiver, that there is nothing intersubjective; whereas idealism says that knowledge is both relative and intersubjective—relative to human conceptual frameworks,
but still intersubjective in that it can be shared by those individuals who hold a framework in common. In Kant's hands, the primary-secondary quality distinction shifted to a distinction between the phenomenal and the noumenal world. The phenomenal world is that of our direct experience, the world of appearances. The noumena, the things-in-themselves, although causally responsible for the appearances they create, are totally unknown to us. We have no idea of the intrinsic properties of things-in-themselves: all we can know, argued Kant, are phenomenal experiences. All we know of external objects is that they possess the power to create in us their appearances. The only things of which we know the intrinsic properties are our own experiences. Science therefore tells us nothing of the intrinsic properties of the noumenal world; epistemically, it is nothing but an inference correlated with our experiences. Kant's position, it must be noted, is more extreme than Galileo's, in that it denies that we know anything about the world external to our senses. Soon we shall have to choose between Galileo and Kant; for now, note only that Kant, like his contemporaries and predecessors, distinguished between acts of perception on the part of the perceiver and the object perceived. Perception involves both subjects perceiving and objects perceived. But is it necessary to distinguish subjects and objects? William James took the next step, in what he called his "radical empiricism," and said "No." For James, the subject-object distinction was not fundamental: there is no need to analyze perception into a knower, or mind or soul (as Descartes had done), and an object known. James thought that being conscious, having knowledge, does not require that consciousness be a thing or a subject. Our thought can perform its characteristic function of knowing without there being any "stuff" of which it is made, especially any "stuff" in opposition to the stuff of material objects (as Descartes had held). Indeed, for James there is no "stuff" in matter either: there exists, he concluded, only pure experience. Knowledge is a relation between pure experiences. The subject of knowledge, the "I" who has knowledge, is as much a construction in the mind as is our knowledge of the external world. On this, James was Berkeley in nonreligious terminology.
James's phenomenalism has both epistemic and ontological components. When he says that knowledge is a relation between pure experiences, he is on well-trodden (relational) epistemological ground, following Kant. But what of the existential claim that there exist only pure experiences? That is, of course, the ontological doctrine of phenomenalism. James's contemporary Ernst Mach defended phenomenalism on the grounds of its economy. At the end of the nineteenth century, Mach also started from the epistemic claim of what we can know:

Everything that we can know about the world is necessarily expressed in the sensations, which can be set free from the individual influence of the observer in a precisely definable manner…. Everything that we can want to know is given by the solution of a problem in mathematical form, by the ascertainment of the functional dependence of the sensational elements on one another. This knowledge exhausts the knowledge of "reality". (Mach, 1959, p. 369)
From the epistemic claim, with its denial of the subject-object distinction in awareness, Mach moved directly to the ontological claim that only “elements,” or pure sensations, exist in reality: There is no rift between the psychical and the physical, no inside and outside, no “sensation” to which an external “thing,” different from sensation, corresponds. There is but one kind of element, out of which this supposed inside and outside are formed—elements which are themselves inside or outside, according to the aspect in which, for the time being, they are viewed. (p. 310)
The doctrine of phenomenalism lurks in the background of empiricist philosophy. If one starts from the premise that all knowledge is a deliverance of the senses, in combination with the skeptical arguments of Hume and Berkeley, it is hard to resist concluding that not only is the foundation of knowledge sense data, but also, since all we can ever know is sense data, that sense data must be the only things that exist. Positivistically oriented philosophers from Mach through the logical positivists of the twentieth century, who seek security blankets or certainty more than plausibility or truth, have spent considerable
time trying to construct all science from a phenomenal basis. These approaches abandon realism for an unabashed conventionalism: scientific theories become nothing but statements about the relationships or connections between sensations. The retreat from naïve realism to phenomenalism abandons realism entirely. The only "real" world is that of our experience, and external reality becomes a purely conventional construction of connections between sensations. Every empirical statement about physical objects, whether scientific or commonsensical, is held to be reducible to statements referring exclusively to sense data. The thesis, again, is that physical objects are constructions out of sense data; put another way, sense data are the only and ultimate existents. Arguments against phenomenalism as an ontological thesis have in common that they indicate that reality is vastly larger and more variegated than phenomenalism can allow, and thus they are simultaneously arguments for realism. Realist arguments show, in one or another manner, that our knowledge of the phenomenal world raises problems that can be answered only by altering (and hence rejecting) the conceptions of reality which our senses give us. Any purely phenomenalistic interpretation of science will break down because our senses do not effect an adequate lawful classification of appearances: that is, in order to explain discrepancies in observable regularities, we will be forced to look behind the appearances, to another level of reality, in order to explain why our expectations were violated (see Aune, 1967; Körner, 1966; Sellars, 1963). The ontological claim of phenomenalism depends upon acceptance of the prior epistemic claim that all we can know are our sense data. To endorse phenomenalism, we must first reject Galileo and embrace Kant; then we must make the further existential claim of James and Mach. But that choice must wait for another distinction, examined below. For now, let us sum up this "potted" history. These developments from Galileo to the turn of the twentieth century bring into focus the conceptual problems surrounding experience. Human knowledge is intimately related to experience; indeed, it seems to be trapped in it. That, however, forces the abandonment of naïve realism: experience is not an adequate reflection of reality. But what do we know in our own experience? Can we know anything that
transcends our experience? Can we have any knowledge of the external world? What does advanced science tell us of the world external to our senses? How are we to understand scientific propositions referring to abstract and unobservable entities? In short, what is the epistemological and conceptual role of experience in the scientific world picture?
Acquaintance and Description

We can begin to unravel these issues of experience in science by noting a distinction made famous by Bertrand Russell: between "knowledge" by acquaintance and knowledge by description. Suppose you hear a piece of music (perhaps on the radio, or in a concert hall); you are then acquainted with it, i.e., you know what it is like from personal experience. Acquaintance is what underlies your experience—our phenomenal awareness, what we undergo. In contrast to personal awareness or acquaintance, you can know that piece of music without having experienced it, if its properties are described to you, or if you "read" it on sheet music, etc. Then you know the music in the sense that certain of its abstract properties are specified, and they can be related to items already in your acquaintance. This is a distinction whose implications are crucial. Suppose I am sitting in my living room chair. I am personally acquainted with the sensible appearance of the chair: its occurrent color, smoothness, coolness, flexible resistance to my weight, etc. But as yet the reader knows nothing of this object which is so familiar to me (except that there exists some object satisfying the abstract description "chair"). I can, however, easily convey more knowledge of the chair to you, by relating that it is an English Regency period black leather covered armchair, with rosewood legs and ormolu embellishment, etc. But such statements, though they convey true descriptions of my chair's properties, do not let me become acquainted with these properties any better than I was before, and they do not let you become "acquainted" with them at all. The so-called "knowledge" by acquaintance of phenomenal experience is exhausted in the mere having or undergoing of experience. The description I have conveyed is purely abstract and structural, divorced from
experience of the chair. Yet if it is a good description, it would convey sufficient information that you could recognize my chair if it became an ingredient in your acquaintance; and even if you never encounter the chair, you can "know" its properties by relating their description to your own experiences. We can know what the description refers to. Russell (1912) reasoned, about the relationship between acquaintance and description, that if we are to attach meaning to our words, then the meaning must have direct reference to something with which we are acquainted:

When, for example, we make a statement about Julius Caesar, it is plain that Julius Caesar himself is not before our minds, since we are not acquainted with him. We have in mind some description of Julius Caesar: "the man who was assassinated on the Ides of March," "the founder of the Roman Empire," or, perhaps merely "the man whose name was Julius Caesar." (In this last description, Julius Caesar is a noise or shape with which we are acquainted.) Thus our statement does not mean quite what it seems to mean, but means something involving, instead of Julius Caesar, some description of him which is composed wholly of particulars and universals with which we are acquainted. (pp. 58–59)
This led Russell to endorse the so-called principle of acquaintance: "The fundamental principle in the analysis of propositions containing descriptions is this: every proposition which we can understand must be composed wholly of constituents with which we are acquainted" (ibid., p. 58). At this stage in his career, Russell was concerned with the classic quest for the justification of knowledge. Acquaintance was to justify scientific knowledge: he was in the process of erecting human knowledge on a "firm foundation" of sense data. Like Mach and James, he endorsed phenomenalism, and held that the data of immediate experience were known indubitably, and that scientific knowledge could participate in that certainty if it were restricted to a sense datum formulation. But consider here only the implications of Russell's distinction of acquaintance and description for the conflict between Galileo and Kant. Previously, it appeared that Galileo and Kant were in opposition, since Galileo claimed that we can know at least some properties of external objects while Kant maintained that we must remain forever unaware of
the intrinsic nature of things-in-themselves. Russell’s distinction allows a resolution of this conflict: Galileo is correct that we can know by description certain properties of objects external to our experience; Kant is correct that we can never know the intrinsic or first-order properties of such objects. But knowledge is broader than and very different from acquaintance. Kant was wrong in claiming that we can know nothing about noumena, and indeed science tells us quite a bit about them. Kant was correct that we have no direct knowledge of, or acquaintance with, them. We can know the external, or non-mental, world by description, but we can never be acquainted with it. We are thus in a position to accept the distinction between primary and secondary qualities even though they are all equally secondary or physically “in” us: we can know the external properties of objects by description, and indeed, it is precisely such knowledge that science discloses. The question now comes down to characterizing what we know of non-mental objects when we possess knowledge by description of them.
From Phenomenalism to Structural Realism

Russell introduced the distinction between knowledge by description and acquaintance while under the thrall of phenomenalism. But it did not take him long to try to switch to a realistic ontology (Russell, 1927) in conjunction with his epistemological principle of acquaintance. That is, Russell went back to Galileo's position, claiming that knowledge by description of non-mental objects is purely structural in nature, while simultaneously maintaining that human knowledge refers ultimately to items in our acquaintance. When it is realized that all theoretical and scientific knowledge is knowledge by description, his thesis of structural realism results. Russell came to this conclusion as a result of distinguishing sharply between the questions of ontology and epistemology. Ontologically, he began by accepting the theories of contemporary physics as more truthful than common sense; then, in epistemology, "I ask myself: Given the truth of physics, what can be meant by an organism having 'knowledge,' and what knowledge can it have?" (Russell, 1944, p. 700). There is more at stake in this question than one might
first assume. The problem hinges on the gulf that contemporary physical theory has built between its entities and perceptual experience. If physics is true, there is so little resemblance between our perceptual "experience" and the external causes of that experience that it is difficult to see how, from what Russell called percepts, we can acquire knowledge of any external objects. The problem is further complicated by the fact that physics has always been inferred from perception: "Historically, physicists started from naïve realism, that is to say, from the belief that external objects are exactly as they seem. On the basis of this assumption, they developed a theory which made matter something quite unlike what we perceive. Thus their conclusion contradicted their premise, though no one except a few philosophers noticed this" (Russell, 1948, p. 197). That leaves an unavoidable dilemma: one must decide whether, if physics is true, the hypothesis of naïve realism can be replaced so that there can be "a valid inference from percepts to physics. In a word: If physics is true, is it possible that it should be known?" (Russell, 1948, pp. 197–198). After repudiating (at least to his satisfaction) the halfway house of phenomenalism, Russell argued that realism could be defended, but that naïve realism had to be abandoned. The only tenable form of realism is a representational one. Realism, if correct, entails the truth of the causal theory of perception. In broad terms, the causal theory of perception (which codifies the necessary separation between the knower and that which is known) holds that external objects are themselves the first link in a causal chain that ends in the central neural processes underlying our perception. Russell's (1927) favorite example concerns seeing the sun. Science holds that when we "see the sun," there is some process starting from the sun, traversing the space between the sun and the eye, changing its character when it reaches the eye, changing its character again in the optic nerve and the brain, and finally producing the event which we call "seeing the sun." Our knowledge of the sun thus becomes inferential; our direct knowledge is of an event which is, in some sense, "in us." This causal theory has two parts: first, the rejection of the view that perception gives direct knowledge of external objects; second, the assertion that perception has external causes from which at least something can be inferred.
If something intervenes between the light waves from the sun and the neural processes in the visual centers of the brain, we will not “see the sun.” Further, even if we do “see the sun,” what we see actually occurred eight minutes ago, and no longer exists except “in us.” (Strictly speaking, the information supporting this experience exists only in radiation emanating from the sun in an increasing spherical pattern that has traveled away from the sun at the speed of light.) And for Galileo’s reasons, the occurrent features of the sun exist only in us. If physics is correct (at least in essentials), it makes no difference whether the sun is actually (i.e., intrinsically) colored, warm, tastes like cheddar cheese, or whatever. The causal chains connecting the sun to our percipient beings are not such that that information could ever be conveyed to us. Physics leads us to a restricted causal theory of perception: the only properties of non-mental objects we can know are structural properties. Maxwell (1968) argued that the decisive point isn’t that it is meaningless or self-contradictory to regard electrons, light quanta, etc., or atoms, molecules, or even aggregates as being colored, but rather, even if such things were colored it would make no difference. Even if it made sense to talk of a collection of blue colored molecules or atoms which emitted blue colored light photons, such a “blue” aggregate could cause us to see the surface in question as a red one just as effectively as a collection of red colored ones emitting red colored quanta; the only relevant fact concerning the color we see is the amount of energy per quantum, or, what amounts to the same thing, the frequency of the radiation (p. 170).
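Maxwell's aside that energy per quantum "amounts to the same thing" as frequency is simply the Planck relation, which may help make the point concrete (a standard physics identity, not Maxwell's own notation):

$$E = h\nu = \frac{hc}{\lambda}$$

Fixing the energy per quantum thus fixes the frequency (and the wavelength), and vice versa; nothing further in the causal chain carries any additional "color" information.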
So, even if there happened to be actual colored entities in reality (even colored surfaces as common sense conceives them in the physical environment), we never "see" them at all, and their being colored plays no role in any process whereby we acquire knowledge. Russell's conclusion is that "Wherever we infer from perceptions, it is only structure that we can validly infer; and structure is what can be expressed by mathematical logic, which includes mathematics" (1927, p. 254). So far as our knowledge of nature is concerned, it follows that since science discloses only knowledge by description of abstract structural properties, a blind man can know all of physics (as Hayek, 1952,
clearly said). Our phenomenal qualia nowhere appear in the body of science. They may be the point from which scientific inquiry originates, but phenomenal experiences or qualia are not in science. Knowledge by description makes no essential use of acquaintance. This is the epistemic problem posed by experience: although theoretical knowledge starts from acquaintance, it never returns to it. There is a gulf that completely separates acquaintance from descriptive knowledge. Phenomenal acquaintance does not appear in our theoretical understanding of reality. This is what lies behind the dictum, famous from the time of Kant, that there is only so much science in a domain as there is mathematics. It is not that scientists are blind to the humanistic or personal side of reality: despite the fact that the cover girl’s hair is actually a polypeptide helix, it is no more bereft of beauty to the scientist than to the common man or woman. At issue is what we can know of the non-mental world, and mathematics is indispensable to that knowledge because structural properties are expressible in mathematical (which is to say, syntactical) terms. Mathematics both is structure (in the sense of being a syntactic system) and symbolizes structure in empirical domains. In a sense, empirical science is exhausted by a twofold task: first, ascertaining the mathematical or structural formalism which truly captures the structure of the domain involved; and second, answering why the entities of that domain are captured by that mathematical system and not some other formalism. Note in passing that the fact that mathematics consists of structural relations and also symbolizes structural relationships answers one misunderstanding of structural realism. It is sometimes criticized by saying that if structural realism is true, nothing “exists” except mathematics, so it can’t possibly be true because nonmathematical basic entities obviously exist. But this assertion, that only mathematics “exists” or is “real,” is not part of structural realism (and is antithetical to it). Structural realism asserts that mathematics, because it is our most efficient representation of structure, is what composes our advanced scientific theories. But structure, or syntax, is useless and meaningless without semantics. Theories are structural representations of reality, but their content is not exhausted by just mathematics. Their syntax is not their semantics, and even less
their pragmatics. Put another way, structural realism is an epistemic theory—it says nothing about ontology or meaning.
Science and Structure

Structural realism asserts that our only knowledge of the non-mental realm (i.e., that which we know by description) is of its structural or higher order (beyond intrinsic) properties. So far as explaining and predicting the interrelations of our perceptions is concerned, it is only the structure of the external world that matters; its particular intrinsic properties are of no consequence whatever. Put another way, all perception of non-mental objects is indirect and mediated. The similarity that obtains between the object and our mental representation of it is purely structural. The physicist Heinrich Hertz understood this, and nearly anticipated structural realism. He noted that science needs conformity between thought and nature in one crucial respect only:

In satisfying the above-mentioned requirement [that there must be a certain conformity between nature and our thought]. For our purpose it is not necessary that they should be in conformity with the things in any other respect whatever. As a matter of fact, we do not know, nor have we any means of knowing, whether our conceptions of things are in conformity with them in any other than this one fundamental respect. (1960, p. 350)
But he did not realize that that “one fundamental respect” was the mapping by thought of structural properties. That had to await Russell’s distinction between acquaintance and description, and the realization that our knowledge, even when it is of our acquaintance, is always in the language of description. In perceptual “experience,” we find both secondary and some structural properties. But we do not know the non-mental world in sensory “perception,” i.e., by acquaintance. We only know the non-mental world
by inference from the changes it produces in our perceptions or acquaintances (which we usually refer to as experiences), and thus we can only know its structural properties. We can conclude that it has intrinsic properties (because structural properties are, in the last analysis, properties of intrinsic properties), but we cannot know what they are as such. But what are intrinsic and structural properties? Examples of intrinsic properties are occurrent colors, sounds, tastes, smells, etc. Intrinsic, or first-order, properties pick out classes. Structural, or higher order, properties pick out classes of classes, etc. Structural properties mention only the parts of objects and the ways in which they are interrelated. A structural analysis "tells you only what are the parts of the object and how they are related to each other; it tells you nothing about the relations of the object to objects that are not parts or components of it" (Russell, 1948, p. 251). Examples of structural properties are number, identity, transitivity, etc. Structural properties are definable in terms of intrinsic properties plus logico-mathematical constants. For example, the number "two" can be given a definition in terms of classes and logical constants: in a class logic, "two" literally is the class of all classes (C's) that satisfy specific relational constraints (one standard rendering is written out just below). The only non-logical or descriptive term in the definition is C, so "two" is defined in terms of C: for "two" to apply, C must also. The class of all things "two" is the class of all couples. But if scientific knowledge of physical objects is purely structural in character, can we meaningfully speak of intrinsic properties of the non-mental world? Can we really ascribe such properties without being directly acquainted with them? The answer is yes, we can: it does not matter what X, as an intrinsic property, is—only that there is some X. We can attribute intrinsic properties to the non-mental world on the basis of our knowledge of its structural or higher order properties, despite the fact that we cannot experience what those intrinsic properties are. We can know their reference, but not their intrinsic sense or meaning. Structural realism is an account of reference, not meaning.
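To make the class-logic example concrete, here is one standard, Frege-Russell style rendering of the definition of "two"; the notation is a reconstruction for illustration, not the author's own:

$$2 \;=\; \bigl\{\, C \;:\; \exists x\, \exists y\, \bigl( x \neq y \;\wedge\; x \in C \;\wedge\; y \in C \;\wedge\; \forall z\, ( z \in C \rightarrow z = x \vee z = y ) \bigr) \,\bigr\}$$

Every descriptive commitment is carried by the class variable C; all the remaining symbols are logico-mathematical constants, which is what makes the property purely structural.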
From Structure to Intrinsic Properties

The point just raised is crucial: we can know that there are intrinsic properties of non-mental events and objects despite the fact that we can never be acquainted with them. But could we know the first-order or intrinsic properties of the non-mental realm by description? In some cases, at least, the answer is "yes." One may be able to specify what the intrinsic properties of external entities are despite the fact that we can only be acquainted with "intrinsic" or first-order properties of our own phenomenal awareness. Maxwell (see 1968, p. 170) made the point that we do know what the intrinsic (first-order) properties exemplified in our sense experience are—they are properties such as redness, warmth (as felt), being warmer than, etc. And we know some structural properties as well; we know, for example, that the property of being to the left of in the (experienced) visual field has the structural properties of transitivity and asymmetry. Since we know what structural properties such as transitivity and asymmetry are, and since transitivity, asymmetry, etc., are also exemplified (in some cases) in the external world, we do know what (some of) the structural properties of the external world are. But with respect to the first-order (intrinsic) properties of the external world, we can know only that they are and that they have the higher order (structural) properties that our best corroborated theories assert that they do have.
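The two structural properties Maxwell appeals to have standard first-order definitions, which can be written out for an arbitrary relation R (again a reconstruction, not Maxwell's notation):

$$\text{transitivity:}\quad \forall x\, \forall y\, \forall z\, \bigl( R(x,y) \wedge R(y,z) \rightarrow R(x,z) \bigr)$$

$$\text{asymmetry:}\quad \forall x\, \forall y\, \bigl( R(x,y) \rightarrow \neg R(y,x) \bigr)$$

Being to the left of in the visual field satisfies both, and so can relations exemplified in the external world; notice that the definitions make no mention of what the relata intrinsically are.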
Science and the Search for Structural Descriptions

The genesis of theoretical knowledge, which is knowledge by description only, may instructively be compared to the unfolding plot of a detective story. After the commission of a crime, the detective must track down the culprit. If no one saw the crime occurring, if it was not an ingredient in anyone's "acquaintance," then the detective is in exactly the same position as the scientist attempting to model reality. Although the detective is at a loss because no one saw the culprit, his or her position is not hopeless: one can gain sufficient knowledge, by description, to enable
the criminal to be identified. That is, by an involved process of inference from the data that are available, the detective can come to know the culprit. Any "clue" in the plot can provide knowledge by description of structural properties of the guilty party. The detective can learn that the culprit was of a given height and weight, walked with a limp, etc. He or she may conceivably learn "exotic" things, such as that the culprit's coat was made from a fiber grown only in Peru and woven into garments only by a manufacturer in a foreign country, and thence infer that the culprit was a foreigner or a traveler. The detective can come to know, as exhaustively as is necessary, all the higher order properties definitive of the culprit. The culprit can be known exactly the way any theoretical entity of science can be known. Since all scientific entities are abstract and unobservable, they remain "culprits" who, although known as exhaustively as the theory can specify, are never "caught" in our acquaintance. This emphasizes the major difference between the fictive detective story and the scientific detective story that provides us with our understanding of nature. At the end of a novel, the culprit is identified: some character shouts "I know him! That's the butler, Joe Smith!" By identifying the culprit by his proper name, the author signifies that the culprit is an ingredient in someone's acquaintance, and not just an entity known only by description. But equivalents of proper names do not occur in scientific reports: no scientist ever concluded "That's how I met Paula Proton." The scientist is never directly acquainted with intrinsic properties of any theoretical object: all such properties of non-mental objects exist wholly as inferences within our minds. Every property of every object with which we are acquainted is represented within us: we can at best infer only the structural properties of the non-mental realm. This is not necessarily a cause for lament. Human understanding is not changed appreciably by presenting a more correct account of its nature. Were we faced with the situation Kant proposed, scientific life would be less enjoyable, and phenomenalism as an epistemic doctrine would be irresistible. We do indeed have acquaintance only of phenomena, as Kant used the term. But the noumena are not therefore wholly unknown. All human knowledge, both scientific and commonsensical, is of the non-mental or noumenal world. But it is knowledge by description of
that world. We know by description not only our own physical bodies (external to our nervous systems) and the flora and fauna that populate the manifest image of common sense, but also all theoretical entities and events that postulational science discloses. Just as the real world is infinitely richer and more variegated than phenomenalism allows, our understanding is infinitely richer than phenomenal experience allows. We can understand more than we can experience. That understanding is of structure.
Acquaintance Is Not Knowledge

Russell wanted to repudiate phenomenalism, but he could not consistently do so, since he held that acquaintance is still a form of knowledge (his contrast was knowledge by acquaintance versus knowledge by description). Thus his (and Maxwell's) formulation inevitably collapses into phenomenalism: phenomenalism claims that statements about the "external" world are "reducible" to statements about perceptual experience. Russell claimed not reducibility but referability or reference to experience, and still allowed acquaintance to constitute knowledge. But is acquaintance knowledge? It is not. What we know of acquaintance is known to us only by description, even when our own acquaintance is at issue. We undergo acquaintance, but we do not know in doing so. Even when we label our acquaintance as we undergo it (i.e., when we "know" it), we are using the language of description. Schlick (1925) noted this in developing his "critical" (also structural) realism. Schlick sharply distinguished knowledge (erkennen) from acquaintance (kennen): "knowledge by acquaintance" is self-contradictory, because only structural terms are ever knowable in our conceptual thought, which is always in descriptive language. Schlick's position succeeded in repudiating phenomenalism, and that alone makes it preferable to Russell's. But it also emphasized two points that are central to understanding human knowledge. The first, noted above, is that understanding is not limited to the impoverished experience of any individual, or even the sum total of all human experiences. Reality is infinitely richer than phenomenalism can allow. The
second point is that the problem of knowledge relates to the problem of meaning, and the meaning of scientific terms (as opposed to their experiential reference) has nothing to do with acquaintance. Experience provides a referential basis for our knowledge of non-mental entities, but it neither specifies nor exhausts their meaning. Meaning (whether scientific or commonsensical) is not just a matter of reference (as Russell seemed to think). Russell’s principle addresses the ambiguity of reference for theoretical or descriptive terms, not the ambiguity of their meaning.
References

Aaron, R. I. (1937/2009). John Locke. Oxford University Press (online by Cambridge University Press).
Aune, B. (1967). Knowledge, Mind, and Nature. Random House.
Bartley, W. W., III. (1984). The Retreat to Commitment. Open Court.
Galilei, G. (1960). Two Kinds of Properties. In A. Danto & S. Morgenbesser (Eds.), Philosophy of Science. Meridian Books (Originally translated by A. Danto in Introduction to Contemporary Civilization in the West, Vol. 1. New York: Columbia University Press, 1954).
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Hertz, H. (1960). Two Systems of Mechanics. In A. Danto & S. Morgenbesser (Eds.), Philosophy of Science (pp. 349–365). Meridian Books (Originally in H. Hertz, The Principles of Mechanics (1894), translated by D. E. Jones and J. T. Walley. New York: Dover, 1956).
Körner, S. (1966). Experience and Theory. Humanities Press.
Mach, E. (1959/2002). The Analysis of Sensations, and the Relation of the Physical to the Psychical. Forgotten Books Reprint Series.
Maxwell, G. (1968). Scientific Realism and the Causal Theory of Perception. In I. Lakatos & A. Musgrave (Eds.), Problems in the Philosophy of Science. North-Holland Publishing Company.
Popper, K. R. (1959). The Logic of Scientific Discovery. Harper & Row.
Russell, B. (1912). The Problems of Philosophy. H. Holt and Company.
Russell, B. (1927). The Analysis of Matter. Kegan Paul.
Russell, B. (1944). Reply to Criticisms. In P. A. Schilpp (Ed.), The Philosophy of Bertrand Russell (pp. 681–741). Northwestern University Press.
Russell, B. (1948). Human Knowledge: Its Scope and Limits. Simon and Schuster.
Schlick, M. (1925/1974). General Theory of Knowledge. Open Court (reprint 1974).
Sellars, W. (1963). Science, Perception and Reality. Routledge & Kegan Paul.
10 The Mental and Physical Still Pose Insuperable Problems
A major focus of this book is the distinction between two realms of existence—physicality and functionality—and the consequences that necessarily follow from that duality for the theory of knowledge and for the ontological speculations (postulations) that we may legitimately entertain in individual fields of knowledge. A welter of dualisms (or complementarities, or conceptual oppositions) arose when life came into existence. Most traditional philosophy has split between denying these necessary distinctions in support of the claim that there is "nothing but" physical science and theory, on the one hand, and, in opposition, a somewhat mystical or "supernatural" view that denies the importance of the physical realm in favor of some sort of religious or quasi-religious transcendence. Let us begin to state the actual issues more clearly, to move beyond the naivete of these common positions, and to understand why neither domain can ever be "reduced" to the other.
A: The Classic Problems

Wherever thought and the causative agent of will emerge, especially in man, that power [the organizing power of life] is increasingly controlled by a purely spiritual world of images (knowledge, ideas).
Hermann Weyl
Ambiguity and meaning are the essential ingredients of the classic mind-body problems. Concepts pertaining to the so-called mental realm simply do not mean the same thing as those pertaining to the non-mental realm, and that, literally, is the problem. One obvious manifestation of this is seen in trying to answer the question, "Where is meaning in the physical universe?" Here a paradox arises: the answer is always simultaneously everywhere and, equally, nowhere. This essential tension between the presence or absence of meaning, and the differences in the modes of its manifestation, comes out clearly in the usual metaphors and analogies used to render intelligible the relationship of "mind" to body. Sellars (1963) used the idea of a "difference in grain" to point out that a phenomenal colored expanse (such as his example, a pink ice cube) does not mean the same thing as packets of light quanta reflected from intrinsically colorless equivalent atoms in a lattice structure. Bohr (and the Taoist tradition) spoke of the complementarity of mind and body. The double-aspect or double-knowledge metaphor served the same purpose for thinkers from Spinoza, through Leibniz, to Wundt, and to many contemporary theorists. In emphasizing the difference in perspective, or point of view, or complementarity, from which the "mental" and the "physical" are seen, these metaphors bring out the inescapable epistemic dualisms of acquaintance and description, as well as subject versus object, and thus the knower and the known. They locate the ultimate nature of these dualisms in the problem of meaning and, inextricably tied to that, in ambiguity. All these distinctions emphasize the need for a duality of descriptions—two separate but equally necessary theoretical formulations—for the physical and functional domains and, as the epigraph from Weyl illustrates, simultaneously a duality of control, and a separation between
the rate-dependent physical domain and that of the rate-independent conceptual one. But we no longer need to speak of the functional realm (as Weyl did) as "spiritual." Functionality is a matter of agency and choice, which encompasses far more than religion or spirituality. Choice contingency is "transcendent" of the physical, but without any religious, supernatural, or mental-substance implications. This separation of the universe into the controller and the controlled is inevitable, but often invisible to us. Why is that so? Because, as thinkers who are denizens of the control structure, we forget that we are there—like the joke about fish who spend their entire lives in water and do not know they are in it. We make a theory about areas that interest us, and forget that the "theory" is not a physical entity even though it is expressed in physical marks on paper, or words in a spoken sentence, or neural activity in our nervous systems. But this shows why complementarity is essential: we need a theory (always an epistemic or conceptual entity) in the rate-independent functional domain to have any understanding (again, a conceptual and functional phenomenon) of what we take to be (again, a functional act) an independent physical reality. The "purely physical" becomes functional when subjects come into existence and conjecture or "have" any knowledge of it. What the "physical" is in itself—its intrinsic properties—is, as structural realism indicates, unknown and unknowable to human beings. All we know is that it is there. This insight about knowers and the known, controllers and the controlled, was rediscovered by von Neumann in his pioneering work on automata and computation, when he pointed out that an automaton must always have a quiescent (rate-independent) "program" that controls the physical movement of the automaton in the rate-dependent realm of physicality (a toy illustration follows below). Looked at another way, if our thought is an algorithm controlling our behavior, there must be an algorist (as Robert Shaw called us), an epistemic "who," that is employing it. Welcome to the enigmatic problems of selfhood. We are all selves, but see all others as objects—what does that entail?
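Von Neumann's point can be made graphic with a toy sketch. The following is purely illustrative (it is not von Neumann's construction, and every name in it is hypothetical): the "program" is a quiescent symbol table, rate-independent in that its content means the same however fast or slowly it is read, while the "physics" it controls is rate-dependent, since its outcome depends on elapsed time.

```python
# Illustrative only: a quiescent, rate-independent "program" (a symbolic
# transition table) controlling a rate-dependent physical process
# (a toy velocity integration). Not von Neumann's construction.

PROGRAM = {                         # rate-independent symbolic constraints
    ("idle", "go"):     ("moving", 1.0),
    ("moving", "stop"): ("idle",   0.0),
}

def step_physics(position, velocity, dt):
    """Rate-dependent dynamics: the result depends on elapsed time dt."""
    return position + velocity * dt

state, position, velocity = "idle", 0.0, 0.0
for dt, symbol in [(0.1, "go"), (0.1, None), (0.1, "stop")]:
    if symbol is not None and (state, symbol) in PROGRAM:
        # The controller reads a symbol; no time variable appears here.
        state, velocity = PROGRAM[(state, symbol)]
    position = step_physics(position, velocity, dt)

print(state, round(position, 2))  # -> idle 0.2
```

The transition table never mentions time; the dynamics never mention symbols. That division of labor, however crude here, is the duality of control the text describes.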
Sentience and Qualia

One obvious problem is posed by the "rawness" of the "raw feels" (as E. C. Tolman so felicitously called them) that populate our acquaintance. All conceptual endeavors are inevitably and wholly cast in the language of description. There is nothing in any science that cannot be understood in its entirety by an individual who lacks the traditional sensory modalities and qualia and has available only the descriptions of structural relationships between the proposed entities or processes of a given scientific domain (see Hayek, 1952, p. 31). From the standpoint of our knowledge of reality, there is no need for acquaintance at all. A blind man or woman can know all of physics. This has been obvious from philosophical analysis dating back millennia—at least to the time of Democritus, if fragments such as this are correct:

Color is by convention, sweet by convention, bitter by convention; in truth there are but atoms and the void.
And from “modern” scientists since the time of Galileo (in 1623). Let us repeat Galileo’s central point: These tastes, odors, colors, etc., so far as their objective existence is concerned, are nothing but mere names for something which resides exclusively in our sensitive body (corpo sensitivo), so that if the perceiving creatures were removed, all of these qualities would be annihilated and abolished from existence.… If ears, tongues, and noses be taken away, the number, shape, and motion of bodies remain, but not their tastes, sounds, and odors. (1960, pp. 28–30)
Here is a restatement of the point from twentieth-century theoretical psychology: There are no questions which we can intelligibly ask about sensory qualities which could not also conceivably become a problem to a person who has not himself experienced the particular qualities but knows of them only from the descriptions given to him by others.… Nothing can
become a problem about sensory qualities which cannot in principle also be described in words; and such a description in words will always have to be a description in terms of the relation of the quality in question to other sensory qualities. (Hayek, 1952, p. 31)
I encountered a “common sense” or “ordinary language” instance of this same situation at a farmer’s market when a lady asked the proprietor “What does this guavaberry taste like?” His exasperated response was “What does a strawberry taste like?” He then answered his own question: “A strawberry tastes like a strawberry. And a guavaberry tastes like a guavaberry. A tomato tastes like a tomato.” He was pointing out that the “absolute” or intrinsic taste could never be determined by verbal means, that “taste” could be specified only relationally, by saying what something was similar to, or where it was located in some ordering of extant qualities already experienced by the inquirer. As Hayek emphasized, all that can be communicated (to ourselves or to others) are the relations, the differences between sensory qualities. We can never know whether others perceive the qualities we do in any absolute sense. Thus arises the existential predicament of the scientist: our knowledge indispensably involves sensory qualia, but that knowledge never is about them, never discloses what the qualia actually are, or even why we have them. Here, we double back to the problem of selfhood—I do not know that you (as a potential subject in relation to all others) have any qualia at all. While our mind is acquainted with our own qualia, our knowledge has none at all. Democritus knew this millennia ago. The fragment quotation above is part of a dialogue. Immediately after the lines noted above, the senses have their reply to the mind: Wretched mind, from us you are taking the evidence by which you would overthrow us? Your victory is your own fall. (translated by Erwin Schrödinger in his 1956, p. 211)
The Problem of Functionality Again

Acquaintance possibly is not functional—it may simply be an accident in the sense of an accompaniment of neural activity in the mammalian CNS—an actual epiphenomenon. It may arise in unison with any nervous system activity that makes it into consciousness, or it may be present in any neural activity, even when not conscious. In any case, acquaintance is not causal: that property (the glue of thought) is due to the neural activity prior to it (see Chapters 9 and 12). Functionality is different—it is always intentional (even when not conscious, as depth psychology has shown) with respect to some subject, and certainly is “causal” in the sense of downward causation noted by Campbell (1974a, 1974b). In the realm of conception, it is both intentional (as Brentano noted) and inherently teleological or goal-oriented. But how does conception—feeling, thought, cognition, volition, and myriad other phenomena—come into existence? How does knowledge—or any functionality—even exist in a physical universe?
B: Consciousness, Objectivity, and the Pseudo Problem of Subjectivity

Consciousness, and every kind of awareness, relates certain of its constituents to earlier constituents. Thus it cannot be conceived of as consisting of arbitrarily short events. There is no consciousness without a memory that links its constituents as “acts of awareness.”

Karl R. Popper
If we are directly cognizant of anything at all, it is our own individual mental events within the rate-independent realm and never the world external to our senses. As Chapter 9 details, we know the external environment only indirectly, in terms of its effects upon our nervous systems, and perhaps (if there is a correspondence between its structural properties and the structural aspects of our perceptions) in terms of its higher order or structural characteristics. Thus some form of epistemological
functionalism (“mentalism” or idealism) seems inescapable in the rate-independent realm of cognition. The further leap to ontological idealism, the thesis that all that exists is mental, is ruled out by the fact that we live in a world in which the external or non-mental realm does indeed impinge upon us (this is why even so-called philosophical idealists and solipsists look before they cross busy roads). Thus all knowledge and phenomenal awareness come to us as an aspect of and through the mental realm, and pass up to and (often but not always: we must remember tacit knowledge) through description into consciousness. Consciousness, as a denizen of the rate-independent realm, poses a unique set of problems that ramify through epistemology toward ontology and straight into physical-functional problems. Consider some issues crucial to epistemology. Our examination will point out that acquaintance versus description is a fundamental dualism that epistemology can never avoid. What we undergo in our momentary acquaintance is on the one side of an epistemic and conceptual divide, while all our knowledge, including our knowledge of what our acquaintance is, is on the other side of the divide. There is no knowledge whatsoever in the having of acquaintance. All knowledge is a social, and hence interpersonal, construction.
Our Individual Consciousness Can Never Be Causal Within Our Own Bodies

An obvious but almost completely overlooked point is that conscious contents are the result of incredibly complex patterns of neural activity that are at the preconscious level of processing within the nervous system, and that are temporally prior to our resultant conscious awareness of contents. Thus conscious content, while caused by preconscious patterns of neural activity, is a resultant of causal activity, a product of causation, and never a cause of anything in itself. There is no denial here of causal accounts of behavior due to neural activity (what else could conceivably cause it?): only a denial of the causal role of the end result of these neural processes. But what about voluntary action? The sort of thing Eccles (1976) gleefully rubbed in materialists’ faces when
using the example “When I will to raise my finger it moves upward”? Surely, this is a case of consciousness being causal? No, it is not. Causal potency lies in the tacit processing that comes about “below” (to use the Freudian metaphor) the level of awareness that resulted not only in the conscious “decision” but also in the movement itself. Everything we think, say, or do to or with ourselves that originates within our own bodies has already been caused by prior processing. Causality is an arrow in time (or in conceptual thought) whose feathers are always in the past. Nothing in our macro universe exhibits any hint of backward causality. Our willing of voluntary action only means that such action is “caused” by our tacit neural processes and has not been caused by external coercion. Downward causation in a coalitional control structure is not backward causation. What about our response to input from others? When someone shouts “fire!” and we look and then run out of the path of danger? Of course, this is causal of our behavior, and in just the same way that we respond to all external stimulation, from the initial classificatory activity of the nervous system (e.g., the orienting response) through the initial sensory processing to separate meaningful speech from noise, all the way up to the comprehension of meaning and our reaction to it, to what subsequently appears in our consciousness. But as always, what appears after the fact of processing in our consciousness is already caused, and therefore not in itself causal. In the 1960s, when cognitive psychologists would explain the inadequacy of behavioristic approaches, we often said “Your head is smarter than you are.” That is what is at issue here—the problems of tacit knowledge: the immense neural activity of classification and reclassification and interpretation that finally results in something popping into our conscious awareness after that preliminary neural activity has already occurred. This discussion removes consciousness from an allegedly privileged position that it could never ever have occupied. We are not “run” by a little homunculus in the head who sits in the driver’s seat and operates the steering and pedals. There is no homunculus, no “ghost in the machine.” The “machine” (the entire CNS) is a coalitional structure that does everything that can be done, and needs no designated separate “driver” at all.
Our consciousness is in the back seat, not at the steering wheel: a still-evolving evolutionary failsafe mechanism that has evolved to the extent that it has with the rise of language in our species, as a result of the dual pressures of communication between parents and very neotenized offspring, and of our ever-larger groups and the greater distances between their members. The function of consciousness, as the epigraph from Popper stated, is to aid our memory. It allows us to hold in our immediate attention span a larger amount than is available to an organism that is constantly and only subject to the vicissitudes of changing momentary stimulation. Consciousness is a result of the need for greater memory and longer attention spans.
Consciousness Does Not Exist in Time

Suppose you look through a powerful optical telescope and see the light from a star that was created shortly after the “Big Bang.” The photons flying away from that star at the speed of light to our eye have been traveling, from our perspective, for billions of years. But what about from the perspective of the photon? One striking conclusion from relativity theory is that as the speed of light is approached time slows down until, at the speed of light, it no longer exists. The photon, from its perspective, does not exist in time. It “sees” no temporal difference from the Big Bang to your retina. Time has no meaning in this situation. What could that have to do with consciousness? It turns out that conscious awareness (consciousness per se, but not any of the contents of consciousness) is in the same situation as the photon: time does not exist for it. Our undergone or lived-through awareness is always of the specious temporal present: no past, no future, only the so-called “moment” is present. The universe divides into two exhaustive and exclusive classes of things: those that are rate-independent and those that are rate-dependent. Things that exist in time are rate-dependent, things that exist timelessly are rate-independent. Consciousness per se is rate-independent. Consciousness appears to us as timeless, rate-independent cognizance or contemplation of its contents (which since the contents are the result of
prior processing, are rate-dependent). And now is the time to note that the ascription of meaning to any content of consciousness is instantaneous (which is to say, timeless). This is how functional meaning (which is timeless) enters the physical world, which is rate-dependent. There is no evidence of meaning in the rate-dependent world that is independent of some subjective cognizing activity. I believe this is the clearest and most powerful formulation of the mind-body problem of sapience yet given. It is crucial to understand that there is a qualitative gulf between rate-independent meaning and acquaintance, on the one hand, and the rate-dependent realm of the non-mental on the other. The problem of sapience is not that both realms exist. That must be a given for the real problem to arise at all. The real problem is that the gap it points to cannot be bridged at all from either direction. How do both physicality and functionality exist in our universe?
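The relativistic point invoked above has a standard textbook form, sketched here purely as an illustration. The proper time elapsing along a moving clock’s worldline is

\[ d\tau = dt\,\sqrt{1 - v^2/c^2}, \]

which shrinks toward zero as \(v \to c\); along the photon’s own path the spacetime interval is null, \(ds^2 = c^2\,dt^2 - dx^2 = 0\). No proper time at all separates emission at the ancient star from absorption at the retina, which is the precise sense in which the photon does not exist in time.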
Consequences of the Fact That Acquaintance Is Not Knowledge

Structural realism solves the problem of the reference of theoretical terms with which we “do” science by showing, with the aid of the technical device of the Ramsey (1931) sentence, to what items with which we are already acquainted the abstract scientific terms ultimately refer. This is a procedure for the disambiguation of the reference of any non-mental term. To repeat: the Russellian theory of descriptions is not a theory of the meaning of theoretical terms at all, but rather only a theory of their reference. Their conceptual import—meaning, sense, intension are cognates—is something else entirely. Our cognizance of qualia is knowledge by description (naming, classifying, describing, theorizing about, etc.), and not acquaintance. The having of acquaintance is not descriptive of it but rather a matter of timelessly enduring it. There is no sense of “knowledge” that pertains to the undergoing of phenomenal experience. Acquaintance per se is not knowledge: it is just acquaintance, nothing more, nothing less. All knowledge is theoretical, which is to say found only within description. This explains why there is no “direct” knowledge of reality even of our
own physical bodies and our own mental processes. All knowledge is always and inherently theoretical. What we regard as knowledge of which we are “directly” aware—e.g., seeing a blue expanse of sky as blue or as an expanse—is seen as blue and as sky only through our description. It does not matter that that description seems to be instantaneous (it is not, of course): there are no descriptions (let alone meanings) in awareness. Descriptions occur only in the relational contents of consciousness resulting from the memory of the nervous system. Our knowledge, of both the mental and non-mental realms, is always knowledge by description, and knowledge is never found in what we undergo in acquaintance. We add that knowledge when we describe the acquaintance.
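For readers unfamiliar with the technical device mentioned at the start of this section, the Ramsey sentence can be sketched in its standard form (the notation is generic, not Russell’s or Ramsey’s own): given a theory \(T(\tau_1,\ldots,\tau_n;\,o_1,\ldots,o_m)\) whose theoretical terms \(\tau_i\) occur alongside observational terms \(o_j\), one replaces every theoretical term with an existentially bound variable:

\[ \exists x_1 \cdots \exists x_n\; T(x_1,\ldots,x_n;\,o_1,\ldots,o_m). \]

The Ramsey sentence retains all the observational consequences of the original theory while asserting only that some entities stand in the stated structural relations. That is exactly why it fixes reference structurally without ever supplying meaning, which is the point at issue here.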
The Traditional Problem of Objectivity Is Backwards

The history of epistemology is in one sense a history of attempts to avoid the specter of phenomenalistic solipsism. The picture presented is that of each individual as an island separated from everything “else” by an impossibly vast epistemic sea that can never be crossed. Misunderstanding the gulf between acquaintance and description, many philosophers have assumed that the realm external to us is only an inference of our individual minds and therefore exists only in our minds. Berkeley, in his famous “esse est percipi,” is the classic example. And that Bishop of Cloyne could avoid total individual subjectivism and thus inescapable solipsism only by postulating “God is in the quad” as the epistemic guarantor of the existence of any objective reality. As the appendix chapters detail, traditional justificationist philosophy dealt with the situation as the problem of induction, as the quest for a foundation from which to justify our inferences, both commonsensical and scientific. But this quest, as Hume pointed out after Berkeley, is not something that can ever be “justified,” and hence, our knowledge really is “mere” animal belief. Ignoring the reality of Hume’s devastating skeptical argument, philosophers ever since have defined knowledge as “justified true belief” and proceeded to “square the circle” by attempts to justify inductive behavior. This is how traditional philosophy long ago degenerated into the mere scholastic exercise of constructing Ptolemaic
epicycles (postulates of inference and “inductive” logic) as substitutes for God. But what of science? Physicists—good ones—are not immune to the Berkeley-inspired interpretation of subjective isolation. Here is a clear example: I began with a naïve realist outlook and never thought about how our senses, our brains, and our language affect what we tacitly accept as “out there” in the world. Years later I read the essay by the physicist Max Born (1969), Symbols and Reality, and I recalled that while reading Pearson’s Grammar I had experienced the same shock that Born describes in his essay: “Thus it dawned upon me that fundamentally everything is subjective, everything without exception. That was a shock.” Born went on to point out that: “Symbols are the carriers of communication between individuals and thus decisive for the possibility of objective knowledge”. The physicist’s concept of “objective knowledge” means only that knowledge that appears the same for all conceivable observers, as tested by the invariance and symmetries of the symbolic expressions of laws. (Pattee, 2012, p. 6)
Are we in fact these subjective “islands in the stream” (to borrow Hemingway’s phrase) or does that view depend upon mischaracterizing the nature of subjectivity and attributing knowledge to acquaintance? The traditional approach to philosophy can be replaced by moving to non-justificational epistemology, which abandons justified true belief as the definition of knowledge for the conception of knowledge as inherently fallible and non-justifiable conjectures held in check by attempts at refutations. Elaborating on the (still justificationist) work of Karl Popper (1945, 1959), Bartley (1984) and I (1979) developed non-justificational metatheories of rationality and inquiry. By abandoning any attempt at justification we moved discussion past the traditional sterile quest for certain methodology to the quest for an explanatorily adequate evolutionary epistemology. What can we now say about the subjective versus objective dualism? From the standpoint of epistemology, when we move beyond the having of acquaintance (which is in fact totally tenseless and subjective in the sense of unique to the individual in the specious present moment) to the
language of description—which is always the case in any natural human language and all known cognition—in which we characterize that subjective realm, we have inevitably abandoned subjectivity for intersubjectivity. This must be so because language is inevitably social. There are no private languages. Any attempt to construct a private (i.e., totally uniquely subjective and available to one individual alone) language results in a code—like braille—rather than an actual language (Tolkien’s “Elvish” is a famous example). Codes transform or “encode” an already existent linguistic system. They do not constitute a new or unique language. Codes just encode something else. All our knowledge by description of the subjective is actually intersubjective. Intersubjectivity is the essential feature of objectivity. Thus subjective experience is interpreted, and when acknowledged at all, is actually objective and abstract rather than particular and concrete. Human conception never touches the raw experience in acquaintance. As Körner (1966) put it, the disconnection between theory and experience is total and complete. Experience is linked to theory only by postulation and idealization, which is necessary to remove all the “imperfections” of irreducible individuality. That postulation requires the construction of abstract idealizations (instances of nonempirical class inclusion to “freeze out” a transient instant from its indefinite background) to substitute for the actually unique and the totally subjective. The subjective realm can be described (can become known to be such) only by being objectified. All objectification is conceptual and transcends its alleged basis in the subjective and the unique. The objective nature of the subjective was first emphasized by Kant, and is beautifully explicated by neo-Kantian philosophers such as Cassirer (1923, 1957). Speaking of the “ingredients” of the act of perception, Cassirer designated this as an act of “symbolic ideation” and noted: This mode of ideation is no secondary and as it were accidental factor, by which vision is for the time being partly determined, but… from a psychological point of view, the symbolic ideation first constitutes vision. For there is no seeing and nothing visible which does not stand in some mode of spiritual vision, of ideation. A seeing and a thing-seen outside
of this “sight,” a “bare” sensation preceding all formation, is an empty abstraction. The “given” must always be taken in a definite aspect and so apprehended, for it is this aspect that first lends it meaning. (1957, p. 134)
Again, “spiritual” simply means functional or conceptual, not religious. Now, as an answer to the physicist’s sentiment noted above, here is Cassirer’s rebuttal to the claim that in natural science it: May seem meaningful and even necessary to let knowledge of the parts precede knowledge of the whole, to ground the reality of the whole in that of the parts. But this road is closed to the investigation of language, for the specifically linguistic meaning is an indivisible unity and an indivisible totality. It cannot be built up piece by piece from its components, from separate words--rather, the particular word presupposes the whole of the sentence and can only be interpreted and understood through it. If we now apply this point of view to the problem of perception--if we take the unity of linguistic meaning as our guide and model--we gain an entirely new picture of sensibility. We then recognize that the isolated “sensation,” like the isolated word, is a mere abstraction. (ibid., pp. 31–32)
Physicist Max Born was right about the intersubjective nature of scientific knowledge, and when it is understood that for us the only subjectivity in the universe is the specious present “raw” acquaintance we undergo, it is obvious that our individual models of reality, despite their location only in our heads, are exactly as objective as the “external” realm they attempt to portray. Remember, the knower can never be identical with that which is known. Knowledge, a functional and intentional concept, is (as Brentano clearly noted) about something external to any given act of knowing in a particular subject.
Excursus: The Chicken and Egg of Subjectivity and Objectivity

What is the relation in epistemology of the subjective realm to the objective realm? Is one the chicken and the other the prior egg? Most accounts
give priority for one or another reason to subjectivity over objectivity. The obvious point in favor of the idea that subjectivity is primary is the uniqueness of the individual qua subject, floating in an impersonal sea of others who are just objects to that given subject. This is the basis for the mind-body problem of selfhood: I perceive my self as the only subject in the whole universe—everything “else” is an object from my perspective qua subject of conceptual activity. But from the standpoint of objectivity (viewed from outside the particular individual self) all “subjects” (including my self) are equally objects. So our theories exist only in our brains and not in an objective location except insofar as symbols and meanings can be shared among subjects, as clearly they can be. This indicates that all objectification is dependent upon subjects sharing a framework (descriptive language), a very restricted type of “subjective” model that is common to all subjects by virtue of invariance and symmetry relations inherent in the symbols of the model. And “the relation of an agent’s internal subjective models to its external environment is the fundamental problem at all evolutionary levels” (Pattee, 2012, p. 16). To repeat, this point of view states that the objective is equally subjective. Against this is the exact opposite view from Cassirer (1923): that the subjective must be intrinsically objective. How can this be so? Because there is no such thing as a private language or a private symbol. There are no symbols that have meaning for only one subject and can never be communicated to anyone else. Any language can and must convey knowledge by description. Any description always presupposes thing-kind identification and idealization from momentary particulars, and this is always objective (a process of objectification). All identifications are trans-situational and trans-temporal due to the evolution of our nervous systems. Thus any language can communicate “objectively” to other individuals. We are not here concerned with what Quine called the radical indeterminacy of meaning usage between two individuals—that issue haunts both translation between languages and communication between any two speakers of the same language. We can communicate reasonably well without ever knowing whether we translate or see “the same” green in that leaf or “the same” rabbit essence in Quine’s made-up term “gavagai.” In all cases indeterminacy dissipates with increasing range of communication. We overcome
translation (again, actually meaning) indeterminacy as best we can by enlarging our communication basis (common vocabulary) in order to shrink the areas in which indeterminacy remains problematic. The meanings of our concepts become more and more determinate as they become more idealized and trans-empirical, which is to say, more fully objective. Cassirer and Körner made this point quite obvious and I presuppose their discussion on this topic. This effectively reverses the apparent primacy of subject over object. As Cassirer said in 1910: The problem is not how we go from the “subjective” to the “objective,” but how we go from “objective” to the “subjective.”… The “subjective” is not the self-evident, given starting-point out of which the world of objects is constructed by a speculative synthesis; but it is the result of an analysis and presupposes the permanence of experience and hence the validity of fixed relations between contents in general. (1923, pp. 278–279)
In sum, our existential predicament has always been objective: “The conditions and presuppositions of “objective” experience cannot be added as a supplement, after the subjective world of presentations has been completed, but they are already implied in its construction.… Without logical principles, which go beyond the content of given impressions, there is as little a consciousness of the ego as there is a consciousness of the object” (ibid., p. 295). This means that: “The thought of the ego is in no way more original and logically immediate than the thought of the object, since both arise together and can only develop in constant reciprocal relation. No content can be known and experienced as “subjective,” without being contrasted with another content which appears as objective” (ibid., p. 295). So there is no point in assigning primacy to subjectivity or to objectivity in any either-or fashion. As Cassirer noted a century ago, there is only one correct perspective here: both-and. This is a duality that comes into existence as such: subjects and objects arise in unison and cannot be understood as independently specifiable singularities. Subject and object are like the opposite sides of the same coin—one can never have just
one side without the other.1 Why has it taken the social studies so long to understand this? How has the absurd Comtean position of “social physics” held sway until now?
C: Clarifications of False Starts and Important Issues

Economics, as a praxeological discipline (so Mises argues) is concerned with “the a priori category of action [Mises, 1978, p. 41], not with the particular circumstances surrounding observable actions....” It is based on “aprioristic reasoning” [Mises, 1990, p. 29] about “choosing as such.”

Viktor J. Vanberg
Austrian Subjectivism Is a Misnomer and Often a Red Herring

The import of this discussion for the so-called Austrian or “subjective” theory of value is obvious. There is no essential subjectivity (nor any acquaintance) involved. Value is always interpersonal and objective, because there is no possibility of a unique language or value “system” being used by only one individual (even though we all have a unique perspective on the world at all times). Language—which embodies our knowledge by description—is always objective because it exists only as a social rather than an individual phenomenon. Methodological individualism is as “objective” as any procedure in physics. Economics must give up the misleading term “subjectivism” and use what is actually meant—individualism as a basis of action and valuation. Individual decision is never in opposition to “objective” determination.
Awareness of Our Own Internal Milieu

How do we “know” our internal-to-the-body phenomena, such as our aches and pains, our personal feelings, the emotions that we feel (and
often overwhelm us), and our tacit intuitions or intimations (perhaps best called hunches) that we feel we know with more perceived certainty and immediacy than we “know” the fields of physics or biology? Am I not directly and certainly aware of how I feel? Granting that the external distance senses are vicars (like the Anglican vicar who went off into the “distance” to bring back the word of God to the flock—the body) conveying knowledge to us from “outside” the body surface (our skin), am I not (as the immediately present subject or “body” who has knowledge) directly aware of my own somatic states, and isn’t that knowledge in acquaintance? Many philosophers (e.g., from Berkeley on; such as early to mid-twentieth-century writers like Wittgenstein and Malcolm) attempted to specify how such “private” episodes provide a certainty upon which one can evade the problem of justification of knowledge claims, but that is a derivative issue. Focus here only on the prior claim that awareness constitutes a unique source of knowledge. Against this “comfort food” conception of acquaintance for our internal states, it is necessary to repeat that there is no knowledge whatsoever in raw acquaintance. All internal states of “awareness” are available to us only as a result of applying descriptive categories, classifications according to thing-kind attribution, that are objective rather than subjective. As Cassirer noted, no content of consciousness can be known to be “subjective” without being classified as such by an objective conceptual scheme. The objective realm is not a supplement added to the phenomenal presentation in “primary” subjective awareness; it comes into existence with it. This dualism is inescapable. These concepts arise in unison, and there is no point in assigning primacy to subjective awareness or to descriptive objectivity. They both begin to exist at once, in the language of description, and the “source of knowledge” issue is a misunderstanding that can only occur if that is overlooked. It has been a prominent issue only because of the prevalence of the justificationist metatheory of knowledge and rationality (see the appendix chapters). Why were we concerned with the “source” of knowledge in the first place? Because of the pervasive presence of the justificationist metatheory. From the framework of traditional philosophy with its search for apodictic truth, there had to be a genuine or certified source of knowledge claims—the indubitable deliverances of sense experience (empiricism or
sensationalism in some form), “infallible” rational intuition (for rationalism), or something that could be taken to be a foundation for knowledge. Once one understands that there is no need for any such foundation it becomes clear that almost anything can be a “source” of knowledge. It does not matter whether we start with hunches or vague intuitions or meter readings or even an article in the New York Times. Where we start is of no concern at all. Knowledge isn’t based upon foundations or beginnings but rather upon our theories and their correspondence to reality—our guesses and their adequacy. Assessment of knowledge is not dependent upon its source at all. Knowledge emerges only from the testing of theoretical propositions, comparing what they say must be the case to what is actually found in the world around us.
Is “Silent Consciousness” of Epistemic Importance?

One can be aware without subvocal speech. Many oriental traditions of contemplation propose a state of consciousness that is “silent,” i.e., without reported contents. In the Western psychological tradition, phenomena such as the ganzfeld (a uniform and featureless visual field such as dense fog or mist), semantic satiation (the loss of determinate meaning for a word if it is repeated over and over), stabilized retinal imagery that disappears, “overpracticed” motor skills that are no longer conscious, and what William James called the “tip of the tongue” phenomenon all seem to be related to this state of being conscious, but not conscious “of.” So in some sense it appears to be possible to be conscious without a particular referent being required. Various mystical traditions have assumed that such a state is somehow in a more direct contact with reality, somehow a direct awareness of existence. As such, many contemplative traditions attempt to achieve this state as a desirable goal. While probably relaxing, undescribed consciousness without explicit reference or descriptive language would still be a matter of acquaintance—that which we undergo or experience. As such it is of no epistemic import until it is “contemplated” in the language of description.
At that point it could become an object of knowledge or of knowledge acquisition. There is no need to dismiss the existence of such phenomena, but one must recognize that they are not in themselves of epistemic import. What we call knowledge requires the interpersonal and trans-situational language of description for its formulation and meaning. If mystical intuition is to be epistemic, it must be cast in the language of description.
Excursus: Chance, Constraint, Choice, Control, Contingency

As denizens of the rate-independent realm we are apt to retreat into some sort of Platonic Third World conception of knowledge (and perhaps reality). From such a perspective wholly within the conceptual realm, we seek determinism (it exists only in this realm) in the search for laws of nature to explain reality. We assume that necessity (determinism) has only one alternative: random behavior or chance. Unfortunately, chance is not an explanatory concept: it only means that we are ignorant of the factors involved. Apparently its use stems from the “games of chance” or gambling, where outcomes were not reliably predictable due to a lack of knowledge of the full situations involved. Participants were said to “take a chance.” Thus chance is a term for our ignorance, which is to say, our lack of knowledge. As such, chance never causes or explains any physical or mental event, and is included in scientific discussion only by those who misspeak. It is unfortunate that the term is often employed in discussions of quantum phenomena, where it perpetuates the “subjectivist” or somehow nonobjective interpretation of results. Constraint is used in physical theory when things are excluded from happening. Such constraints are physical, not conceptual. A common sense example is a closed door—one is constrained to remain on one side unless that constraint is removed (by opening the door). Such constraints could be part of initial conditions specified by a researcher in a particular “experimental” situation. All constraints are passive and are present in advance of measurement or interpretation. They are the effects or results of dynamic lawfulness operating in a given context or set of initial and/or boundary conditions. Constraints are negative prohibitions of particular
classes of events (you can’t walk through a brick wall). Constraints in themselves are never positive prescriptions of what must occur, although they exclude possible states from occurring. What is choice? Everyone makes choices, so why discuss this? Because choice requires the subject-object distinction: there must be an agent to make a choice between physically realizable outcomes. Choice is not a physical notion—it is instead functional and conceptual or formal, and thus independent of the laws of nature. This is the most important difference between constraints and choices. Necessitarian thinkers do not understand this distinction, but if our “choices” (free will) are determined by physical necessity they are part of the lawful dynamics rather than being independent of the determinate laws of nature.
Rate-Independent Formal Concepts Are Not Objects of the Laws of Nature

Choices do not determine (constrain) physical dynamics: choices control them. Agency harnesses inexorability. Nothing else can or does. There is no teleology in physical constraints, even in complex cases of “self-ordering.” In contrast, choice control is cybernetic: it steers events (functionally, intentionally) toward desired (by an agent) outcomes when genuine alternative possibilities exist. Choice is impossible without genuine alternative possibilities—situations underdetermined by physical laws. Choice turns the possible into the realized actual. At the moment of choice, this “unconstrained” procedure then becomes determinate, but never before. Unlike probabilistic bit combinations (Shannon), our conscious choice contingency steers events toward an agent’s desired functionality. Choice contingency, while it is physically underdetermined or free: Becomes a form of determinism at the moment of choice. Decision theory is not subject to direct statistical quantification. No standard unit of measure is possible for pragmatic individual decision-node selections. Each decision-node choice commitment is unique, especially with relation to other coordinated decision-node choice commitments. Each is made with deliberate cognitive intent. (Abel, 2010, p. 18)
Contingency occurs when events can have multiple outcomes because those outcomes are not prohibited by the local constraints (specified in initial and boundary conditions) in combination with applicable lawful inexorability. As Polanyi noted, agency creates choices which constrain the application of the inexorable laws of nature but can never contradict or change those laws. There are at least two kinds of contingency: chance and choice contingency. So-called chance contingency refers to the probability of any one particular event or occurrence being very low, so that the result, while contingent, is a rare or unlikely event. Chance-contingent events are not known to be the result of choice, and they are not the result of known constraints (both of which are determinate rather than improbable). Choice contingency is deterministic—like billiard ball causation—because an agent makes a choice in formally absolute fashion in the rate-independent domain of conceptual thought. Subjects act “lawfully” in accordance with the laws of nature by freely choosing (determining) among equally available (not inexorably determined) physical alternatives. This is how life harnesses inexorability. We (our nervous systems) choose to act upon the available physical domain in cognition and behavior. We determine physical results (behaviorally, movement of our bodies) by choice of a physically realizable path or trajectory. The upshot of this is that “chance” should be dropped from explanatory discourse, and replaced by an admission of ignorance of the factors actually involved. So-called chance contingency should be replaced by admission of statistical or probabilistic “determination” of the events in question. Control must be restricted to the conceptual-formal-functional realm where an agent, someone or something that functions as a subject of conceptual activity, is responsible. Control, like choice, is not a physical concept and cannot be “explained” by any conceivable physical theory. To repeat, agency harnesses inexorability—subjects make things happen in accordance with their intentionality and in conjunction with the laws of nature. The domains that study choice and control, such as cybernetics, economics, cognitive psychology, and language use, can never be purely physicalistic. They are always harnessed physicality. Abel (2010) put this beautifully2: someone cannot argue for a “Purely
materialistic perspective without violating materialism’s most fundamental premise. The defense of materialism is itself abstract, conceptual, choice-contingent, formal, and nonphysical. Materialism/Naturalism is a metaphysical faith system. It is not only a philosophic formalism, but it is an exercise in self-contradiction” (p. 20).
D: Knowledge Depends Upon the Functional Choices of Nervous Systems

Representation of the existing situation in fact cannot be separated from, and has no significance apart from, the representation of the consequences to which it is likely to lead. Even on a pre-conscious level the organism must live as much in a world of expectation as in a world of “fact”, and most responses to a given stimulus are probably determined only via fairly complex processes of “trying out” on the model the effects to be expected from alternative courses of action.

F. A. Hayek
How does the representation of structural properties of non-mental objects allow us to come to know, to anticipate, the complexity of our world? To understand that we must see how all knowing is functional (rather than purely physical), and what that implies about the semantic (actually the entire semiotic) domain. Knowing is the representation of functional consequences.
Boundary Conditions Harness the Laws of Nature

Michael Polanyi (in 1968, reprinted in his 1969) was the first to emphasize the role of boundary conditions as harnessing the laws of nature when we observe and “experiment” upon nature. A boundary condition is always extraneous to the process which it delimits and harnesses. Consider this:
In Galileo’s experiments on balls rolling down a slope, the angle of the slope was not derived from the laws of mechanics, but was chosen by Galileo. And as this choice of slopes was extraneous to the laws of mechanics, so are the shape and manufacture of test tubes extraneous to the laws of chemistry. The same thing holds for machine-like boundaries; their structure cannot be defined in terms of the laws which they harness. Nor can a vocabulary determine the context of a text, and so on. (Polanyi, 1969, p. 227)
Polanyi emphasized that this places a system under dual control: it relies on the operations of its higher principles to constrain the working of the lower level (such as the laws of physics and chemistry). The higher principles are additional to the physical laws of nature, and can never be reduced to them (as Longo and Kauffman said of biological-evolutionary enablements: see Chapter 4). The key point is that such hierarchical or layered systems exhibit dual control, which is made possible by the fact that the principles governing the lower or physical level leave totally indeterminate, and thus undeterminable, the range of conditions to be controlled by the higher order principles. Consider examples from Polanyi: “You cannot derive a vocabulary from phonetics; you cannot derive grammar from a vocabulary; a correct use of grammar does not account for good style; and a good style does not supply the content of a piece of prose” (ibid., p. 233).
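A minimal worked version of the Galileo example may make the dual control point concrete (textbook mechanics; the particular angles are illustrative, not Polanyi’s). For a body descending a frictionless slope inclined at angle \(\theta\),

\[ a = g\sin\theta \]

(a rolling ball merely introduces a constant factor). The law of mechanics fixes the relation between the acceleration \(a\) and the angle \(\theta\), but nothing in mechanics determines \(\theta\) itself. Choosing a slope of 10 degrees rather than 30 degrees is the experimenter’s extraneous boundary condition, and it is precisely that choice which harnesses the law to yield one observable outcome rather than another.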
Initial Conditions and Boundary Conditions

Let us emphasize a distinction at this point between initial conditions and boundary conditions. Initial conditions depend upon consciousness, or at least upon deliberate and meaningful choice by agents. Applying physical theory to reality requires that an observer—a cognizing subject—makes decisions. This active choice upon the part of the scientist determines the initial conditions (and thus an initial or starting-point state description) for a physical ensemble, and constrains what the “experimental results” can be. In the common usage of physicists, boundary conditions are not choices on the part of a subject. They are constraints that are the result of our “chance” location in the space-time manifold,
and the unique set of particulars that happen to be there. Boundary conditions do not depend upon conscious cognition—they are imposed upon us or imposed upon non-cognitive events by accident of the local state of the universe. Boundary conditions are frozen accidents in the unfolding of the universe. For example, the origin of our solar system depends upon unique boundary constraints found here at our local star and the inexorable “laws of nature” exhibited through its evolved development. But without conscious cognition to make a choice of how to apply those laws we could never know that the solar system was in fact lawful. In similar fashion, the origin of life depends upon the context of constraint in which the most primitive cells and self-replication of cellular structures evolved. So what we too often call chance, not choice, determines the information structure in the biological realm. When we get to the cognitive realm this “happenstance” is an ever-present background phenomenon, but choice is the prime motivator that we focus upon, often to our detriment in not recognizing the role of the unknown or “accidental” that, without any inexorability, literally did “just happen.” Information becomes meaningful to us only when alternatives exist which must be distinguished. If things could have been otherwise, a choice must be made to distinguish what path to take. If things could not have been otherwise (there was no possibility of choice), a boundary constraint is present. In this context, information is meaningful to the organism involved when choice first appears. Initial conditions are always chosen for some functional purpose by an agent—a given researcher or organism. Initial conditions are constraints of a special, functional, type, not just physical boundaries. Choice contingency underlies all meaning and knowledge acquisition. The lowest level of neural functioning that must involve choice and thus be meaningful (in organisms with a CNS) is the orienting response to novelty. This is the point at which the nervous system simultaneously first makes a choice and becomes an agent. It is the transition point from control by biological boundary conditions and apparent “chance” constraint to control by choice constraint and (at least proto-) agency, and in physical theory, it is the beginning of the separation of boundary conditions from initial conditions. The orienting response begins the
semiotic informational context of constraint in which all higher cognition is embedded. Below this level (if there is anything below this level in living chordate systems), there would be only “chance” or accidental (background pattern) neural activity.
Information Structures Are Constraints, but Not Just Boundary Conditions

While a necessary physical condition is that all informational vehicles (signs, symbols, or whatever) are boundary conditions or constraints acting upon a law-governed local system, it is better to separate them out from other constraints such as a subject’s physical location, a mountain range, a table that must be walked around, or computer hardware, if for no other reason than clarity in conceptual analysis. Consider a dependency hierarchy of levels of biological and cognitive information. Most basic is syntactic structuring, as found in Shannon’s (1948) theory of communication. A second dependent level is found in biological inheritance, as heritable information evolved and stored in heritable structures in groups of organisms. A third, dependent upon heritable information, is learned cognitive information stored in individual nervous systems. A fourth level, crucial to science, is (scaled) measured information in the context of discovering the “laws” of nature. While obvious, and obvious as boundary conditions, it is necessary to distinguish these levels from each other and from other boundary conditions. They provide one instance, or specification, of what constitutes emergence.
Physical Information (Differences or Bits) Does Not Explain Meaning

There is a dogma in contemporary physics that “information” (usually specified in terms of Shannon communication theory differences or bits) is all there is to worry about (we have seen this earlier, in discussion of the views of Wheeler and his students concerning the “it from bit” hypothesis). For example, Pattee (2013, p. 11) said “as a physicist I believe information is a fundamental primitive concept, and all semiotic
concepts are forms of information.” And Warren Weaver (1949), while noting that the Shannon and Weaver communication theory was “disappointing” because it has “nothing to do with meaning” (see p. 14), also claimed that in all “likelihood” it could somehow be “extended” to include crucial functional issues of communication such as meaning and communication effectiveness. Physics has been running on this “hope” or “likelihood” ever since, without ever even beginning to unify meaning with information or entropy. In contrast, biologists and biosemioticians have consistently denied that information is unambiguous enough as an explanatory concept to address the problems of life and cognition. For example, Hoffmeyer (2008, see p. 61) said an up-to-date biology must acknowledge that the biochemical concept of information is too impoverished to be of any explanatory use. For such theorists, the concept of information is simply a vague metaphor instead of a precise explanatory concept. Put in the context of downward causation of the behavior of organisms, Campbell clearly noted that questions about the function of one level are always and intrinsically questions about the selective system operating at the next higher level of constraint. That does not appear in any discussion of information as bits. Bits don’t have levels of function or constraint. If bits are a fundamental or primitive concept in physical theory, there cannot be “levels” of a primitive. Can one presume that physics, so successful in other regards, can be trusted to “pull the rabbit of meaning out of the hat”? To the extent that there is genuine emergence—novelty in the universe due to the presence of life and the development of cognition—the answer is no: the physical conception of information is not adequate to address (in any explanatory fashion) the problems of the functional domain, all of which involve meaning and are thus in a different dimension from the space-time dimensions of physics. There simply is no meaning at all in the concept of information as it is used in physics. The bits of communication theory specify differences—syntactic chunks, if you will—not meanings as semantic content. Syntactic chunks are not the essence of meaning—they are only a means of or for the expression of meaning. Information becomes meaningful only in the context of conceptual frameworks shared by members of the species in which things could have been otherwise (i.e., energy degeneracy considerations apply).
And while we certainly do differentiate meanings by drawing distinctions (making choice determinations), meanings are not just differences; they are predications or attributions to something rather than relations between bits in a communication channel. These predications need to be determined by making the distinction between surface structure and the deep conceptual structure underlying it, and by realizing that meaning always depends upon historical context (in the case of an ambiguous utterance, looking back over the derivational history of the utterance in order to disambiguate—determine the meaning of—exactly the same physical bits that constitute a deep-structurally ambiguous sentence, or any form of behavior). It will not do for the physicist to reply that when one takes into account the brain processes involved, there must then be differences in the “interpretation” of one differentiated meaning from another. While that is true, it is beyond the capacity of the communication theory concept of information to address that issue. Meaning is emergent with respect to communication. No physical law is supposed to be dependent upon a subject of conception looking at the derivational history of that law and “interpreting” it one way or another. The physics-communication theory of information presupposes meaning on the part of the sender and equally on the part of the receiver of “information,” and that means that “physics” has to become an accordion word, stretching to include what Meehl and Sellars (1956) called everything in the space-time manifold instead of the completed physical theory adequate to explain the universe prior to the emergence of life.
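The claim that bits register differences rather than meanings can be made concrete with a small sketch (Python; the helper function and example strings are hypothetical, written only for demonstration). Shannon’s per-symbol measure is computed from symbol frequencies alone, so two sentences with identical symbol statistics but opposite meanings are indistinguishable to it:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy estimated from the message's own symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Character-for-character anagrams: identical symbol statistics, hence an
# identical bit measure, yet the predications (who bites whom) are opposite.
print(entropy_bits_per_symbol("dog bites man"))
print(entropy_bits_per_symbol("man bites dog"))  # prints the same value
```

Nothing in the computation touches predication or derivational history; whatever disambiguates the two sentences must come from outside the measure.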
Functionality Is Fundamentally Ambiguous Until Its Derivational History Is Specified

Functionality faces the problem of physical ambiguity. Despite the fact that we can be fairly precise in terms of semantic specification of functions, it is never possible to specify a precise set of physical movements that are inevitable co-occurrence relationships to those functions. There are, as the old saying goes, different ways to skin the cat, all of which involve totally different physical realizations of getting the job done. We
blithely use terms like “getting the job done” in all functional specifications in order to acknowledge the underdetermination of functions in terms of physical specification. Any given realization of function provides a particular set of co-occurrence relations between the functional specification in the rate-independent realm of conception/cognition and the physical embodiment or manifestation in the rate-dependent realm of dynamical theory. It does not matter that we can in fact succeed (FAPP, “for all practical purposes,” as J. S. Bell, 2004, called it in physics) in specifying a suitably precise specification of physical movements to pick out the functionality and “teleology” for delimited specified concepts—such as a particular predator hunting a particular prey in a delimited environment, as Hayek (in Vanberg, 2017) did—because we could never specify all possible physical realizations of such an instance of functional behavior. There is an infinitude of realizations of being a predator, the function of hunting, and being an object of prey. The only way that that ambiguity can be reduced is by theories that look back over the developmental history of the unfolding of the functional specification. Here we need to remember Donald Campbell’s (1974a, 1974b) admonition with respect to downward causation (which is exactly what such looking back over history involves)—that the account will not be complete until we fill in all the relevant layers and levels from the highest, most generic and abstract functional specification down through a physical-level realization of the behaviors involved at the level of neurophysiology and muscle movement, and follow that down to the level of biological molecules, then atoms, etc. The task of specifying what constitutes human action is not one to be taken lightly (Weimer, 2021, 2022). Or, as we now must detail in the next chapter, taken as given a priori. The apodictic or a priori strategy we mentioned in the Introduction to avoid “social physics” confuses the fundamental distinction between functionality and physicality (absolutely essential and always tenable) with the difference between formal certainty and empirical falsifiability (equally essential and correct), to the detriment of both necessary distinctions.
Old Wine in Better Bottles

The chapter title and a few introductory remarks mentioned the “mind-body” problem (or problems). After that, such terms disappeared. They no longer appear to be either informative or necessary for understanding current issues. The growth of our knowledge has retained and reformulated basic issues, but in so doing has moved away from the original connotations of the “mental” and the “physical.” In the first half of the twentieth century, Bertrand Russell often noted that things had changed: what we had assumed matter to be had been abandoned, and thus matter was effectively “dematerialized.” This old joke his family had told him had become even more ironic: What is matter? Answer: Never mind. What is mind? Answer: No matter.
Russell himself had dematerialized "matter" with structural realism, throwing away its sensible or palpable qualities by relocating them in our perceptual-conceptual apparatus, leaving its intrinsic or "first order" properties unknown and unknowable. In similar fashion the "mind" has, like phlogiston given off by burning substances, disappeared into the problems of the origin of life from physicality and the manner in which, as Polanyi said, life harnesses physicality. But what that involves most certainly still "matters." Issues such as the nature of meaning and the epistemic aspects of cognition and agency still abound, but the important distinctions for the future to explore are now the dualisms or complementarities we discuss below in Parts III and IV of this book.
Notes

1. The issue of the autonomy of knowledge should be raised at this point. Does knowledge exist in some abstract and disembodied Platonic realm of Ideas, in a third world (World 3) independent of the purely physical (World 1) and the psychological realm of cognition (World 2)? Popper (1972, and subsequent writings) attempted to locate the "objective" aspect of knowledge in a disembodied and eternal Platonic realm of ideas somehow floating
around independently of the existence of any and all knowing subjects (of cognition). His motivation for doing so was to escape the obviously superior evolutionary descriptive account of knowledge and the growth of science stemming from Polanyi (emphasizing the tacit dimension) and Kuhn (emphasizing the seemingly uncritical nature of normal science puzzle solving) in order to retain his "critical rationalist" prescriptive account (examined in the appendix chapters). But from the independence of knowledge claims from any particular subject, one cannot jump to knowledge that is independent of all subjects. The correct claim is that the (logical and semantic) contents of claims are independent of a merely psychological basis. As Cassirer (1953) noted,

Classical writers all incline to the same thought, that has found paradoxical expression in Bolzano's conception of a realm of 'propositions and truths in themselves.' The 'subsistence' of truths is logically independent of the fact of their being thought.… No matter whether these acts [of thought] are different in different individuals or whether they are uniform and have constant properties—in no case is it these properties, that we mean, when we speak [of the objects of geometry, of lines, surfaces and angles]. (p. 312)

Discussing mathematics as a pure syntactic system, Cassirer noted "that world [Popper's World 3] is not the product but the object, not the creature but the quarry of thought, the entities composing it—propositions, for example—being no more identical with thinking them than wine is identical with the drinking of it" (p. 313). But Cassirer (as well as Weyl and other neo-Kantians) never asserted that the realm of abstract logical contents of cognition implied that epistemology did not entail a "knowing" subject. As Cassirer noted, "these operations [of cognition] can indeed never blend indistinguishably into what is known by them; the laws of the known are not the same as those of knowing" (p. 314). All knowing, since it depends upon description, is objective from the start: "We do not know 'objects' as if they were already independently determined and given as objects—but we know objectively, by producing certain limitations and by fixating certain permanent elements and connections within the uniform flow of experience" (p. 303). Popper began by arguing that the autonomous realm of knowledge is independent of physical laws and physical necessity. It is certainly true that no physical laws place restrictions on symbols. But can one
therefore jump to the conclusion that there is a real realm of symbols existing independent of their embodiment? The problem here is that no symbolic operation or symbol "vehicle" or sign is independent of the lawful dynamics of physics. Physical "law" constrains any material realization or embodiment of information. That is why biology speaks of information structures in living systems and in the psychology of cognition. As Pattee (2012) noted, there are four indispensable physical conditions that must be satisfied for symbolic descriptions to constitute functioning informational constraints in living systems. First, memory or storage structures that are relatively time- or rate-independent must exist and must be non-holonomic with respect to the symbols they control; second, thermodynamic energy dissipation must ensure that informational constraints have stable but equivalent energy configurations; third, they must be energy degenerate according to the second law of thermodynamics; fourth, they must exhibit rate independence: they can have no essential physical law-like rate dependence. These conditions allow symbols and physical constructions to form a self-controlled unit, whose unity Pattee called semiotic closure: "Without this semiotic closure that includes the grounding of symbols in the material world, all our symbol systems, all languages, all mathematics and formal systems could appear to exist outside the physical world as if they were pure Platonic forms" (Pattee, 2006, p. 225). So pure Platonic forms could not come into existence by any biological means, and if they existed by some other means, they could never be known to exist by biological organisms, which require both informational structures and some physical embodiment for their existence.

2. Abel is often dismissed (usually in unrefereed blog and "opinion" posts on the internet) as a creationist, and therefore as one who "must be" antiscientific and disreputable. However, the distinction between the physical and the functional (or the physical and the formal) has nothing whatever to do with the endorsement of any religion, or of any "unscientific" point of view. And it could not possibly matter what Abel's (or anyone else's) views or beliefs outside of scientific inquiry are, or indeed what their "source" would be. The whole idea of requiring impeccable "sources" for knowledge claims is part and parcel of the justificationist metatheory of knowledge and rationality (criticized in the appendix chapters), and is of no merit whatsoever in an evolutionary and non-justificational framework. Indeed, if we were to dismiss out of hand all positions stemming from purportedly "unscientific" influences, or all scientists who hold strong religious or
metaphysical beliefs, we should have to dismiss the views of Isaac Newton (who wrote more pages on alchemy and theology than on physics), Sir Charles S. Sherrington, Wilder Penfield, and Sir John Carew Eccles (to name only a few religious believers who allowed their faith to guide their neurophysiological research), philosopher of science and methodologist Pierre Duhem (a devout Catholic), and should question even Henri Poincaré, who was raised Catholic and then violently repudiated Catholicism. And what of the Belgian priest Georges Lemaître, a devout Catholic who first proposed the theory of the expanding universe? This irrational and decidedly unscientific controversy between theists and atheists—whether in ordinary life or in science—is a classic case of ignorance of the epistemological constraints upon ontological speculation. It is clear that both theists (creationists, or whatever) and atheists ("hard" scientists, molecular biologists, whatever) have created a tempest in a teapot, because both positions are untenable. Neither the theist nor the atheist can defend their totally nonempirical and metaphysical claims against the other. There is no evidence that can be mustered in favor of one over the other: both are haunted universe metaphysical doctrines, confirmable and influential metaphysics (as Popper called such views), that are not empirical claims at all (because no empirical phenomenon can ever refute them) and therefore are of no use as empirical science. These doctrines are confirmable (compatible with everything) but not refutable (they are incompatible with nothing). Both positions can be belief systems that may guide a researcher, but that is all—they can be no more than a source of ideas. They are not knowledge claims in themselves, and they cannot entail any ontological position whatever. The theist is "sure" or "certain" that a creator (or some manner of powerful being or thing) exists and is "responsible" for (i.e., causal of) life, and the atheist is equally "sure" or "certain" that there is no such thing, and therefore that nothing is "responsible" for life. The only tenable or "scientific" position here is agnosticism—we have no knowledge to decide between these nonempirical positions, or even to decide whether there is a genuine issue involved. Metaphysics alone is not science—testability is required sooner or later.
References

Abel, D. L. (2010). Constraints Versus Controls. The Open Cybernetics & Systemics Journal, 4, 14–27.
Bartley, W. W., III. (1984). The Retreat to Commitment. Open Court.
Bell, J. S. (2004). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press.
Born, M. (1969). Symbol and Reality. In Physics in My Generation (pp. 132–146). Springer.
Campbell, D. T. (1974a). "Downward Causation" in Hierarchically Organized Biological Systems. In F. J. Ayala & T. Dobzhansky (Eds.), Studies in the Philosophy of Biology. Macmillan.
Campbell, D. T. (1974b). Evolutionary Epistemology. In P. A. Schilpp (Ed.), The Philosophy of Karl Popper (pp. 413–463). Open Court.
Cassirer, E. (1923). Substance and Function, and Einstein's Theory of Relativity. Open Court.
Cassirer, E. (1953). The Philosophy of Symbolic Forms (Vol. 1). Yale University Press.
Cassirer, E. (1957). The Philosophy of Symbolic Forms (Vol. 3). Yale University Press.
Eccles, J. C. (1976). Brain and Free Will. In G. G. Globus, G. Maxwell, & I. Savodnik (Eds.), Consciousness and the Brain (pp. 101–121). Plenum Press.
Galileo, G. (1960). Two Kinds of Properties. In A. Danto & S. Morgenbesser (Eds.), Philosophy of Science. Meridian Books. (Originally translated by A. Danto in Introduction to Contemporary Civilization in the West, Vol. 1. Columbia University Press, 1954.)
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Hayek, F. A. (2017). Within Systems and About Systems. In V. Vanberg (Ed.), The Sensory Order and Other Writings on the Foundations of Theoretical Psychology (pp. 1–26). University of Chicago Press.
Hoffmeyer, J. (2008). Biosemiotics. University of Scranton Press.
Körner, S. (1966). Experience and Theory. Humanities Press.
Meehl, P. E., & Sellars, W. (1956). The Concept of Emergence. In H. Feigl & M. Scriven (Eds.), Minnesota Studies in the Philosophy of Science (Vol. 1, pp. 239–252). University of Minnesota Press.
Mises, L. (1978). The Ultimate Foundation of Economic Science: An Essay on Method (2nd ed.). Sheed Andrews and McMeel.
Mises, L. (1990). Money, Method, and the Market Process (Essays selected by Margit von Mises; R. M. Ebeling, Ed.). Kluwer Academic Publishers.
Pattee, H. H. (2006). The Physics of Autonomous Biological Information. Biological Theory, 1(3), 224–226.
Pattee, H. H. (2012). Laws, Language and Life. Springer.
Pattee, H. H. (2013). Epistemic, Evolutionary, and Physical Conditions for Biological Information. Biosemiotics, 6(1), 9–31.
Polanyi, M. (1969). Knowing and Being. University of Chicago Press.
Popper, K. R. (1945/1962). The Open Society and Its Enemies (2 vols., Rev. ed.). Harper & Row.
Popper, K. R. (1959). The Logic of Scientific Discovery. Harper & Row.
Popper, K. R. (1972). Objective Knowledge: An Evolutionary Approach. Oxford University Press.
Ramsey, F. P. (1931). The Foundations of Mathematics and Other Logical Essays. Macmillan.
Schrödinger, E. (1956). Science and the Human Temperament. W. W. Norton and Co.
Sellars, W. (1963). Science, Perception and Reality. Routledge & Kegan Paul.
Shannon, C. E. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379–423, 623–656.
Weaver, W. (1949). The Mathematics of Communication. Scientific American, 181(1), 11–15.
Weimer, W. B. (2021). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century—Part 1. Cosmos + Taxis, 9(11+12), 1–29.
Weimer, W. B. (2022). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century—Part 2. Cosmos + Taxis (in press).
Part III There are Inescapable Dualisms
Traditional philosophy and traditional "hardheaded" approaches to science have been terrified of the possibility of dualism. Great lengths are gone to in order to maintain that everything in the universe is "physical," and that all scientific inquiry must use the same methodology to search for "laws" of nature. Given that contemporary physics contains positions based either upon a phenomenalistic mentalism or upon a plurality of realities (one interpretation of the Everett-Wheeler-DeWitt position), this becomes difficult to understand. Chapters 10 and 11 are not concerned with ontological dualisms, but rather with epistemological ones. Knowledge and its acquisition make indispensable use of a context of constraint consisting of incommensurable positions that, if not polar opposites, appear as nonreducible, usually orthogonal, views. We must summarize why observers can never be identical to that which is observed, why subjects (who, as knowers, make judgments and choices) are not just physical objects, how physicality can never be equated with functionality, how symbols and meanings can never be equated with signs, and more. While the dualisms are opposed, they are equally necessary in epistemology. They are complementarities in a context of constraint in which knowledge arises. They are the epistemic constraints upon the acquisition of knowledge, and
upon the postulation of what is real. We shall see that no epistemological monism can be tenable, but that claim is not incompatible with physical monism.
11 Complementarity in Science, Life, and Knowledge
To such a system [the human mind] the world must necessarily appear not as one but as two distinct realms which cannot be fully "reduced" to each other.
F. A. Hayek

Epistemology by its very meaning presupposes a separation of the world into the knower and the known or the controller and the controlled. That is, if we can speak of knowledge about something, then the knowledge representation, the knowledge vehicle, cannot be in the same category of what it is about.
Howard H. Pattee
Let us overview the import upon epistemology of unavoidable and indispensable dualisms. It is a precondition of life on earth, and of the existence of knowledge, that there be a separation of the purely physical from the domains and descriptions which apply to phenomena central to the origin of life and the origin of knowledge. Whatever is possessed by living organisms and subjects of conceptual activity requires
a duality of descriptions in order to be adequately understood. Knowledge inevitably involves semiosis, functionality and intentionality, and a separation of the epistemic subject, as the knower, from the object of knowledge, or that which is known. Similarly, life depends for its existence on the equivalent of the constraints that effect meaningful "software" programming in computers: upon semiotic programs which entail that the physical "hardware" is cybernetically steered or functionally controlled at all levels. Such functionality and choice control are fundamentally different from, and cannot be reduced to, the physicality of the nonmental world or of our bodies as physical or chemical structures. Choice or selection is physically inconceivable—the opposite of lawful inexorability. Epistemically, we have no "choice" at all here—we must have dual descriptions (for dual control systems) in operation or we have no knowledge, no awareness, and we would never even have been alive in the first place. These are constraints upon epistemology and what it can disclose, as they are equally constraints upon our very existence.
Observers and the Observed

Knowledge inevitably entails a distinction between a functional observer (as agent or subject) and that which is observed (the known). It is we who create knowledge. This requires that we acknowledge the logical and conceptual incompatibility (or irreducibility) of the different modes of description which we must employ to address every instance of knowledge gathering. As Hayek said, the world "necessarily" appears not as one but as two distinct realms.1 This distinction was forced upon physics when Newton made the categorical separation of laws of nature from initial or boundary conditions. Laws are universal and do not depend upon the state of the observer. Initial conditions apply to the state of a particular system and the state of the observer who measures them. Thus physical knowledge presupposes a separation of the universe into a knower and that which is known, or into a controller and that which is controlled. Two classes of descriptions result: on the one hand, the inexorable laws of nature that by their definition must apply every-where and every-when, and on the other hand, the
choices of subjects of conceptual activity who interpret boundary conditions so that we can do such things as measure and construct experiments in order to get knowledge of those laws. Our choices are higher order controls. There is no way to have knowledge without both. Inexorable laws are meaningless and inapplicable unless we can tie them down to reality by specifying initial and boundary conditions (i.e., by relating them to our particular human existential situation). And without regularity we are left with an inexplicable welter of momentary boundary conditions as frozen accidents of our (or the universe's) history, without any intelligible interpretation. We are forced to make what Pattee (2012) called an epistemic cut: a sharp separation between properties of the knower and the known. Michael Polanyi was the first to emphasize the crucial role of boundary conditions as harnessing the laws of nature. A boundary condition is always extraneous to—by definition outside of—the process which it delimits and harnesses or controls. This is the separation of the knower from the known, the controller from the controlled. Polanyi's insight was that this places a system under a form of dual control. Operations of what can be called "higher" or more abstract, intrinsically semantic and rate-independent, principles or rules act to constrain the workings of what can be called "lower" levels (the physico-chemical domain with its inexorable laws). In evolutionary biology, this is the process Campbell called downward causation, in which higher order functionality constrains the physical form and behavioral capability of living organisms. In psychology, it is the functional assignment of meaning to neural activity constraining the physical movement of the individual in its econiche. Those higher level or higher order constraints are semiotic—pragmatic and meaningful, and usually intentional—and are required in addition to the laws of nature. When life is present, semantic rules constrain physical laws. And the evolutionary context provides the ultimate pragmatic context for the semantics of life and all its "adaptive radiations" (as Mayr put it) in the time dimension of life. Thus Polanyi (1969) provided the first clear statement that life—as functionality—constrains, or harnesses, physical reality. All knowledge requires the presence of semantics (functionality, formality, intentionality, teleology, purpose, however one formulates it) to come into
existence—it is not just syntactic structuring. The synchronic physical realm alone can never constitute knowledge or produce any complex structure that has knowledge. Knowledge is diachronic—simply not a physical concept—it is functional or teleological in its intrinsic formulation. Knowledge exists only in and for subjects of conceptual activity. The beginnings of knowledge arose when the physical realm was supplemented by the emergence of an historical form of organization operating by higher order control constraints—when self-producing life, which requires semiotic control, came into existence. In a purely physical universe of inexorability and accident, with no life or meaning, there could be no knowledge. No subject would stand apart from that which was to be known, no semiotics could constrain physicality, no functionality could disambiguate syntactic movement, no regularity could ever be known to exist.
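Polanyi's dual control can be made concrete with a crude computational analogy of ours (not an example from the text; all names and parameter values are hypothetical): a rate-independent symbolic rule harnesses a rate-dependent dynamical process as a boundary condition, without ever violating the dynamics.

```python
# A minimal sketch (our analogy, not from the text) of dual control:
# the lower level is rate-dependent dynamics (Newtonian cooling), which
# runs inexorably; the upper level is a rate-independent symbolic rule
# (a setpoint) that harnesses those dynamics as a boundary condition
# without ever violating them.

SETPOINT = 20.0      # symbolic constraint: pure convention, no physics fixes it
COOLING_RATE = 0.1   # lower-level dynamical parameter
HEATER_POWER = 2.0
AMBIENT = 5.0
DT = 0.1

def physics_step(temp, heater_on):
    """Rate-dependent level: inexorable dynamics, the same law every step."""
    dtemp = -COOLING_RATE * (temp - AMBIENT) + (HEATER_POWER if heater_on else 0.0)
    return temp + dtemp * DT

def rule(temp):
    """Rate-independent level: an arbitrary rule, not a law of nature.
    It could have been otherwise (a different setpoint, a different rule)."""
    return temp < SETPOINT

temp = 5.0
for _ in range(1000):
    temp = physics_step(temp, heater_on=rule(temp))
print(round(temp, 1))  # hovers near the chosen setpoint, not near AMBIENT
```

Where the temperature ends up is fixed by the arbitrary symbolic rule, which could have been otherwise; yet every individual step obeys the cooling law. A description at either level alone is incomplete.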
Subjects Make Choices

Despite their physical substratum, subjects are never just objects. Subjects make choices, and choices require alternatives that can (at least in principle) be realized. Objects, identical and interchangeable, are controlled by the inexorable laws of nature, and thus never have alternatives. Inexorable constraint precludes the very possibility of the existence of alternatives, and thus prohibits choices. Because they have alternatives from which they can choose, subjects as agents have the possibility, the freedom, both to know and to make errors. There is no physical concept of error—it is a purely functional and abstract conceptual notion. This is also the case for the existence of novelty. Subjects, although they cannot violate the inexorable laws of nature, can, as functional agents, do new and unexpected (unpredictable) things. There could never be novelty in a universe totally controlled by inexorability. Ambiguity comes into existence with subjects (and only with subjects, because it requires conflicting or incompatible meanings or interpretations). Because subjects are creatures of choice, they can make choices that are not only sometimes (let's be frank: probably in most cases) erroneous but that also create indefinite and contradictory outcome possibilities,
with ambiguous interpretations. Choices do not lead to a deterministic finality. Rather, choice constraints lead to the possibility of more and more choices: choice determination leads to more and more possibilities for future action, not fewer and fewer necessary outcomes. Semantics and choice contingency (even though they are deterministic concepts in the rate-independent realm once choices have been made) lead to the possibility of indefinite creativity. Choice requires the presence of real alternatives—that things could have been otherwise. This denial of any necessitarian determinism is the point at which psychological or semantic information comes into existence: functional information becomes meaningful when choices must be made. The realm of conceptual activity makes meaning through choice—thereby attempting to delimit the realm of ambiguity. The fundamental task of the nervous system is to delimit ambiguity as much as possible. The most primitive functional activity of the integrated or centralized nervous system is the orienting response to novelty. This is the fundamental choice activity of the organism, when it is forced to choose—to judge, assess, or evaluate—a pattern of neural activity as meaningfully different from an ongoing level of background activity. Choices involve selecting or responding to particular patterns of neural activity that stand out from, or are somehow differentiated from, ongoing "maintenance" or background patterns. Any such activity is, as Hayek (1952) emphasized, a matter of classification and reclassification of patterns. Coming to know is reclassification of ongoing patterns of pre- and post-synaptic activity interacting with ongoing patterns of all-or-none spike potential activity. We are creatures who make things meaningful by creating and interpreting patterns. This is the basis of semantic meaning. Shannon or communication theory bits of "information" play no role in any process of knowledge generation or semantic meaning. Shannon information, as Shannon himself acknowledged, is an artificial concept developed for the restricted task of optimizing communication transmission. His theory was intended to help in removing as much noise as possible from the communication channel. Semantic meaning has its origin within the organism as interpretation of itself and its external environment. That predates the much later problem of transmitting information through (purely physical) systems of wires or light or
other particle pulses—even neural pulses. Shannon "meaning" (a unit or bit of uncertainty which, Shannon noted, has nothing to do with semantic meaning) mainly concerns eliminating noise from already existent meaning (possessed by a sender) that is being sent to a receiver who already knows how to understand it when it is received, even though it is meaningless physicality during the (often noisy) process of transmission. We also face problems of noise in the nervous system. But we have something else to worry about as well: living systems that have the possibility of choice can and do make errors. Choice implies uncertainty, and that in turn implies fallibility. We are fallible creatures who are very error-prone. We can and do (as every student has found out) make "mistrakes." That is why evolutionary epistemology emphasizes that knowledge acquisition depends upon error detection and (if possible) correction, and, also if possible, error elimination. Because we can choose wrongly, and do weave false theories and metaphysical haunted universe doctrines, we can only learn through weeding out error by constantly checking our conjectures against an independent reality. We have gotten to the point where, as Popper emphasized, we can now let our theories die in our stead. This is our evolutionary predicament.
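The purely syntactic character of Shannon information can be seen in a small sketch of ours (not the author's): per-symbol entropy depends only on symbol frequencies, so a meaningful sentence and a meaning-destroying shuffle of it carry exactly the same number of Shannon bits.

```python
# A minimal sketch (ours) of why Shannon information is purely
# syntactic. Per-symbol entropy H = -sum(p * log2(p)) depends only on
# symbol frequencies, so a sentence and a meaning-destroying shuffle of
# it carry exactly the same number of Shannon bits per symbol.
import math
import random
from collections import Counter

def entropy_bits(message):
    """Per-symbol Shannon entropy of a message, in bits."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sentence = "praising professors can be platitudinous"
shuffled = "".join(random.sample(sentence, len(sentence)))

# Identical bit content, despite one string being meaningful and the
# other meaningless to any reader:
assert abs(entropy_bits(sentence) - entropy_bits(shuffled)) < 1e-12
```

On the account given here, the semantic difference between the two strings exists only for an interpreting subject; it is invisible to the bit count.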
Life Began with Functional Instruction

Life is cybernetic and self-produced—it is in all cases functional, and because agency is present it is steered, and it always has been. Maintenance—the metabolism of even the simplest cell—is steered and regulated by instructions that prescribe (or program) what must occur.

Every aspect of metabolism within a single cell depends upon programmed instructions, the messaging of those instructions, and feed-back messaging about how well the initial messages were received and carried out.… Without programming and the bio-semiosis of those instructions, no progress could be made within any micelle, vesicle or proto-cell toward eventual life in a true cell. (Abel, 2011, pp. 148–149)
These are symbolization and coding systems acting as higher order functional constraints to control lower level, purely physical processes. They are far prior to our recent arrival, and to our extremely recent understanding of them: they predate our human existence by billions of years. That means that functionality, as instanced in the control, regulation, and integration of metabolism, predates human physical existence by well over 2 billion years. Those concepts are not products of our minds. Our minds are their evolved products. They have determined the course of our evolution. Life used symbol systems and made choices by utilizing linear strings that were, as Pattee (2012) argued, the first real language—the genetic language. All semantic/semiotic/bioengineering functions require dynamically inert (quiescent, as von Neumann said) physical symbol vehicles that represent time-independent, non-dynamic "meaning." "No empirical or rational basis exists for granting to physics or chemistry such non-dynamic capabilities of functional sequencing" (Abel, 2011, p. 153). The origin of this functionality—co-occurrent with the origin of life—is perhaps the greatest mystery that we can face. So far as epistemology is concerned, we need only grant that it does exist, and explore some of the consequences.
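The rule-like (rather than law-like) character of the genetic language can be illustrated with a minimal sketch of ours. The codon assignments below are a fragment of the standard genetic code; the variant code and the function names are hypothetical illustrations, not anything from the text.

```python
# A minimal sketch (ours) of the genetic "language" as a rule-governed
# symbol system. The codon-to-amino-acid mapping below is a small
# fragment of the standard genetic code; the point is that the mapping
# is an arbitrary rule maintained by the cell's translation machinery,
# not a law derivable from the physics or chemistry of the string itself.

STANDARD_CODE = {  # fragment of the standard code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UGG": "Trp", "UAA": "STOP",
}

def translate(mrna, code):
    """Read a linear symbol string three letters at a time under a code."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = code[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

message = "AUGUUUGGCUGGUAA"
print(translate(message, STANDARD_CODE))  # Met-Phe-Gly-Trp

# The same physical string under a different (hypothetical) code yields
# a different "meaning" -- nothing in the string's dynamics forbids this:
VARIANT_CODE = dict(STANDARD_CODE, UGG="STOP")
print(translate(message, VARIANT_CODE))   # Met-Phe-Gly
```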
Symbols and Meanings Are Rate-Independent

As von Neumann said, symbols are quiescent. They are not dynamic, even though their manifestation or expression requires a dynamical physical system. That fact immediately creates the problem of representation: how do physically specifiable tokens represent rate-independent functional symbols (or types) or instructions? This is not just a problem of reference: we know that certain physical entities are the tokens to which the abstract and conceptual (symbolic) instructions refer. Reference is taken for granted. The issue instead is how symbols "stand for" or "represent" physical or material operations. The answer is that they do so by employing rules. Rules (as opposed to laws) can control voluntary, or choice-contingent, behavior. Rules define formal systems. A formalism describes some particular functionality in abstract terms
according to rules of formation and transformation of the symbols. The formalism provides rules to concretize functionality into the physical realm. Thus Euclidean geometry as a formalism provides a rule set that allows us to concretize functionality for the shape of ordinary or three-dimensional spaces. Formalism in that sense is what Plato was after in his doctrine of Forms—abstract "essences" or generative rules that are definitive of general classes of entities that are always instantiated in particulars. Rules are arbitrary, unlike laws, which are necessary. Rules can be broken, and oftentimes changed in order to function better. Laws can never be broken. Being entirely arbitrary, rules require genuine choices in order to be followed. Laws are inexorable and thus do not allow the concept of choice to exist. Choices function through decisions that prescribe actions. Rate-independent choice functionality prescribes rate-dependent physicality (as Polanyi said, life harnesses physicality), but unlike that inexorable physicality it is always arbitrary, and because it is arbitrary it is both meaningful and possibly wrong. We are simply restating and elaborating Polanyi's insight—living functionality harnesses inanimate physicality. The formal systems that science and philosophy consciously or explicitly create and study are based upon choice contingency. Formalisms require arbitrary (but definitely not random) choices in order to be realized. Thus they are quite different from physical systems that are constrained by the laws of nature, where, since there are no choices, there are no meanings or functions, and no exceptions—everything is inevitable. Some major differences are noted in Table 11.1. This separation distinguishes between physicodynamics (the dynamical physical realm) and "traversing the cybernetic cut" (the domain of functionality). The table's columns illustrate the difference between physical constraint upon a system and functional choice control (which is prescriptive of behavior). Life and conceptual activity—including epistemology and all science—are equally formal phenomena (in the distinctions of this table) and are only incidentally physical in their particular realization. And this is the case even though the so-called physical sciences deal with an empirical (rate-dependent) subject matter that does not in itself seem to be formal. It is the formal aspects that define and provide the meaning of the concepts within life and cognition, not the fact that they are
Table 11.1 Differences between physicality (inexorable laws or physical boundary conditions) and, on the other side of an epistemic/cybernetic cut, the functional realm of choice control

Physicodynamics                       | Traversing the cybernetic cut
--------------------------------------|------------------------------------------
Physical                              | Nonphysical & formal
Incapable of making decisions         | Decision-node based
Constraint based                      | Control based
Natural-process based                 | Formal prescription based
Constraints just "happen"             | Constraints are deliberately chosen
Forced by law & Brownian movement     | Writes and voluntarily uses formal rules
Incapable of learning                 | Learns and instructs
Product of cause-and-effect chain     | Programmer produced
Determined by inflexible law          | Directed by choice with intent
Blind to practical function           | Makes functional things happen
Self-ordering physicodynamics         | Formally organizational
Chance and necessity                  | Optimization of genetic algorithms
No autonomy                           | Autonomy
Inanimacy cannot program algorithms   | Programs configurable switches
Oblivious to prescriptive information | Writes prescriptive information
Blind to efficiency                   | Managerially efficient
Non-creative                          | Creative
Values and pursues nothing            | Values and pursues utility

After Abel (2010)
embodied in one or another particular realization that is describable with the laws of nature (which laws themselves, incidentally, are not at all "natural" or "physical" but rather [rate-independent] conceptual and formal). The rate-independent formal realm provides the higher order context of constraints that, in harnessing the physical realm, makes life and cognition possible. Our "human" existential predicament is thus formal rather than merely physical. Small wonder, then, that all psychological phenomena have to have a rate-independent or formal-functional specification in addition to a concomitant rate-dependent physico-chemical (or biochemical) specification. Epistemology, for its very existence, requires both domains. All living action is inherently both formal and physical, and specification of only one domain (or the other) is inherently ambiguous and incomplete. Here we see the inevitably limited utility of computer programming metaphors for life and cognition. Even though they may provide a formal representation of such
phenomena, there is no guarantee whatever that such a formal representation is the one instantiated by life on this planet. Thus computer or computation-based models, even if correct, could provide only a partial understanding of life and cognition. We would also require the complementary description of the physical systems capable of realizing those rate-independent specifications. Cognition and mind are necessarily embodied. We have to have complementary descriptions of minds and their embodiments—to say nothing of the physical reality in which both are embedded.
Physicality Can Only Be Disambiguated—And Hence Understood—By Concomitant Functional Analysis

Understanding physical objects—those studied by physical theory—requires a duality of descriptions, because the laws of nature cannot interact with physical objects unless specific boundary and initial conditions are applied to tie them down to observable reality. The physical requires a knowing subject for its existence to be known as such. In the case of agency, in which a subject (and not a mere physical object) behaves, the situation is slightly different. One must know the functionality, the teleological or intentional "aboutness," of the agent in order to determine what any given physical "bits of behavior" represent or mean. Two different agents or individuals could make exactly the same physical movements (down to the subatomic or any other level of specification) and exhibit different functionally defined actions. The referential basis—the "colorless" physical movements themselves—is inherently ambiguous from the point of view of functionality: the same "behavior" can equally well instantiate very many different intentions, and thus different kinds or abstract classes of actions on the part of agents. Equally, a single functional intention can be realized in a literal infinitude of (physically specified) movements. This fundamental ambiguity of human action is inexplicable unless one understands that a duality of descriptions, with complementary theories in each domain—with the physical movement
specification on the one hand and the functional intention of action specified on the other—is absolutely required for disambiguation, and hence for understanding. We see this problem in physical theory with respect to the specification of record-keeping and measurement. These are purely functional concepts: functional specification on the part of an agent is required to determine that a measurement has in fact occurred, and that a record has in fact been made. Agency selects and freezes out particular records from an undifferentiated flux of events. If we try to analyze the physical movements or processes that an agent makes in these functional actions, we find, as Pattee emphasized, that the process of measurement, or the function of the record itself, disappears in the complete specification in terms of physical theory. These functions are nonlinear with respect to physical specification. Measurements, like nervous activity in classification, are a process of many-to-one mappings. In the case of the measurement, there is an infinitude of actual physical events involved. The functional process of measuring collapses that indefinite number of physical events into a frozen-out momentary measurement that exists only in the rate-independent realm of conception. The nervous system, in processing many inputs as instances of "novel" versus "not different from ongoing background expectation," does exactly the same thing: it makes a decision to "collapse out" an instance of novelty from the ongoing flux of activity. It is this inherently functional and timeless classification property that gives the measurement (or the novel stimulation) its determinate meaning. The manifestation of the problem in the quantum realm is an instance of the same thing. It is exactly like looking inside Schrödinger's box and seeing that the cat in it is either alive or dead. Prior to making that measurement—prior to an agent looking and thus freezing out a measure—one cannot say (because one as yet has no knowledge) that the cat is either dead or alive, or even that the box contains a cat.
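The many-to-one character of measurement can be sketched computationally (our illustration, with hypothetical names and values, not anything from the text):

```python
# A minimal sketch (ours) of measurement as a many-to-one mapping.
# Indefinitely many distinct physical microstates collapse into one
# frozen-out record; the record alone cannot recover which microstate
# produced it, which is why the physical detail "disappears" in the
# functional act of record-keeping.
import random

def measure(microstate, resolution=0.5):
    """Collapse a continuous physical value into a discrete record."""
    return round(microstate / resolution) * resolution

# A thousand distinct microstates all freeze out as the same record:
microstates = [1.6 + random.uniform(-0.05, 0.05) for _ in range(1000)]
records = {measure(s) for s in microstates}
assert records == {1.5}
# The mapping is non-invertible: from the record 1.5 alone, no physical
# specification can recover which of the microstates produced it.
```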
Physics Is Only a Beginning

The socialist historian of science J. D. Bernal attributed the saying "all science is either physics or stamp collecting" to Lord Rutherford. Whether historically accurate or not, it epitomizes an all too common attitude: that physical theory is the only science, and the only discipline worthy of study in order to understand reality. Unfortunately, that simplistic attitude has caused more harm than one can imagine. Physical theory, no matter how complete its specification, is not exhaustive of reality. The idea that everything should somehow be "derivative" from physics or else be mere "stamp collecting" is irresponsible nonsense. Physics is not, as Don Lincoln, the narrator of a popular government-funded US public educational TV series, says at the end of each broadcast, "everything." The truth is that physics, dependent upon human conceptual functionality, is already nonphysical, because it is intrinsically conceptual. What we call "physics" is a theory (actually many theories) in our conceptual realm. Physics is just the beginning, underlying but not determining the semiotic realm of meaning and cognition. Indeed, if physics were everything, then Lincoln could not know that claim, nor even formulate the proposition that embodies its thesis. All knowledge is functional. All cognition is functional. All meaning is functional. Without functionality, physics could never be known to exist, nor could it be meaningful. One cannot ignore the indispensable duality of the physical and the functional. The entire domain of life—from the fundamental biology of its inception through to the development of human cognition and social organization—cannot be explained by even a completed, utopian physical theory. No laws entail the existence of the biosphere. It is emergent from physicality. The entire semiotic realm—everything involving meaning and purpose (or teleology) and understanding—transcends, and as Polanyi noted, exists by virtue of harnessing, the physical domain. It is obvious that life requires a physical substrate that is always subject to the inexorability of the physical laws of nature and the constraints of thermodynamics; but it is equally clear that life, once it comes into existence, does harness that inexorability. In so doing it has created all our knowledge and our meaning—obviously including that pretender to a nonexistent supreme throne, physical theory—none of which can be
explained by physics alone. It takes life—in the form of a subject of conceptual activity—to understand what physics is actually about and what it means. One can never make the "physics is everything" claim as either an epistemic or an ontological proposition. To succeed in doing so would eliminate the meaning of the proposition itself (just as the act or function of measurement dissolves in a complete physical description of it), as well as any pragmatic motivation for having wanted to make the claim in the first place. Far better to study the indispensable complementarity of the duality of descriptions—one in terms of the inexorable laws of physics (and frozen accidents), the other in terms of the rules of the semiotics of functionality—which has given us what little knowledge we presently possess.
Context Sensitivity and Ambiguity

Without an historical knowledge of the structure of behavior (telling how and why it is "put together"), one can never tell what meaningful or intentional category of action a given sequence of "physically" clearly defined activity represents. Human action is always context sensitive, and that context ranges over a welter of semantic and pragmatic classes. Without clear and comprehensive accounts not only of the pragmatics and semantics of a situation, but also of the syntactic structuring realizing those forms of meaning, any interpretation of the behavior is ambiguous. It would be indeterminate (because completely underdetermined by any physical laws or physical boundary conditions), and thus not explainable by any theory in the domain of physics. This is very important and is discussed more fully in Chapters 10 and 14.
Emergence Beyond Physicality

Many hard science advocates think "there is nothing except physics" is a matter of definition: their definition says that anything that "exists" must be part of the physical universe, and that is all there is to it. Meehl and Sellars (1956) prevented that ploy from disallowing discussions of
the problem of emergence by distinguishing two concepts of the physical: physical1 and physical2. Something is physical in the broad sense of physical1 if it belongs within the space-time manifold. But something is physical2 if it is definable in terms of theoretical primitives adequate to describe completely the state of the universe (though not necessarily the potentialities thereof) before the appearance of life. The assertion that physics is not everything, that life as an emergent harnesses the physical domain and therefore requires its own irreducible concepts and rules, simply means that a completed science must encompass theories whose primitive terms go beyond those capable of explaining the inanimate universe prior to life. Life exerts downward causation, harnessing the lower level physical laws without in any sense violating them. But no theory at the level of those inexorable physical laws can explain the control constraints of functionality or intentionality, or explain the existence of the semiotic realm. So an initial problem for "life" science involves determination and causality in semiosis: semiosis has, through constraint upon physical2 phenomena, allowed or (in a loose sense) "caused" or enabled the evolution of life (up to and including our cognition), and in so doing has introduced fundamental novelty, defined as non-predictability and also as undeterminism—meaning not yet determined (if not indeterminism)—and it has done so without in any direct sense having "caused" that evolution as the term is used in physical2 theories. A second problem concerns explaining the co-occurrence relationships between those domains of existence, since neither traditional causality nor any doctrine of epiphenomenalism can relate them. Each domain level has an epistemic boundary, a cut between domains. Asking what is physical, or what is subject to physical laws, has no possible answer except relationally, within a domain level. It is the same situation as asking "What does a tomato taste like?" The answer is that the tomato tastes like a tomato, and the physical is (and also intrinsically tastes!) like the physical. We can relate the physical (qua physical theory) to the semiotic (qua theory)—crossing the epistemic cut in conception between the two—and we can note some of their differences, but that is all. Similarly, we cannot ask "What is semiotic?" or "What is meaning?" except relationally, in the functional realm. So we have co-occurrence, but not causality, between domains. How?
The quick and dirty answer to the question of the relation is that the semiotic (or self-productive, functional, intentional, teleological, etc.) represents another dimension of existence: it is in a separate coordinate frame of reference (with fundamentally different concepts on its axes, never the familiar spatial length or time or amplitude, etc.), entirely different from the physical dimensions of spatiality or time. Meaning and intentionality are not in physical or temporal reference frames (dimensions), but rather in their own. The semiotic realm is in another phase of existence from the physical, in analogy to how plasma is another phase of existence from "ordinary" matter, or water vapor is in another phase from ice. Consider this problem from the standpoint of causality in the functional realm: How does the mind (cognition, meaning, intention) "cause" a living body to behave? By making choices. Choice constraint or determination (or control) on the part of an agent that can (physically is able to) behave is what in fact "causes" us to behave (and this, as the tacit dimension of society and cognition clearly shows, does not require conscious or explicit choice determination). But choices cannot be graphically (dimensionally) represented by spatial or temporal concepts. We behave according to rules, and rules, being probabilistic and therefore fallible (or, the same thing, prone to error), are never going to be inexorable laws of nature specified in terms of basic physical concepts such as forces and fields in time. It used to be that theorists would blithely postulate new "kinds" of inexorable "laws" for the functional realm (as Elsasser, 1958, did for the "biotonic" dimension), or glibly talk of "laws of learning" or the "law of reinforcement" in psychology, or Ricardo's "law" of association in economics. That degree of regularity will never happen: no behavioral regularity will ever be incapable of error (and therefore of correction), or somehow become other than probabilistic. We can and do correct such assumptions in the behavioral fields. A case in point was the so-called law of reinforcement (Thorndike's law of effect), purporting that "once a reinforcer, always a reinforcer" for a species. It said that responses that produce a "satisfying" effect tend to increase, while the opposite occurs for "unpleasant" effects. That invariant formulation went out the window when Premack (1959) showed that some children will do what is for them "work" (eat
candy) for the reward of playing pinball, while others will "work" (play pinball) for the "reward" of eating candy—no so-called trans-situationality holds (and both activities are a priori "satisfying"). Premack thus found that a response that has a higher probability of occurrence in baseline behavior will (usually) be a "reinforcer" for a less probable one (in that particular organism). In short, reinforcement is context (organism) sensitive rather than all or none. Put another way, this context sensitivity is what requires us to seek pattern predictions rather than exact point values, as Hayek made clear for economic action and for complex phenomena such as the CNS. Subjects are never simple identical objects. That is a fundamental emergent phenomenon. That life is fundamentally not algorithmic is another. One way to make this latter point clear is to look at biological entities as dynamical and ever-changing entities, constituted of or from changing "clumps of matter" occupying a series of ever-changing phase spaces that cannot be specified in advance of their occurrence. If that is so, one cannot have a "physical" explanation of their development. There is no predefined phase space in which one could compute Shannon entropy or define a Turing-like algorithm of change. Evolutionary change is understandable after the fact, but not predictable before it occurs (evolution explains or "rationalizes" the occurrence of species but cannot ever predict the emergence of a given new one). Why? There are no invariant symmetries which would allow calculation of invariant or entailed trajectories through that phase space. We are facing Longo's enablements—a context of potential co-occurrences—rather than entailed causes. Such "entailed causes" as physics provides cannot be defined without those specified symmetries and invariances. As long as organisms are dynamical agents who can create an econiche (and evolve into it and change it) when evolution presents such opportunities (note that "opportunity" is not a physical concept), the necessary physical invariance and symmetry cannot be specified (prestated). Organisms and their niches become co-constituted, co-related (built by interaction) rather than caused by prior inexorability or invariant boundary conditions. Longo et al. (2012) put it this way:
In biology, there is no way to extract the pertinent biological observables as invariant properties, preserved by mathematical symmetries, and then transfer these observables to a “background phase space,” as physicists did, since (Galileo and) Newton, by using first Descartes’s spaces, then by inventing more general phase spaces (e.g., Hilbert spaces). In short, life, unentailed, “bubbles forth” (as Heraclitus said 2500 years ago) and organisms (their phenotypes) co-constitute their own phase space. (p. 1388)
This appears to be what underlies Hoffmeyer's (2003) attempt to replace Darwinian selection with his concept of evolutionary "translation." He wanted to broaden the notion of the umwelt as econiche into that of a semiotic niche of all the cues necessarily available to an organism for its survival. The organism must learn to "translate" all such cues into what is necessary to perform meaningful behavior, i.e., behavior needed for survival. Each new habit an organism forms exposes it to new challenges and, equally, to opportunities which were not there before, for development into a new econiche. Organisms would be enabled, instead of "caused," to do this: it would depend upon their internal choices of behaviors, not upon inexorable laws. There is no inexorable cause holding a gun to the "head" of an organism telling it that it must move to a new econiche every time one potentially opens up. This is why Hoffmeyer introduced the concept of semiotic fitness, referring to the competence or success of natural systems in managing the genotype-envirotype interaction. This he regarded as natural "translation" rather than inexorable selection. Natural systems are engaged in interpretations or translations, not just selection, since no particular agency is involved. Put in other terminology, this is a description of choice contingency supplanting purely lawful (physics-theoretic) determination. I find the biological concept of enablement clearer as a presentation of this central point. In a nutshell, physics is inadequate to explain life because it cannot deal with context sensitivity. "Physics is all there is" is killed by the fact that life is inevitably context sensitive or context restricted, and not context free. Physicists want to regard everything as a matter of information and information constraints. But their concept of information stems from the purely syntactic notion Shannon introduced. As Pattee (1995/2012)
noted, only the universal and intrinsic aspects of "matter," which have no significance whatever for individuals, are described by inexorable laws, while the context-sensitive or selective aspects of matter that have significance (i.e., meaning for individuals) in a local environment are described only by symbols (thus semiosis) instead of laws. Laws are compressed algorithms with no semantic information in them. Bits are purely surface structures. Life is not surface structure—it involves an underlying or deep structural level (from the language of the gene on "up"), which is where meaning and functionality came into existence. Laws as information cannot ever look back over the derivational history involved in the genesis of living or meaningful systems. Information is semantic only if things could have been otherwise. Life is inevitably deep-structurally ambiguous, and physics is inevitably a surface structure analysis of its substrate. The laws of physics underlying life cannot tell the difference between the interpretations of "Praising professors can be platitudinous" that we, as subjects of conceptual activity, as functional agents, must address. The physically specified information—the bits involved—is exactly the same (a single surface arrangement), but the meanings are not. At that point of conceptual or functional interpretation, we have moved from inexorable laws to rules of behavior, and rules are intrinsically meaningful and therefore part of the semiotic overlay that emerged from physicality. That is where the explanatory power of physics ends, and the necessity of explanation (and causality) within the functional realm begins. Physics will always be present as the lower level substratum of life (physical1), but it alone will never be adequate to the higher level constraints of the functional domain, such as human free will, which we must now briefly discuss. Almost ninety years ago D. S. Miller (under the nom de plume R. E. Hobart) wrote "Free Will as Involving Determination and Inconceivable Without It." He said what was so obvious that it embarrassed him to have to say it (hence the pen name): free will, to be your choice of what happens, must be a result of your determination of your behavior. Physical indeterminism—of the sort found in quantum physics—would not make you free to choose but rather a random collection of "uncaused" events, having no "choice" at all. Has this conclusion changed with the realization of the necessity of the duality of descriptions in science and
the realization that life is emergent from purely physical phenomena? On some important points, yes. Causality—specifying the nature of that determination—has become far more complicated. With life came the biosphere, and with that complication came the harnessing of the physical domain by higher order constraints through downward causation in evolution, and the creation of unpredictable adjacent possibilities for econiches and organisms in the individual creation of an econiche. This leads to three new problems: unpredictability of outcomes, mutual co-occurrent "causality," and choice as control of probabilistic events. For the functional realm, this highlights the most important part of causality: memory. In the functional realm, the very possibility of causing behavior depends upon the neurophysiological memory of the nervous system. Agency is intrinsically historical: the memory of the nervous system holds a shifting bundle of systems together as an organism possessing (self-producing) agency, and it is the memory of the organism that allows behavior to occur. And epistemically, we then have to highlight the divorce of predictability, which we cannot achieve, from the changed or diminished nature of linear physical "causality" and/or the merely co-occurrent events in the complex phenomena that are involved. Consider a simple analogy—between physical mutual entrainment and functional co-occurrent enablement. The phenomenon of entrainment refers to the synchronization of an oscillator to an input signal. Mutual entrainment occurs when two or more oscillators interact with each other in such a way that they pull one another into synchronization. Huygens first noticed this with his observation that two pendulum clocks mounted on a common support soon tick in unison. Biological systems show this phenomenon also: Asian fireflies display mutual entrainment when groups of them flash exactly in unison. Apparently, they sense each other flashing and, through that mutual influence, become mutually synchronized. Hopefully your heart cells are equally entrained. When they so mutually interact, they beat in time. The term fibrillation refers to the breakdown of that mutual entrainment, and it provides an obvious example of how your life quite literally depends upon mutual entrainment to prevent fibrillation.
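Mutual entrainment can be simulated with a minimal sketch of ours using the standard Kuramoto model of coupled oscillators (our illustration; the model choice and all parameter values are assumptions, not anything from the text). The coupling term slows leaders and speeds laggards, which is exactly the mutual pulling-together just described for fireflies and heart cells, and for the generators discussed next.

```python
# A minimal sketch (ours) of mutual entrainment, using the standard
# Kuramoto model: dtheta_i/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i).
# Each oscillator has its own natural frequency; the coupling term slows
# the leaders and speeds up the laggards.
import math
import random

N, K, DT = 20, 2.0, 0.01          # oscillators, coupling strength, time step
freqs = [1.0 + random.gauss(0, 0.1) for _ in range(N)]    # natural frequencies
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """Degree of synchrony: 0 = incoherent, 1 = fully entrained."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

for _ in range(10000):
    coupling = [K / N * sum(math.sin(q - p) for q in phases) for p in phases]
    phases = [p + (w + c) * DT for p, w, c in zip(phases, freqs, coupling)]

print(round(order_parameter(phases), 2))  # near 1.0: mutually entrained
```

With the coupling constant set to zero, each oscillator drifts at its own natural frequency and the order parameter stays low; with sufficient coupling the population pulls together—the computational analogue of the generators falling into step in the example that follows.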
Consider the (purely physical) example of Wiener (1961): the grid of electrical power systems in the United States. The system is a network of AC generators, each an oscillator with a governor to control its speed so that its deviation is as little as possible from 60 cycles per second at any given time. When a large number of generators are interconnected into the grid, they are very stable: all fall into step with one another or mutually entrain in a manner basically the same as the phenomenon of the fireflies flashing together, or the heart cells “beating as one,” but with obvious physically specified causal relationships. If one generator happens to lead the others in phase—is slightly faster running—then its energy will be absorbed by all the generators which lag behind it. This increase in load for that faster generator forces it to slow down so that it won't get out of step. If instead it lagged in phase, the other generators would pump energy into it, running it as a motor, and it would speed up to catch up. So generators which go too fast slow down, while those that lag are sped up. The result is that they pull together in frequency or are mutually entrained (see Dewan, 1976). This physical demonstration employs determinate causality that shows mutual influence between oscillating systems—whether mechanical like the generators, or living, like connected but independent cells in your heart. Why are they physically determinate? Because no agency is involved with individual generators or individual heart cells. Only the fireflies exhibit agency—in its most primitive form—and consequently individually unpredictable behavior, probably because of that minimal agency. And that is the beginning of “free” will. But that “free” will is very costly, as we note below. What happens when agency—a subject of activity—becomes “actively” involved? Increasingly complex functionality enables changes and co-occurrences that are controlled by top-down (downward caused) or coalitionally controlled relations in the agents. When the choices of subjects constitute that top level of control, no physical theory can predict or “control” those choices. And the more agents that are involved (as in populations), the more uncertain—literally undeterminable—will be the trajectories of the individuals in their interactions and the co-occurrences which will result. The result is the “top down” or coalitional regulation of events that produce novelty. It is how your own nervous system can create for you a new thought or a new meaning or a new behavior.
It is how populations of agents can create a social order that in turn co-creates their own individual behaviors. Choice requires genuine alternative possibilities that are equally physically realizable. Unless things could have been otherwise, it would be meaningless to speak of choice, which could never occur in a world of inexorable laws and linear Cartesian billiard ball causality. Only when energy degeneracy is present, so that the thermodynamic cost of behavior is virtually nil for picking alternatives (such as our different thoughts or words) or the equally possible behaviors of organisms, do we find functionality harnessing physicality. And this situation precludes the possibility of hard science explanation by definition: equipotentiality and energy degeneracy cannot “cause” one trajectory over another, so physical determination by laws becomes underdetermined. Further, because of the number of shifting variables involved (which are, physically, frozen accident initial and boundary conditions), exact prediction of outcomes is not physically possible. When a species' novel behavior creates a new adjacent possible econiche, or the body of an organism changes over generations due to downward causation (discussed more fully in Chapter 12), or participation in the market order enables a new good or service, or the evolved social structure enables new morals to develop as higher order constraints on behavior, there is no way to assign physical causes to the events involved. The results of action but not design cannot be specified in advance of their occurrence. The phase space of possibilities of the specious present cannot be predicted from past spaces. Causes are mutually co-occurrent and dependent upon an evolving context of constraints rather than simple direct successions. This is why choice is an exhibition of freedom: of internal or semiotic self-constraint rather than external physical constraint. We are free to choose so long as the situation involves agency—usually our individual agency—capable of symbolization and movement (even just neural activity) in an energy degenerate context. But what is an agent?
Semiotic Closure as Self-Constraint: Agency as a Matter of Internal Determination

Agents have semiotic closure—they are closed systems, cut off from the flux of the physical world by internal meaning systems (in the rate-independent realm). This provides their functionality—their self-production: their behavior results from internal initiation, from self-constrained factors that perpetuate their own ever-changing existence through the temporal dimension (alongside the necessary external causes). As rate-independent, they are the “one” or unified-in-diversity “entity” that is the prototype of the old philosophical chestnut, the one and the many. They are the paradigm exemplar of what Hayek called the primacy of the abstract, functioning as a bundle of interlocking internal constraints preserving identity while allowing the constant changing of the particulars of an ever-changing physical “body.” They exhibit what Aristotle called efficient and final causal processes. Semantic closure provides a “mechanism” of teleological causation within a physical framework. It provides the duality of descriptions necessary to explain living functionality. As rate-dependent, organisms controlled by semiotic closure can manifest an indefinite number of physically specified “responses” or behaviors in their four-dimensional environment. Any functional system requires that its activity be oriented toward an end. Biological activity is fundamentally teleological: the effects of that activity contribute to the establishment and maintenance of the conditions of its own existence. As Mossio and Bich (2017) put it, biological systems are what they do. Their closure constitutes their essence. As Kauffman (2000) said, they act on their own behalf. Biological constraints subject to closure correspond to biological functions. Self-constraint means that the circular organization—the closure—specifies its own dynamics. Self-constraint involves self-determination, and that is why biological organization determined by closure must be teleological. The apparent pull from the future is actually a circular push from the ever-changing present system of enablements. Teleology—the circular or recursive self-constrained push—is the condition of existence of a biological system. Life must always have a goal—initially self-preservation, and then secondary ones, tertiary, and
so on. Nothing except biological systems—organisms and their parts—does this, save machines and artifacts whose teleology results from a creator's living input. This is not just homeostasis. Self-maintenance refers to a causal regime whereby a system exerts control on some boundary conditions that enable its existence to persist through time. Homeostasis is a mechanism of the stabilization of a system—allowing it to persist. It presupposes the existence of the constraint organization that, under certain specifiable circumstances, in itself contributes to maintaining itself in a stable configuration. So homeostasis presupposes, and cannot explain, self-determination. Stabilization arises as only one goal of preservation. We need to explain the transition from physical stabilization to biological self-maintenance and self-determination (Mossio & Moreno, 2010). The genetic approach, associating self-determination with the genome and its expression, emphasizes only the reproduction of the genetic program as the central mechanism enabling biological self-determination. It is thus an evolutionary account of teleology, but as such it cannot account for the uniqueness and self-determination of behavior at the level of the individual organism's lifetime, in its particular organization and functionality. It is necessary to supplement the evolutionary account at the level of the gene with an organizational account in terms of the context of constraints specifying individual behaviors. Evolution cannot account for context-dependent adaptive behaviors (like learning) that are under the control of individual organisms. There is more to self-determination than what is available in the language of the gene. We need to explain Aristotelian “final” or teleological causation without resorting to the Lamarckian conception of the inheritance of acquired characteristics. Is there an account that utilizes only Aristotelian “efficient” causality to do this? Mossio and Bich (2017), following Rosen (1991), argue that:

Within closure, in particular, teleology coincides with the inversion of efficient causation: if x is the efficient cause of y, then y is the final cause of x. The reason is that, because of closure, what x does (y) contributes to the very existence of x. Final causation…[is] in the very organizational principles of the system, without reference to an external designer or
user.… Rosen explicitly distinguishes between two causal regimes which coexist within biological systems: closure to efficient causation, which grounds its unity and distinctiveness, and openness to material causation, which allows material, energy and informational interactions with the environment. (p. 13)
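Rosen's inversion can be pictured with a toy directed graph. The sketch below is only an illustration under the assumption that “x is the efficient cause of y” can be rendered as an edge from x to y; the constraint names are invented for the example, not drawn from Rosen or Mossio and Bich.

# Toy rendering of "closure to efficient causation": every efficient
# cause in the organization is itself the product of some process inside
# the same organization. An edge (x, y) reads "x is the efficient cause
# of y"; by the inversion quoted above, y is then the final cause of x.
organism = {
    ("metabolism", "repair"),
    ("repair", "replication"),
    ("replication", "metabolism"),   # the loop closes on itself
}

def closed_to_efficient_causation(edges):
    causes = {x for x, _ in edges}
    effects = {y for _, y in edges}
    # Closure holds when every efficient cause is itself an effect
    # produced within the organization.
    return all(c in effects for c in causes)

print(closed_to_efficient_causation(organism))   # True: a closed loop

# A machine, by contrast, has an efficient cause (its designer) standing
# outside the system; it is open to efficient causation.
machine = {("designer", "thermostat"), ("thermostat", "furnace_state")}
print(closed_to_efficient_causation(machine))    # False

The check says nothing about material causes: like Rosen's organisms, the toy loop remains open to matter and energy; what is closed is only the network of efficient causes.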
This account locates self-determination at the level of efficient causes: what identifies the system is the set of efficient causes subject to closure. The self-determination—the maintenance of the organization—is nothing more, nothing less, than the maintenance of the network of efficient causes. An efficient cause acts on material processes and reactions which produce some other efficient cause, without itself being involved in the transformation to that new efficient cause. In that sense the efficient cause constraint remains outside the controlled system to effect the control. Mutual dependence of efficient causes occurs through the action exerted on material causes. Self-determination concerns efficient causation, but requires an adequate comprehension and description of the intertwined relationships between efficient and material causation. When a system is closed to efficient causation, it is able to act on its own constituent dynamics, which in turn realize and maintain the efficient causal organization. To explain any functional behavior thus requires a duality of descriptions involving the rate-dependent physical dynamics (the material causal sequence) and also the rate-independent functional action in teleological or “final cause” form. Rosen defined a material system as organismic if and only if that system is closed to efficient causation—see Rosen (1991, p. 244).2 But what in the biological organism constitutes an efficient cause? The material system needs to use the thermodynamic work generated in the control exerted by local constraints (which reduce the degrees of freedom available to the system as a physical device in a thermodynamic world: see Pattee, 2012) in order to maintain those constraints themselves, so that work and constraint are tied together (the rate-independent semantic constraint must be paired with a rate-dependent physical maintenance process for that constraint to do its job as a higher order constraint—the functional must harness the physical). Mossio and Bich argue that
what characterizes biological systems “is the fact that the thermodynamic flow is channeled and harnessed by a set of constraints in such a way as to realize mutual dependence between these constraints.… The organization of constraints can be said to achieve self-determination as self-constraint, since the conditions of existence of the constituents of constraints are, because of closure, mutually determined within and by the organization itself” (ibid., pp. 14–15). So biological function relies on two things: the idea of (self-constrained or constituted) organization, and the teleological and normative dimensions of the rate-independent realm. Each makes a different yet complementary contribution to the self-determination of the system. Ascribing functions requires distinguishing different causal roles in self-determination, which is what happens with a closure of constraints: in so doing the physical and functional become co-occurrences. Effecting or bringing about a biological function means closure of its constraint(s). What sort of process is a constraint in an organism? One factor to emphasize is that biological constraints are internal, and very complex compared to typical external physical constraints. If biological functions depend upon closure it should be obvious that the material realization of closure requires a very high degree of physical and chemical complexity. Only very complex (bio)chemical functional structures would be able to adequately constrain the thermodynamic flow (to create work), and in so doing both generate and maintain a self-maintaining network. In comparison, physical dissipative structures possess a relatively low internal degree of complexity, and this is what enables them to organize themselves spontaneously—when fairly simple adequate boundary conditions are met. Eddy flows in a river, for example, while self-organizing, are not complex except when we try to follow an individual water molecule. That sort of self-organizing system is so “simple” in its complexity that it can indeed appear spontaneously. Those “simple complex” systems do not realize closure—they would, as Mossio and Bich note, generate only a single macroscopic constraint—their structure itself—that, apparently, would maintain itself (homeostasis) by acting on its own boundary conditions. Another example of “simple complexity” is found in the market order in economics: the constraints holding a
market order together are external to the order itself. The only “internal” constraints are provided externally, by the market participants—agents—whose behavior constitutes that order. Remove the agents and the market order disappears—it cannot be maintained by the external boundary conditions alone. The constraints of agency, always internal, are never spontaneously generated. Life did not just spontaneously “pop up” in completed functional form. That functionality exhibits a choice complexity that is found only in organisms. To summarize, biological systems can exhibit self-determination because they generate (at least some of) the constraints that act on their own continuing activity. Nothing else has choice contingency—nothing else can generate its own set of constraints. In generating these constraints biological systems contribute to determining the conditions in which their own organization can (and can continue to) occur. The set of constraints subject to closure is the set of biological functions. Unlike other classes of natural or artificial systems, biological systems do not merely obey external, and independently generated, constraints. Self-determination means self- or internal constraint. In biological systems, this takes the form of closure, the organization occurring as a set of mutually dependent constraints. Lose a mutually dependent constraint and you lose that life itself.
Notes

1. This passage clearly indicates that Hayek, despite his physicalist research strategy, remained an epistemic dualist. There has been considerable misinterpretation in the literature, claiming that since Hayek was an ontological monist, he was a physicalist rather than a “dualist” of any form (see Vanberg, 2017). That is a half-truth: Hayek is an epistemic dualist with regard to the existence of the mental or functional realm as well as the physical. He is not a strict monist in the traditional sense. Hayek was really an ontological agnostic, arguing against the Cartesian conception of a “mental substance,” and otherwise leaving things open. Nevertheless, his research strategy was to pursue a physically specified theory while being careful not to make a definitive claim in ontology or epistemology (see Weimer, 2021). The sentence quoted
exhibits two essential dualisms at once: the subject-object distinction and the physical-functional separation.
2. There are issues here of considerable importance, exemplified in a contrast between Rosen's (1991) later work, Life Itself, and the more physicalist interpretation of Pattee (2007). Beginning from an evolutionary approach, Pattee characterizes the essence of life (living systems) as the development of anticipatory models internal to evolving organisms. In contrast, Rosen focused upon causality, and the necessity of a dual system (echoing von Neumann here) of causal control between rate-independent formal concepts and rate-dependent dynamical systems. Rosen framed this in terms of the distinction, stemming from Aristotle, between material or efficient causation in a physical-dynamical system and final causation, in terms of functionality or teleology, found in the rate-independent realm of semiotics in living systems. Here the issue, aside from which strategy to pursue, is whether or not physical theory can be “stretched” beyond its basis in inexorable laws and material or efficient causality to include the semiotic domain of functionality (perhaps in a greatly enlarged conception of “information” to supplant the syntactic conception stemming from Shannon, as Pattee has postulated), as well as whether or not a variant of the neo-Darwinian synthesis in evolution can accommodate the self-constructing nature of the organism as a co-creator of its econiche (as emphasized by the enablement versus entailing laws biologists, noted in Chapters 4 and 12), as well as how life is to be understood from a scientific perspective (such as anticipatory models versus functional constraints, laws versus rules, and dozens more issues). Many of these issues play out as arguments at cross purposes. This book argues there is no alternative to admitting co-occurrent causality in the semiotic and “material” realms, and I do not see the emergent existence (and the emergent rules necessary for the explanation) of living systems as a “refutation” of physics, but rather as necessitating a supplement to it. And as the text argues in several locations, teleology is compatible with downward causation in the “material” realm, and necessary in the conceptual realm to explain functionality. The next chapters begin to outline an approach to incorporate, or at least put into perspective, some of the fundamental and inescapable dualisms we face in these issues.
References

Abel, D. L. (2010). Constraints Versus Controls. The Open Cybernetics and Systemics Journal, 4, 14–27.
Abel, D. L. (2011). The First Gene: The Birth of Programming, Messaging and Formal Control. Longview Press.
Dewan, E. M. (1976). Consciousness as an Emergent Causal Agent in the Context of Control System Theory. In G. G. Globus, G. Maxwell, & I. Savodnik (Eds.), Consciousness and the Brain (pp. 181–198). Plenum Press.
Elsasser, W. (1958). The Physical Foundation of Biology. Pergamon Press.
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Hayek, F. A. (2017). Within Systems and About Systems. In V. Vanberg (Ed.), The Sensory Order and Other Writings on the Foundations of Theoretical Psychology (pp. 1–26). University of Chicago Press.
Hoffmeyer, J. (2003). Origin of Species by Natural Translation. In S. Petrilli (Ed.), Translation, Translation (pp. 329–346). Rodopi.
Kauffman, S. A. (2000). Investigations. Oxford University Press.
Longo, G., Montévil, M., & Kauffman, S. (2012). No Entailing Laws, but Enablement in the Evolution of the Biosphere. Genetic and Evolutionary Computation Conference (pp. 1379–1392). ACM. https://doi.org/10.1145/2330784.2330946
Meehl, P. E., & Sellars, W. (1956). The Concept of Emergence. In H. Feigl & M. Scriven (Eds.), Minnesota Studies in the Philosophy of Science (Vol. 1, pp. 239–252). University of Minnesota Press.
Mossio, M., & Bich, L. (2017). What Makes Biological Organization Teleological? Synthese, 194, 1089–1114. https://doi.org/10.1007/s11229-014-0594-z
Mossio, M., & Moreno, A. (2010). Organizational Closure in Biological Organisms. History and Philosophy of the Life Sciences, 32, 269–288.
Pattee, H. H. (1995/2012). Artificial Life Needs a Real Epistemology. In F. Moran, A. Moreno, J. J. Merelo, & O. Chacon (Eds.), Advances in Artificial Life (pp. 23–38). Springer. Reprinted in Pattee (2012).
Pattee, H. H. (2007). Laws, Constraints and the Modeling Relation—History and Interpretation. Chemistry and Biodiversity, 4, 2272–2295.
Pattee, H. H. (2012). Laws, Language and Life. Springer.
Polanyi, M. (1969). Knowing and Being. University of Chicago Press.
Premack, D. (1959). Toward Empirical Behavior Laws: I. Positive Reinforcement. Psychological Review, 66(4), 219–233.
Rosen, R. (1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press.
Weimer, W. B. (2021). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century, Part 1. Cosmos + Taxis, 9(11+12), 1–29.
Wiener, N. (1961). Cybernetics. MIT Press.
12 Complementarities of Physicality and Functionality Yield Unavoidable Dualisms
The real riddle, if I am not mistaken, lies in the double position of the ego; it is not merely an existing individual which carries out real psychic acts but also ‘vision,’ a self-penetrating light (sense-giving consciousness, knowledge, image, or however you may call it); as an individual capable of positing reality, its vision open to reason; “a force into which an eye has been put,” as Fichte says, or “an organization turned toward two worlds at once” in the words of Schelling…But this secret, by its very nature, lies beyond the cognitive means of natural science. Hermann Weyl
Exploring the contrast between the physical and the functional confronts us with the simultaneous emergence of dualisms, binary divides between seemingly opposed concepts or processes in nature and cognition. As Weyl said above, these dualisms are not new, having been noted in prior philosophical analysis. But more than that, these dualities have been with us from our very beginning. They arose when functionality began—with the origin of life. The problems of epistemology begin with the beginning of life. Life, meaning, and the fundamental dualisms of subject and object, physical and functional, rate-dependent and rate-independent
(and many more), co-occurrently emerged. The seemingly rarefied issues of modern philosophy (and the problems we have discussed) have actually been present (in constantly developing form) on this planet since life originated. They could not have existed in a purely physical world, in which no separation of the knower from the known as yet existed. Throughout we have noted that life depends on records. And keeping a functional record requires a subject-object distinction in order for the record to be meaningful, i.e., for the record to actually fulfill its purpose of being a record. An agent of some form—the subject of conceptual activity—is required to harness the laws of nature when we observe and do an “experiment” such as making a measurement, and the same process is at work when we merely “look” at something in our environment. Such records depend upon initial conditions being boundary conditions. We noted that boundary and initial conditions are always extraneous to the process which they delimit and harness, and by definition must be of a higher order—a constraint imposed upon a lower order one. Polanyi put it this way: “their structure cannot be defined in terms of the laws which they harness. Nor can a vocabulary determine the context of a text, and so on” (1969, p. 227). The higher order principles (in our case, our cognition) control lower ones because the laws of nature governing the lower or physical level leave totally indeterminate or underdetermined the range of conditions which are to be governed by the higher order principle(s). This is the contrast present at the beginning of life, between laws as invariants and life as enablements. By the time we reach the level of human knowledge we are light-years beyond mere physical constraints. As Polanyi said: “you cannot derive a vocabulary from phonetics; you cannot derive a grammar from a vocabulary; a correct use of grammar does not account for good style; and a good style does not supply the content of a piece of prose” (p. 223). We need to do two things at this point. First, back up and go over how such “higher order” constraints can come into (biological) existence through evolution. Empirically or historically, this is the problem of downward causation and the enablement of biological change in speciation. Second, we need to reinforce the unavoidability of familiar dualisms
that have resulted from life having come into existence, and the epistemic consequences of those dualities. That is the necessity of complementarity of physical and functional theoretical accounts of reality.
Downward Causation

Everything about you—the structure of your body, the nature of your perception, the cognition that you possess, the knowledge and theories that you have—is a result of our evolution from the simplest living things to our present state of complexity. As Hayek said, “All enduring structures above the level of the simplest atoms, and up to the brain and society, are the results of, and can be explained only in terms of, processes of selective evolution, and that the more complex ones maintained themselves by constant adaptation of their internal states to changes in the environment” (1979, p. 158). Looked at superficially it appears that we have “always” been pointed toward and are heading to our present state. Indeed, this is why the cumulative record approach to writing the history of science is so popular: we try to make everything we have today be an obvious and straightforward consequence of yesterday. We require a theory—a causal and contextual theory—of how we got from yesterday to where we are today and not to some other place. So we require a “causal” theory (or one of concomitant variation) of evolution, but not one dependent upon inexorable “laws” of such change. Here we seem to face a problem: physical causation is invariably an arrow with feathers on one end and a sharp point on the other. The temporal dimension defines the feathers as in the past and the point on the other end as the present specious moment “pointing to” the future. When we look at our history—either the history of knowledge or the history of our species—we appear to reason teleologically: reversing the direction of the causal arrow in such a fashion that the present (and the future) somehow have caused our current state of affairs. We face the problem of seemingly “backward” causation in functionality—intentionality and purpose are teleological, pulling us toward the future rather than propelling us from the past. The problem becomes how to rescue evolution—biological, cognitive, social—from having the future determine the past instead
of vice versa. Despite Feynman diagrams, we (and all living things) are not retrocausal anomalies. Downward causation, introduced by Donald Campbell (1974) as an intrinsic part of evolutionary epistemology, solves this problem. Campbell showed how “higher” social and biosocial levels impose constraints which can cause changes in the “lower” physical ones on which they in fact physically depend. Building on Polanyi's thesis that life harnesses physical inexorability, Campbell noted that (at least) two biological “facts of life” transcend the laws of physics. The first is the phenomenon of emergence (already discussed in several locations); the second is the then unfamiliar thesis of downward causation. Downward causation refers to higher order constraints shaping physical or lower order biological processes by imposing functional, flexible (but not deterministic) control upon them. It can transcend individuals and, because evolution operates over populations in group selection, can jump from generation to generation in its effects. As Campbell said, “all processes at the lower level of a hierarchy are restrained by and act in conformity to the laws of the higher levels” (1974, p. 180). Furthermore, this causation violates the usual billiard ball instant-of-contact model provided by classic determinism. It requires “substantial extents of time, covering several reproductive generations,… lumped as one instant for purposes of analysis” (ibid., p. 180). The illustrative example he used of such long-term or seemingly backward causation involves the division of labor and social organization of ants, and the genesis of the specialized anatomical structures that have allowed it to happen. Consider the jaw of the soldier termite or ant. In many highly dimorphic or polymorphic species, the jaws are so specialized for piercing enemies that the soldier cannot feed itself and has to be fed and cared for by workers. The soldier's jaws and the distribution of protein therein (and the particular ribonucleic acid chains that provide the templates for the proteins) require for their explanation certain laws of sociology centering around division-of-labour social organization. The syndrome of division of labour, storable non-spoiling foodstuff such as honey or seeds, apartment house living, and professional soldiers who do no food gathering,
has been repeatedly independently discovered many times among the proto-termites, as well as independently by the ants, and by six or seven separate seats of human civilization.… This repeated convergent evolution testifies to the great selective advantage of division of labor social organization, economies of cognition, mutual defense and production being some of its selective advantages (ibid., pp. 181–182).
This leads to the main thesis: for biological systems produced by natural selection where there is some node of selection at a higher level, the higher level constraints are necessary for complete specification of the phenomena both at that higher level and also for the lower or physical levels. Questions about the function of processes at one level are questions about an evolutionary selective system at some higher level. For complete understanding—an adequate scientific description of the restrictions in biological systems—we require specification of the additional constraints imposed by the selective or controlling systems of the highest level of “selection” ramifying all the way down through all the lower levels. Two points emerge. First, emergence and downward causation must be explained historically, with the full explanation of all the intermediate steps involved in those contexts of constraint. Second, there is no violation of the direction of causality here: only blind (not random) variation and selective retention in populations are required. Because of the necessary historical dimension, a full account of a seemingly teleological phenomenon requires complete specification of all the smaller intermediate changes that led up to a present development. Because of the unavailability of lawful determination to explain the seemingly happenstance development of the biosphere on our planet, we need to look at what this emergence could consist of, and where, if at all, causal analysis is beneficial.
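A schematic sketch may help fix the logic of blind variation and selective retention under a higher-level constraint. It is my illustration rather than Campbell's own model: the colony fitness rule, the population sizes, and the mutation scale are all invented for the example.

import random

random.seed(1)

# Blind-variation-and-selective-retention with a higher-level node of
# selection: variation is blind at the individual level, and the
# division-of-labor criterion applies only to whole colonies. No pull
# from the future is involved; the constraint acts in each present
# generation and ramifies down to individual jaw sizes.
def colony_fitness(jaw_sizes):
    soldiers = sum(1 for j in jaw_sizes if j > 0.7)   # large-jawed caste
    workers = len(jaw_sizes) - soldiers
    return min(soldiers, workers)   # a colony needs both castes

def mutate(jaw_sizes):
    # Blind (undirected) variation at the lower, individual level.
    return [min(1.0, max(0.0, j + random.gauss(0, 0.05))) for j in jaw_sizes]

colonies = [[random.random() for _ in range(20)] for _ in range(30)]
for _ in range(200):
    colonies = [mutate(c) for c in colonies]
    colonies.sort(key=colony_fitness, reverse=True)   # selective retention
    colonies = colonies[:15] + [mutate(c) for c in colonies[:15]]

print("best colony fitness:", colony_fitness(colonies[0]))

Over the generations the caste structure sharpens, yet at no step does anything other than present, “push” causation operate.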
If Laws Do Not Cause Emergence, What Enables It?

Emergence in biology seems to be a matter of co-occurrence relationships alone. With a biological development, other things simultaneously co-occur, in reasonable but not deterministic fashion. As a process of constantly changing constraints, the unfolding evolutionary process creates novel spaces, adjacent possibilities for niches to be inhabited, and they in their turn are sources for possible new directions for organisms who move into them to evolve. The niche alone does not cause anything—it is just a possibility space which can be filled if a species takes up residence as a result of “blind variation.” The niche enables what evolves, but it does not alone cause it to evolve. It requires the supplemental presence of active agency—life in some form—for the two together to begin to constitute a causal nexus. Thus Longo and Kauffman and their associates argue that life is caught up in a web of enablements resulting when new possibility spaces open up. No usual evolutionary selection pressure need be involved—blind variation could even result from the equivalent of a “random” quantum level event or “jump.” Such a dynamic interplay of events cannot be explained without taking into account the unique history of the given case of co-occurrent events. Evolution is both indeterminate (not caused by lawful determinism of events) and acausal at the level of physical law, and thus from the standpoint of physics, “random.” But it is not random at the level of biological enablement. The biosphere creates its own possibilities for future evolution, a context of enabling constraints of possible new econiches and possible new adaptations to them. Organisms transform their own ecosystems in the process of transforming themselves. But this continually evolving context of constraints cannot tell us what will happen, or, more broadly, even what could conceivably happen. Explanation here is an “after the fact” account with no predictive capability. As Longo et al. (2012) put it, entailed causal relationships must be replaced by much “looser” but no less scientific enablement causal relationships. In that sense they parallel the notion of pattern prediction in complex social phenomena, which can never be deterministic or make point predictions as in law-governed simple systems.
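The following toy sketch, entirely illustrative (the niche labels and branching rule are invented), shows the difference between enablement and causation: each occupied niche enables two adjacent possible niches, but which is occupied next is blind, so no trajectory can be predicted even though every step is explainable afterward.

import random

random.seed(42)   # rerunning with another seed yields a different,
                  # equally enabled, history

# Each occupied niche enables, but does not cause, two adjacent possible
# niches; occupation of any one of them is blind variation.
def adjacent_possible(niche):
    return [niche + "L", niche + "R"]   # hypothetical niche labels

occupied = ["origin"]
for _ in range(10):
    frontier = [n for o in occupied for n in adjacent_possible(o)
                if n not in occupied]
    occupied.append(random.choice(frontier))   # blind, enabled, not entailed

print("one possible history:", " -> ".join(occupied[-5:]))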
In biological adaptation, the individual (whether single cell, cellular component, or advanced multicellular organism) is irrelevant. Adaptation is a property of a group, and it is something that is adopted by a process of group selection. Evolution depends upon the genotype rather than the phenotype. Phenotypic expression has evolutionary relevance only through the mixing of the alleles of the individual with a larger population. What gets selected in evolution is always determined by higher order constraints applying to the entire group, to the overall conflux of prevailing phenomena (see also Kauffman, 2019). The combination of genetic factors within the group determines how (if at all) any individual's changes or mutations could be transmitted, but selection depends upon even higher order constraints than those changes. In the case of enablement constraints, those constraints are not traditional “physical” causes at all.
Evolution and the Competitive Basis of Cooperation

Competition has produced cooperation, from the cellular level on up. As organisms, we have borrowed, bartered, and in some cases simply incorporated “foreign” things at the cellular level of our composition and functional organization. Our individual components initially had to compete for survival on their own in their own econiches. It has turned out (evolved) that, through no intention at the level of their individuals or species, they have been enabled to survive and flourish as cooperative concatenations within larger ensembles. Those larger ensembles have been enablement niches in which they have thrived. All advanced organisms are groupings of cells and cellular components that have, through the pressure of survival in their individual econiches, come to survive and reproduce as amalgamations folded within larger coherent entities. Biologist Lewis Thomas (1974) put it this way, discussing mitochondria:

The usual way of looking at them is as enslaved creatures, captured to supply ATP for cells unable to respire on their own, or to provide carbohydrate and oxygen for cells unequipped for photosynthesis.… From their own standpoint, the organelles might be viewed as having learned
early how to have the best of possible worlds, with the least effort and risk to themselves and their progeny. Instead of evolving as we have done, manufacturing longer and elaborately longer strands of DNA, and running ever-increasing risks of mutating into evolutionary cul-de-sacs, they elected to stay small and stick to one line of work. To accomplish this, and to assure themselves the longest possible run, they got themselves inside all the rest of us. (p. 69)
Within the biology of the cell, in the psychology of the individual, and in the social behavior of economics, competition is the most effective means of cooperation that has yet arisen on our planet. In all these cases, the key is the interplay of individuals within a group, and the extent to which the “group” comes to change its own econiche. Adam Ferguson (in 1767) was the first to note this with respect to human behavior and interaction, when he called attention to “the results of human action, but not design” in co-creating the social order. The quotation from Thomas draws our attention to “the results of action, but not design” at the basic biological level. Longo and Kauffman have reinvented Ferguson's notion with respect to enablement—it is the result of action but not causation by intention.
Epistemology Originated In and Is Shaped by Selection Pressure in Open Systems

Faced with competition from other organisms and the brute fact of a hostile (at the very best, an indifferent) environment, development of effective adaptive mechanisms is paramount. Life requires an organism-environment mutual interaction. That interaction must make available to the organism factors it requires for survival and reproduction. If the environment were repetitive and finite, with no novelty and no new danger, it would be possible for life to just sit around and “resonate” to the environment, with a single or very limited adaptive strategy. But novelty, and thus uncertainty, the unknown and the unforeseen, are everywhere, forcing organisms to develop more and more adequate anticipations of the environment and its resources (and dangers). These anticipatory
capabilities underlie all our response capacity as organisms and all our knowledge as conceptual subjects. What we usually regard as biological or species-specific knowledge results from the brute fact that organisms become theories of their environment (the theory being embodied in multiple levels—their genetic endowment and all organism-econiche relationships, to define it circularly). So-called higher organisms have in addition developed more adequate theories and more adequate response systems in conjunction with more adequate anticipatory systems (usually called sensory or afferent nervous systems). In an interplay of structures and functions, we have developed into systems of systems within more systems. Biology deals primarily with the populations possessing such adaptive capabilities over longer time intervals (as in speciation). Ethology and psychology, in contrast, focus on how an individual learns to adapt within its lifetime. Individual learning presupposes biological adaptation within the species. Individual learning is the mechanism which individuates particular members within a given species.
Adaptive Systems in Learning and Cognition

As Hayek (1952) said, an organism lives as much in a world of expectation and anticipation as in the present. How can this occur? Because our nervous systems are built upon modeling of the future. It cannot be any other way. We are adaptive systems who are constructors of further adaptive systems. Consider what this involves. The acquisition of knowledge by an organism requires that it match an internal model—of its expectations—against what it perceives in the future. If the model is a good one, if it provides knowledge of reality, there will be a match or near match. If it is a bad or inadequate model, there will not be anything in that future reality corresponding to it, and no knowledge is contained in it. So learning or anticipation involves making a model in the nervous system (in humans we simply call this a “mental model” or, explicitly, a theory) and checking whether the future agrees with the model. But how could the nervous system construct a model that anticipates future states of affairs? The answer is that modeling is the fundamental activity of an organism's CNS. All nervous activity is a matter of classification (and
reclassification) of patterns of neural activity. The nervous system is never quiescent—it is never dormant until some “stimulus” impinges upon it. It is a constant pattern of activity creating the “incoming” stimulus and creating the response to it. As noted in the discussion of structural realism, the qualities and relationships between non-mental phenomena are not originally attached to them, but rather are constructed by our nervous systems. The whole of an organism's qualities is determined by the system of connections in which its neural impulse traffic can be transmitted from neuron to neuron. And the transmission process involves the memory of the nervous system. We do not first have sensations which are then preserved by memory. It is only as a result of the neurophysiological memory that the impulses are converted into sensations in the first place. Indeed, if there were a stimulus which was not regular, i.e., not an instance of one of the appropriate kinds determined by prior acts of classification, we could never “know” anything about it. It would not be singled out as different from the background activity of the nervous system, and thus could not be detected at all. What we find in the qualities which our acquaintance attributes to experienced external objects are not properties of those objects at all, but rather a set of relations by which our nervous system effects their classification. All our “experience” can do is modify our extant theories and their available classifications.
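A minimal sketch, constructed in the spirit of Hayek's The Sensory Order rather than taken from it, shows what it means for qualities to be relations of classification (the impulse patterns and quality names are invented for the example):

# A nervous system as an "instrument of classification": a quality is
# nothing but the relation a prior linkage imposes on an impulse pattern.
memory = {
    "sweet": {(1, 0, 0), (1, 1, 0)},   # hypothetical impulse patterns
    "sour":  {(0, 1, 1)},
}

def classify(pattern):
    for quality, patterns in memory.items():
        if pattern in patterns:
            return quality
    return None   # no prior class: indistinguishable from background activity

print(classify((1, 1, 0)))   # 'sweet': a relation, not a stimulus property
print(classify((1, 0, 1)))   # None: an unclassifiable event is not detected

memory["sour"].add((1, 0, 1))   # "experience" modifies the classifications
print(classify((1, 0, 1)))      # now 'sour'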
In the 1950s John von Neumann studied what is involved in constructing a self-reproducing automaton. He realized that for a machine to be able to reproduce itself from one generation to another requires the machine to have within itself a complete model of what it is, including the instructions by which it was initially constructed. The import of his analysis is that each living system that reproduces itself must contain a formal forward-looking or “feedforward” model, and that that internal forward-looking system or model must be an exhaustive specification of what the system actually is. In the 1960s the same conclusion was arrived at by psychologists such as Karl Pribram (Miller et al., 1960; Pribram, 1971), when they studied what was necessary for a behaving system to achieve a goal. A goal is, by definition, a forward-looking functional concept that cannot be specified without incorporation of anticipated behavior. Since behavior is always functional (goal oriented), there is no purely physical specification of behavior possible, even down to the micro-particulate level of analysis. Functional concepts are not physical, so they cannot be specified by physical accounts alone—the physical simply can never disclose anticipation or expectation or purpose. Pribram and his associates proposed a two-process model of neural activity, the TOTE unit, for test-operate-test (again)-exit. A simple model of this is a thermostat coupled to a furnace: the thermostat is a bias mechanism which turns the furnace on or off when the temperature reaches a predetermined set point. The addition of that bias—through a feedforward neural mechanism in organisms—allows organisms to anticipate and correct for an intended future state. This selection, analogous to downward causation, is not done by invoking some Aristotelian “final cause” pulling events from the future but rather by a material cause, like the thermostat, taking advantage of ongoing central neural activity to direct and shape Aristotelian “material or efficient causation” in the present. The servo mechanism of the TOTE procedure is a hierarchically controlled structure within the ongoing complexity of the nervous system that enables biasing (or forward-looking modeling) within the present time frame. Rosen (1985) provided a general logico-mathematical account of anticipatory systems such as the biased TOTE unit and Hayek's conception of the nervous system as an instrument of classification. Rosen emphasized that anticipatory systems have the ability to compress time. As one of his students (Louie, 2012) observed:

Organisms seem capable of constructing an internal surrogate for time as part of a model that can indeed be manipulated to produce anticipation.… This “internal surrogate of time” must run faster than real time. It is in this sense that degrees of freedom in internal models allow time its multi-scaling and reversibility to produce new information. The predictive model of an anticipatory system must not be equivocated to any kind of “certainty” (even probabilistically) about the future. It is, rather, an assertion based on a model that runs in a faster time scale. The future still has not yet happened: the organism has a model of the future, but not definitive knowledge of the future itself. (p. 20)
This model depends on feedforward. Feedforward is what can compress the time dimension. In contrast, feedback control is, in essence, actuated by presently observed error—the stimulus to corrective action in a feedback system is the discrepancy between the system's actual present state and the state that the system should be in according to the model (bias). So feedback control works only when a system has already departed from what it was supposed to be doing before that feedback control can begin to be exercised. An example would be the cybernetic helmsman of a boat correcting the course when it is observed to be deviating from the set course previously specified. Feedforward, on the other hand, presets the system behavior, with an internal model relating present inputs to their predicted outcomes, as the actual control of the system. It sets the course for the helmsman. Feedforward systems have the necessary corrective present change of state determined by a presently existing model of an anticipated future state. The vehicle of anticipation is in fact the internal model—the stimulus for change in action is not simply the present input from “outside” the system—it is the prediction under those present conditions that is contained within the model. The discrepancy between the outcome predicted by the model and present input drives behavioral change.
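The contrast can be made concrete with a toy helmsman. The sketch is illustrative only: the “boat” dynamics, the disturbance, and the gains are assumptions, not a model from Rosen or Pribram.

# Error-actuated feedback versus model-driven feedforward. A constant
# current pushes a toy boat off course on each time step.
CURRENT_PUSH = 0.5   # illustrative disturbance
SETPOINT = 0.0       # the course to hold

def feedback_step(heading):
    # Correct only after deviation has been observed (present-tense error).
    error = heading - SETPOINT
    return heading - 0.5 * error + CURRENT_PUSH

def feedforward_step(heading, model_push=CURRENT_PUSH):
    # The internal model predicts the next state and corrects in advance:
    # the "test" is run against the model, not against damage already done.
    predicted_error = (heading + model_push) - SETPOINT
    return heading - predicted_error + CURRENT_PUSH

fb = ff = 0.0
for _ in range(50):
    fb = feedback_step(fb)
    ff = feedforward_step(ff)

print(f"feedback residual error:    {fb:+.3f}")   # settles off-course
print(f"feedforward residual error: {ff:+.3f}")   # stays on course

The feedback helmsman ends up permanently trailing the disturbance, while the feedforward helmsman, holding a model of the push, never departs from the set course at all.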
What is modeling? How do we know that one of two systems models the other? The key is new knowledge: we may learn something new about a system by studying the system which is its model. As Louie (2012) put it, the essence of a modeling relation consists of specifying an encoding and a corresponding decoding of particular system characteristics into corresponding characteristics of another system, so that implication in the model corresponds to causality in the system (see p. 21). Thus for a mathematical modeler, a theorem about the model (an implication) becomes a prediction about the system behavior. A computer program can “derive” behavioral consequences from the commands written into the program. This makes it clear that within the “natural” or physical system probabilistic causality holds, while in the formal system (the model) the equivalent causality is deductively inferential, specified as the relationship between premises within a formalized deductive system and a consequence of that system which is the “inference.” This is another way of stating the distinction between physicality and functionality—the causal relationships in functionality are purely deductive consequences of a formalism (see Abel, 2011; Abel & Trevors, 2005), while in a physical system causality is a matter of (probabilistic) forces defined by physical theory and frozen accidents.
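A schematic rendering of this modeling relation (the toy dynamics and encodings are invented for illustration) makes the correspondence explicit:

# Encode the system's state into formal symbols, infer (deduce) within
# the model, decode the result as a prediction, and compare it with what
# the causal system actually does.
def natural_system(position, velocity, dt):
    return position + velocity * dt          # the causal side (toy dynamics)

def encode(position, velocity):
    return {"x": position, "v": velocity}    # state -> formal symbols

def infer(state, dt):
    return {"x": state["x"] + state["v"] * dt, "v": state["v"]}  # deduction

def decode(state):
    return state["x"]                        # symbols -> predicted observable

pos, vel, dt = 1.0, 2.0, 0.1
prediction = decode(infer(encode(pos, vel), dt))
outcome = natural_system(pos, vel, dt)
print(prediction == outcome)   # True: inference tracks causality here

When the model is less faithful than this deliberately perfect toy, the two paths no longer agree, and that discrepancy is precisely the “error” to which Rosen's account now turns.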
Rosen's account emphasized that error and emergence are crucial to models and anticipatory systems. In the first place, no model is ever exactly the same as the system actually will be in the future: statistical or probabilistic “error” is always present, as is a finite and thus inherently incomplete representation in the model (just as a measurement in physics, being finite, is never a complete specification of an indefinitely extended physical ensemble). Error thus arises as a necessary consequence when the behavior predicted diverges from what actually is exhibited. Such a system can fail, and do so catastrophically—if its behavior is directed by the incomplete and inaccurate model—or by a failure in the physico-mechanical system, such as a defective receptor causing a bad input or a broken effector causing behavior to deviate from what was intended. On the other hand, emergence enters in when the discrepancies are not sufficiently great to cause a catastrophic failure, and the system will deviate from an initially “intended” path, as when the content of the model contains something new and informative, or not found in the prior system itself. This point has been studied by many philosophers of science, who have noted that a model or metaphor contained within a theory (an explicit theory in science) can (because of the richness of so-called excess content) lead to new knowledge of the theory's domain when it shows that the old or prior theory must be enriched by the content of that novel material (see Hanson, 1970; Hesse, 1963; Kuhn, 1970, 1977). Such novelty is, strictly speaking, from the standpoint of the theory of anticipation an error or a discrepancy. But in this case, from the standpoint of common sense or scientific knowledge acquisition, it is an “error” with beneficial results: the generation of new, not yet acknowledged or anticipated, knowledge. Bronowski delighted in using examples of such “unintended consequences” to show that we are not as mindlessly reliable as computers. Mathematical models in physics often “predict” that striking things will be observed in physical reality that we have not anticipated, but which are simply deductive consequences of the mathematical formalism the theory utilizes. These are “predictions of the theory” only to us, because we had not thought of them until the mathematics was explored. But those “predictions” arose the instant the theory specified that the mathematical framework fitted reality. The fact that the CNS operates as an anticipatory system allows us to understand what Hayek said decades ago: “the organism must live as much in a world of expectation as in a world of ‘fact’, and most responses to a given stimulus are probably determined only via fairly complex processes of ‘trying out’ on the model the effects to be expected from alternative courses of action” (1952, p. 121). Organisms live in an environment that is actually an econiche (a better term than the phenomenalistic concept of an umwelt) of their own (partial) construction due to anticipation and expectation. Both organism and environment are real entities, but neither can be defined except with respect to each other.
Economic Orders Are Not Agents and Do Not Have Expectations

Agents have anticipations and expectations; market orders do not. Market order “information” (which is always semantic in addition to the syntactic bits of communication theory) is not contained in any bias or “model” internal to the market order itself. Markets provide an ongoing pattern or flow of functional information (analogous to the flow of patterns within the CNS) which is potentially available to any agent who participates in the order. An individual may choose to behave or not behave—join the market process and make a purchase (or put something up for sale), or not participate and hence do nothing, at least for the moment. The market has no functionality. We as agents interpret—which provides functionality to us—the information that is distributed equally throughout the market order, each for our individual purposes. The control structure of the market is that of a decentralized polycentric model (Polanyi) or coalitional structure (von Foerster) of related polycentric orders, in which the participation of a single agent is represented by a single node that in itself is never necessary for overall market functioning. In contrast, hierarchical orders (equivalent to directed or “command” economies) are found up to the corporation as an organization (what
Hayek in many locations [1973, 1976, 1978, 1979] called a taxis structure). Market orders are cosmic structures without a “command” node, just impersonal results of actions but not designs, and they neither infer nor expect because, while spontaneously ordered structures, they are not agents and lack any functionality. It is only our use of them which is functional to us. The social cosmos results from the unanticipated consequences of actions of diverse human participants. This presents a unique form of interaction: by definition the social cosmos is a cosmos, and when taken as a whole, so is the human being. But when human beings as individuals intend to interact with the market order they do not do so as complex cosmic structures—they do so as a directed taxis, with only a limited set of specific goals and requirements in mind, ignoring all other potential concerns or goals or relationships they could enter into. All cosmic social orders exist only because of this required integration of the individual acting as taxis within an overall market order or cosmic structure. Taxis organization is thus in fact an indispensable component of cosmic structures involving human beings. But the market order itself is totally impersonal and without expectation, inference, or any actual mechanism of anticipation. It is the result of human action, but not the result of intentional or functional design. All it can do when it “functions” for us is provide information which may (or may not) be utilized by any given individual. It is forever a means, never an end. That is its superior power. This fact—that the market is not an agent and does not anticipate or expect—is why the central problem of economics is how the market as a knowledge dispersing and transmitting system enables the distribution of labor and knowledge in the spontaneously evolving order of human affairs. The abstract or impersonal society in which we presently live is based upon a “mechanism” (divorced of any “machine” connotations) for impersonally distributing knowledge and goods while requiring us to possess an absolute minimum of knowledge of what is going on in that order as a whole. The problem for economics is literally to explain how we can do so much while knowing so little beyond our own momentary present situation and goals. We need to explain our creativity or productivity in economic action, exactly as the linguist must explain the creativity or productivity of natural languages or the psychologist must
explain the creativity or productivity of individual behavior (one class of which is economic action). All such tasks require that we provide complementary theoretical accounts of the functional or formal-logical domain of intentionality to disambiguate the infinitude of “physical” movements that can, at least in principle, be accounted for by physics. If we are to explain the phenomena in these domains we will need complementary theoretical accounts of both the functional and physical domains involved. If we have only one or the other our understanding will be incomplete, inevitably ambiguous, and incapable of aiding theory construction in any other domain.
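As a closing illustration of the asymmetry between agents and the order they generate, here is a toy sketch (entirely my construction, not a model from the text): the “market” below is nothing but a posted trace of past actions, with no expectations of its own, while all anticipation resides in the agents who consult it.

import random

random.seed(7)

# The "market order" is only an impersonal record of actions taken; it
# anticipates nothing. Each agent consults it as a taxis, with one narrow
# goal, and the resulting price is a result of action but not of design.
trades = []   # the whole "order": a public trace of past actions

def posted_price():
    recent = trades[-5:]
    return sum(recent) / len(recent) if recent else 10.0

class Agent:
    def __init__(self, reservation):
        self.reservation = reservation    # private knowledge, never pooled

    def act(self):
        price = posted_price()            # all any agent knows of the whole
        if price < self.reservation:      # the agent, not the market, expects
            trades.append(price + random.uniform(-0.5, 0.5))

agents = [Agent(random.uniform(8.0, 12.0)) for _ in range(20)]
for _ in range(30):
    for a in agents:
        a.act()

print(f"emergent price: {posted_price():.2f} (set by no one)")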
Recapitulation: Adaptive Behavior Shows That Apparent Teleology Does Not Violate Causality

The social physics approach rejects anything that suggests teleology is real. The supposition is that anything scientific requires an explanation that is a “push” from the temporal past rather than any sort of “pull” from the future. A further assumption is that any appeal to internal constraints and controls (e.g., Skinner’s hated “inner man”) must be unscientific because it somehow requires or enables teleological explanations of living behavior. We are now in a position to see what is correct and what is incorrect in these two assumptions. The first assumption, that causal accounts require consequences to be dependent upon antecedents, is completely correct. No teleological account, even one in physics such as John A. Wheeler’s famous “participatory anthropic principle,” can be regarded as other than a haunted universe doctrine camouflaging (at best) a problem somewhere else, or (at worst) simply as obscurantist nonsense. Wheeler’s account is a classic case of confusing epistemology with ontology, following Mach (through Bohr) into ontological phenomenalism when that step is neither necessary nor legitimate. Endorsing phenomenalism erases the separation between the knower and the known, leaving only the knower as the one existent in the
universe. Our consciousness has not “participated” in the prior creation of the universe. That is a total misrepresentation of Polanyi’s account of how life harnesses physicality. It is somewhat peculiar that physicists are so happy to make such leaps of faith, while simultaneously criticizing the social domains as “unscientific.” Neither teleology nor consciousness entails a “pull” from the future into the past. Downward causation is a clear example of why this is so. All living systems evolve gradually, as changes in populations produce, within the usual causal scenario, changes that are then incorporated into succeeding generations and the econiches they enable. The soldier termite’s jaws look as if a teleological pull were involved in their realization, when in fact that was not the case. The theory of anticipatory systems pioneered by von Neumann and Hayek, then partially formalized by Rosen, shows how Hayek’s claim that organisms live as much in a world of expectation as of factual description is realized in evolution. Evolutionary systems incorporate anticipation and expectation programs or “mechanisms” (however one wants to formulate it) in the evolving present, following traditional “push” causality. There is no known case of teleological “pull” causality in the universe. So the classical account of causal explanation is correct. Is the social physics model vindicated? No. There is the second assumption to be assessed. Internal constraints and controls do not require teleological “pull.” Cognitive processes both exist and are causal of agents’ behavior. But that causality is still a “push” rather than a “pull.” The nervous systems of higher mammals (such as ourselves) are perfectly capable of harnessing, through their higher order constraint capabilities, the “lower” or physical level of lawful regularity. Intentionality does not violate causality—our expectations and anticipations are always in the present, in the biasing component of models in a purely “physical” picture of nervous functioning. We are tool makers and users. We can and do harness the physical realm with tools and technology and our labor—building theories, homes, airplanes and highways, and medical/pharmaceutical products to increase lifespan and make more productive use of our lives, as well as the myriad other structures of modern civilization, all by the employment of our skill at transforming and utilizing the “physical” world. In so doing we change the physical structure of the universe. Adequate explanation of any phenomena
involving life and its functionality must have both a physical and functional component. The functional component will be teleological or intentional or purposive (or however one wants to put it), but it will not, therefore, be “noncausal” or unscientific.
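To make the claim concrete, an anticipatory “mechanism” of the kind just described can be sketched as a feedforward controller. This is my minimal sketch, not the author’s or Rosen’s formalism, and every name in it is hypothetical. The expectation is a model evaluated now, biasing present action; nothing reaches back from the future.

```python
# Minimal sketch (hypothetical names): anticipation as a present-tense model
# bias. The controller acts on what its internal model predicts NOW, so the
# causality is ordinary "push" throughout.

def act(state: float, model, setpoint: float) -> float:
    expected = model(state)        # expectation: the model run in the present
    error = setpoint - expected    # anticipated deviation from the goal state
    return state + 0.5 * error     # corrective action taken now, not later

# Example: an agent whose model expects the room to cool by two degrees.
cooling_model = lambda temp: temp - 2.0
print(act(20.0, cooling_model, setpoint=21.0))  # acts now on the expectation
```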
The Laws of Nature Are Not the Same as the Rules of Behavior

The physical domain searches for laws. Laws are statements of invariant and inexorable regularity: laws are to apply every-where and every-when, and an exception is never permissible. Finding an exception would mean that the purported “law” was not in fact an actual law of nature. That degree of absolute inflexibility is completely absent from the functional domain of behavior and cognition, which is based on error and probability, and in which the quest of scientific understanding is for rules of behavior. We try to reduce ambiguity as much as possible, but our understanding can never eliminate it. Rules are fallible and probabilistic rather than conceptually certain, and thus admit of exceptions. Rules of behavior are conventional rather than inexorable. The functional, but never the physical, admits error and is subject to correction or modification with the admission of more data. It should be obvious that it is our knowledge claims that are “wrong” when more data disclose that what we presumed to be a law of nature was not in fact so. It was not physical nature that was “wrong” in that instance. Thus there are no “laws” of functional behavior, no matter how thoroughly constrained and controlled and artificial the situation into which we place a living subject for research purposes. All we can hope to achieve are regularities at the level of the “deduction” or prediction of patterns of behavior, rather than the rate-independent, exact point predictions of ideally controlled physics experiments. Life is dependent upon memory at all levels, and memory, being composed of finite and incomplete measures, is in consequence unreliable, subject to change and error over time. So life’s regularities and fallible judgments, like the dynamical events of physics, are statistical rather than billiard-ball deterministic. The regularities that we achieve in
the rate-independent realm of conception, what we enshrine in our theories of behavior, can only specify determinate orderings and never inexorably deterministic ones. We hope to achieve well defined and tested rules of behavior—regularities—rather than inexorable every-where and every-when laws. Theories of learning and cognition in psychology, or of economic action in economics, are never every-where and every-when in their application. Detection of general patterns, specifications of possible abstract classes of behaviors if you will, are the best we can achieve. The laws of nature leave underdetermined, and therefore free to vary, all the behavior controlled by the higher order constraints of cognition. Rules are based upon choice contingency. Subjects are always, as Friedman and Friedman (1980) told us, free to choose. But subjects are fallible, finite, and restricted by their context. Therefore, their behavior can never be deterministically specified. Unlike the context-free rules that we call laws of nature, the rules of behavior are intrinsically context-sensitive or context-restricted. Since they are context-sensitive they are indefinitely more complicated and content-inclusive (meaningful) than the “relatively simple” laws of nature—which, as compressed algorithms, have very little meaning or intension at all. While physics faces the problem of “degrees of freedom” in many areas (such as the non-holonomic nature of record-keeping in the problem of measurement), the number of areas in which such freedom exists in the functional realm is vastly greater. Consider an analogy: an adequate equivalent of the knowledge possible in physical science would have to range over every single subject of conceptual activity (because each subject is an independent causal nexus determining a particular trajectory), in comparison with the “simple task” of physics, which attempts to discern general laws for identical subjects—which is effectively to say, for only one subject. Discerning rules of behavior is indefinitely more complicated (and thus harder in very many respects) than discerning the low-content laws of nature.
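The contrast can be put schematically. The following formalization is mine, not the author’s notation; it simply restates the distinction just drawn.

```latex
% Schematic contrast (my rendering, not the author's notation).
% A law of nature is exceptionless and context-free:
\[
\text{Law:}\qquad \forall x\,\forall t\;\big(C(x,t)\rightarrow E(x,t)\big)
\qquad \text{(every-where and every-when, no exceptions)}
\]
% A rule of behavior is defeasible and indexed to a context:
\[
\text{Rule:}\qquad P\big(\text{response } r \mid \text{context } c\big) < 1
\qquad \text{(fallible, context-sensitive, admits exceptions)}
\]
```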
Another Recapitulation: The Physical Sciences Also Require a Duality of Descriptions

Quantum phenomena forced an explicit admission of the consequences of Newton’s separation of laws from initial or boundary conditions, which played out primarily in the famous “measurement problem(s)” leading up to the so-called cat paradox, and in the uncertainty relations emphasized by Heisenberg. Measurement is a problem of meaning, and that entails the inevitable separation of the knower from the known—the physical from the functional. Where one places that separation, the epistemic “cut” between them, determines the knowledge that can result. That is why a well-known physicist, John Stewart Bell (2004), called it a “shifty split”: where one chooses to place the measurement in an experiment can influence the knowledge that results from it. Our choices—of what and when and where to measure (and obviously, of what constitutes a measurement)—determine what quantum phenomena actually result, and what our experimentation can disclose. Theories of physical reality are our functional choices, and that constrains what we can know about external reality. If we do not want to lapse back into phenomenalism we must, as Bell emphasized, remove measurement and other commonly used functional concepts from the fundamental variables of quantum theory, and use only physical concepts: “Here are some words which, however legitimate and necessary in application, have no place in a formulation with any pretension to physical precision: system, apparatus, environment, microscopic, macroscopic, reversible, irreversible, observable, information, measurement” (Bell, 2004, p. 215). If these inherently functional concepts remain in the fundamental theory, there is hopeless ambiguity and confusion of the knower and that which is to be known. Physical science has to keep them straight, which is to say, separated.
References

Abel, D. L. (2011). The First Gene: The Birth of Programming, Messaging and Formal Control. Longview Press.
Abel, D. L., & Trevors, J. T. (2005). Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information. Theoretical Biology and Medical Modelling, 2, 29. https://doi.org/10.1186/1742-4682-2-29
Bell, J. S. (2004). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press.
Campbell, D. T. (1974). “Downward Causation” in Hierarchically Organized Biological Systems. In F. J. Ayala & T. Dobzhansky (Eds.), Studies in the Philosophy of Biology. Macmillan and Company.
Ferguson, A. (1767). An Essay on the History of Civil Society. Public domain text, available from Online Library of Liberty, Liberty Fund.
Friedman, M., & Friedman, R. (1980). Free to Choose. Harcourt.
Hanson, N. R. (1970). A Picture Theory of Theory Meaning. In M. Radner & S. Winokur (Eds.), Minnesota Studies in the Philosophy of Science, IV (pp. 131–141). University of Minnesota Press.
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Hayek, F. A. (1973/2012). Law, Legislation and Liberty: Vol. 1. Rules and Order. University of Chicago Press. Now Routledge Classics.
Hayek, F. A. (1976). Law, Legislation and Liberty: Vol. 2. The Mirage of Social Justice. University of Chicago Press.
Hayek, F. A. (1978). New Studies in Philosophy, Politics, Economics and the History of Ideas. University of Chicago Press.
Hayek, F. A. (1979). Law, Legislation and Liberty: Vol. 3. The Political Order of a Free People. University of Chicago Press.
Hesse, M. (1963/1966). Models and Analogies in Science (Rev. ed., 1966). Notre Dame University Press.
Kauffman, S. A. (2019). A World Beyond Physics. Oxford University Press.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (Rev. ed.). University of Chicago Press.
Kuhn, T. S. (1977). The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press.
Longo, G., Montévil, M., & Kauffman, S. (2012). No Entailing Laws, but Enablement in the Evolution of the Biosphere. Genetic and Evolutionary Computation Conference (pp. 1379–1392). ACM. https://doi.org/10.1145/2330784.2330946
Louie, A. H. (2012). Robert Rosen’s Anticipatory Systems. Foresight, 12, 18–29.
Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. Henry Holt and Company.
Polanyi, M. (1969). Knowing and Being. University of Chicago Press.
Pribram, K. H. (1971). Languages of the Brain. Prentice-Hall.
Rosen, R. (1985/2012). Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. Springer.
Thomas, L. (1974). The Lives of a Cell. Penguin Random House.
Part IV Complexity and Ambiguity
Organisms are self-produced, organized, complex phenomena that can only be understood by taking account of their historical development. We need to survey what is involved in such complexity, to see why explanation can no longer be the deduction of particulars or the specification of point predictions of quantities. Understanding involves seeing the patterns of behaviors that can be generated by subjects in general classes of situations. Just as the nervous system classifies patterns, our explanations of complex systems must disclose patterns, general rules of system behavior that are capable of generating the sorts of behaviors we observe in the situations we study. We can never predict with certainty the next behavior a subject will exhibit, any more than we can predict the specifics of the next sentence someone will utter. We can know how behavior is generated the same way we can know how language is generated—by studying (and then specifying) the abstract or underlying rules that led to the generation (or “cause”) of the given instance in the context in which it occurred. All our understanding is going to be context-sensitive, and all our determination of the meaning of behavior is going to be inherently ambiguous. The best we can do to disambiguate (either our scientific
understanding or our observed behavior) is to look back over the derivational history of the observed surface strings of occurrences—whether instances of language, overt behavior, or neural activity.
13 Understanding Complex Phenomena
How can one produce a theory of freedom and a theory that constrains freedom at the same time?
Howard H. Pattee

We ought to regard what we call mind as a system of abstract rules of action.
F. A. Hayek
Complexity—or very high complication—adds problems and places limitations on the nature of our knowledge and on how it can be achieved. The nature of explanation is different: all we can hope to achieve are accounts of patterns of behavior rather than precise prediction of definite values (numbers) or points. Added to the problems of scaling and mensuration noted in earlier chapters is the fact that the best accounts we can provide are not quantitative in the sense of the simple domains such as physics. The situation is as Hayek noted: such accounts and their limited conclusions and predictions will refer only to some general properties of
the phenomenon—to a kind of phenomenon rather than to a particular event (see Hayek, 1967, p. 15). Our knowledge is essentially generic, not individual or particular.¹
Explanation of the Principle

In complex phenomena such as biological evolution, the evolution of human culture, the market order in economics, and the psychology of the individual, explanations refer to general kinds or classes. Whereas the hard sciences assume as a matter of course that increased experimental precision will always result in more precise prediction of particulars, that result is not found in the so-called soft sciences (as Meehl, 1967, cogently noted). In complex phenomena increased precision does not aid prediction; it actually diminishes it. This should not be surprising, since all living subjects are unique, and so the more precise our information the more we in fact limit the generalizability of a conclusion to fewer and fewer subjects, until finally, with ultimate precision, only to that individual who provided the data upon which it was based. As Hayek emphasized:

Physics has succeeded because it deals with phenomena which, in our sense, are simple. But a simple theory of phenomena which are in their nature complex (or one which, if the expression be preferred, has to deal with more highly organized phenomena) is probably merely of necessity false – at least without a specified ceteris paribus assumption, after the full statement of which the theory would no longer be simple (ibid., p. 28).
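The uniqueness point can be made concrete with a toy computation. This sketch is mine, not Meehl’s or Hayek’s, and its data are invented: each conditioning criterion added for “precision” shrinks the class of subjects to which a conclusion can generalize, in the limit to a single individual.

```python
# Toy sketch (invented data): added precision shrinks generalizability.
subjects = [
    {"species": "rat", "age": 3, "strain": "wistar", "weight": 200 + i}
    for i in range(100)  # every subject differs in some respect (here, weight)
]

criteria = [
    lambda s: s["species"] == "rat",
    lambda s: s["age"] == 3,
    lambda s: s["strain"] == "wistar",
    lambda s: s["weight"] == 250,  # maximal precision: picks out one subject
]

pool = subjects
for criterion in criteria:
    pool = [s for s in pool if criterion(s)]
    print(len(pool))  # 100, 100, 100, 1
```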
Hayek’s conclusion mirrors the one reached in automata theory by John von Neumann (1955, 1966), who gave an informal but rigorous proof that, for phenomena of high complexity (to be defined below), the least complex model of such a complex phenomenon would possess a degree of complexity equal to the thing itself. Thus a model (as a candidate explanation for the phenomenon) capable of behaving exactly like a complex phenomenon would, for all practical (and of course for theoretical and explanatory) purposes, be another instance of that phenomenon. When we reach such a degree of complexity—as in the
general phenomenon of evolution, the functioning of the central nervous system in higher mammals, overall social and cultural interactions, economic orders, etc.—explanation of the principle is all that we can hope to achieve, granted the finite nature of the human condition. Our formal models, as attempts at understanding, require abstract principles of determination for complex behavior, not simply a new instance of the phenomenon itself. The situation was noted by Arturo Rosenblueth, in a paper written with Norbert Wiener (1945), “The Role of Models in Science.” As they put it, a material model is the representation of a complex system by a system “which is assumed simpler and which is also assumed to have some properties similar to those selected for study in the original complex system. A formal model is a symbolic assertion in logical terms of an idealized relatively simple situation showing the structural properties of the original factual system” (p. 317). This paper provided the famous comment that the best material model for a cat is “another, or preferably the same, cat.”
A Precise But Unspecifiable Definition of High Complexity

The transition point at which the least complex rigorous model of a phenomenon is another instance of that phenomenon can serve as a definition of high complexity. Above that level of complexity are the complex, self-organizing structures we referred to above. Below this level are the organizational structures found in the so-called simple domains. von Neumann noted this (in Burks, 1966), arguing that, within the logical “type” involved in a system, one can perform everything that is feasible to perform, but that the question of whether something is feasible in a type belongs to a higher logical “type” (a metalanguage question). Then “It is characteristic of objects of low complexity that it is easier to talk about the object than produce it and easier to predict its properties than to build it. But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object” (p. 51).
If we substitute the term “explanation” for the term “logical type in a system” in the quotation above, we have a specification of high complexity: the simplest possible explanation for a phenomenon of high complexity is of higher complexity than the phenomenon itself.
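Put schematically (this rendering is mine, not von Neumann’s or the author’s notation):

```latex
% Let C(.) measure complexity and let M range over the class of adequate
% models of a phenomenon P.
\[
\text{simple domain:}\qquad \min_{M \in \mathcal{M}(P)} C(M) \;<\; C(P)
\]
\[
\text{high complexity:}\qquad \min_{M \in \mathcal{M}(P)} C(M) \;\geq\; C(P)
\]
% Above the threshold, the least complex adequate model of P is at least as
% complex as P itself -- in effect, another instance of P.
```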
Limits of Explanation: Complexity and Explanation of the Principle

Since explanation is equivalent to modeling, there are abstract constraints that apply to all systems that create such explanations or models. One constraint will be the limit of explanatory capability that a given modeling system possesses—it will be beyond the capacity of systems to explain or model phenomena that are more complex than the systems themselves. An obvious limitation is reached in self-explanation: the system can only be itself; it can never be a model of itself, for it can never separate the subject knowing from the object to be known. Consider in this regard the attempt to model the human central nervous system: “Any apparatus of classification must possess a structure of a higher degree of complexity than is possessed by the objects which it classifies;…The capacity of any explaining agent must be limited to objects with a structure possessing a degree of complexity lower than its own.…No explaining agent can ever explain objects of its own kind, or of its own degree of complexity, and therefore,…the human brain can never fully explain its own operations” (Hayek, 1952, p. 185). While this logical point is independent of explanation of the principle, it relates directly to that concept. If the phenomena of interest are those of “organized complexity” such as our brains, or spontaneous market orders, social structures, etc., it follows that all human understanding can hope to achieve is explanation of the abstract principles according to which the system operates. No one will ever succeed in modeling such a system completely in its particularity, nor will we be able to confirm the adequacy of our models in all their particularity. Instead, our knowledge of a model’s adequacy can be determined only negatively—by falsification—in which case we will learn only that the model (in conjunction with our assumptions) was incorrect, and a “good model” will be one
that, as Popper (1959) long emphasized with regard to science, has thus far survived our sincere attempts to refute it. Hayek’s example of an explanation of the principle was the theory of evolution, which cannot ever predict the emergence of a single organism (or a single species), or the creation of a novel potential econiche, but which explains the principles according to which species (and hence organisms within a species) arise. Other examples are found in the theory of the growth of knowledge, in the genesis of the market order, and in all spontaneously ordered social phenomena that are the result of action but not design. In general, all but the very simplest phenomena involved in life and its evolution will require explanations of the principle, as would all cybernetic or functional phenomena if we extend Wiener’s original definition beyond control by feedback systems to include self-productive agency, with anticipatory or feedforward mechanisms in that control structure (as noted in Chapter 12). We find this limit upon explanation somewhat upsetting, because we have long been told that we should be able to deal with complex phenomena in exactly the same manner as simple phenomena. This is a prejudice—the idea that one type of science fits everything—and it has sustained attempts at total control from Comte early in the nineteenth century through Watson and Skinner in the twentieth, and it is responsible for the present-day attempts to find the “laws of nature” from which we can deduce all human and social behavior in full particularity. It also underlies the original motivation for the study of artificial intelligence, from Simon and his associates to the present. The problem that has stymied researchers enamored of this goal is always the same: how to achieve enough “rigorous” control of self-producing living entities to get a clear picture of the presumed-to-exist inexorable “law” underlying the complex behavior being studied. Thus behaviorists in psychology have resorted to artificial constraints upon behavior, as in the experimenter-defined and isolated Skinner box, in which only one thing—closure of a microswitch—defines a response, and the actual behavior of the organism—not even looked at by the experimenter—is completely irrelevant to what is recorded as a “response.” This is far
from the situation in simple physical studies, where results from one electron or given force are always identical to those from another electron or force, and increased control and precision of measurement are beneficial.
The Superior Power of Negative Rules of Order

A corollary to explanation of the principle is the role of negative rules of order in complex domains. It is not accidental that the rules of morality (to take the example of how complex social systems are “organized” or “controlled”) in all cultures are prohibitions of general classes of action—the “Thou shalt nots” of biblical injunctions and the unwritten rules of conduct governing ordinary social interactions, which are most easily expressed as “Don’t dos.” The negatives convey an indefinite amount of information in a minimal formulation, thus obviating the otherwise insuperable problem of memory (or storage) for the indefinite number of positive prescriptions of particular behaviors that would provide the same total amount of direction to an individual. Negative rules—Don’t do this or that—enable us to adapt to the unknown and unforeseen by specifying general classes of prohibited actions. In that regard they are “forward” looking, whereas a long list of “Do dos” is backward looking, and unable to handle the unforeseen (because the unforeseen is not specified in the list of what to do). This superior power is the result of evolution (Weimer, 2020). The task of the first nervous systems was to detect two classes of changes: those that were productive of continued well-being for the organism, and those that were productive of harm. The great step for nervous systems was the orienting response—the detection of change or novelty with regard to the background level of ongoing stimulation. Then learning (in the individual) or adaptation (in the species) could occur as a result of the consequences to the organism of that response to change. The nervous system learns to match to a standard (by not responding) and to detect change from the standard (by responding). The consequences of behavior reinforce future behavior. When the result is beneficial, it leads to increased probability of that response. If the
response is deleterious, the response probability will decrease and other responses will be more likely to take its place. That is a simplified presentation of the theory of learning as a result of reinforcement of (or consequences of) behavior. But what is learned in such situations? What knowledge does an organism get from the learning situation? This question leads to the famous asymmetry between the logic of modus ponens and modus tollens. As Karl Popper (1959) first argued in 1934, the import of this asymmetry is that we can never learn that a theory or hypothesis is true from modus ponens reasoning (101 million “confirming” modus ponens instances could always be followed by a single “refuting” instance according to modus tollens). In contrast, a single falsifying instance (via modus tollens) suffices to show that an hypothesis (plus whatever auxiliary assumptions we made) cannot be correct. The process of learning in an organism is exactly analogous: 101 million confirming or positive reinforcements do not mean that the organism’s “theory” is correct or “proven” or “justified.” Just ask that turkey who, on a crisp November morning, expected food to always be forthcoming when the farmer entered his pen. The turkey’s theory did not include information about a holiday called Thanksgiving in the United States. Organisms learn something new only when an expectation, a theoretical hypothesis which has been adopted, is falsified. This is why so-called inductive confirmation cannot exist, whether conceived as knowing “for certain” or merely knowing “probabilistically.” What organisms learn is not what is correct or true or proven, but rather what mistakes to avoid making again. We learn what does not work. There is no difference in this procedure between a scientist’s most esoteric conjecture, a rat’s deciding to try a new maze alley, or the simplest organism with a nervous system trying to find food and avoid noxious stimulation. There can be cases of “positive” learning only in totally artificial situations in which a finite number of choices are possible, and the choices are specified in advance to the organism. This would be like counting cards in a card game in order to determine whose hand holds which remaining cards. This is so-called eliminative or enumerative induction, as each successive round eliminates more possibilities from the finite deck
and thus limits the possibilities for which cards remain in a given player’s hand. In an unpredictable world with unforeseen consequences resulting from our behavior, it is possible to learn about new situations only when an hypothesis is falsified, i.e., does not lead to the expected result. But it is not possible to learn more than that a hypothesis which is thus far compatible with our expectations has yet to be refuted. What we learn is what mistakes not to make, not that we are right or that the hypothesis is true or justified just because it has not yet been falsified.
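The asymmetry can be displayed in schematic form (my rendering; T is a hypothesis, O an observable consequence of it):

```latex
% Valid inference forms, with T -> O as the background conditional:
\[
\textit{modus ponens:}\quad \frac{T \rightarrow O,\;\; T}{O}
\qquad\qquad
\textit{modus tollens:}\quad \frac{T \rightarrow O,\;\; \lnot O}{\lnot T}
\]
% Observing O confirms nothing about T: inferring T from (T -> O) and O is the
% fallacy of affirming the consequent. Observing not-O, however, validly
% refutes T -- strictly, the conjunction of T with the auxiliary assumptions.
```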
Negative Rules of Order Constrain the Social Cosmos

This is because positive rules that specify particular actions or results to achieve cannot deal with the indefinite welter of events which an organism (or an individual in society) may encounter. Novelty cannot be addressed by positive prescriptions. All successful theorizing in domains of essential complexity utilizes a context of constraint consisting of three overarching regulative principles. These principles capture the regularity of what are in essence dynamic equilibrating systems that exist and evolve only as a delicate balance of essential tensions. Teleologically described, they “aim” toward an equilibrium that they never in fact reach, and so the systems continue to tend toward equilibrium without ever achieving it. Three sets of principles regulate change in every spontaneous order I have studied. The first principle is creativity or productivity. Such systems exhibit fundamental novelty, change (at the level of the particulars involved) that is inherently unpredictable. The second principle is rhythm and its progressive differentiation over time. All change is rate-dependent and should be subject to dynamical rules that differentiate systems over time. The third principle is regulation by opponent processes. Development fluctuates between extremes that constrain possible changes. Going beyond those extreme end points leads to breakdown of system organization. Interaction of these three principles creates an essential tension, literally a context of constraint, between the previous form of organization, the present state of organization, and future changes. This essential
tension between tradition and innovation, stability and change, is a dynamic equilibrating tendency common to all essential complexity. It is a manifestation of the superior power of forces of disequilibrium over equilibrium in all such structures. These explanatory or regulatory principles are strikingly different from the “positive” principles that prescribe particulars in simple domains—they are essentially negative or prohibitory of classes of action. The context of constraint manifests its power by prohibiting the occurrence of particular (classes of) events. Creativity and complexity can neither be explained nor brought about by the positive prescription of particulars (e.g., commands such as “You must do this X in situation Y”). Successful theory is negative: it specifies its domain in terms of constraints that its phenomena cannot violate. Consider the difference between two types of directives. First, the “simple” prescription of a positive particular: sit up straight in your chair. This is algorithmic (computable), and we can all decide whether or not an individual has completed this task. In contrast consider the directive: lead a just life. This is abstract, indeterminate, and not computable, nor capable of fulfillment by even an infinite list of particular actions. It is a never-ending task whose precise character can never be specified in advance. This is not a computable function—and thus the computation metaphor of mind so popular now is quite obviously false. The only way this command can be approached is negatively, as a directive prohibiting all forms of injustice. The taboos and “don’t dos” are the only type of rules that can regulate conduct in the spontaneous cosmos of society. They can tell us what mistakes to avoid without attempting to delimit in advance what classes of particulars must be achieved. They allow novel behavior rather than restricting us to what is already known (the “merely” probable or the predictable). The practice of falsification in scientific praxis is an example. Since all theories worth exploring are productive (i.e., they have an infinitude of particulars as their logical consequences), it follows that confirming results—no matter how many—can never show that the theory in question is true. All we can do is show that a theory is false (which is to say inconsistent with the combination of the theory plus the relevant background assumptions) if one of its predicted
results fails to obtain. We then know that we cannot retain both the theory and all of those assumptions.
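The contrast between prescribing particulars and prohibiting classes can also be sketched computationally. This example is mine, not the author’s, and the action names are invented. The point is only that a finite “do” list blocks every unforeseen action, while a finite “don’t” list leaves an open-ended space of novel actions available.

```python
# Illustrative sketch (invented names): positive prescriptions versus
# negative (taboo) rules in the face of novelty.

PRESCRIBED = {"sit_up_straight", "pay_taxes"}        # finite "do" list
PROHIBITED = {"fabricate_data", "steal", "defraud"}  # finite "don't" list

def allowed_by_positive_rules(action: str) -> bool:
    # A whitelist is backward-looking: a genuinely novel action always fails,
    # because it was never specified in advance.
    return action in PRESCRIBED

def allowed_by_negative_rules(action: str) -> bool:
    # A taboo list is forward-looking: indefinitely many unforeseen actions
    # remain open, so long as no prohibited class is violated.
    return action not in PROHIBITED

novel_action = "publish_open_dataset"  # unforeseen by the rule-makers
print(allowed_by_positive_rules(novel_action))  # False: novelty blocked
print(allowed_by_negative_rules(novel_action))  # True: novelty permitted
```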
Science Is Constrained by Negative Rules of Order

Thomas Kuhn’s (1970, 1977) conception of normal science as paradigm-based puzzle solving according to inculcated traditions that arise within the relevant research community is well known in both the simple and complex domains. It was a pioneering account in three respects. First, it portrays the praxis of science as equivalent to participation in a market order. Second, it is an explicitly evolutionary approach to the growth of knowledge. The power of tradition—the tacit rules of scientific methodology—cannot be captured in positive prescriptive rules. The individual researcher absorbs the “rules of the game” simply by practicing in the presence (or near the presence) of senior researchers. Third, it is crucially dependent upon the independence of the social from the psychological, and admits, following Polanyi, that the social domain is tacit rather than explicit. If you ask successful scientists what they are doing in their research they will give you an account (an explicit “rational reconstruction”) of what they think science is supposed to be. If you then look at their actual practice, it will bear little if any resemblance to whatever the positive prescriptions of the preferred methodology entail. As Polanyi (1958, 1966) emphasized, science is largely tacit. The practitioners know far more than they can tell. This tacit dimension of research praxis cannot be captured in explicit methodological prescriptions. The only successful methodological rules are prohibitions of certain kinds of action—e.g., do not fabricate data, do not cook the results with the wrong statistics, and do not ignore contradictory results. C. S. Peirce (1898/1992) put it beautifully at the end of the nineteenth century:

Upon this first, and in one sense this sole, rule of reason, that in order to learn you must desire to learn, and in so desiring not be satisfied with what you already incline to think, there follows one corollary which itself deserves to
be inscribed upon every wall of the city of philosophy: Do not block the way of inquiry.
Negative Rules of Order in Society

Spontaneously arisen orders allow the performance of (potentially indefinitely many) unforeseen particular acts because they are governed by deep structural abstract rules of determination. The context of constraint provided by such abstract rules is negative in several senses. The first negative sense is that rules of order are negative or prohibitory injunctions against certain classes of actions. The Scottish moralists of the eighteenth century knew this better than our generation: “The fundamental law of morality, in its first applications to the actions of men, is prohibitory and forbids the commission of wrong” (Ferguson, 1785, p. 189). This is why the regulatory ideals of social conduct are always taboos. Justice, freedom, peace, as well as truth and similar concepts in science, are specified in terms of the elimination of their opposites, not in any positive specification of particulars that must be achieved. The second negative sense concerns the predictive or anticipatory power provided by an explicit knowledge of the rules. All that is available is explanation of the principle and, with it, general pattern prediction. This shows why explanatory theories of complex phenomena can never predict the occurrence of particular events. We can never achieve that infinitely precise specification in complex domains. A third sense concerns the indispensability of our ignorance when acting as agents within complex orders. The tacit dimension of behavior is not conscious, is not explicit, and in that sense must remain forever unknown to us while it is guiding our actions. The social domain is not the same thing as the individual psychology of consciousness. All that we can hope to achieve in theories of society is an understanding of regulatory principles that govern certain classes of occurrences. We can never understand what is controlling our behavior while it is doing so. This indispensable negativity becomes ubiquitous in character when we collate examples from disparate areas. From the Scottish moralists such as Hume, Smith, and Ferguson, we learn that justice can be
defined only as the elimination of injustice, and that its achievement can never be attained once and for all, but requires a standing order of obligation throughout our lives. Similarly, political and intellectual freedom depends upon adherence to a framework of rules that delimit how creativity can occur. Creativity itself, whether in social or economic conduct, in the ability to use language productively, or in the genesis of behavior, is regulated by a context of constraint that consists entirely of inhibitory or prohibitory rules. To such examples one may add the negative definition of “economic” in terms of scarcity, of what is not ultimately available (Menger, 1871/1950). Menger also emphasized that the concept of cost, as the importance of the next most urgent want that can now no longer be satisfied, and thus the concept of marginal utility, are inherently essential negatives. Popper even defined the empirical domain of science negatively, equating empirical content (for a theory) with the possible states of affairs that the theory forbids to occur. The methodology of scientific research also provides numerous negatives (many of them Popper’s) such as “Don’t attempt to justify,” “Do not argue about linguistic definitions,” “Do not fabricate data,” etc. Indeed, most prescriptions with positive specifications of particulars actually apply to simple or non-complex phenomena alone, and thus have no applicability at all in complex orders unless they can be “translated” into negative formulations. The “taboo mentality” of spontaneous orders is far from the throwback to ignorant ways that “progressivist” social thinkers assume. It is instead an indispensable aspect of the organization and creativity of spontaneous orders. The greatest freedom and creativity result neither from positive prescriptions of particulars to be achieved nor from “anything goes” anarchism, but rather from strict adherence to general rules of order. It is as Hayek said:

Since our whole life consists in facing ever new and unforeseeable circumstances, we cannot make it orderly by deciding in advance all the particular actions we shall take. The only manner in which we can in fact give our lives some order is to adopt certain abstract rules or principles for guidance, and then strictly adhere to the rules we have adopted in dealing with the new situations as they arise. (1967, p. 90)
Our actions form coherent and rational patterns because we limit our successive choices or decisions by adhering to those rules, not because the actions have been decided upon as part of a single plan thought out beforehand. We are rational because in each successive decision we limit our range of choice by the same abstract rules. In contrast, it would be completely irrational to limit our behavior to a plan of particulars specified in advance. It is clear that not just the social organization we call scientific inquiry but everything any subject of conceptual activity does to acquire knowledge is subject to these same constraints. The acquisition of knowledge, based upon the phenomenon of group selection as well as the workings of an individual’s mind, is governed by abstract or general rules that are essentially negative in character, because no other form of rule governance could allow for the unknown and unforeseen to be fitted into the new frameworks that emerge when enough unexpected results (Kuhnian anomalies) accumulate and something akin to a “scientific” revolution occurs. Enlightened common sense reasoning is as subject to revolutions in thought as is science.
Excursus: The Context of Scientific Inquiry

The last half of the twentieth century saw what some have called (obviously overenthusiastically) “science wars” about the nature and scope of what could constitute an adequate methodology for scientific research. Triggered primarily by Kuhn’s distinction between normal and revolutionary periods of scientific research, clashes developed among defenders of logical empiricism’s monotheoretical model of assessment (only normal science research, and one theory tested and elaborated at a time), such as Hempel and Scheffler; Kuhn’s views; and variants of positions stemming from Popper’s conception of metaphysical research programs. These positions proposed both different types of scientific activity and different levels of analysis at which science occurred. Most arguments occurred at cross purposes—confusing what was appropriate at one or another level of analysis or type of scientific activity with what was characteristic of other levels, and which were actually different
types of activity (see the overview in Weimer, 1979, Chapter 8). Disambiguating the contexts involved is paramount in understanding what otherwise would appear to be cross-purpose arguments about conflicting claims for different subjects. One diagrammatic representation of major differences is found in Table 13.1.

Table 13.1 Minimum complexity for the understanding of science: two types of activity and three levels of analysis (after Weimer, 1979)

This shows the two major distinctions of type of science presently studied, normal and revolutionary, and three major levels of analysis at which they can be understood—within a single theory, between a series of related theories that constitute what is usually called a research program in a specified topic area, and the pragmatic level behind (or perhaps beyond) theories, called metaphysical paradigms after Kuhn’s initial analysis in 1962. Most problematic has been this third or higher level—what Kuhn called paradigm clashes as conflicts of incommensurable points of view or clashes of world views. Traditional philosophy of science has done its best to deny that these clashes require the understanding of ambiguity at a psychological or conceptual rather than a philosophical level. But the best proposals for understanding what is involved in metaphysical paradigm clashes appeal precisely to the conceptual and perceptual ambiguity involved in perceptually deep-structurally ambiguous figures such as the Necker cube or “my wife my aunt” or the “cornice versus steps” sorts of figures, and also to the deep structurally
ambiguous utterances such as “The police were ordered to stop drinking after midnight.” The traditional approach has been to regard these issues as problems of reference instead of meaning. Taking a purely surface structure approach to simpler perceptual illusions (such as the Müller-Lyer illusion of the length of a line when there are inward and outward pointing arrow marks), they have concluded that there is nothing present to support incommensurability of theories in any deep or ultimate sense of their meanings. They propose that it is “just” a context effect, and once one adds a bit more reference, there is no “incommensurability” because the ambiguity has been removed by the added context. A similar argument with regard to Quine’s famous indeterminacy of translation thesis has been proposed by Davidson (1973), who argued for only one “ultimate” conceptual scheme and, therefore, that there must be ultimate reduction of the indeterminacy by further sampling of the disparate “languages,” which can indefinitely increase the areas of agreement in both reference and meanings. Neglecting the totally nonempirical dogma of only one “ultimate” conceptual scheme in which all others reside, we can assess the “reference” arguments easily. Such views assume that reference is more important here than meaning—if we can determine the reference of the terms we can zero in on the meanings to whatever degree is necessary. There is a presupposition of a determinate connection between a reference and a meaning. In this these positions echo Russell in his work on denoting, ignoring the “meaning” of theoretical terms for their reference—disambiguate the reference, and all is well (recall Chapter 9). That strategy is like disambiguating a surface structure utterance to specify “the” reference: “What’s that in the road, a head?” refers either to a head or to something else, and to know which, all one need do is go up the road and look. That is why the science examples fall completely outside such cases: what Kuhn emphasized is deep structural ambiguity, in which one and the same referent has two incompatible meanings no matter how exhaustively the referent is specified. Deep structural ambiguity cannot be resolved at the surface level. It requires one to “go back over” the derivational history of the surface structure in question in order to determine which of multiple meanings that one reference object actually exhibits. An example such as
“The shooting of the hunters was terrible” is referentially only a single object (whether spoken or seen), but its meaning can never be determined simply by looking at that single referential object. The required context is deeply conceptual—not available in the surface structure at all. The centrality of deep structural analysis for the entire domain of human sciences—the functional realm—is explored in the next chapter.
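A minimal sketch of the point (mine, not the author’s; the structural labels are invented stand-ins for a real grammar): one surface string is compatible with two distinct derivational histories, and nothing in the string itself selects between them.

```python
# Minimal sketch (invented labels): one surface string, two deep structures.
surface = "The shooting of the hunters was terrible"

# Two hypothetical derivational histories for the identical string:
deep_structures = [
    ("AGENT",   "the hunters", "shoot", "something"),      # the hunters shot
    ("PATIENT", "the hunters", "are shot by", "someone"),  # the hunters were shot
]

# Inspecting the surface string (the single referential object) cannot
# disambiguate; only recovering the derivational history can.
for structure in deep_structures:
    print(repr(surface), "<-", structure)
```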
Excursus: Notes on the Methodology of Scientific Research

It may appear surprising that there has been no full chapter or concerted discussion on the topic of scientific methodology, but only scattered remarks on a few typically methodological issues here and there. This is intentional: this is not a methodology primer. How science operates is certainly a topic for epistemological discussion, but it is a limited—and actually an empirical—issue in comparison with the range of topics under the heading of the nature of knowledge and its acquisition. Indeed, a crucial topic—how meaningful measurement can occur in research—is found in the first Part of this book. Now, to presage the content of the appendix chapters, we should note (especially in relation to science as a spontaneously organized complex phenomenon) a limited range of “methodological” topics: the nature of explanation (specifically, limitations of the so-called hypothetico-deductive method); limitations of the “one size fits all” utilization of the H–D explanatory format stemming from Hempel (1965); crucial differences and similarities between descriptive and prescriptive methodological accounts; and especially (since they differentiate the human sciences domains from the “simple” physical ones) the differences between explanatory systems in complex as opposed to simple phenomena. As the first chapter noted, explanation is a statement of equivalence between what we accept as known or familiar and something which is not known or familiar to us. Explanations are arguments that some disparate things are actually equivalent. That may or may not occur in the context of a deduction of a consequence from formally specified premises. It may be an instance of tacit knowledge, in which it just is an “Aha!” or a
less obvious “feeling” or “intuition.” There is no evidence that such a “conclusion” is logically transmissible from a tacit (probably unarticulated and therefore without defined premises) “theory.” What, exactly, is a deductive conclusion from the “theory” of evolution? Should we deny its status as a “theory” or as “scientific” because there is no possibility of deducing the occurrence or nature of a species? What do we do if there are no “laws of nature” available in the domain we study? It is not possible to logically deduce consequences from fallible “rules” of behavior, because that uncertainty would be transmitted to the conclusion. All such problems are real, and they are usually dismissed because it is assumed that philosophy is only to do an after-the-fact rationalization of the practice of science—an “explicit rational reconstruction” of the end products of inquiry only. That approach fails for an incredibly simple reason: any actually informative “explanation” must commit the elementary logical fallacy of four terms if it actually explains; if it is correctly utilized (with only three total terms in the premises), it can address nothing new, and so fails to “explain” (as is detailed in the appendix Chapter 17). Far better to admit that explanatory discourse makes arguments for a position (as a few, such as Kitcher, 1981, 1989, did). The transmissibility assumption—that everything of importance must be transmitted from premises to conclusion via deductive logic—is criticized in the appendix chapters. Turning to another topic, almost everyone has heard of the Kuhn versus Popper or Kuhn versus logical empiricism clashes in the last century. Here the issue was between prescriptive accounts—methodologies that explicitly tell what one must do to practice “good” or “successful” science, and likewise what one must avoid doing in order not to be a “bad” scientist—and descriptive accounts that do not have any explicit moral or evaluative component, but attempt to accurately describe what constitutes science—whether good or bad science. Philosophers, interested in rational reconstruction, always propose prescriptive accounts. Historians (or psychologists), interested in correct factual description, endorse nothing normative beyond the inherently evaluative nature of historiography—any theory of the writing of history is from a point of view, and therefore a judgment or evaluation. Even Kuhn’s “let scientists
alone so they can do science” posture was “prescriptive” in that unavoidable sense. A correct description does indeed prescribe how to do “good” science, and in Kuhn’s account this is to let normal science research practice go on without imposing new methodological criteria beyond what the research community already does, so that it can continue to do its job. So this controversy results in a pox on both houses, because Kuhn’s views (if correct) were evaluative as history, and Popper’s and the logical empiricists’ were too evaluative as philosophy, which is to say, factually false on many points with respect to actual science practice. As a last point, consider the limitations imposed by the inherently complex realm of functional domains (i.e., the existence of agency). Here we see that while there is complexity in both the physical realm (Prigogine’s work on dissipative structures is an example) and the functional, it simply is not possible to impose the explanation-as-deductive-consequence-prediction-of-exact-data-points model that has done so well in the “simple” domains. That approach will not work at all in either the functional domains or the complex areas of physics. That is the correct distinction between the physical and the functional sciences—between what is required for self-produced or organized complex phenomena versus for simple phenomena—and not just the bald statement that what constitutes “explanation” is different between physics and, say, psychology.
Note

1. We must not forget the discussion in Chapter 10 about the differences between complexity resulting from the behavior of systems controlled by external constraints and those that are controlled by individual agency, consisting of internal constraints (and goals or teleology) which are responsible for the self-production of their own existence and thus for the continued existence of the complex phenomenon. The context in which the discussion in this chapter historically arose did not require one to make this distinction. When von Neumann talked about the nature of self-reproduction, it was intuitively obvious that agency and internal constraint were what was at issue. When the problem of complexity was taken into the economic-social domain, that distinction was increasingly
lost sight of. The market order—despite exhibiting the sort of complexity discussed in this chapter—is a relatively “simple” system maintained by external constraints. The internal constraints reside in the individual market participants, not within the order itself. Another way of putting this is that if one takes a functioning market order and then removes the agency found in the participants within that order, there is no internal mechanism in the market order itself capable of sustaining its own existence—without the participating individuals, the market order ceases to exist. It has no internal self-productive context of constraint sufficient to continue its own independent existence, as an agent does. That is why later discussion emphasizes the fact that the market is merely functional, a means for an indefinite number of individuals to use, rather than a goal or self-producing teleological entity in itself.
References

Davidson, D. (1973). On the Very Idea of a Conceptual Scheme. Proceedings and Addresses of the American Philosophical Association, 47 (1973–74), 5–20.
Ferguson, A. (1785/2010). Institutes of Moral Philosophy. Gale ECCO print editions.
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Hayek, F. A. (1967). Studies in Philosophy, Politics and Economics. University of Chicago Press.
Hempel, C. G. (1965). Aspects of Scientific Explanation. Free Press.
Kitcher, P. (1981). Explanatory Unification. Philosophy of Science, 48, 507–531.
Kitcher, P. (1989). Explanatory Unification and the Causal Structure of the World. In P. Kitcher & W. C. Salmon (Eds.), Scientific Explanation (pp. 410–504). University of Minnesota Press.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (Rev. ed.). University of Chicago Press.
Kuhn, T. S. (1977). The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press.
Meehl, P. E. (1967). Theory Testing in Psychology and Physics: A Methodological Paradox. Philosophy of Science, 34(2), 103–115.
Menger, C. (1871/1950). Principles of Economics (J. Dingwall & B. F. Hoselitz, Trans.). The Free Press. Now Mises Institute.
Peirce, C. S. (1898/1992). Reasoning and the Logic of Things: The Cambridge Conferences Lectures of 1898 (K. L. Ketner, Ed.). Harvard University Press.
Polanyi, M. (1958). Personal Knowledge. Harper & Row.
Polanyi, M. (1966). The Tacit Dimension. Doubleday (Penguin Random House).
Popper, K. R. (1959). The Logic of Scientific Discovery. Harper & Row.
Rosenblueth, A., & Wiener, N. (1945). The Role of Models in Science. Philosophy of Science, 12(4), 316–321.
von Neumann, J. (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press.
von Neumann, J. (1966). Theory of Self-Reproducing Automata (A. W. Burks, Ed.). University of Illinois Press.
Weimer, W. B. (1979). Notes on the Methodology of Scientific Research. Erlbaum Associates.
Weimer, W. B. (2020). Complex Phenomena and the Superior Power of Negative Rules of Order. Cosmos + Taxis, 8, 39–59.
14 The Resolution of Surface and Deep Structure Ambiguity
Most of human sentences are in fact aimed at getting rid of the ambiguity which you unfortunately left trailing in the last sentence. Now I believe this to be absolutely inherent in the relation between the symbolism of language (that is, an exact symbolism) and the brain processes that it stands for. It is not possible to get rid of ambiguity in our statements, because that would press symbolism beyond its capabilities.
Jacob Bronowski

The number of coordinates necessary to specify the configuration of the constrained [living] system is always greater than the number of dynamic degrees of freedom, leaving some configurational alternatives available to “read” memory structures. This in turn requires that the forces of constraint are not all rigid.…
Howard H. Pattee
Like the universe we inhabit, human behavior is inherently ambiguous, and is thus without determinate meaning, in the absence of an adequate historical account of the evolved genesis of particular behaviors. (Note that cosmology, in trying to understand the universe, also has to go back in time, to recover the evolutionary history that has brought us
to the present.) Without supplying a complementary structural analysis looking back over the history of the functional specification of a particular behavior, our understanding is ambiguous and incomplete. As anywhere in science, we must have a duality of descriptions, one for the physical, one for the functional. Understanding requires complementary functional analysis for any attempt at physical specification of human behavior, and vice versa. From the standpoint of physics, this is a matter of the number of dynamic degrees of freedom being great enough to allow life to “harness” physicality. Considered from the perspective of semiotics, with the threefold categorization of the manifestations of functionality and meaning as involving syntax, semantics, and pragmatics, the acquisition of knowledge and the use of language are first and foremost pragmatic in nature. Even the simplest activity of an individual is invariably embedded in both a pragmatic and a semantic context. Disambiguating (usually at the syntactic level, at least for surface structure ambiguity) what any behavior is would be much easier if we could simply assume a common pragmatic context and concentrate upon disambiguating the meaning of the actions involved at the level of semantics. Unfortunately, it is not clear that one can ever find a “simple” case in which to do this. Soliloquy is a potential example, but even in speaking to one’s self, pragmatic context may not be constant, or even obvious or clear. Shakespeare’s plays provide sufficient examples of soliloquies with that lack of clarity. Following Bronowski in the epigraph above, those soliloquies would have to go on forever to be unambiguous. And even in seemingly clear examples there are often ambiguities in word meaning, phrasing of an utterance, and implied context. Human action is always context-sensitive, and that context ranges over both semantic and pragmatic components, as well as its syntactic expression. Without clear-cut accounts of what is involved in not only the pragmatics but also the semantics of the situation, to say nothing of the syntactic structures that realize those forms of behavior (and make them meaningful), any interpretation of human behavior is ambiguous and thus underdetermined by physical laws and simultaneously not yet explained by any functional theory.
The Inevitable Ambiguity of Behavior
An example I often use will make this point. It is simple, involving the activity of only a single individual. Suppose someone (one need not specify anything more than that a person is involved) engages in the following sequence of “physical movements” or “behaviors.” The individual drives up to a building, looks at it briefly, parks the car and gets out, then enters the building after walking up steps toward a door. After opening the door and entering the building, the person walks over to a counter, takes a piece of paper and a writing instrument out of a pocket, makes a series of marks on the paper, and then puts the paper in front of an individual on the other side of the counter. Now ask the seemingly simple question: given that sequence of (in principle) exhaustively physically specifiable (down to the subatomic level of description if necessary) bits or sequences of physical movements, what behavior did it instantiate? Without further functional information being specified, one cannot even unambiguously interpret a sequence as simple as this. Was it an act (the functional category we are most interested in) such as cashing a check? Was it a signal to someone standing outside the building looking in? Was it an exhibition of latent hostility toward one’s mother? Was it perhaps a love note for a significant other? Was it someone just presenting a joke to another person? Was it the act of an evil neuropsychologist outside the building demonstrating some brain stimulation process to control behavior? Was it a desperate attempt to get money to ransom a relative of the actor being held hostage? One can go on and on. The list of plausible (equally meaningful and possible) accounts for that string of physical “bits of behavior” is indefinitely large, and none can be determined to be more or less correct than any of the others without a complete specification of the functional—the intentional and pragmatic and goal-directed or teleological—context in which it occurs, as well as the semantic context. We have to look back over the historical derivation of the movement sequence in order to determine the function it represents. At any level of semiotic analysis, any specification of physical movement alone on the part of a living subject is inadequate to specify
what functional behavior it exhibits. We have to have a semiotic framework—from pragmatics through semantics and finally syntax—in order to disambiguate the meaning involved. All behavior is totally context-sensitive, and thus ambiguous in isolation. For purposes of analysis, we must always select something within a single context at the next higher level. Such an exemplification of downward causation (in Campbell’s sense) will enable us to concentrate upon, say, a semantic analysis if the pragmatic context is stable enough to be taken for granted. This is the context in which Chomsky (1957, 1965) was able to show that, in the study of language, one can ignore many semantic issues by concentrating on syntactic structuring. His “revolution” was able to show how structural devices, at least at the surface level of syntax, can eliminate some semantic ambiguity. At the same time, he made the fundamentally new contribution of showing that other cases of semantic ambiguity could not be explained only by looking over the surface syntactic derivational history of the utterance. For such cases, he showed that at a higher level, it would be possible to disambiguate the pragmatic-semantic context in which syntactic structures occur by looking back over the context provided by the history of their derivation. This is what is required to disambiguate a given surface structure or linear string. What my example of ambiguous behavior shows is that (a) language is simply one form of behavior, and (b) all behavior is subject to the same surface-deep and derivational-history analysis, and is inherently ambiguous until that analysis is done. Any functional specification of what an actor’s intention actually consists of cannot be unambiguously described without a structural derivation (specification of the derivation of the physical syntax) of the movement necessary to manifest it—whether in language or behavior.
But since human behavior is productive, as Chomsky and many other linguists have pointed out, and as I just argued with respect to behavior in general, such an approach is far too simplistic. There is no hope of constructing an exhaustive catalog of intentional action, and the praxeological approach to economics is a waste of time comparable to the behavioristic approach to psychology. The so-called transformational revolution in linguistics, which provided techniques for the structural analysis of language to disambiguate function (and hence meaning), provides a blueprint for how one can approach the study of all human behavior. We need to understand the relation of surface structures (linear strings of words or behaviors) to their underlying causal structures and processes in order to remove the ambiguity that is always present in the surface strings. Such an analysis points to the implicit pragmatic and semantic context from which those strings were generated.
Deep Structure Ambiguity Is Fundamentally Different from Surface Structure Ambiguity
What can we learn from the analysis of a single “isolated” sentence? The revolution in linguistics showed us two things: first, any adequate grammar (of language or behavior) must employ formation and transformation rules that are powerful enough to rewrite strings of symbols into other strings of symbols. Chomsky showed that such power requires more than the then-available phrase structure grammars were able to provide: it requires precisely the power to “look back over” the sentential derivational history of the utterance to see how its phraseology is to be parsed in order to provide an interpretation (a meaning). Second, Chomsky’s revolutionary addition to the surface structure analysis of phrase structure grammars was the realization that ambiguity could occur at both the surface and also the deep conceptual structural level of grammar. In addition to simple phrasing and embedding problems of meaning, requiring only the correct grouping of strings into phrases in a sentence in order to disambiguate them, there was another kind of ambiguity that was fundamentally different. This newly noticed (but always present) phenomenon
of deep structural ambiguity requires one to look into the non-surface or underlying or “deep” derivational history in order to see which of the alternative possible interpretations a speaker had intended a given surface linear string to mean. Consider this old joke:

What’s that in the road ahead?
This can be taken to mean two things. First, “What is there ahead of me in the road?” Or, second, it can be interpreted to mean “Is that a head on the road in front of me?” Whether the “something” on the road refers to a head or to an unknown is disambiguated by paying attention to how the linear string is in fact parsed (or phrased) into groups of words—which is to say, what the form is in its surface structure. Against that background, Chomsky introduced a fundamentally new kind of ambiguity:

Praising professors can be boring.
The shooting of the hunters was terrible.
Such examples require one to be able to look back over the derivational history of the utterance in order to disambiguate the interpretation. In the latter example, you do not know whether the hunters were terrible shots, or whether the hunters were themselves the ones being shot. And there is no way that one can determine this simply by doing a surface regrouping of the words. All our perceptual behavior (whether speech perception, visual or auditory perception, tactile, or whatever) provides input that is ambiguous in this sense. All our conception does likewise: the history of science is replete with examples, many pointed out by Hanson (1958, 1970), and Kuhn (1970, 1977), just as Chomsky’s examples were becoming known in the study of language. For instance, Hanson delighted in examples such as what Tycho and Kepler “saw” when they looked at the sunrise from exactly the same point of view. Tycho would have seen the sun revolving around the fixed earth, while Kepler would have seen the moving earth revolving around the fixed sun. Or take another Hanson example, one in which there are two circles and two elliptical extensions which extend from the outermost circle. This
is interpreted very differently if someone says “incoming missile” rather than saying “Mexican on a bicycle.” And it should be clear that these interpretations do not require one to rearrange the lines on the paper. It is, as Hanson put it, the same “optical sensibilia” that are involved. Identical bits have many meanings. This analytical process of looking back over the history in examples such as these requires that one look at what the linguist calls nonterminal vocabulary elements instead of just the terminal (spoken word) items. This stems from the theory of Post languages (Emil Post, 1943, 1965). Terminal items are words or surface entities in the language; nonterminal items are (for natural languages) such things as NPs and VPs, Det., or Aux., and the concept S for the “sentence” as a whole (S is the axiom, standing for the highest and most abstract concept of natural language). We do not speak noun phrases or determiners or auxiliaries or verb phrases in our speech, but it has been necessary to postulate their existence as classes of syntactic structure devices in order to understand (i.e., to have a theory or to disambiguate) the longer linear strings we call sentences. The terminal versus non-terminal distinction shows a fundamental difference between the abstract underlying conceptual structures and the surface or terminal vocabulary items of language behavior. The study of (natural) language utilizes only one axiom, S, intuitively understood as sentence. Other Post languages may have different axiom sets (such as A for act in a grammar of behavior), and indeed may have more than one axiom. The derivational history of a sentence proceeds by differentiating the highest level of relevant non-terminal items in natural languages, usually something like the S-V-O structure (read as subject, verb, and object), and then the constituents of those lower levels of analysis. For example, the S may be differentiated into a noun or a noun phrase, the V into a verb or a verb phrase, and the O into a direct or indirect object. The problem posed by deep structural ambiguity is that such utterances instantiate (at least) two completely different instances of S, i.e., two distinct sentences instead of one, in the same linear string of terminal elements. One “sentence” in the deep structurally ambiguous sentence illustrated above is paraphrased as “It is terrible that the hunters were shot,” while a second sentence is “The hunters were terrible shots.” In
the behavioral case represented in alternatives to the “check-cashing” example noted above, there are potentially many types of different act, or perhaps other axioms involved in addition to A for act, so that an adequate grammar of behavior would require an axiom set of A, B, and C, if you will (or, perhaps more likely, E, intuitively emotion) that must in some fashion be incorporated into the account. In such “deep structural” cases, which require looking back over the history of the semantic context in which the act occurred, there may be many different determinants involved in the production of a single bit of physically specified behavior. If we happen to find out that, immediately before the behaviors described in the above example, the actor received a threatening phone call from his or her parents’ residence, then we can narrow down our possibilities to two or three: a basic act of check-cashing, and also something that may look like the hostage rescue interpretation, or perhaps simply a very bad joke. Such a dual or perhaps triple “act” possibility in a single sequence of physical movements is one reason why any attempt at constructing a grammar of action is incredibly difficult. Without a full contextual determination there simply is no possibility of determining what act, if any, a given bit of physically specified movement instantiates. We await the development of an adequate syntax of action.
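Since nothing in this argument turns on a particular formalism, a minimal sketch in Python may make the terminal/nonterminal machinery concrete. Everything in it (the rule sets, the invented nonterminal labels ObjNom and SubjNom, and the leftmost-rewriting convention) is a hypothetical illustration of the Post-style rewriting just described, not Chomsky's actual grammar. The point is only that two different derivational histories from the axiom S terminate in the identical surface string, so the disambiguating information lives in nonterminal vocabulary that never appears in the spoken string.

def derive(rules, start="S"):
    """Rewrite the leftmost nonterminal until only terminals remain,
    recording every sentential form: the derivational history."""
    form = [start]
    history = [form]
    while any(tok in rules for tok in form):
        i = next(j for j, tok in enumerate(form) if tok in rules)
        form = form[:i] + rules[form[i]] + form[i + 1:]
        history.append(form)
    return history

# Objective reading: it is terrible that the hunters were shot.
hunters_shot = {
    "S": ["NP", "VP"],
    "NP": ["Det", "ObjNom"],   # hunters as objects of the shooting
    "Det": ["the"],
    "ObjNom": ["shooting", "of", "the", "hunters"],
    "VP": ["was", "terrible"],
}

# Subjective reading: the hunters were terrible shots.
hunters_shoot = {
    "S": ["NP", "VP"],
    "NP": ["Det", "SubjNom"],  # hunters as agents of the shooting
    "Det": ["the"],
    "SubjNom": ["shooting", "of", "the", "hunters"],
    "VP": ["was", "terrible"],
}

h1, h2 = derive(hunters_shot), derive(hunters_shoot)
assert h1[-1] == h2[-1]  # identical terminal (surface) string ...
assert h1 != h2          # ... reached by different derivational histories
print(" ".join(h1[-1]))  # the shooting of the hunters was terrible

Looking back over the two histories recovers the two readings; nothing in the printed surface string alone can.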
Why Is Behavior in Linear Strings?
Many writers have emphasized the “cloud-like” nature of brain functioning (Bronowski, 1978; Popper, 1972; von Neumann, 1951, 1958, 1966), and the discussion of Post languages above makes clear that preconscious tacit processing is behind (temporally and conceptually) the final output of a sentence as a linear string of words. Why do the products of those multilayered and multicausal processes become surface structures in strings that unfold through time? Why is linguistic behavior temporally ordered? Similarly, the genesis of overt behavior, such as the coordinated movement involved in striking a baseball with a bat, cannot be explained as a series of chained-together surface events. All coordinated activities involve the same sort of deep conceptual structure determination that is found in language (the classic account of this is
Lashley’s [1951] presentation of problems of serial order in behavior). But what eventuates is always strings of movement in a linear sequence. There is no difference between our linguistic skills and our bodily skills—each requires brain processes of at least polycentric control complexity behind what eventuates at the surface in linear strings. But it is this surface-deep processing distinction that allows for the existence of deep structural ambiguity. It allows for ambiguity in the surface structure that could potentially be avoided if we could talk and think and behave in the underlying deep conceptual structures. Why can’t we just skip the surface linearity and communicate in the deep structures, or have all our behavior in whatever the cloud-like higher order processes actually are? Why are we so one-dimensional? Several factors are crucial: first, we cannot violate thermodynamic and physical energy considerations (remember, functionality harnesses physicality; it cannot do away with it). Additionally, even if the nervous system is polycentric or coalitional in its complexity (and perhaps holographic in addition), it still makes organizational sense to eventuate into linearity. Thermodynamic factors constrain the basic physical requirement necessary for any symbolic system to exist. Symbols and their use must be free in the crucial sense that the symbols must be underdetermined by physical laws (or we could not speak our own thoughts and feelings—something like Laplace’s or Maxwell’s demon would do our “thinking” instead of us). This freedom, this underdetermination, requires energy degeneracy, which means that the “cost” or amount of energy expended in creating symbols, as well as the energy differences between particular symbols, is virtually nonexistent. This allows a practically endless number of symbol strings (for instance, sentences or actions) to come into existence and still be unconstrained by physical laws—the strings remain available to harness those laws, but they cannot (and do not) evade them. Linear strings, as one-dimensional entities, are as cost-efficient as we can get with respect to this requirement. Higher-dimensional entities—perhaps what is involved in the cloud-like structure of primate neural activity—would require greater energy costs, so it is probable that evolution began with the cheapest way to construct symbol systems, and only violated this “cheap and lazy” approach under tremendous selection pressure from a hostile environment.
It is quite plausible that the first “languages” of symbol structures in life are nothing but surface structure strings. It would appear that the genetic language, undoubtedly the first to arise, is such a surface structure string system. Before efficient and centralized nervous systems (such as those in higher mammals) arose, there would be neither the need for deep structures nor the means to produce them. Everything would remain at the surface—the simple input–output level of analysis (like a peripheral reflex arc), and there would be no possible central initiation of behavior—nor would there have yet been any need for such central control. But with increasing speciation, neural structure differentiation arose. With this differentiation came hierarchical structures of neural activity. The initial simple pattern of activity was supplanted by patterns, and then by patterns of patterns, and thus levels. Once this genie of (relative) complexity was out of the bottle, the evolving CNS unleashed level after level of structural complication, with the ensuing functional patterns of overlapping activity and increasing complexity. The human CNS appears to have hierarchies of hierarchies of activity simultaneously performing a multitude of functions. With the exception of our consciousness as the tip of the iceberg, that activity is deep structural and inherently tacit. We do not know what these “silent” languages of the brain are, or how they relate to our spoken natural languages. And as noted above, the theory of Post languages, covering all activities of changing strings of symbols into other strings of symbols by prescribed rules of varying degrees of computational power, is not limited to a language with only a single axiom. Other axioms could appear in the hierarchical structures of brain activity, and there is no reason to suppose that they would all be available to conscious awareness. Processes controlled by the autonomic nervous system, such as alimentation (to take one possible example), clearly involve hierarchically structured patterns of activity, but they need never be conscious (you do not tell your stomach to digest your food, or tell it to behave differently when it contains steak or fish), and they require no axiom equivalent to S for sentence (though clearly some axiom is needed). While computation and symbol manipulation-based theories of life and cognition (the computer–AI approach) regard the exact form of
the hardware that “runs” their rate-independent programs as relatively unimportant (or perhaps simply theoretically less interesting), that tactic is unappealing to those who study the “wetware” of the functioning CNS. Such theorists resonate to neural network or “connectionistic” accounts to try to comprehend the incredible amounts of distributed “information” processing going on in actual brains—i.e., brains as physical entities constrained by the second law of thermodynamics. Hierarchical patterning of levels of processing supervised (controlled may no longer be an appropriate concept) by downward causation is nonlinear in real time. What goes on in the brain is, as Bronowski (1978) often emphasized, cloud-like and statistical, even though the final output appears in the form of a deterministic and linear string or sentence in a natural language. We are most familiar with linear strings in our consciousness, but our brain is not restricted to them. Our emotionality and vague “feelings,” apprehensions, unverbalized hopes and fears, premonitions, and myriad other “vague” notions such as hunches in science seem to involve global and diffuse (i.e., multilayered) levels of cognition either instead of or in addition to linear symbol strings. This is why the words used to represent the meanings that we have for these tacit, deep structural processes appear so “vague” and “unformulated” to the mathematics-über-alles mentality. It has always been the hope of such “hardheaded” theorists (especially information processors, computer programmers, and logicians) to remove all that unspecifiable “slop in the system” to get to good, solid deterministic linearity. But the CNS is determinate, not billiard-ball deterministic. The “slop” in the system is the system. And the system is smarter than the computer programmers. What would be an efficient form of nervous system functioning if massively parallel processing and distributed networking are to occur? Being minimally two-dimensional instead of the one dimension of linearity, such higher-order systems would require more energy expenditure, and the problems of constructing and then reading the coded information which the symbols possess would be much greater than for linear systems. The most efficient system we have available as a model for that is holography. The holographic “information processing” model of neural functioning was first proposed by Pribram (1971). This model regards linear strings
as frozen “snapshots.” Temporal integration of such string snapshots can produce a holographic (i.e., multidimensional) array of neural activity that can underlie creativity or productivity as well as the phenomena of visual perception. A classic demonstration of this was the Bransford and Franks (1971) study, in which subjects add to or fill in content to which they have never actually been exposed during prior sampling of linear strings of semantic content. They are more sure that they have in fact experienced compounds of simple content which they did not actually encounter than they are of simpler items which they did encounter. They construct the “third dimension” of the novel, not actually experienced, holographic content in their brain activity. The effect here is analogous to a subject’s “walking around” a two-dimensional hologram in order to find the three-dimensional content of that two-dimensional array. The necessity of dealing with problems such as creativity, the genesis of abstract and totally nonphysical meaning, the recognition of abstract and higher order properties in indefinitely extended domains of particulars, the biasing of anticipatory modeling systems, as well as myriad more problems, will require great effort to unravel when we look at the level of neural systems dynamics. Holography is presently the only model we have available to address such issues. Against this, we note that it is much easier to fall back on the symbolic systems that can be computer modeled in their entirety by rate-independent linear computation theory without being concerned with the complexities of the hardware. But doing so is simply not adequate to explain the functioning of living central nervous systems.
Excursus: Ambiguity and Dimensionality
It is a commonplace that we live in a world of three spatial dimensions (length, width, and depth) and one temporal dimension. But what are dimensions, and could they have any relationship to ambiguity? How many dimensions do we really need to understand physical or psychological reality? How does the dimensionality of either domain relate to the problems of ambiguity, meaning, and understanding? How could the dimensionality of the universe relate to epistemology? These are not
issues that are usually discussed in the “human” disciplines. Nevertheless, they are both informative and relevant. What is a dimension? The answer seems deceptively simple: a dimension is a measurable extent of some kind, a coordinative reference system to which numbers or reference markers may be assigned to designate intervals. In mathematics and physics, the dimension of a “space” (which need not be spatial, but can be anything—temporal or conceptual) is intuitively defined as the minimum number of coordinates that are necessary to specify any identifiable point within it. So linearity is one-dimensional, because there is only one coordinate needed to specify a point “upon” or “within” a line or dot. A flat surface such as a piece of paper (a plane surface) has two dimensions, because we would need to specify both a length dimension and a width dimension to unambiguously locate any point upon its surface. We would move to the third spatial dimension when we acknowledged that any physically existent “plane” surface would have to have some depth, and that specification of only the length and width coordinates would leave ambiguous or indeterminate the position on the depth dimension until it was also specified. Adding time as a dimension (intuitively based upon the concept of existence as endurance through time) adds a fourth dimension necessary to understand events on our planet, since any physical description must specify when that point in three-dimensional space was in existence. An account of any particular point you placed upon a piece of paper requires a specification of the time interval in which it existed upon that paper and then ceased to exist, when either erased or the paper itself went out of existence. So dimensions are, intuitively, coordinative reference frameworks. They are necessary to unambiguously locate something. Suppose we wish to develop a theory of the dimensionality of cognition or reality. Presumably, we would follow William of Occam and apply his conceptual razor in order to find a minimum number of dimensions, rather than postulating many dimensions simply because mathematical systems allow us to talk of as many dimensions as we can imagine. It turns out, somewhat surprisingly, that we can do just fine in both cognition and reality with only two “spatial” dimensions. There is an old adage that says “surfaces are where it’s at” that is very informative
in this regard. The necessary dimensionality of the universe and mental functioning is pretty minimal. We can see this by looking at holography. We are all familiar with the hologram as providing a three-dimensional representation of objects. Videogame players and moviegoers think of holographic images as fantastic beasts and animated machines projected into an environment that seems to be “real” and in which they as gamers or audience members are present. Indeed, they can become an actual participant by manipulating the reference beam of the holographic image and changing the background—they can become the avatars on their screens. Holographic imaging is a great improvement over two-dimensional photography, in which an image of 3-D reality is represented on a 2-D film surface (for example, photographic paper or a cinema screen). A “moving picture” display, from which the term movie is derived (via Zipf’s law), portrays three-dimensionality by sequences of separate 2-D still or photo images taken as the focal objects move through the “volume” or the depth dimension of space. The third-dimension effect comes by playing these sequences on the screen rapidly enough that the psychological critical fusion frequency (CFF) is overcome, so that we see continuous motion and depth in the image. We can see what is “behind” an object if the camera operator walks around behind it and records a sufficient number of 2-D still images on the film. So cinematography provides a bootstrap method to “cheat” our way into a three-dimensional representation of the world with nothing more than an optical lens and 2-D strips or surfaces of film to retain separate images. Holography does not do this. It utilizes no lens to create an image, only a reference beam of coherent light played over the holographic film. The holographic film image itself is not a recognizable image of the objects recorded, but rather a mosaic, a moiré-like pattern of light and dark on a 2-D surface. When the reference beam is played over that moiré surface the original 3-D information reappears, and the viewer (not the camera operator) can move around and see the “other” side of the objects. The “information” (for want of better terminology) necessary to support our perception of that third dimension of an object is available in the 2-D holographic strip representation, and can be made manifest
(psychologically we want to say, retrieved or just “seen”) by the reference beam. How much information in that third dimension can be recorded on a two-dimensional surface? How much of a volume is present in the total surface area? The answer, counterintuitively, is that all of it can be specified. Results from Bekenstein (1973), Hawking (1975), ’t Hooft (1993), and Susskind (1995) have shown that the information in a volume can be equated to the total 2-D surface area of that volume (stated quantitatively at the end of this excursus). This has led to the holographic conception of 3-D volume, up to and including the universe as a whole, in physics. It is worth recalling that surfaces, specifically the membranes of cells, play a crucial and physically two-dimensional role in being the “interface” (pardon the metaphor borrowed from computer hardware) through which all activities of the cell occur. For life, surfaces are literally “where it is at.” More than that: as Hoffmeyer (1998, 2000, 2006) emphasized, the membrane “can actually be seen as the principle locus for life itself.… It’s the membrane that creates the potential inside-outside asymmetry from which the organism-environment asymmetry must have grown out. The origin of life is by necessity also the origin of the environment,…” (Hoffmeyer, 2006, p. 168). Life requires no dimension beyond its two-dimensional membrane surface array. The topological closure that the membrane establishes forms the individualization of an individual space (thus differentiating an interior from an exterior) and is what creates (proto-) agency. It is the asymmetry between what the membrane excludes on the outside and what remains on the inside that creates the organism-environment separation. The semipermeable membrane is the beginning of the self. Put another way, life is always on the surface of physicality. What about the central nervous system? Note that it is the surface properties of the nerve (the membrane) that transmit both the all-or-none spike transmission or neural impulse and also the wave fronts of the pre- and post-synaptic potentials and their interference patterns. At this level of analysis, one need not worry about physics problems such as incorporating gravity and relativity into a unified account of the very small or the very large. What psychology needs to explore is how memory, as a proxy for knowledge and awareness, can be distributed throughout the
CNS, and what would constitute the reference beam that allows access to that information which underlies our phenomenal awareness and our individuated “selves.” The CNS, viewed from the standpoint of the information conveyed in neural activity, is nothing more than surfaces. Looked at in cross-section, a nerve is a hollow tube on whose (semipermeable membrane) surface the electrical activity, both the pre- and postsynaptic potentials and the all-or-none spike, occurs. The interior of the tube is a chemical soup that creates the conditions requisite for that surface activity, but the interior is not that activity, and it does not convey the information content of it. From the standpoint of understanding the semantic information capacity of CNS activity, we can get away with a “behavioristic” approach of considering the nerve as a “black box.” We look only at its two-dimensional surface activity. And we cannot forget that the more dimensions, the more ambiguity. The first proposal of the “neural hologram,” or the hypothesis of the nervous system acting as a holographic image processing apparatus in the representation of our knowledge and the memory necessary for it to be so, was due to Pribram (1971). That work was largely ignored by psychologists because it was too “foreign” for the popular taste. To this day, it remains the only “mechanism” yet proposed which is capable of accounting for the generative capacity of cognition and the distributed nature of memory involved in such skilled behavior. Holography has limitations in both physics and psychology. In both fields, it solves problems of reference without actually addressing problems of meaning. And in no field can it do other than presuppose all the subject-object distinctions. In physics one can ask: what is the relationship between waves and particles, on the one hand, and information on the other? Shannon information is differences—most easily understood as bits, which specify yes or no decisions (usually represented in two-valued logic as zero or one). Consider an atom of hydrogen (or any other element). Is it “nothing but” bits of information? Certainly not in ontology. Here we enter into epistemic constraints on ontology. The holographic hypothesis claims all the “communication theory” information necessary to specify hydrogen is contained in its 2-D holographic
image. Similarly, Boltzmann gave up any attempt to specify or to understand the other “individual” properties of molecules in an ensemble such as a gas, in favor of an informational analysis of them as statistical ensembles. These accounts specify the reference of physical concepts, but do not address any further conceptual properties of those concepts. We all know (except for Wheeler and his phenomenalist students) that atoms and molecules are not just bits of information. To us as subjects of conceptual activity, they mean something else as well. Semantic information is simply not the same as communication theory information. Structural realism does not tell us what the intrinsic properties of objects are, but it tells us that they must have them. The subject-object divide is particularly obvious and inescapable in psychology. Grant that the holographic model can explicate how cognition (particularly memory) operates in terms of neural processes. But that model has both a holographic “film” 2-D surface and an independent reference (and note the term is definitely reference, not meaning) beam that is indispensable for extracting or uncovering or finding the 3-D “information” in those funny moiré patterns. What is the reference beam in the CNS (beyond what we obviously already know—patterns of activity)? Or perhaps we should say who is doing the referencing? Epistemic dualisms do not disappear just because there are technological advances and newer conceptual models that aid our understanding of such problems. We need a theory of that reference beam—the “who” or agent or subject of conception—to understand the problem of selfhood.
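For concreteness, the physics results cited in this excursus can be given their standard quantitative form (my summary of textbook statements, not a quotation from those papers; here \(k_B\) is Boltzmann's constant, \(A\) the surface area, and \(\ell_p\) the Planck length):

\[
S_{\mathrm{BH}} \;=\; \frac{k_B c^3 A}{4 G \hbar} \;=\; \frac{k_B A}{4 \ell_p^2},
\qquad
S_{\text{region}} \;\le\; \frac{k_B A}{4 \ell_p^2} .
\]

The equality is the entropy Bekenstein and Hawking assigned to a black hole; the inequality is the holographic bound of 't Hooft and Susskind: the entropy of any region (and hence its Shannon information, measured in bits as \(H = -\sum_i p_i \log_2 p_i\)) is limited by its surface area, not by its volume.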
Dimensionality of the Mind
Commonsensically, we live in a four-dimensional world: length, width, depth, and time. But what is the dimensional nature of the mental realm from which we invest the world order with these attributes? We construct the “fourth” dimension of time from successive wavefront interference patterns (in both slow and spike potentials). The strongest evidence for this is the critical fusion frequency (in vision) and its correlates in other modalities. Below the CFF, we “see” snapshots or still renderings
(and not a movie) or a blurred image in a “photograph” representation, while at higher “shutter” speeds than the CFF, we see continuous changes or things in motion. So duration, existence through time, is added to our conception of reality because of the way the mind works, when it perceptually and conceptually uses or joins “snapshots.” There is no independent evidence of the existence of duration apart from the brute fact of its being there because that is the way the CNS works. The conceptual specious present is not obviously any sort of physical duration (see Weimer, 2022c for issues relating time and dimensionality to understanding). Holography is a dynamical theory: the postulates it proposes function in rate-dependent environments, such as the unfolding history of our universe (and, for the neural hypothesis, for our unfolding selves as patterns of neural activity). Time enters the dynamical theories only via successive 2-D timeless “pictures” or snapshots. We have no evidence for truly continuous existence in either the external realm or the mental or cognitive realm. We have no continuous theories in any domain whatsoever. Even our mathematics of “continuous” domains and “continuous” functions—such as Fourier analysis for wave interaction—are discrete, or digital, approximations to continuous activity. There is no continuous mathematics of continuity—only discrete integrations (as in integral calculus) to form, for example, a “smooth” curve or a “fill in” to make a line (a discrete approximation of this sort is sketched at the end of this section). Continuity exists only in our rate-independent or specious present timeless comprehension of meaningful phenomena and continua. Continuity exists only in rate-independence, not in dynamical reality. Everything dynamic is discrete or quantal. Let me emphasize that this situation holds for the rate-independent domain: conceptual schemes do not require time or its passage at all (even though we have conceptualized time as one of their subject matters). Our acquaintance does not make use of temporal succession—it occurs in the eternally specious present. Consider whether acquaintance requires depth. It does not. What is “presented” in the “raw feels” of phenomenal awareness is surface-like rather than volume-like. The fact that we all have learned to “see” in 3-D, adding depth to both the surface of the eyeball and to the infinitely narrow strip of the surface of consciousness, does not change this. This is easily “seen”
when considering other modalities. Where do you feel the temperature of the room? Where do you taste that vegetable? Where do you hear that noise? Where is the intensity of your pain? Considering only our acquaintance, the answer is always the same: in a two-dimensional array within the specious moment. The “you” doing the feeling, tasting, smelling, hearing, or whatever is a two-dimensional subject. The third and fourth dimensions we “experience” as a result of moving through time are something you, as the subject of conceptual activity, generate from 2-D information. The same holds for thought as subvocal speech—when you think aloud or “in your head” it is still length and width, with depth and time added by your nervous system. There is neither depth nor duration in thinking about (to use the time-worn example) a “golden mountain.” The illusion of duration comes about by “continuing” to think about the same thought (object) that is in each discrete instance presented all at once in succeeding specious present moments. There is no evidence that this “continuing” to think is anything other than our nervous activity exceeding the CFF and thus appearing to be continuous. Continuity of thought emerges from rapid fusion of succeeding “images.” Sampling an array below the CFF, we “see” discrete, motionless snapshots. This is similar to dropping down the scale of events in physical time: the closer we come to the Planck time, the less illusion of movement we encounter in any given “snapshot” of reality. All we see are successive frozen snapshots—the “frozen accidents” of history.
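The remark above about discrete integration can be made concrete with a minimal Python sketch (my illustration, with an arbitrary example function): the "continuous" integral of sin(x) over [0, pi], whose exact value is 2, is computed in practice as a sum over discrete slices, and the smooth answer emerges only as the quantal steps shrink.

import math

def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] by n discrete
    midpoint slices; nothing in the computation is continuous."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

for n in (4, 32, 256):
    print(n, riemann_sum(math.sin, 0.0, math.pi, n))
# Output approaches 2.0, but every step remains discrete and quantal.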
Surface Structures, Deep Structures, and the Ambiguity of Dimensionality
Let us link ambiguity and dimensionality. Consider an entity in a single dimension: a point or a line. There is little possibility of ambiguity in the dimensional description of such an entity. A point has nothing around it, and it cannot be put in any linear order (since, by definition, there is no order in a point). A line, composed of points, is similar: between any of the points that constitute a linear array one can insert more points, but since the line is nothing but the sum total of identical points there is
no ambiguity that results from so doing: the one-dimensional frame can provide a coordinate reference for any point on a resultant line. Consider the more complex two-dimensional array. Suppose one were to represent (graphically, in the most intuitively obvious way) the relationship between two point properties, call them A and B. One can have any kind of straight or curved line that one wants: there are an infinitude of ways to link two points in a line (Russell used this fact to refute induction by enumeration). But with regard to dimensionality, any two points on such a line define it as involving two dimensions (say, length and width, height and age, or whatever pair one uses as coordinates). Since the surface layout is totally undefined (thus undetermined), there is indefinite reference to any linear “shape” unless more points are provided. Less ambiguity enters a two-dimensional array as additional points (beyond the minimum number of two) are supplied to define the shape of the line. This sort of ambiguity that can characterize a two-dimensional array (which is, after all, a surface) is surface structure ambiguity. It can be ameliorated (if not totally eliminated) by sampling more points on the line. Adding points to the linear specification is like using that indispensable component of your visual system, your legs, to disambiguate “What’s that in the road ahead?” All you need to resolve that sort of ambiguity is to parse the linear string sentence by getting close enough to the object.
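A minimal sketch in Python, with made-up sample points, of this underdetermination: every member of the hypothetical one-parameter family below passes through the same two points, so the two coordinates alone leave the shape of the relation between A and B indefinitely ambiguous until more points are sampled.

p0, p1 = (0.0, 1.0), (2.0, 5.0)  # two observed (A, B) points

def line(x):
    """The unique straight line through p0 and p1."""
    slope = (p1[1] - p0[1]) / (p1[0] - p0[0])
    return p0[1] + slope * (x - p0[0])

def curve(x, a):
    """A family of curves: the line plus a bump that vanishes at both
    sample points, so every value of a interpolates them exactly."""
    return line(x) + a * (x - p0[0]) * (x - p1[0])

for a in (0.0, 1.0, -3.0, 42.0):
    assert abs(curve(p0[0], a) - p0[1]) < 1e-12
    assert abs(curve(p1[0], a) - p1[1]) < 1e-12

Each value of a gives a different shape between (and beyond) the two points; only sampling further points ameliorates, without ever eliminating, the ambiguity.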
What happens if one were to move from a surface array to the utilization of three (or more) dimensions? Here we find that the possibilities for underdetermination—for the ambiguity of the conceptual “shape” involved—can now include deep structural ambiguity. If we take the famous example from Chomsky, “The police were ordered to stop drinking after midnight,” we can see that the single representation in an isolated linear string is actually six ways ambiguous in its meaning. We have the problem that the two-dimensional or surface representation of this single linear string, whether it is spoken or written, is always ambiguous in a fashion that cannot be eliminated at just the surface level. Any adequate disambiguation requires that we “look back over” the derivational history of the string in the conceptual framework of the speaker (or writer) in order to understand its meaning. This “looking
back over” is equivalent to moving to an additional dimension (or dimensions). Perceptual as opposed to linguistic structures also exhibit this sort of ambiguity. The classic “ambiguous figures” of introductory perceptual psychology texts provide many examples, perhaps the most familiar being the Necker cube, which is a series of two-dimensional lines on a surface that are perceived or “interpreted” as one of two three-dimensional object shapes. Far from being perceptual curiosities or instances of somehow “impoverished” stimulation, these phenomena demonstrate how contextual sensitivity determines the meaning of what is “perceived” (as Maurits Escher systematically explored in art). They are the clearest examples of how additional degrees of freedom (to use the physicist’s term) enter into conception when dimensionality increases. The addition of a dimension—whether into a conceptual framework or into an account of physical reality—has as a consequence an increase in both the possibilities for ambiguity and the type of ambiguity that is involved. In physical theory, all such ambiguity is lumped together and regarded as a matter of the addition of “degrees” of freedom over which variables may range. When one adds a degree of freedom to a physical situation, it means that an additional factor must be explained and somehow “integrated” into the resulting equation of the relevant theory. Adding degrees of freedom means that more variables must be accounted for, and somehow reduced (which is what “integrated” actually means) to the explanatory physical equation. Physical theory becomes intractable when such “degrees” of freedom cannot be so integrated—when the addition becomes non-holonomic, as in the function of measurement being lost in an exhaustive specification of the equation for the measuring apparatus. At that point, the inexorable laws of physics no longer determine what is going on in the system under study. The admission, emphasized by physicists such as Eddington (1929), Gell-Mann (1994), and Pattee (2012), that the laws of nature can address only a very small part of the actual history of the universe is an indication that physical theory is not going to be complete, now or in any conceivable future. We are, unfortunately for hardheaded physicists, the result of frozen accidents. Ambiguity is always going to be present. Degrees of freedom have a way of multiplying ahead of any increase in adequacy of integration of physical theory. This is another way of interpreting Warren Weaver’s
(1959) well-known remark that the more the circle of our knowledge increases, the greater is the circle of ignorance surrounding it. The human sciences dealing with functionality face exactly the same sort of problem. Despite the fact that we can be fairly precise in specifying functions, it is never possible to specify a precise set of physical movements that are inevitable co-occurrences to, or whose physics equation could substitute for, those functions. There are, as the old saying goes, different ways to skin the cat, all of which involve totally different physical realizations of getting the job done (and all of which are pretty darn messy, which no possible physical account could address). We blithely use terms like “getting the job done” in all functional specifications in order to acknowledge the underdetermination of functions in terms of physical specification. Any given realization of function provides a particular set of co-occurrence relations between the functional specification in the rate-independent realm of conception (and cognition) and the physical embodiment or manifestation in the rate-dependent realm of dynamical theory. It does not matter that we can in fact succeed in giving a suitably precise specification of physical movements for functionality (and teleology) for specified, delimited concepts—such as a predator hunting a particular prey—because we could never specify all possible physical realizations of such an instance of functional behavior (Weimer, 2021, 2022a, 2022b). There are an infinitude of realizations of the function of being a predator, of the function of hunting, and of being an object of prey. The only way that ambiguity can be reduced is by theories that look back over the developmental history of the unfolding of the functional specification. Here we need to remember Campbell’s admonition with respect to downward causation (which is exactly what such looking back involves in biological evolution): the account will not be complete until we fill in all the relevant layers and levels, from the very highest, most generic and abstract functional specification down to an exhaustive physical-level realization of the behaviors involved at the level of neurophysiology and muscle movement, and then follow that down to the level of biological molecules, then atoms, etc. The task of specifying what constitutes human action is not one to be taken lightly.
References

Bekenstein, J. D. (1973). Black Holes and Entropy. Physical Review D, 7, 2333. https://doi.org/10.1103/PhysRevD.7.2333
Bransford, J. D., & Franks, J. J. (1971). The Abstraction of Linguistic Ideas. Cognitive Psychology, 2, 331–350.
Bronowski, J. (1978). The Origins of Knowledge and Imagination. Yale University Press.
Chomsky, N. (1957/2002). Syntactic Structures. Mouton & Company (Walter de Gruyter GmbH, 2002).
Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.
Eddington, A. S. (1929). Science and the Unseen World. Macmillan.
Gell-Mann, M. (1994). The Quark and the Jaguar. Henry Holt & Co.
Hanson, N. R. (1958). Patterns of Discovery. Cambridge University Press.
Hanson, N. R. (1970). A Picture Theory of Theory Meaning. In M. Radner & S. Winokur (Eds.), Minnesota Studies in the Philosophy of Science, IV (pp. 131–141). University of Minnesota Press.
Hawking, S. (1975). Particle Creation by Black Holes. Communications in Mathematical Physics, 43, 199–220. https://doi.org/10.1007/BF02345020
Hoffmeyer, J. (1998). Surfaces Inside Surfaces: On the Origin of Agency and Life. Cybernetics and Human Knowing, 5(1), 33–42.
Hoffmeyer, J. (2000). Code-Duality and the Epistemic Cut. In J. L. R. Chandler & G. Van de Vijver (Eds.), Closure: Emergent Organizations and Their Dynamics (Vol. 901, pp. 175–186). New York Academy of Science.
Hoffmeyer, J. (2006). Genes, Development and Semiosis. In E. M. Neumann-Held & C. Rehmann-Sutter (Eds.), Genes in Development: Re-reading the Molecular Paradigm (pp. 152–174). Duke University Press.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (Rev. ed.). University of Chicago Press.
Kuhn, T. S. (1977). The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press.
Lashley, K. S. (1951). The Problem of Serial Order in Behavior. In L. A. Jeffress (Ed.), Cerebral Mechanisms in Behavior (pp. 112–135). Wiley.
Pattee, H. H. (2012). Laws, Language and Life. Springer.
Popper, K. R. (1972). Objective Knowledge: An Evolutionary Approach. Oxford University Press.
Post, E. L. (1943). Formal Reductions of the General Combinatorial Decision Problem. American Journal of Mathematics, 65, 197–215.
Post, E. L. (1965). Absolutely Unsolvable Problems and Relatively Undecidable Propositions—Account of an Anticipation. In M. Davis (Ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions (pp. 340–344). Raven Press.
Pribram, K. H. (1971). Languages of the Brain. Prentice-Hall.
Susskind, L. (1995). The World as a Hologram. Journal of Mathematical Physics, 36(11), 6377–6396. https://doi.org/10.1063/1.531249
’t Hooft, G. (1991). The Black Hole Horizon as a Quantum Surface. Physica Scripta, T36, 247–252.
von Neumann, J. (1951). The General and Logical Theory of Automata. In L. A. Jeffress (Ed.), Cerebral Mechanisms in Behavior: The Hixon Symposium (pp. 1–31). Wiley.
von Neumann, J. (1958). The Computer and the Brain. Yale University Press.
von Neumann, J. (1966). Theory of Self-Reproducing Automata (A. W. Burks, Ed.). University of Illinois Press.
Weaver, W. (1959). A Scientist Ponders Faith. Saturday Review, XLII(1), 3.
Weimer, W. B. (2021). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century-Part 1. Cosmos + Taxis, 9(11+12), 1–29.
Weimer, W. B. (2022a). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century-Part 2. Cosmos + Taxis (in press).
Weimer, W. B. (2022b). Retrieving Liberalism from Rationalist Constructivism: Basics of a Liberal Psychological, Social and Moral Order (Vol. 2). Palgrave Macmillan.
Weimer, W. B. (2022c). Continuity, Time, and Order. The Journal of Mind and Behavior, 43(2), 119–142.
Part V The Corruption of Knowledge: Politics and the Deflection of Science
The scientific quest for knowledge is a very fragile endeavor. It depends for its success upon a strong context of constraint that allows it to (continue to) function in a manner in which the search for truth about the nature of ourselves and reality occurs without interference from outside factors. That context of constraint must be independent of—not subject to the momentary whim of—the larger context of human interaction. When “political” considerations overwhelm the spirit of free inquiry going wherever it leads, knowledge can no longer be achieved, and it will be supplanted by momentary political correctness and religious fervor in defense of one or another ideology. When government—whether for the best of intentions or the worst—controls the nature of scientific inquiry through control of funding of research or prior specification of the “correct” outcome of that research, there is no knowledge produced. The results become propaganda for the approved political point of view, and if there is any actual science involved, it is entirely by accident, having slipped by the government censors. When that political view controls other aspects of society, such as the education system for all individuals in society—especially including the next generations of scientists—the result is an inevitable slide to totalitarian
control of both knowledge and the opinions of the citizenry. Epistemology becomes “social epistemology,” and the pursuit of truth is ignored in favor of inculcation of the officially determined “correct” ideology. Science becomes a government-controlled instrument of social control. The result is that genuine epistemology disappears entirely.
15 Political Prescription of Behavior Ignores Epistemic Constraints
The desire to remodel society after the image of individual man, which since Hobbes has governed rationalist political theory, and which attributes to the Great Society properties which only individuals or deliberately created organizations can possess, leads to a striving not merely to be, but to make everything rational.
F. A. Hayek

The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well-meaning but without understanding.
Justice Louis Brandeis
Political “science” is a misnomer: there is no separate science of politics. Political behavior is human action, and as such, politics is a typical instance of the application of knowledge (real or imagined) to the attempt to control behavior. Political philosophy draws its data from psychology and economics, and from topics such as ethical and moral behavior. It is informative to note that political “theory” ignores the historical lessons available from the evolutionary approach to epistemology, and instead favors a form of political organization (for the technological control of social behavior) that is left over from the
face-to-face society of tribal organization. Instead of attempting to take advantage of and learn from the “wisdom of the ages” that is available in the functioning of the ongoing evolved social realm, politicians (inevitably those who wish to exercise power, invariably control freaks) endorse one or another form of primitivism, because they, like tribal dictators, assume that they are capable—even with limited knowledge and experience, and no understanding of the forces responsible for social organization—of telling the other members of society what they should and should not do. Embodying ideas familiar to “common sense” from the time of Descartes, they believe that their “clear and distinct” ideas are obviously correct, and that as such, they have an obligation to impose those ideas upon others—who, they presume, will immediately accept them if they also possess Cartesian reason (see Hayek, 1973, 1976, 1979; Weimer, 2022a, 2022b). They believe that only those who have opposing “selfish” interests, or who are entirely lacking in Cartesian common sense, could possibly oppose them. This is what has led, from the time of Comte to the present, to the prevalence of the “social physics” approach in economics and politics, and to the attempt at extreme behavioral control in psychology. For followers of this approach, there is no problem of knowledge: it is already given to them in the clear and distinct ideas of their Cartesian common sense. It is their opponents who are, by definition, lacking in knowledge or ignorant—of the “objective” nature of knowledge that is so obvious to them, of the “correct” nature of social organization, and of the desirability of rigid and inflexible political control in order to bring about desired ends that have been clearly specified in advance. Nothing should be left to chance or happenstance or mere tradition—everything must be organized (Russell, 1934), made explicit and directed by central planning (Marx, proponents of socialism, activists insisting that government intervene on their behalf), in order to achieve definite ends specified in advance. Spontaneously arisen—“found” or “grown”—institutions, since they were not consciously or rationally planned to achieve particular results, must be rooted out and systematically replaced with explicitly planned structures and programs. Rule by benevolent and enlightened dictators who, in their “given” omniscience that is prior to and independent of experience, know how to parcel out the “just
deserts” to the waiting masses—that is assumed to be the best form of government. Socialism, the doctrine that the community as a whole should own the means of production of all social and economic goods, and that the government should be sovereign (have coercive power) as the sole means of control or distribution of those goods (including knowledge), is the inevitable result. To thinkers such as the Bloomsbury Group of intellectuals, the logical positivist philosophers, or the Wilsonian-Rooseveltian “New Deal” politicians (who took their cue from Dewey, 1935), it is the “natural” end point of society, and they have willingly accepted the task of bringing it about.
Progressivism and the Philosophy of Rationalist Constructivism

This approach to political “theory,” which has been called rationalist constructivism (Hayek, 1978; Weimer, 2022a, 2022b), is directly parallel to (indeed stems from) the idea of conscious and deliberate control in experimentation. It is an attempt to make a kind of Galilean revolution in the theory of society and the social domain. Such a revolution would make all individuals’ conduct subject to explicit conscious or “rational” control in order to achieve results that were desirable (at least according to the would-be dictators), and thus selected for in advance. Thus, just as Galileo chose to study the motion of uniform balls rolling down an inclined plane (and chose the angle of inclination, chose their weight, etc.) to gain insight into the laws of motion, the progressivist approach to social phenomena is to constrain them into an experimenter-fabricated box from which all outside influences can be excluded except those the experimenter has chosen. This would allow an experimenter, such as the Skinnerian behavior modifier or a socialist dictator, to determine in advance (with no thought ever given to scaling, measurement theory, complexity, etc., or any of the problems noted in previous chapters) what behavior is to count as a “measure” and what conclusions will be drawn from such measurements. The experimenter becomes the dictator who tells subjects what they may or may not do. From this perspective, there is only one legitimate task for a science in the social domain: the organization of
behavior according to explicit, rational criteria. Thus, at the small cost of abandoning the quest for knowledge of reality, we can have a society restricted to what they think it should always have been. From this perspective, it is obvious that the new field of political epistemology (as Hannon & de Ridder, 2021, call it) is concerned with knowledge claims only as cudgels, as authoritarian trump cards to persuade voters that one’s particular “clear Cartesian vision” is the true or correct one. Debate in this area is undertaken with this as a presupposed background assumption, and it pits those who unabashedly follow Plato and similar authoritarian thinkers in endorsing an “educated” and enlightened dictatorship (the prototypical benevolent despot or philosopher-king of Plato), in opposition to “democracy” or individual rights (which they assume would result in the ignorant rabble quickly overthrowing the enlightened truth of the Cartesian intellect), on the one hand, against those, on the other, who defend the wisdom of the masses—and the market order—as a source of more knowledge than that available to the dictators. Both positions presuppose justificationism, which is a false theory of knowledge and rationality (how and why it is false is examined in the appendix). To top off this unhappy state of affairs, most of the data used by either position is subject to all the criticisms of incorrect scaling and measurement discussed in Chapters 4 through 8. So the actual extent of the knowledge underlying contemporary political “discussion” (if one is willing to glorify it with that term) is quite open to debate, but it is obviously less than each side assumes.
Liberalism and the Division of Labor and Knowledge

Against the rationalist approach noted above stands the doctrine of classical liberalism, a theory of society based upon a crucial economic insight: the division of labor that has arisen in the economic market system co-occurs with a division of the knowledge possessed by market participants. The market order in society serves a fundamentally epistemic function. Markets, like language and other productive or creative
social orders, make infinite use of finite means: a market order enables a participant to further their individual aims without requiring that particular participant to possess all the knowledge of momentary circumstances that went into the determination of present prices for goods or services. The market functions as a summarized knowledge transmission and distribution system—it enables individuals to further private, often conflicting, aims and goals in the absence of information about particular details or the desires of others. If an individual knows the momentary price of whatever they are interested in, that provides all the information needed either to make a purchase in the market order or to avoid doing so. Either course of action then reshapes the overall order and informational structure of the market. Being a spontaneous complex order without conscious planning or direction, the market order evolves—as a result of human action but not design—and thereby enables us to take advantage of our inevitable ignorance. The participants in the order cannot possibly know all the factors that determined the momentary price of a good. The market allows us to avoid becoming paralyzed by that lack of total knowledge because it operates according to abstract regulatory principles rather than momentary particular directions or the particular whim of a dictator. Markets have no social or political goals or directions. Markets are a means to individual ends; they are not ends in themselves. Such “means determination” by a system of abstract constraints allows markets to cope with the unforeseen consequences that arise when a given individual makes use of knowledge that is local—knowledge that only he or she, because of their unique circumstances, possesses. Because a market order is not constrained in advance to fulfil particular purposes, it permits genuine novelty or creativity (productivity, as the linguist uses the term) to occur when individuals make their own unique choices. How individuals fare in the market can never be predicted in advance. The freedom provided by equally applied general constraints constitutes the essence of liberty in the economic and political spheres, the basis of economic individualism or so-called free (meaning free to choose, not costless) enterprise, and the protection of the individual in the framework of society.

Ignorance is an indispensable component of both evolution and market orders. The presumption of knowledge of all the particulars
of a complex order is chimerical. Progress and the acquisition of new knowledge depend upon being able to take advantage of our inevitable ignorance by consistently following abstract rules or constraints upon our behavior in the ever changing and unpredictable environment we inhabit. This is exactly the opposite of popular “progressive” socialist thought, which strives to eliminate all ignorance and concentrates all knowledge and power in the hands of a dictator or centralized planning committee, who then distributes economic goods to others. Just as Polanyi (1958, 1966) argued that we know more than we can tell—that our tacit powers and knowledge greatly exceed what is available to our conscious awareness—the theory of social organization called liberalism, stemming primarily from the eighteenth-century Scottish moral philosophers,¹ argues that the tacit dimension of social and cultural organization is greater than any individual intelligence or central planning board can address (see Hayek, 1973, 1976, 1978, 1979). The attempt to impose conscious or “rational” control upon such a decentralized spontaneous system will inevitably result in the loss of the productive or creative capacity upon which the evolution of both knowledge and society depends. Such a restriction would lose the unintended consequences of our action—those phenomena which are truly social instead of individual—which Adam Ferguson (1767) long ago referred to as the results of human action but not design. The market order is the decentralized framework which permits those results of human action but not design, which in turn allow unplanned progress in society and the learning of novel things within the individual. It is why we can both make mistakes and learn something new that no one else knows, and why we can do things that no one else has yet done. The key liberal insight—in both social theory and politics—was both evolutionary and epistemic in nature: social growth or progress cannot be planned in advance unless one is willing to forgo the power of the market order to bring about unintended results and novel behavior and knowledge. By limiting ourselves to particular aims or goals specified in advance, we can no longer receive the benefit of impersonal market orders and the power of their polycentric or coalitional control structures, because directed or centralized control limits the system to hierarchical or linear control. Because they must follow (at
the highest level of analysis) from a single locus of control (there must be a top node in a hierarchy), such systems cannot allow creativity or productivity, and thus must restrict the growth of knowledge, wealth, and social progress. As a result of that social theory, the classical liberal tradition from the Scottish moralists onward has had a different approach to governance: empirical observation to see what in fact “works.” Classical liberals are data-driven, while neo-Cartesian “progressivists” presume that they already know what works.
The Data Relevant to Political Theory Is Economic, Psychological and Sociological

And that data is never subjected to critical assessment if it supports one’s prejudices and beliefs. Armchair philosophy and ungrounded speculation have a terrible reputation everywhere except in politics. The “scientific revolution” that ended the Middle Ages resulted from the discovery of empirical techniques that reliably disclosed knowledge about external nature. That type of thinking is what supplanted metaphysical speculation, such as that about how many angels could simultaneously occupy the head of a pin. The Galilean revolution led to modern physical science and the tremendous increase in our knowledge. Nevertheless, earlier discussions have emphasized that a necessary correction must be made to that view when it is overextended to the realms of essential complexity and the phenomena of living systems. The “social physics” model of knowledge acquisition cannot be successfully extended to the social realm, and attempting to do so leads to more harm than good. But that does not mean that we can accept armchair speculation as truth in the political realm. We need to look at and criticize data—albeit empirical data rather than experimental—to determine what “policy formulations” for social control and transformation can possibly work without, in the process, returning society to primitive barbarism. Unfortunately, it is all too easy to ignore what little data exists, or to dismiss it for purely “political” reasons such as feelings or hopes, or for long refuted but “good sounding” dogmas such as socialism. That is why we need far more empirical research in economics (without lapsing into calling it “experimental economics”). That is why
we need far more empirical research into the social and psychological factors relevant to morality and conduct, without entering that arena (as, for example, Jonathan Haidt, the founder of Moral Foundations Theory and of considerable related research, did; see Haidt, 2012, 2018) with the clear and distinct Cartesian idea of electing more Democrats in US politics.

Consider a few examples of prejudices overruling facts: a brief list of cases in which popular prejudices have overruled the examination of facts in favor of causes that “sound good.” As it stands now, the majority political “opinion,” especially among college-age youth in the Western world, actively precludes critical examination of either the tenability or the consequences of policy statements that “sound good.” What sounds good is simply accepted without assessment as to its truth, its feasibility, or its consequences. The presumption is that what sounds good must immediately be legislated or mandated into required practice. But what “sounds” good depends upon who said it and where it was said. Numerous on-site interviews with students and young passersby show agreement with statements when the interviewees are told the statements come from prominent leftist politicians or candidates, and then disbelief, cognitive dissonance, or outright rejection of the same positions when the interviewees are told they were really from “conservative” ones.

1. One telling case of the disingenuousness of “politically charged” belief is the extent to which students believe wholeheartedly in “diversity” in the abstract (they never ask for or provide a definition in particular cases), except when it comes to their sports teams, where it is just fine to have all or mostly minority players (especially if their teams have winning records). Another case of bias determining the perceived desirability of a proposed program is found in the hot-button issue of voter identification requirements. Interviewees on college campuses on the West Coast of the United States overwhelmingly say that proposals for ID requirements to prevent possible voter fraud are politically motivated ploys, designed to make it harder for blacks and other minorities to vote. Black interviewees in Harlem (an overwhelmingly black and minority area) find that attitude silly—they uniformly have and carry identification with them (including grade-school and high-school students), and see it as no “hardship”
to do so, since identification is necessary in many aspects of day-to-day life, and easy to obtain. It appears to be mainly the white intelligentsia, remote from the actual black and minority communities, not blacks themselves, who think that this is discriminatory.

2. Equally politicized is the issue of urban rebuilding and redevelopment, derisively called “gentrification” by its leftist opponents, and “revitalization” or “renewal” by supposedly conservative proponents. (Conservatism, of course, is the opposition to change, so it is rather hard to fathom how those who would bring about change here are to be labelled “conservative.”) Alleged to be a ploy to drive out minorities or the poor, the redevelopment of older, run-down areas has been characteristic of all civilizations and societies for thousands of years. It is the means by which cities have avoided or slowed the spread of (and potential takeover by) slums, and it eventually reconstructs even what were once the most gentrified and wealthy neighborhoods after they have fallen into disrepair and neglect. Healthy cities are always reconstructing and building in both new and old areas. When such “gentrification” is prohibited or restricted, property values drop precipitously, crime rises, and sanitation and public health become problems. The empirical issue is how and why such change is now being regarded as a “race” or “discrimination” issue to be prevented by government intervention. What if this reasoning were applied to the human body? Would all medical interventions be the equivalent of gentrification and therefore assumed to be bad? Where is the data showing that allowing continued decay, creating lawlessness, “homeless” camps and slums, benefits either the city or its occupants?

3. Do you say “migration” or “immigration”? Those who want open borders and movement without consequence speak only of migration, as if someone were moving from one city or state to another. Those who point to the unavoidable consequences of unfettered transit across borders refer to immigration. What is the actual difference? Migration is something that occurs within a country or politically defined area. Immigration involves crossing accepted country borders. It usually requires one to give up citizenship in one country (the one being left), but always requires one to obtain citizenship (or legally sanctioned temporary residence) in the location being moved to. Those who
would allow “undocumented migrants” (actual legal meaning: illegal immigrants) into developed countries argue that, say, the United States allowed such “migrants” to settle the western part of the country during the frontier days. Conveniently forgotten in such arguments is that the United States no longer has any frontier available for settlement, and that the original frontier “settlers” were landed immigrants, who were granted land only, and who had to have a strong work ethic and many skills to survive by working that land, since no government aid or entitlement handouts were provided to support them. The clamor of one or another political party to provide “humanitarian” aid to illegal immigrants is apparently based upon the premise that the “migrants” will be grateful and then, when given free or very easy citizenship status, will continue to vote for that party. The “conservative” opposition is castigated for not wanting legitimate citizens to have their taxes increased and taken to pay for such resettlement and subsidization.

4. What is one to make of the recent abrogation of the facts of biology in favor of the idea that sexual “identity” is nothing but a “social” construct, and therefore something that individuals may freely choose for themselves? Here we have the distinction between sex—which perhaps “traditional” biology can address—and the more nebulous but (to the progressivists) “obviously” more important notion of sexual “identity” as a personal psychological construct not subject to the constraints of the science of biology. Here only one’s private “feelings” are relevant to the determination of this new concept of identity, and since there are no valid standards and no “objective” knowledge (so the argument goes), anyone can choose what they want to be at no cost to society (see Shrier, 2020). Anyone who disagrees must be guilty of sexism (now defined as parallel to white racism, the only kind of racism that leftists admit actually exists). But what is the argument that the taxpayers should pay for “unisex” bathrooms, or for surgery on primary or secondary sex characteristics, or for lifetime maintenance doses of hormones? Are those not costs to the society as a whole? While an individual with sufficient wealth can “transition” to another sex out of his or her own pocket, what of all the others who want it paid for by taxpayer dollars as part of “free” healthcare? How
is top-down governmental supervision and intervention here superior to free market forces at work?

5. As a last example, consider the “hot” issue of causal factors in climate change. Here there is legitimate debate about whether the climate is changing more than it has previously—since it has changed continually since the planet formed, and it was far warmer than today at times in the past. At issue is whether the apparent warming trend that is presently being debated is due to anthropogenic factors, and if so, what can be done about it (see Alcorn, 2019; Idso et al., 2016). Here it is very hard to find acceptable scientific knowledge claims, but easy to find a lot of hot air about “proof” and similar notions for poorly designed and ambiguous research, as well as irrational and vehement denunciations of anyone who does not agree with the leftist position. One can legitimately question whether the so-called observed warming in recent history is “significant” (Moore, 2010, 2021; Singer et al., 2021) without being a “right wing shill” or “paid off” by the energy “interests.” Presumably in science significance means statistical significance as defined by appropriate statistical assessment procedures. But considerable “research” (actually, just examination of the assumptions or presuppositions of the statistical procedures employed) has indicated that the potential error propagation within the confidence bands of the statistical procedures, given the sampling techniques being used, exceeds the “observed” increase in temperature. So one literally cannot tell whether what we think we “observe” in global warming trends actually is greater than the possible measurement error (a toy numerical sketch of this point appears at the end of this section). What is confidently recorded as warming by a “believer” can be due to no more than data extrapolation to supply missing observations, or to rounding consistently “up” instead of “down.” That is why one is forced to say “apparent warming trend” instead of something more definite. One must always question extrapolated “trends,” in this or any other case, especially when the data must be “rounded” one way or the other. The warming trend is subject to bias: to rounding “up” if one’s bias is to support the hypothesis of anthropogenic causes, and “down” if one is opposed. Not surprisingly, rounding “down” does not show a “significant” deviation from the longer term trends (assuming they have been accurately extrapolated from very limited data points
and conjectured relationships), and supports the no-warming argument just as reliably as rounding “up” supports the climate change models. All of this research is simply not reliable enough to sustain a political conclusion one way or the other, and it neglects altogether the problem of cost effectiveness (Lomborg, 1998, 2020)—the added costs to society of mitigation and of compliance.

That leads one to the obvious: if one cannot trust the “scientific” data, because it is in fact neither clear cut nor unbiased, then one should look elsewhere for “policy” information. Since one can hardly get more biased and opinionated than in politics, that fount of “enlightened opinion” is of no help on this issue. As I have indicated, the data relevant to political issues are in economics, (primarily social) psychology, and sociology. There is no actual data in opinions, hopes, feelings, or what “sounds good” to a teenager. So how about, as Lomborg suggests, looking at economics? How can we address political divides from an economic framework? Especially when the governments of the Western world have now taken over the funding, and thus simultaneously the direction, of all research? And that takeover is so obviously for left-leaning political purposes?
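The error-propagation point in example 5 can be made concrete with a toy calculation. The sketch below (in Python) is purely illustrative: the trend size, measurement error, and record length are hypothetical figures chosen only for the demonstration, not climate data, and an ordinary least-squares confidence band stands in for the more elaborate procedures alluded to above.

```python
# Toy sketch (hypothetical numbers, not climate data): when per-reading
# measurement error is large relative to the trend being estimated, the
# confidence band on the fitted trend can exceed the trend itself.
import numpy as np

rng = np.random.default_rng(42)

n_years = 30
years = np.arange(n_years)
true_trend = 0.01   # assumed trend, degrees C per year
noise_sd = 0.5      # assumed per-reading measurement error, degrees C

temps = true_trend * years + rng.normal(0.0, noise_sd, n_years)

# Ordinary least-squares fit; np.polyfit returns [slope, intercept].
slope, intercept = np.polyfit(years, temps, 1)

# Standard error of the OLS slope, from the residual variance.
residuals = temps - (slope * years + intercept)
sigma2 = residuals @ residuals / (n_years - 2)
sxx = ((years - years.mean()) ** 2).sum()
slope_se = np.sqrt(sigma2 / sxx)

ci_half = 1.96 * slope_se  # approximate 95% confidence half-width
print(f"fitted trend : {slope:+.4f} C/yr")
print(f"95% interval : +/- {ci_half:.4f} C/yr")
print(f"assumed trend: {true_trend} C/yr")
# With error of this size the interval typically includes zero: the
# "observed" warming cannot be distinguished from the measurement error.
```

With these assumed figures the 95% half-width typically comes out at roughly twice the assumed trend, so the fitted slope cannot be distinguished from zero; that is the situation described in the text, where the propagated measurement error exceeds the warming one claims to observe.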
Science Is No Longer a Spontaneously Organized Endeavor

It is now controlled almost entirely by governmental bureaucrats and unelected funding czars. Their responsibility is only to themselves (job security: to continue their employment), not to the taxpayers who provide the funds, nor to the scientific researchers, and least of all to their supposed purpose, the growth of knowledge. Nor are they elected by the voters. Since their job security is really all that matters, they are greatly influenced by the political concerns and preferences of the elected officials who hold the power of appointment over them (see Butos, 2015; Butos & McQuade, 2015). The merit of a proposal from a researcher (as determined by peer review) is of secondary (or no!) importance to the
agenda of the politicians and the bureaucrats, who determine the categories under which funds are available for distribution to researchers, and in so doing judge the “merit” of the proposals (sounds pretty much like a socialist dictatorship, doesn’t it?). Thus, the previously quite successful self-correcting rules of scientific conduct are no longer allowed to play their crucial role. The independent and self-funded researcher, capable of doing research on his or her own, is now virtually extinct. This leaves all research in the service of political (policy) ends. As one researcher noted with respect to climate issues, “Our research points to massive increases in government funding of climate science over the past twenty years and to the rise in IPCC [UN Intergovernmental Panel on Climate Change]-induced ‘crony science’ and claims of putative ‘scientific consensus’ that have been used to quell, if not suppress, the normal processes of climate science” (Butos, 2019, p. 115).

Another chilling effect upon the independence of scientific investigation is direct political interference with scientists for the political gain of particular politicians, at either the local or the national level. A case in point is the claim of “fraudulent research” lodged against Nobel laureate David Baltimore by a disgruntled former research assistant (see Judson, 2004; Kevles, 1998). Picked up by local newscasters crying “When is the government going to step in and do something about fraud in science?,” it soon caught the attention of both local and national political figures, who managed to all but ruin Baltimore’s career. It did not matter that a thorough (and very expensive!) investigation exonerated Baltimore of all the charges, because that took considerable time, by which point his innocence was no longer front-page news. The damage was done in the initial sensationalism (which had high news ratings value), whereas his exoneration had no ratings value, and was thus relegated to back-page newspaper columns (and ignored by TV and radio presenters).
The Moral: The Constructivist Desire to Make Everything Subject to Explicit or “Rational” Control Cannot Work

The epigrammatic remarks of Hayek beginning this chapter cannot be ignored: spontaneously organized complex phenomena such as science, society, and all market orders cannot be either controlled or “improved” by constraining their activity to the very limited purview of a single mind or some central planning agency. The essential insight into social theory emphasized by the Scottish moralist philosophers, that our progress is due to the “twin miracles” of the division of labor and its resultant division of knowledge, cannot be forgotten. Nor can those twin divisions somehow be replaced by conscious or directed control structures without a loss of output of the overall order. Conscious direction of complex orders can reduce their overall output (while pursuing particular goals); it cannot increase it. While we can cripple or abolish a spontaneous order by imposing top-down “rational” control, we can never improve upon its overall superiority. Favoring one set of outcomes always has unintended consequences in shortchanging some other direction(s).
Evolved Social Institutions Are Indispensable Knowledge Structures

I noted that the relevant data for political discussions come from the social sciences—psychology, economics and sociology being central. But epistemology also enters in: the evolved institutions of society are also knowledge structures, and they, like markets in economics, serve the function of automatically collating knowledge (especially tacit knowledge) and transmitting it in a highly efficient and effective manner to upcoming generations. The evolved and continually evolving social structure called the family is a central and indispensable case in point. Epistemologically it serves the same functional role as the market order—making information available to its participants in order to help them
survive in the uncertain and increasingly abstract and impersonal environment we are evolving into, as we leave behind the primitive organization of the tribe and the small group. The family is the meeting ground between two worlds, as Hayek (1989) noted. Our emotional history evolved (for millions of years as hominins) in the face-to-face interaction of the small group or extended family (Levendis et al., 2020), and all our morality and “fellow feeling” stems from such personal contact situations. This is also the framework in which we became used to obeying the dictatorial commands of a powerful leader (tribal chief, mother or father) who tells us what to do and is responsible, at the end of the day, for parcelling out to us the chunk of meat that is our “just deserts” (our fair share) for being a faithful member of the tribe. We look to the tribal chief for our leadership, safety, and sense of self-worth as a member of the tribe. While our ANS (autonomic nervous system) and bodies embody the tribal morality of interpersonal benevolence, our CNS (especially the cerebral hemispheres) and cognitive faculties are presently being thrust farther and farther into the abstract society which has resulted from adopting the impersonal market order to supply our material wants. In the abstract order we feel lost, alienated, and alone—without personal benevolence and close personal contacts—because it is a new type of order, one in which impersonal morality and a lack of individual personal warmth and contact reside. This conflict between the familiar organization of the tribe (supplying our “gut level” feelings, emotions, and benevolence morality) and the immense benefits of the impersonal market order, dependent only upon competition and, as a result, cooperation in market exchange, is what underlies our present “postmodern” malaise and “existential” crisis—the perennial “alienation” of modern youth and the depression of the contemporary working environment so common to the “big city.” As I have overviewed elsewhere (especially in Weimer, 2022b), it has eventuated in the nihilism and intellectual numbness of philosophies such as positivism, existentialism, and the “beat generation,” the despair portrayed in film noir cinema, and the escapism of drowning in mindless video games among so much of contemporary materially rich but intellectually and morally impoverished youth. It is also responsible for the desperate clinging to a cause
of the moment as a crusade to provide a sense of self-worth and personal identity—hence the rise of “identity politics.” This is a main reason why socialism, with its appeal to a tribal dictatorship “parcelling out” goods to satisfy your emotional wants and needs, is so strong. It “sounds good” and reassuring, promising a return to that lost “bliss” of the tribal order in which everyone is “special” and is to be handed their “fair share” of rewards according to the ethics of benevolence in the older face-to-face society.

Small wonder, then, that progressivism would attempt to destroy the modern family, blaming it for the failure to properly “educate” children (or to adequately provide for their emotional needs and “moral” development). Certainly (the argument runs) modern progressive education in an enormous and impersonal politically correct environment (the “public” school half an hour or more away) can do a better job, and should supplant the traditional family (so often already destroyed by divorce and “single” parenting) and thus return children properly to “cradle to grave” care under the benevolent dictators of socialism. But the question one must ask is whether the socialist or collectivist substitution for the family—the government-controlled educational system run by “the village” (watch the original 1960s British television series The Prisoner to see what the government “village” actually is) in the welfare state framework—can actually accomplish the task of the family, or will instead make things worse.

Unfortunately, this “kill the goose and keep the golden eggs” collectivist strategy ignores one incontrovertible fact: social institutions have always evolved from the need to solve some problem(s) faced by society. If they have succeeded, it is because they have eased our problems in accomplishing our purposes and plans and aided in their solution—even if we were unaware of the institutions’ existence—often by stabilizing an order of expectations about how our fellow individuals will behave, so that we can coordinate our actions and achieve our goals (as in the market), or by preventing certain seemingly unrelated problems (as with the incest taboo) from harming society as a whole. As Horwitz (2015) put it, successful institutions are a balance of relatively rigid tradition and flexibility in adapting to new circumstances. The modern family is uniquely adapted and has evolved to perform this function in aiding humanity to make the transition between (or from)
the face-to-face society of tribalism and the abstract society of impersonal cooperation. It does so by providing a unique and irreplaceable framework in which children can be gradually and safely exposed to the impersonal and seemingly threatening aspects of the abstract society, and learn coping strategies for dealing with them. Growing up in the family provides a “safe” and benevolent framework (especially compared to growing up on a bus or in a street corner gang) in which to gradually adapt to the foreign and frightening world beyond the family. It is an epistemic ecology designed to deal with the transition to contemporary adult life. Social institutions are ecologies, as Hayek and the Austrian school of economists have stressed: “Austrians see the economy and the larger society as epistemic ecologies where knowledge is created, discovered, and made available to others through traditions and social institutions, especially market prices. Fundamental to their approach are three claims about knowledge: it is dispersed, contextual, and often tacit” (Horwitz, 2015, p. 13). The family can be understood as a personalized school for children to learn tacit social norms, and especially for learning how to interact with the extended and abstract, impersonal society that is inevitably present outside the family framework. It is a safe environment—far different from Mr. Rogers’ Neighborhood, the helicopter-parenting collectivist attempt to substitute for the family²—in which to learn the rules of the abstract society.

The family as an epistemic framework has a parallel in the tremendous benefits of the impersonal market order in society. Both institutions have been central to raising our standards of living and our physical well-being above the poverty and ill health that have been characteristic of the vast majority of humanity throughout the ages. Our vastly larger populations do not worry about their physical safety or where their next meal is going to come from: we have pulled ourselves out of poverty and created a world in which a substantial majority live healthier and longer lives in reasonable comfort. Largely unnoticed by politicians is the undeniable fact that as the market order has enabled us to more easily meet our material needs, it has simultaneously opened up the ability to pursue all kinds of nonmaterial goods and values. What we have come to call capitalism is responsible for our ability to spend time (and effort and money) on
things that are not the basic necessities. We no longer have to spend everything we have on food, clothing and shelter—we buy ourselves toys, we indulge our hobbies, we give people gifts because we can now afford them, we travel, and we have the leisure time in which to learn more and more. It is this tremendous expansion of wealth that allows us to engage in improving education, art, and leisure, and allows us the necessary freedom to do so. Even the relatively poor can get advanced education and training in vocational skills, have access to books, music, and art that were available only to the wealthiest of prior generations, and enjoy dozens of other things that are now widely available.

Given the powers and benefits of the market order (capitalism), it is amazing that those who yearn for the return of the tribal order of the past, under the dictatorship of one or another benevolent despot, would accuse the goose that has laid the golden eggs for all of us of being concerned with nothing but money and profit. They have failed to understand that the market order has freed us from the focus upon the material (or the narrowly economic) by producing such an abundance of well-being and economic growth that such concerns are simply no longer central to our daily lives. It is the power of capitalism that has freed us to pursue our ever expanding range of nonmaterial interests, goals, and values. We have freed the family from its previous concern with our material survival, and in so doing have opened the space available for our deepest nonmaterial aspirations and endeavors. We are now free to marry for love and emotional satisfaction and “spiritual” well-being, rather than being constrained to a loveless life with a “partner” who does nothing but provide labor. Societies that have adopted the institutions of private property, contract, exchange, the rule of impersonal law, the values and norms of peace, tolerance, equality before the law, and respect for work and profit, as well as economic concepts such as sound money, have more than survived—they have flourished and gone on to heights that were simply unimaginable to prior generations. As Horwitz said, “The institutions and norms of classical liberalism and capitalism have made that possible and made the family into a far more humane institution” (ibid., p. 181).
Is it not time to abandon traditional political “science” and begin to seriously study the epistemic framework in which all our goods and knowledge have arisen?
Sociology Has Lost Sight of Earlier Insights

Despite lip service to Adam Ferguson as the founder of sociology, the field usually ignores that early history and starts instead with Emile Durkheim at the end of the nineteenth century, with the doctrine of functionalism. Sharing the earlier Scottish moralists’ concern with social structures as necessary constraints upon unfettered selfish individual action, Durkheim felt that our selfish desires would lead to the breakdown of society unless strong social sanctions were present to limit them. But Durkheim then made a mistake: he did not understand Ferguson’s insight about the results of human action but not design. He emphasized two social mechanisms as constraints—socialization (which for him was the explicit learning process in which we acquire society’s rules of conduct by conscious tuition), and social integration, which forms our ties to other groups (such as religion and the family). These constraints form a “collective conscience” to stabilize society. Their constant presence “creates a kind of cocoon around the individual, making him or her less individualistic, more a member of the group” (Collins, 1994, p. 171). But these constraint systems were presumed to be products of explicit rationality, deliberately designed in past generations. Thus Durkheim lost the unconscious, impersonal social entirely, and retained only a social psychology of the individual as the basis for “sociology.” Lost was the insight that the social domain is tacit rather than conscious. Another flaw was the emphasis upon macro phenomena as “the” social, in opposition to the methodological individualism and “micro” approach of the evolutionary social theory of the prior century. Durkheim’s general approach was collectivist and deterministic, even though it opposed the collectivist “conflict theory” approach of Marx (which ranged over alleged social “wholes” or collectives, such as classes, conceived to be causal agents). It remained for the later “symbolic interactionists” stemming from Herbert Blumer (1969) to attempt a
reintroduction of an individual or micro approach into the general framework of functionalism. But the symbolic interactionists are just as limited to explicit rational control of behavior as Durkheim was. They argue not only that we deliberately learn the roles that society has laid out for us, but that we deliberately modify them by negotiating their interpretation through linguistic and symbolic procedures. The meaning we can take from social encounters varies depending upon how we consciously interpret them through such negotiated symbolic interactions. Once again the sociology of the symbolic interactionists is found only in social (individual) psychology: the tacit dimension of higher order constraints operating through downward causation is all but lost in such accounts. Perhaps the closest that most sociology comes to discussing the interaction of the individual with a tacit context of social constraints is in terms of utilitarianism. Here also the field tends to lump together incompatibles, such as Adam Smith (as a representative of the Scottish moralist tradition) with avowed rationalist constructivist thinkers ranging from Bentham and the Mills in the nineteenth century through the positivistic social exchange theorists such as Homans (1961) or Coleman (1990), and the recently popular “rational choice” theory borrowed from economics. Here the confusion conflates the original insight that the social is the result of human action but not deliberate design with the idea that all rationality is explicit and conscious (examined in the appendix), so that the “social” must therefore be the result of prior conscious deliberate choices and directives. Such approaches founder on the empirical fact that human behavior does not often follow the “rational expectations” established in advance by the experimenter. Instead of realizing that this refutes the framework of rational expectations as the causal agency of human behavior, such theorists spend their time trying to explain why our behavior is “irrational” so much of the time.
Notes

1. The original and still correct use of the term liberalism refers to a theory of social organization based upon the general evolutionary and epistemological approach of thinkers such as David Hume, Adam Ferguson, and Adam Smith, and their immediate intellectual descendants. As a political view, it is based upon individual liberty as freedom from unnecessary restraint (or as equal freedom of opportunity under the rule of general or impartial law). This is directly opposite to the present usual meaning of “liberal” in the United States and the “Western” world as “socialist,” because—as Joseph Schumpeter (1954) clearly noted—the actual enemies of liberty and freedom have seen fit to appropriate its name for the political doctrine which should properly be called “progressivism.” Progressivism, based upon the justificationist metatheory of knowledge and rationality (examined next in the appendix chapters), has redefined freedom to mean “freedom from want” and the law as particular legislation designed to bring about ends articulated and specified clearly and “rationally” in advance. I have explored the failure of progressivism—primarily its inability to utilize the tremendous power of the abstract and impersonal order of society based upon market orders, and its ensuing inability to address genuine novelty and the growth of new knowledge and economic goods and opportunities—in two companion volumes in this Palgrave Macmillan series under the general title Retrieving Liberalism from Rationalist Constructivism, and because of space limitations I must refer the interested reader to them for more adequate and detailed discussion.

2. It is not the case that the family is the only place in which such learning can occur. “The argument, however, is not that families are the only way to provide such learning. Rather it is that families are able to do so particularly well and are necessary, if not sufficient, to ensure that such socialization takes place.… Recognizing that truth does not mean that the family can be replaced by the village” (Horwitz, 2015, p. 175).
References

Alcorn, J. (2019). Markets and the Citizens’ Dilemma: The Case of Climate Change. In J. Alcorn (Ed.), Markets and Liberty (pp. 1–27). Shelby Cullom Davis Endowment.
Blumer, H. (1969). Symbolic Interactionism: Perspective and Method. Prentice-Hall.
Butos, W. (2015). Causes and Consequences of the Climate Science Boom. The Independent Review, 20(2), 165–196.
Butos, W. (2019). Reminiscences and Reflections. In J. Alcorn (Ed.), Markets and Liberty (pp. 109–122). Shelby Cullom Davis Endowment.
Butos, W., & McQuade, T. (2015). Government and Science: A Dangerous Liaison. The Independent Review, 11(2), 177–208.
Coleman, J. S. (1990). Rational Organization. Rationality and Society, 2(1), 94–105. https://doi.org/10.1177/1043463190002001005
Collins, R. (1994). Why the Social Sciences Won’t Become High-Consensus, Rapid-Discovery Science. Sociological Forum, 9, 155–177. https://doi.org/10.1007/BF01476360
Dewey, J. (1935). Liberalism and Social Action. Capricorn Books.
Ferguson, A. (1767). An Essay on the History of Civil Society. Public domain text, available from the Online Library of Liberty, Liberty Fund.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books.
Haidt, J. (2018). Why Do They Vote That Way? Penguin Random House.
Hannon, M., & de Ridder, J. (Eds.). (2021). The Routledge Handbook of Political Epistemology. Routledge.
Hayek, F. A. (1973/2012). Law, Legislation and Liberty: Vol. 1. Rules and Order. University of Chicago Press; now Routledge Classics.
Hayek, F. A. (1976). Law, Legislation and Liberty: Vol. 2. The Mirage of Social Justice. University of Chicago Press.
Hayek, F. A. (1978). New Studies in Philosophy, Politics, Economics and the History of Ideas. University of Chicago Press.
Hayek, F. A. (1979). Law, Legislation and Liberty: Vol. 3. The Political Order of a Free People. University of Chicago Press.
Hayek, F. A. (1989). The Fatal Conceit: The Errors of Socialism. University of Chicago Press.
Homans, G. C. (1961). Social Behavior: Its Elementary Forms. Harcourt, Brace.
Horwitz, S. (2015). Hayek’s Modern Family: Classical Liberalism and the Evolution of Social Institutions. Palgrave Macmillan.
Idso, C. D., Carter, R. M., & Singer, S. F. (2016). Why Scientists Disagree About Global Warming: The NIPCC Report on Scientific Consensus. The Heartland Institute.
Judson, H. F. (2004). The Great Betrayal: Fraud in Science. Harcourt.
Kevles, D. J. (1998). The Baltimore Case: A Trial of Politics, Science, and Character. W. W. Norton.
Levendis, J., Eckhardt, R. B., & Block, W. Evolutionary Psychology, Economic Freedom, Trade and Benevolence. Review of Economic Perspectives/Narodohospodarsky, 19(2), 73–94.
Lomborg, B. (1998/2001). The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge University Press. https://doi.org/10.1017/CBO9781139626378
Lomborg, B. (2020). False Alarm: How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet. Basic Books.
Moore, P. (2010). Trees Are the Answer. Beatty Street Publishing.
Moore, P. (2021). Fake Invisible Catastrophes and Threats of Doom. Independently published.
Polanyi, M. (1958). Personal Knowledge. Harper and Row.
Polanyi, M. (1966). The Tacit Dimension. Doubleday (Penguin Random House).
Russell, B. (1934/2009). Freedom and Organization, 1814–1914. George Allen & Unwin; now Routledge Classics.
Schumpeter, J. A. (1954). History of Economic Analysis. Oxford University Press.
Shrier, A. (2020). Irreversible Damage. Regnery Publishing.
Singer, S. F., Legates, D. R., & Lupo, A. R. (2021). Hot Talk, Cold Science. Independent Institute.
Weimer, W. B. (2022a). Retrieving Liberalism from Rationalist Constructivism: Vol. 1. History and Its Betrayal. Palgrave Macmillan.
Weimer, W. B. (2022b). Retrieving Liberalism from Rationalist Constructivism: Vol. 2. Basics of a Liberal Psychological, Social and Moral Order. Palgrave Macmillan.
Part VI Appendix: The Abject Failure of Traditional Philosophy to Understand Epistemology
For at least the last two and a half millennia, Western philosophy has modeled its conception of knowledge and its acquisition upon its first scientific success—the development of mathematics. That development has been conceptualized as subsuming mathematical results under an umbrella of axiomatization. Mathematical consequences (the results) become theorems or “logical” consequences of a set of highest level statements (axioms) that are assumed to be true and indubitable in virtue of the meaning of their terms, and the theorems have that truth status rub off on them (through a transmissibility assumption) because they can be shown to be deducible from, or to logically “follow from,” the axiom set. So, if the theorems are logically entailed consequences of certain truths, they are an ideal of knowledge, and the manner of their achievement of that status must likewise be an ideal of the acquisition of knowledge. This view immediately faces an insurmountable problem: nothing in the empirical realm is a priori or certain in any obvious way, and unlike numbers, the denizens of that realm (animals, trees, metals, clouds, etc.) are not constant. So the problem for epistemology has been to show that such issues can be overcome by some “method” or standardized procedure that will make the results of empirical inquiry have the
same exalted status of “proven truth” that was made available in mathematics by the axiomatic method. That quest for the perfect scientific method—one to show that “merely empirical” claims could be certified as true—has dominated philosophy, despite being repeatedly found to be impossibly difficult to achieve. In recent centuries, skepticism—the view that no genuine empirical knowledge can be achieved—has dominated philosophy (in the guise of instrumentalism or conventionalism or pragmatism). As a result, standards and critical assessment of positions in science and society have been abandoned in favor of anything goes—diversity and equality of outcome in a celebration of no meaningful difference between positions or claims. The chapters in this appendix cover some major issues of that traditional framework in sufficient detail to show that its task can never be accomplished. Epistemology must be studied as an evolutionary domain—one of the life sciences—rather than as a quest to achieve an indubitable and infallible method to put empirical claims on the same footing as mathematics, and the rationality of inquiry can never be fully explicit nor instantly assessed like the truth of a mathematical derivation. Once one moves from the justificationist framework to one focused upon critical assessment of claims, the idea that no standards are defensible disappears, and with it conventionalism. Both science and commonsense standards of conduct can be critically appraised (but never justified!) and improved or rejected. Not all claims to knowledge are equal.
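Stated schematically, the justificationist ideal that these appendix chapters dismantle is a transmissibility picture (the notation below is mine, supplied only as a compact restatement, not the author’s):

$$\frac{A_1,\ldots ,A_n \ \text{are true (indubitable axioms)} \qquad A_1,\ldots ,A_n \vdash T}{T \ \text{is true (theorem)}}$$

For empirical inquiry both ingredients fail: no empirical statements are available to play the role of indubitable axioms, and ampliative inference from finitely many observations is not deducibility, since

$$P(a_1),\, P(a_2),\, \ldots ,\, P(a_k) \nvdash \forall x\, P(x)$$

for any finite number of observed instances. The next chapter takes up that second failure, the problem of induction.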
16 Induction Is an Insuperable Problem for Traditional Philosophy
Once epistemology (and such putative successors as philosophy of language or psycho-linguistics) are put aside, philosophy will no longer be thought of as providing a tribunal of pure reason which judges other areas of culture. The aim of “philosophy without mirrors,” he suggests, will be to continue the conversation which constitutes our culture, rather than to pronounce on its results from an ahistorical point of view.
Alasdair MacIntyre, dust jacket publicity for Richard Rorty, Philosophy and the Mirror of Nature
Traditional philosophy (dealing only with the rate-independent contents of conception), with its view of knowledge as justified true belief, has tried to deal with epistemological issues in only a few ways. When it became obvious that empirical knowledge could not be axiomatic, but was rather postulational, a flight of intellectual fancy rather than deductive certainty, it was at first claimed that this was all right, because knowledge could still be certified by basing it upon one of two familiar sources: rational intuition of the true nature of reality, or sense experience, which was presumed either to constitute knowledge (in phenomenalism) or to
provide a basis (the so-called empirical basis) upon which the postulations could be “proven” or “verified” or in some manner “guaranteed” to be true or genuine knowledge (in the variants of empiricism). It was taken for granted that there must be some rational standard or method for certifying the results of science as genuine knowledge. But no proposed method to do this has ever been anything more than the most obvious sort of handwaving—as C. D. Broad lamented, the (presumed) glory of science was the scandal of philosophy. And that glory of science was its ability to use nondemonstrative (nondeductive) or ampliative inference to acquire new knowledge. Induction (this somehow ampliative inference) was everywhere, and it was not demonstratively certain, so how could it be knowledge? Given that Broad’s angst was representative, two questions become apparent: why was this method presumed to be so glorious, and what alternatives are available if it cannot be “demonstrated” or justified? It was presumed to be “glorious” because no one could conceive of an alternative to the idea that knowledge is equivalent to “justified true belief”; given that apparently indubitable equation, there was no option. All putative knowledge claims had to be certified by some infallible method. If no such method was found, we would be in the position that Bertrand Russell (1944, 1948) articulated: either we can “prove” that inductive method works to provide genuine knowledge, or it is obvious that science is no better at providing truth than, as he put it, moonshine. This conclusion has led those who think it correct into skepticism—the idea that all traditions and ideas are equally (in)defensible, and so we should allow, say, cultural Marxism and the momentary political correctness doctrine of “wokeness” to be on a par with actual science, and allow a student’s feelings to trump facts when they are in conflict. If no infallible method could be found to show that claims are genuine knowledge, then the only answer ever proposed was that science was “merely an instrument,” exactly as Cardinal Bellarmino had argued
against Galileo, and as such its so-called knowledge was merely conventional and thus neither true nor false. But if science is merely conventional, one must ask why bother to do it at all? Those who took this tack argued that surely science must be just a game, like cards or billiards, that may be “useful” as games are in passing the time, and might be fun for scientists to play.1 No doubt the best thing to do is simply to study the “language of science” (the language game of science) as exhibiting the rules of the game, and the only question that might still be interesting concerns why we should play “the language game of science” instead of some other game, such as that of irrational commitment to our favorite notions.
Is There a Foundation to Knowledge?
Since rationalism gave up the ghost with respect to science long ago, when the discovery of “laws” of nature supplanted intuitions, all the effort was devoted to understanding how experience (neutrally construed) could either constitute knowledge in itself or provide some sort of basis upon which it could be built. Despite its persistence in physics (and “social physics” approaches in the social sciences), phenomenalism was found wanting in philosophy about 100 years ago. It was soon obvious that there is no “given” in experience (Mach’s experience, or Russell’s sense data, or Carnap’s erlebs, or any “purified” aspect of sensory experience that was certain or true to begin with)—no possible “given” that could be “taken” as a basis from which to logically achieve knowledge. Thus, any program which would derive knowledge from empirical first principles or axioms or data would fail: without any certainty in what had to be equivalent to so-called empirical “axioms,” one could not derive by deductive logic any valid or certifiable conclusions. That was our first mention of logic, and it was a reference to the standard procedures of deduction that had been available since the time of the ancient Greeks. And that type of logic could not work as a method
in empirical science. But is there some other kind of logic? One that would enable us to start from uncertain and unproven “merely” empirical propositions and get to genuine knowledge? Broad’s scandal of philosophy referred to the fact that everyone assumed that there must be another kind of logic, an inductive logic, that would build our “stairway to heaven” and get certain knowledge out of uncertainty. But it turns out there is not. We are in exactly the same situation as we are with respect to the applicability of mathematics to reality: in so far as mathematics or logic is certain it cannot apply to empirical reality, and in so far as it is used to describe reality, neither mathematics, statistics, nor logic can ever be certain or provide a guaranteed method for the achievement of justified true belief. An inductive “logic,” one that somehow goes from uncertainty to certainty, is a contradiction in terms, a square circle, or perhaps a colorless green idea sleeping furiously on the rug of the den of philosophy, and its pursuit is so absurd that only fools or desperate philosophers would ever have considered it. Of course there is an alternative to this building-block, bottom-up-to-certainty approach. But it requires taking an evolutionary approach to epistemology that has no certainty or guarantees. And traditional epistemology, with its attempt to construct a firm edifice of scientific knowledge from some kind of basic building blocks on up to our most inclusive theories, is fundamentally incompatible with evolution. Nothing is certain in evolution: nothing is justified, nothing lasts forever. Evolution does not build from any firm foundation, but rather throws something together and allows it to be winnowed or criticized by the environment. And what seemed to be a solid or firm edifice (a long-surviving species) may be extinct tomorrow. No species, no individual, can ever be “guaranteed” a ticket to survival by some magic method. Evolution does not proceed by any sort of inductive method, attempting to build toward invincible, completely adapted organisms. All it can do is allow organisms to face the fate of the unknown and unforeseen. And that is exactly what science does in conjecturing new theories. Science, as Popper so tirelessly emphasized, is conjectures held in check by winnowing attempts at refutations.
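The contrast can be made concrete with a toy simulation (my own illustration, with hypothetical names; nothing here is from the text): conjectures are varied blindly, the environment winnows them, and survival never amounts to justification.

```python
import random

def misfit(conjecture, econiche):
    """Selection pressure: how badly a conjecture fits the current environment."""
    return abs(conjecture - econiche)

random.seed(1)
econiche = 42.0                                     # hidden from the "organisms"
population = [random.uniform(0.0, 100.0) for _ in range(20)]

for generation in range(50):
    # Blind variation: mutations arise without foresight of what will succeed.
    variants = [c + random.gauss(0.0, 5.0) for c in population]
    # Winnowed retention: the environment eliminates the worst fits; survivors
    # are not "justified," merely not yet refuted.
    population = sorted(population + variants, key=lambda c: misfit(c, econiche))[:20]
    # The econiche itself drifts, so viability today guarantees nothing tomorrow.
    econiche += random.gauss(0.0, 0.5)

best = min(population, key=lambda c: misfit(c, econiche))
print(f"econiche now at {econiche:.1f}; best surviving conjecture: {best:.1f}")
```

No step in this loop certifies anything: retention is always provisional, which is precisely the point of the analogy between selection and criticism.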
Why has none of this seeped into traditional philosophy? Largely because even the sadder but wiser justificationists, who have recognized that one or more facets of the overall program can never be realized, have all clung to one further fusion of concepts: the conception of “eternally” valid knowledge gradually accumulating (through induction) into an ever-increasing body of scientific knowledge. Once a scientific knowledge claim is certified, by whatever epistemic authority, it remains certified forever. Thus, progress must incorporate more and more certified or true propositions. Justificationist historiography enshrines this “cumulative record” position—and because of that it rewrites history backwards, to guarantee the continuity of scientific progress pointing right up to the present moment.
From Certainty to Near Certainty or Probability
The classic justificationist wanted a method to prove that the propositions of science all have the same probability value: 1, or certainty. For every putative proposition h, given the evidence e relevant to it, the probability P of that proposition must, according to the demands of justificationist rationality, be shown to be P(h, e) = 1. In other words, the probability of hypothesis h being true on the basis of evidence e is 1, or certainty. To show this, philosophers absolutely had to develop a logic of scientific inference and assessment. This justified true belief approach demands that certain things obtain. First, there must be some “empirical basis” of facts that are to be known for certain. This is the foundation of empirical knowledge. Second, the postulated theories of science are regarded as second-class citizens compared to the basis, since they are “derivative” from facts and the accumulated generalizations based upon them (i.e., they are inductions based upon inductions). Third, science must be cumulative and its growth must be gradual: facts must be piled upon facts to construct the edifice of knowledge. Fourth, the meaning of any fact must be fixed independently of theory and must remain invariant. Fifth, explanation
consists in showing that a certified factual proposition follows deductively from the propositions of the theory, so “logic” is what explanation consists of. Sixth, evaluation is monotheoretical: science evaluates one theory at a time. All of these demands have been shown not to be achievable, almost always by the most skilled justificationist philosophers themselves. The result has been that skepticism—the thesis that genuine knowledge is not possible—has now become the default position. By consistent application of their own criteria, arch-justificationists have shown that their trust in empiricism (assumed to be the basis of science) was not justified. Perhaps most crucial was the fall of the notion of an eternal or unchanging factual basis. Early in the twentieth century, Duhem (in 1904) noted that science is fact-correcting rather than fact-preserving in nature. New theories often refute the facts of older theories, and thus there is no eternally valid factual basis. It was once a “fact” (in the Encyclopaedia Britannica) that phlogiston was given off by burning substances. That “fact” is no longer a fact. Neo-justificationists, at least the better ones, fully admitted this, and took it as a badge of honor that they were “sophisticated” and sadder but wiser. For instance, Wilfrid Sellars (1963) noted that empirical knowledge, like its “sophisticated” extension, science, “is rational not because it has a foundation but because it is a self-correcting enterprise which can put any claim in jeopardy, though not all at once” (p. 170). That point is an integral part of non-justificational positions. Nonetheless, Sellars, like the majority of traditional philosophers, still assumed that inductive logic could be salvaged, and regarded its utilization as essential to science. By that point neo-justificationism had attempted to work around this problem by watering down the requirement of proof to one of probability. This is the framework in which probabilistic inductive logic has been interpreted and developed, chiefly at Cambridge in the nineteenth and early twentieth centuries. The goal, then as now, is a “confirmation theory” that will assign a probability ranging from 0 to 1 in the new formula P(h, e) = C (where C is equal to degree of confirmation). A probability number (or weight) is now a degree of confirmation. So now the body
of accepted scientific propositions is no longer composed of indubitable factual propositions, but rather contains highly probable (exactly how highly probable is never very clear) propositions. Impressed by the role of probability in quantum mechanics, Hans Reichenbach (1938) made this requirement of weight (stated as a probability) the sole predicate of scientific propositions in an epistemology book: The predicate of truth-value of a proposition therefore is a mere fictive quality; its place is in an ideal world of science only, whereas actual science cannot make use of it. Actual science instead employs throughout the predicate of weight. We regard a high weight as equivalent to truth, and a low weight as equivalent to falsehood;… The conception of science as a system of true propositions is therefore nothing but a schematization. (p. 188)
Thus we see the central fusion of concepts introduced by neo-justificationists: the definitional fusion of induction with probability, now requiring only probable truth rather than certainty. The aim now is at the “next best” thing: the near certain instead of the certain. And the near certain has simply been defined in terms of the calculus of probability. Probabilistic inductive logic is the result. It is alleged to be an algorithmic assessment procedure (i.e., still absolutely certain qua procedure!), but now for probable knowledge claims instead of for certain knowledge claims. Now the authority of knowledge must be relocated within the “certain” probabilistic inductive procedure. Otherwise there is no authority, no “proof,” in science, and it is left exactly where Hume (1888) left it nearly three centuries ago, as “mere animal belief.”
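To spell the retreat out in the Bayesian notation standardly used for such confirmation theories (my gloss; the formalism is not developed in the text, where C is usually identified with the conditional probability of h on e):

```latex
\underbrace{P(h, e) = 1}_{\text{classic justificationism: certainty}}
\qquad\longrightarrow\qquad
\underbrace{P(h, e) = C = P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e)}}_{\text{neo-justificationism: degree of confirmation}},
\qquad 0 \le C \le 1
```

Note that the computation itself is still treated as algorithmic and certain; only its output has been demoted from truth to weight, which is exactly the relocation of epistemic authority just described.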
The Retreat to Conventionalism in Sophisticated Neo-Justificationism
What has happened is entirely predictable: since it is no more possible to prove that science is probable than it is to prove that it is proven, there has been a retreat to a sophisticated conventionalism of one form or another whenever someone has challenged one or another account
of how “inductive logic” could function. Often this has meant turning the problem into one of redirecting or redefining the aim of philosophy or of science (as in the “watered-down” quest of Rorty in the epigraph). Anything that diverts attention into treating some small and supposedly manageable issue, or redefining the problem out of existence, is a welcome change from staring into the abyss of embracing an irrational or “animal faith” picture of science and its knowledge. A well-known example is found in the work of A. J. Ayer, who in the mid-1930s had been responsible for introducing logical positivism to many English-speaking philosophers in Language, Truth and Logic (1936). Twenty years later, he had been forced to abandon the brash irresponsibility of early positivism in favor of watered-down logical empiricism. In The Problem of Knowledge Ayer had to make the skeptic’s acknowledged victory as painless and bloodless as possible: “We should not be bullied by the skeptic into renouncing an expression for which we have a legitimate use. Not that the skeptic’s argument is fallacious; as usual his logic is impeccable. But his victory is empty. He robs us of certainty only by so defining it as to make it certain that it cannot be obtained” (Ayer, 1956, p. 68). Ayer felt this does not mean that “scientific method” is irrational. He claimed that it could be irrational “only if there were a standard of rationality which it failed to meet; whereas in fact it goes to set the standard: arguments are judged to be rational or irrational by reference to it….the skeptic makes his point. There is no flaw in his logic: his demand for justification is such that it is necessarily true that it cannot be met. But here again it is a bloodless victory” (ibid., p. 75). One should note the obvious sleight of hand of declaring science to be its own standard, thus no longer in need of an independent epistemic authority. Clearly, this is a retreat from a defense of empirical science to a faith in it. This is actually a blatant abandonment of neo-justificationist rationality rather than a defense of it. Earlier, Russell (1948) had been equally sad and seemingly wise as a result: “Empiricism, as a theory of knowledge has proved inadequate, though less so than any other previous theory of knowledge. Indeed, such inadequacies as we have seemed to find in empiricism have been
discovered by strict adherence to a doctrine by which empiricist philosophy has been inspired: that all human knowledge is uncertain, inexact, and partial” (p. 507). The problem that these sadder but wiser would-be empiricists have encountered is that for them genuine theoretical knowledge is not possible to achieve or to certify. As Agassi (1966) put their dilemma: “If we do not go beyond sense experience we have no theoretical knowledge of the world, while if we do go beyond it the margin is not contained in sense experience, and is, thus, a priori….This is the logic which led thinkers to abandon empiricism in favor of either a priorism or conventionalism” (p. 7). Conventionalism (since a priorism is on a par with rational intuition) simply says that our theories and their concepts are flights of intellectual fancy, mere instruments of calculation or codification: In admitting that theories go beyond experience, conventionalism empties theories of all factual or empirical content. It denies that theories are empirical or factual or informative. It claims that a theory is not informative knowledge but our way of looking at particular facts, our way of classifying particular observed facts.… Nothing in reality strictly corresponds to abstract or imagined theoretical concepts like “space curvature” or “atom.” These words are no more than shorthand symbols with no independent meaning (their meanings are given by implicit definitions), and statements containing them impart no more information than the information procured by sensation alone. (Agassi, 1966, pp. 4–5)
Conventionalism replaces justified true belief, and, hence, genuine knowledge, with an “instrument.” Conventionalism can thus become the most sophisticated form of skepticism. It can say that theories are wonderful constructions and that we don’t have to try to justify them all at the same time. This provides the beleaguered “wannabe” neo-justificationist with a new strategy: one can claim to be working to improve the methods and foundations of “inductive logic” until, when criticized, one can retreat to commitment in conventionalism and its instrumentalist conception of science and its knowledge. This gives rise to a new strategy: a return to a watered-down pragmatism, the doctrine
which says “if it works, steal it,” and use it until you find something better. Just don’t expect truth. In the influential approach of Richard Rorty, just have a never-ending dialog, talk the problem to death, and discard the “outmoded” quest for truth that would make philosophy a “mirror of nature.”
Hermeneutics and the New Pragmatism
The “new” pragmatism combines two defensive strategies at once. First, it admits, even wholeheartedly embraces, the correctness of conventionalism (instrumentalism). Second, it adds the twist, first employed by the later Wittgenstein (1953), that what philosophy is really about is “to show the fly the way out of the fly bottle.” The metaphor is extremely apt: a fly, stuck inside a bottle, buzzes helplessly against the sides, even though the top is unstoppered. The task of the philosopher is no longer to seek truth about reality, or to explicate how inference can lead to knowledge, or anything of the sort. The task of philosophy is now simply to show the fly how to get out of the bottle. And how would one do this? By telling the fly to go out the open neck at the top of the bottle. There is clearly no “truth” in that. But since it was assumed that philosophers have slightly more brainpower than flies, it was necessary to translate this approach to the philosophical predicament. What was proposed was an analysis of the language of science to show the philosopher (who was obviously talking too much and saying too little) how to get out of his or her trap in the flybottle of the justificationist metatheory. For the early ordinary language analysts (Wittgenstein, Austin, Ryle, and their cohorts) this meant showing that the problems could be made to disappear (Ayer’s skepticism made “bloodless”) by the exhaustive analysis of our ordinary language. Several decades later, this approach had morphed to include a more hermeneutic analysis of the existential predicament facing the philosopher. A high point of that approach (in the United States) was found in the work of Rorty (1979), in Philosophy and the Mirror
of Nature. Based upon “The hermeneutic point of view, from which the acquisition of truth dwindles in importance,…” (p. 365), Rorty advocated unadulterated conventionalism just to pass the time: The point of edifying philosophy is to keep the conversation going rather than to find objective truth. Such truth, in the view I am advocating, is the normal result of normal discourse. Edifying philosophy is not only abnormal but reactive, having sense only as a protest against attempts to close off conversation by proposals for universal commensuration through the hypostatization of some privileged set of descriptions. (p. 377)
Shorn of silly jargon we see Wittgenstein’s task of showing the fly out of the fly bottle is to be never-ending. In this framework, one can avoid the problems of the nature of knowledge and its acquisition forever, since it will in fact never be necessary to get back to reality, to the issues from which the whole approach began. Just keep talking. About anything. One can see how easy it is to get lost in this sort of nonsense: To see keeping a conversation going as a sufficient aim of philosophy, to see wisdom as consisting in the ability to sustain a conversation, is to see human beings as generators of new descriptions rather than beings one hopes to be able to describe accurately. To see the aim of philosophy as truth—namely, the truth about the terms which provide ultimate commensuration for all human inquiries and activities—is to see human beings as objects rather than subjects, as existing en-soi rather than as both pour-soi and en-soi, as both described objects and describing subjects. To think that philosophy will permit us to see the describing subject as itself one sort of described object is to think that all possible descriptions can be rendered commensurable with the aid of a single descriptive vocabulary—that of philosophy itself. For only if we had such a notion of a universal description could we identify human-beings-under-a-given-description with man’s “essence”. (ibid., p. 378)
This appears to be a “word substitution” statement of the logical empiricist thesis of the unity of science as the unity of the language to be used for all science, and then a rejection of that thesis. But there are many
clearer ways to show the inadequacy of the thesis of unity of language and reductionism (e.g., Radnitzky & Bartley, 1988) that do not abandon the quest for an adequate theory of reality. It is not surprising, given the popularity of Rorty’s book and of others presenting a similar picture of our alleged epistemic predicament, that what the earlier logical positivists had called “Continental muddle-headedness” has returned to its present popularity. Since the crisis within justificationist rationality, which seemingly legitimizes any position whatever because no standards or criteria of competence or correctness are allowed to be accepted, nonscientific philosophy has increasingly gone back to just a lot of talk, with no expectation of ever resolving any issue, since clearing up an actual issue would contradict the new goal of ceaseless explication. The failure of justificationist inference in philosophy and rationality has affected philosophy of science in a slightly different manner. Here the breakdown in the attempt to build and justify an adequate inductive logic has coincided with acceptance of the doctrine of phenomenalism stemming from Mach and Bohr and the spectacular rise of quantum mechanics up to the present. Thomas Hickey (2005) provides a clear example of the new “pragmatism” as an approach to the philosophy of science. Having rewritten the history of science in Twentieth-Century Philosophy of Science as the pragmatic search for what works rather than the quest for a true description of reality, there was no need to discuss the traditional problems in the history of science from the standpoint of epistemology at all. Issues such as the nature of inference, what could constitute an inductive logic, and whether or not science is a representation of reality are simply abandoned in favor of a description of science as what has happened and “what has worked.” So in treating Heisenberg’s role in twentieth-century science, one simply notes that he moved from a phenomenalistic idealism toward a more orthodox realism at the end of his career. But one should do no more than note this: why Heisenberg changed his mind, or whether either position is actually correct, is beyond the pale of the new philosophy of science. Science isn’t a quest for truth, it’s just a history of what has worked so far. Don’t worry about knowledge, just be happy that some things seem to work. We can continue with the construction of a “cumulative record” simply by
“rationally reconstructing” history to be “rational” after the fact, without the intrusion of anything as messy and problematic as the quest for a theory of actual reality.
Realism Is Explanatory, Instrumentalism Is Exculpatory
One simple argument shows that pragmatism, instrumentalism, or any other attempt such as just “saving the appearances” (as Bellarmino would have restricted science to doing, against Galileo) won’t work in historical reconstruction. Realism has an explanatory power these other positions cannot ever duplicate. None of them is able to explain the partial success of some theories we have developed or the failure of others. Failures are informative, and cannot simply be written out of history as pragmatism does. Why do some theories fail and others succeed in accounting for aspects of reality? Why are some theories better at being “instruments of prediction” or “saving the appearances” than others? The only answer that works appeals to the superior explanatory power of realism and the correspondence conception of truth (and the causal theory of perception) upon which it relies. Some theories work because they correspond more completely to the truth than others. Only realism can make sense of this. It has to be ignored by pragmatism and instrumentalism. Evolution relies on realism in similar fashion, in the fit of organism to environment, which varies from individual to individual and species to species. Pragmatism can never explain how or why this process occurs, or why there is variation in relative success. So if there is no inductive logic in science, what “logic” could there be? Only plain old deductive inference. There is no such thing as a logical “confirmation theory.” There is only an evolutionary framework of fallible conjectures held in check by the refuting attempts of sincere criticism. There is a logic (more properly, a logical form or mood or schema) in inference, but it is never definitive. Consider it now, in the next chapter.
Notes
1. The retreat from realism to conventionalism opened the door to a combination of factors in the field of the sociology of science. If methodology could not show that science was a search for truth—that evaluating the history of scientific theories did not disclose ones that were “closer to the truth” than others—one should have a “methodology” that evaluated the history of science without regarding theories as true or false. Such a research strategy arose at Edinburgh in the 1960s under the leadership of David Bloor and David Edge, editors of the journal Science Studies. Bloor (1974) proposed what came to be known as the “strong program” for the sociology of science, which was to study the history of science like all other historical endeavors, as the outcome of social, political, psychological, economic, and any other “noncognitive” factors, especially the ideology of the intelligentsia who did science during a particular period. Thus this program accepts the “sophisticated skeptical neo-justificationist” position that there is no genuine knowledge that can be “proven” to be true or real, and that science and political history are thus on a par. This was Bloor’s “symmetry thesis”: that in explaining successful scientific theorizing one cannot show it to be more “rational” than unsuccessful scientific theorizing. Given the prevalence of socialist (specifically Marxist) ideology in academic Europe, this position immediately provided a cultural Marxist interpretation of science, and, because of the popularity of Kuhn’s then recently published The Structure of Scientific Revolutions, which supported some of the sociological and psychological factors as central to scientific practice, Kuhn’s position was branded as Marxist and irrationalist. This is the framework in which refutations of the justificationist approach to science by Kuhn and Feyerabend branded them as darlings of the “New Left.” While that was true of Feyerabend, Kuhn remained studiously apolitical and did not endorse ideological interpretations of science. The motivation for the “strong program” disappears when one realizes that the justificationist framework from which it stems is untenable. The task of these appendix chapters is to show why that is so.
References
Agassi, J. (1966). Sensationalism. Mind, N.S., 75, 1–24.
Ayer, A. J. (1936). Language, Truth and Logic (2nd ed., 1946). Dover Publications.
Ayer, A. J. (1956). The Problem of Knowledge. Penguin Books.
Hickey, T. J. (2005). History of Twentieth-Century Philosophy of Science (rev. eds., 2012, 2019). eLib.at. https://elib.at/
Hume, D. (1888). Hume’s Treatise of Human Nature (L. A. Selby-Bigge, Ed.). Oxford: Clarendon Press.
Radnitzky, G., & Bartley, W. W. (Eds.). (1988). Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Open Court.
Reichenbach, H. (1938). Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge. University of Chicago Press.
Rorty, R. (1979). Philosophy and the Mirror of Nature. Princeton University Press.
Russell, B. (1944). Reply to Criticisms. In P. A. Schilpp (Ed.), The Philosophy of Bertrand Russell (pp. 681–741). Northwestern University Press.
Russell, B. (1948). Human Knowledge: Its Scope and Limits. Simon and Schuster.
Sellars, W. (1963). Science, Perception and Reality. Routledge & Kegan Paul.
Wittgenstein, L. (1953). Philosophical Investigations. Basil Blackwell.
17 Rhetoric and Logic in Inference and Expectation
The inducement that rhetoric is concerned with is present in all messages and all meaning. Given this, the question of rhetoric’s relationship to knowing becomes a specious question, for the rhetorical function of inducing is a part of all knowing.
R. B. Gregg

What one must understand… is the manner in which a particular set of shared values interacts with the particular experiences shared by a community of specialists to ensure that most members of the group will ultimately find one set of arguments rather than another decisive.
T. S. Kuhn
There is no logic of discovery. Whether a conclusion expresses a “discovered” proposition or not inevitably depends upon extra-logical factors, never on logical relations within or between the propositions. Logical rules (permitted valid inferences to conclusions) alone can never determine if a proposition expresses novel content relative to the conceptual scheme of the individual (or even community) using that proposition. Syntax alone cannot determine semantic and pragmatic content.
Even if, per impossibile, such a logic existed, there could still never be any firm or certified foundation for a knowledge claim, because of factual relativity: if the facts change in revolutionary reconceptualizations there is no empirical given that can be taken as any basis for knowledge. Explanation cannot be deduction because it would commit the “fallacy of four terms” (Quaternio Terminorum) every time it “explained” something new (which would involve the entrance of that fourth term). And since the disconnection of theory and raw experience is, as Körner (1966) emphasized, total and complete, there can be no explanation-as-deduction either: an empiricist who holds that view has to give up any notion of empiricism and embrace rationalism instead. The paragraph above destroys the traditional account of “logic” in inference and explanation. Where do we go from that point? First would be a discussion of the amount of pragmatic and semantic interpretation that is presupposed and never discussed in the use of logistic systems. In an article as fascinating as it is ignored, Phillips (1972) explored the degrees of interpretation that must be presupposed in order to use logic in the practice of reasoning. The upshot is that formalization requires us to pay a very steep price: removal of an enormous amount of information from the object language of any formal system. There is only one place that information can go if logic is to be used by pragmatic beings—into the pragmatic, semantic, and syntactic metalanguages that surround the system. There is a trade-off: a simple and elegant formalism can be bought only at the expense of a (largely tacit) richly detailed metalanguage that specifies what it means (provides the interpretation) and how to use it. The pragmatic and semantic aspects of “the language of science” that determine when (and if) we attempt to use formal logical systems are more rhetorical and suasory, involving higher degrees of interpretation (as Phillips would say), than the logic itself. That is why even “logic” can be revised and reinterpreted, as has been noted by, e.g., Duhem (1914/1954), Quine (1953, 1960), and Bartley (1984).
The Functions of Language
About a century ago Karl Bühler (1918, 1934) analyzed the hierarchical functions of language. First is an expressive or symptomatic function, serving to express a speaker’s state (thoughts, emotions, etc.). Second is a stimulative or signal function, in which language triggers or releases effects and meanings within a hearer (as well as a speaker). Bühler noted a third function, present only when the lower two are already there: the descriptive function, in which language describes some actual or potential state of affairs. Karl Popper, one of Bühler’s students, extended this analysis to a fourth, which presupposes the descriptive function: the argumentative use of language. An argument, for example, serves as an expression insofar as it is an outward symptom of some internal state (whether physical or psychological is here irrelevant) of the organism. It is also a signal, since it may provoke a reply, or agreement. Insofar as it is about something, and supports a view of some situation or state of affairs, it is descriptive. And lastly, there is its argumentative function, its giving reasons for holding this view, e.g., by pointing out difficulties or even inconsistencies in an alternative view (Popper, 1963, p. 295).
I later generalized this formulation from language to all action or behavior in a functional or intentional (teleological) mode. For example, a famous philosopher once ended a “discussion” with me by rapping me on the knee with his cane, an effective use of the argumentative mode of behavior in the absence of overt linguistic accompaniment. Language is just one form of behavior, all of which can be argumentative.
Criticism Is Argument, Not Deduction
If one abandons justificationism all criticism is argumentative, but not all arguments are deductive. Criticism does not prove a conjecture right or wrong, but attempts to further articulate it and bring it into conflict with other conjectures to “test” it by seeing which is more survival worthy.
Criticism inhabits the argumentative mode. Criticism is essentially negative: one may oppose a position without simultaneously endorsing any given rival. The effect of criticism does not presuppose a unitary framework held by all—only the ability to acknowledge disagreement (incompatibility). Criticism does not even presuppose consistency—that is a requirement of justificationism. Most importantly, criticism has nothing to do with deducibility. Justificational views presuppose what Bartley called the “transmissibility assumption”—that all properties or measures of intellectual worth or merit, such as truth, are passed through from premises to conclusions via “logical” transmissibility (meaning logical derivability). But the only criterion that is essential in empirical domains—testability—is not logically “transmissible.” Testability is an argumentative, rhetorical, and critical concept, not a (deductive) logical one.1 What counts as criticism cannot be specified in advance. A number of factors are obvious in the history of science: experience (the facts or data relevant to a theory); theory (an alternative theory to the one in question); problems (the view in question should relate to recognized problems, or create new ones that the research community recognizes); and logic (logic is a component of criticism, just not the only one). Feel free to look for other types of criticism relevant to the acquisition of knowledge.
Theories Are Arguments, and Have Modal Force
Theories used by subjects of conceptual activity for a purpose always have modal force: the theory says that reality must of necessity be the way their patterns or models represent it to be. Theoretical explanations defend their conclusions, they do not deduce their “truth.” What is referred to as a scientist’s “commitment to a theory” is a reflection of that modal force. In embracing a theory he or she is saying “This is the way reality has to be.” That sort of adherence is more than logic, in the form of traditional hypothetical (if–then) reasoning, can provide. Empiricists tend to propose that this modal force is due to nomological necessity, the idea that the laws of nature have modality built in to
them, and after noting that this is incompatible with empiricism, go on to propose it as a “refinement” of the nomological empiricist account of explanation (see, e.g., Rescher, 1970). But why should “laws” have any modal force? The answer is obvious—they have only derivative necessity, from the theories as arguments from which they were proposed as “laws” in the first place. It is the argumentative use of behavior which is intrinsically modal in force, not the postulated “laws of nature.” Laws are just statements of regularity. As such, laws are not arguments. All arguments have modal force. The pragmatic context of human inquiry and conceptualization is intrinsically a modal context, guaranteeing that theories are arguments and that the “logic” of science is a modal logic. To account for that logic we have to move beyond the logician’s standby, the concept of material implication (the if–then connector). Science requires a different mood or inference schema that captures that modal force.2
Adjunctive Reasoning in Inference
The traditional hypothetical, the if–then connector or propositional operator, is not the only one available. The ancient Stoic logicians had a promising alternative with that modal force, but it was abandoned (or submerged) in favor of the Aristotelian tradition’s class logic and the if–then conditional. The Stoics, in contrast, were concerned with a propositional calculus (rather than with a logic of classes, as was Aristotle) that represented the argumentative form of explanatory reasoning. The so-called laws of thought were in various moods (modalities) and had different schemata to represent the moods. So Stoic logic differed from the Aristotelian approach in two chief respects: first, it was a propositional calculus instead of a logic of classes; second, Stoic logic is a theory of inference schemas, whereas Aristotelian logic is a theory of logically true or valid matrices. For the Stoics, an argument was a system of propositions composed of premises and a conclusion. A valid argument was one the negation of whose conclusion is incompatible with the conjunction of its premises. There is a conditional proposition corresponding to every argument, but conditionals are not arguments. Arguments are
not put together by connectives, while propositions are compounded by conditionals. A valid argument form may be either true or false: a true argument is defined as a valid one with true premises; a false argument either has invalid form or is a valid argument with false premises. Stoic logic becomes relevant for inference when we examine the types of propositional connective the Stoics allowed (recall that for every valid argument or inference form there is a corresponding conditional). Like their contemporaries, the Stoics acknowledged implication (such as Philonean implication—identical to present material implication), disjunction (both exclusive and inclusive), and conjunction. But importantly, they also had an inferential, or adjunctive, proposition. The adjunctive propositional form is represented by “since-necessarily”: “Since the first, the second necessarily,” and it is valid only when the antecedent and consequent are both true. This has the strength to capture scientific argumentative claiming: Since my theory is true, the world is necessarily the way it is.
The adjunctive connector has the same truth table as conjunction. An adjunctive proposition is false when it either begins with a false premise or ends with a consequence which is not true. But conjunction is a commutative relation: in the terminology of Table 17.1, a · b is the same relation as b · a. Adjunction is noncommutative: a ◇ b is not an implication of b ◇ a. The truth tables for adjunction and the more traditional conditionals are found in Table 17.1.

Table 17.1 Truth tables for common propositional forms compared to the adjunctive conditional form

a   b   a ∨ b   a · b   a ⊃ b   a ≡ b   a ◇ b
T   T     T       T       T       T       T
T   F     T       F       F       F       F
F   T     T       F       T       F       F
F   F     F       F       T       T       F

(∨ disjunction; · conjunction; ⊃ material implication; ≡ equivalence; ◇ adjunction)
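As a quick check, the table can be recomputed mechanically. The sketch below is my own illustration (no code appears in the text, and the diamond connective is modeled truth-functionally only): it shows that adjunction shares conjunction’s truth table, while what distinguishes adjunction (its noncommutativity and modal force) lies outside anything a truth table can express.

```python
from itertools import product

# Truth-functional recomputation of Table 17.1 (illustrative sketch only).
connectives = {
    "disjunction":          lambda a, b: a or b,
    "conjunction":          lambda a, b: a and b,
    "material implication": lambda a, b: (not a) or b,
    "equivalence":          lambda a, b: a == b,
    # Adjunction shares conjunction's table; its noncommutativity and
    # modal ("since..., necessarily...") force are not truth-functional.
    "adjunction":           lambda a, b: a and b,
}

print("a b | " + " | ".join(connectives))
for a, b in product([True, False], repeat=2):
    row = " | ".join("T" if f(a, b) else "F" for f in connectives.values())
    print(f"{'T' if a else 'F'} {'T' if b else 'F'} | {row}")
```

Running it reproduces the table above; in particular, the material implication column comes out true whenever the antecedent is false, which is exactly the inadequacy discussed next.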
The adjunctive conditional is a good model for science for two reasons: first, the well-known inadequacies of the hypothetical (material) conditional; second, adjunction captures the modal force missing from other inference schemas, which is so obviously present in scientific reasoning. Consider these points in turn. The hypothetical is a poor model because it is true under conditions we know do not hold in science: for instance, it is true when both the antecedent and consequent are false, and also when just the antecedent is false. Second, there is absolutely no necessity in the relationship of antecedent to consequent. But what we say in science is: if our theories are true, then the laws of nature have to be what we say they are. These are the reasons most writers give for saying that science is not logical—that its inferences do not fit the amodal hypothetical conditional schema. In favor of the adjunctive conditional is precisely that necessity between antecedent and consequent, and the fact that mere observation of the consequent does not confirm an adjunctive proposition: affirming the consequent is still a fallacy (due to the noncommutativity of the proposition). On the other hand, observation of a single refuting instance does refute an adjunctive proposition: falsifiability is retained (in principle). Science can be “logical,” and the falsificationist approach can be retained, but only with the adjunctive conditional. We should note a final strong point in favor of adjunctive reasoning: only theories (conjectures) can be the antecedent of an adjunctive proposition. Theories can entail that a consequent obtains, but neither laws (whose derivative modal force stems from the theories in which they are embedded) nor factual propositions are “strong” enough to exhibit conceptual necessity. It is conceptual necessity, possessed by a theory, which makes an explanation of a conjunction of events an explanation, instead of nothing more than a statement of their conjunction. Theories are modal, facts and even laws are not.
Science Is a Rhetorical Transaction
Rhetoric has been traditionally identified with persuasion, flattery, and ornamentation: the “window dressing” necessary to make an audience
accept a tenuous position in the absence of “rational” (i.e., logical) reasons. Thus “mere rhetoric” is what a politician uses to camouflage the promises that will never be delivered. Rhetoric was not allowed to be an epistemic process, whereas logic and “dialectic” (opposing a thesis with its antithesis in order to get to the truth) were considered epistemic activities. Abandoning the justificationist approach allows a less biased appraisal of rhetoric. Within the pragmatic domain, logic and dialectic become rhetorical techniques that apply to limited domains of the full range of argumentative claiming. Traditional logic captures the inferential or conclusion-reaching aspects of the formal sciences, but it is not of much value in the empirical realms. Similarly, dialectic is an apprehension of the nature of opponent process regulation in complex phenomena such as scientific inquiry, but there is no “algorithmic” interpretation comparable to “synthesis” of opposites (as Hegel proposed). Rhetoric is broader and more inclusive of diverse aspects of inquiry once one abandons justificationism. As Richard Gregg (1984) put it: “‘Reality’ is symbolic reality. That is all we have. Our comprehension of how the world works and how we should function in relation to it is dependent on how our brains are organized to process the contents of our experiencing” (p. 133). This ties together structural realism with the functional domain as inherently symbolic (rather than physical), having modal force, and rhetorical. The scope of rhetoric is the entire realm of argumentative claiming, or all behavior in the argumentative mode. For our purposes, the classic work of Kenneth Burke (1945, 1966, 1967) emphasizes the crucial aspect of rhetoric: symbolic inducement. Gregg put it this way: “Inherent in all symbolic activity is the function of inducement; it is the invitation at all levels of symbolic activity to participate in the symbolizing, to join and in some way become consubstantial with particular modes of symbolizing” (ibid., p. 143). Symbolizing in any form, like all knowing, is inducing—both an invitation and an argument to the audience (whether one’s self or others) to participate or join in making the claim in question (a semantic or semiotic closure, in Pattee’s terminology). Whenever we entertain a claim we are being rhetorical, using the power of symbolic inducement. Logic and dialectic are restricted facets of rhetoric, because they employ limited characteristic patterns of symbol
inducement techniques. The concept of persuasion, usually regarded as definitive of rhetorical technique, is merely a limited reflection of the full range of symbolic inducements that are possible. Rhetoric is epistemic—generative of knowledge—in ordinary discourse and in science (Scott, 1967; Simons, 1980; Weimer, 1977b). All human knowledge arises as a function of argumentative claiming and is thus properly rhetorical. All learning, and thus the acquisition of knowledge, is rhetorical (even when tacit). Learning is always doing, an active process of conjecturing (again, even when done tacitly) that inevitably involves argumentative claiming and symbolic inducement. From that perspective, science must be a rhetorical transaction rather than a “logical” or “dialectical” one. Both scientific tuition and communication are argumentative, a matter of following the injunctions for perceiving and conceiving laid down according to the sociological “paradigm” in normal science (see Kuhn, 1977; Spencer Brown, 1969), and of arguing for or against that paradigm with all the available means of persuasion in periods of revolution. Similarly, the process of criticism as a non-justificational attempt to weed out the errors among our conjectures is rhetorical (and modal) in all its phases. We cannot be critical without arguing and symbolically inducing. And it is obvious that the individual in the audience is as active as the speaker in the scientific research community in the shaping of knowledge claims and their content—as was long emphasized by Polanyi (1951, 1958) and Campbell (1975). In so far as the construction of knowledge is inevitably a social process involving a complex interplay between researchers and the spontaneous social orders in which they are embedded, it is another manifestation of the essential tension between the argumentative and suasory power of the individual (as the revolutionary) and a tradition of reluctance found in the community as audience. The context of constraint that constitutes the rule structures of the scientific endeavor has evolved as a result of rhetorical interchange. The essential tension of science that Kuhn (especially 1977) emphasized is a rhetorical phenomenon. Consider a related point: Burke argued in many locations that wherever there is meaning there is persuasion and thus there must be rhetoric.
Meaning can exist only in the pragmatic framework of a subject’s intention and action, and it can be conveyed only by employing the full structure of language (or thought) through its argumentative function. Meaning, as our conceptual creation, very literally is “what we make it,” and in both comprehension and communication (to ourselves or to others) our “making it” is a matter of rhetorical technique and argumentation. Even our most basic activity, such as naming an object, or identifying an instance of a thing-kind classification, whether done tacitly or explicitly, is a matter of rhetorical persuasion, of arguing for the appropriateness of that name or classification to ourselves and to others. The manifestation of meaning, even in its most primitive and tacit forms, requires pragmatic context that is intrinsically persuasive and rhetorical. Wherever there is meaning, there is the argumentative function of conceptualization, and thus both rhetoric and cognition.
Notes
1. The transmissibility assumption is inextricably linked to the justificationist metatheory. It is held tenaciously, because the earliest attempts to provide criteria of evaluation were truth conditions. And truth is transmitted from premises to a conclusion (falsity is the reverse—retransmitted from the conclusion back upon the premises). Historically, it turned out that separating good from bad theories or ideas coincided with truth and falsity. But deciding truth was often impossible or completely impractical. So a search for a truth substitute that, although weaker, would be attainable, was undertaken. This is how the probability calculus came to substitute for truth. And as a matter of purely historical accident, like our early success with geometry and mathematics, it turned out that truth and probability were transmitted like a logical conclusion. Thus, it was simply assumed that any property of intellectual merit would satisfy the transmissibility assumption. At that point transmissibility became taken for granted, and some measures, like probability, are retained just because they are transmissible, and not vice versa. This was the birth of neo-justificationism as the quest for probable knowledge to substitute for proven knowledge. Logical transmissibility is simply expected of any “good” property of evaluation.
Logical transmissibility is still expected of other evaluatory properties and tokens without regard to their real logical capabilities; and it is also demanded that evaluations be made in terms of probability without regard to its evaluative capabilities. Hence, the heroic yet futile attempts to retain probability as a positive evaluational property. (Bartley, 1984, p. 263)
2. Making an argument presupposes some amount of logic as unrevisable during the making of that argument. This does not lead to a retreat to commitment through this back door (of whatever logic is held in this case). Should an irrationalist make such an argument, he or she would equally have to presuppose a minimum logic in order to make it. When one makes any kind of tu quoque argument in a retreat to commitment in ultimate standards (the topic of rationality, examined in the next chapter) in defense of skepticism or irrationalism, one is doing so “logically,” and one must distinguish between the use of logic and a commitment to it. Logics are tools which one is free to use without committing one’s self to that logic.
References
Bartley, W. W., III. (1984). The Retreat to Commitment. Open Court.
Bühler, K. (1918). Kritische Musterung der neuen Theorien des Satzes. Indogermanisches Jahrbuch, 6, 1–20.
Bühler, K. (1934). Theory of Language: The Representational Function of Language. John Benjamins Publishing Company.
Burke, K. (1945). A Grammar of Motives. University of California Press.
Burke, K. (1966). Language as Symbolic Action. University of California Press.
Burke, K. (1967). On Human Nature: A Gathering While Everything Flows. University of California Press.
Campbell, D. T. (1975). On the Conflicts Between Biological and Social Evolution and Between Psychology and Moral Tradition. American Psychologist, 30(12), 1103–1126. https://doi.org/10.1037/0003-066X.30.12.1103
Duhem, P. (1914/1954). La Théorie Physique: Son Objet, sa Structure. Translated as The Aim and Structure of Physical Theory. Princeton University Press.
Gregg, R. B. (1984). Symbolic Inducement and Knowing: A Study in the Foundations of Rhetoric. University of South Carolina Press.
Körner, S. (1966). Experience and Theory. Humanities Press.
Kuhn, T. S. (1977). The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press.
Phillips, J. N. (1972). Degrees of Interpretation. Philosophy of Science, 39(3), 315–321. https://doi.org/10.1086/288453
Polanyi, M. (1951). The Logic of Liberty. The University of Chicago Press.
Polanyi, M. (1958). Personal Knowledge. Harper & Row.
Popper, K. R. (1963/2014). Conjectures and Refutations. Harper & Row; now Routledge Classics.
Quine, W. V. O. (1953). From a Logical Point of View. Harvard University Press.
Quine, W. V. O. (1960). Word and Object. MIT Press.
Rescher, N. (1970). Scientific Explanation. Free Press.
Scott, R. (1967). On Viewing Rhetoric as Epistemic. Central States Speech Journal, 18(1), 9–17. https://doi.org/10.1080/10510976709362856
Simons, H. W. (1980). Are Scientists Rhetors in Disguise? An Analysis of Discursive Processes Within Scientific Communities. In E. F. White (Ed.), Rhetoric in Transition: Studies in the Nature and Uses of Rhetoric. The Pennsylvania State University Press.
Spencer Brown, G. (1969). Laws of Form. George Allen and Unwin Ltd.
Weimer, W. B. (1977a). A Conceptual Framework for Cognitive Psychology: Motor Theories of the Mind. In R. Shaw & J. D. Bransford (Eds.), Perceiving, Acting, and Knowing. Erlbaum Associates.
Weimer, W. B. (1977b). Science as a Rhetorical Transaction. Philosophy and Rhetoric, 10 (Winter), 1–29.
18 Rationality in an Evolutionary Epistemology
It is untrue that the choice between competing moral, religious, political, or philosophical creeds—such as Christianity, communism, and empiricism—must be fundamentally arbitrary.
William W. Bartley, III
Theory of rationality fits into the evolutionary paradigm—not the traditional justificationist framework proposing knowledge-as-justified-true-belief as a result of being judged and certified by the gold standard of one or another putative authority. The question of justification of an action or opinion or position does not matter at all in evolution: species and mutations are never “justified”; they are either relevant to adaptation to an econiche or not. The only issue is how viable the given mutation is within the species, and how viable the species is within the econiche to which it is adapted. This is answered by the ever-present, ever-changing pressure of environmental selection, not by comparison to some standard of what “justifies” viability. And as our evolutionary history shows, what is viable today need not be so tomorrow.
How can rationality fit in the evolutionary blind-variation-and-winnowed-retention framework? We need to examine the three classes of theories of rationality that arose in the history of Western philosophy. What follows is due to W. W. Bartley III, primarily from The Retreat to Commitment (originally 1962; Bartley, 1984), as expanded in Weimer (1979).
Comprehensive Views of Rationality

Historically dominant in Western philosophy, comprehensive rationality underlies the classic form of justificationism. It combines two requirements: (1) a rationalist is one who accepts all positions that can be justified or proven true by appeal to a stated rational authority; (2) a rationalist accepts only those positions that can be justified by appeal to the authority. Traditional philosophies are demarcated by the nature of the putative authority they accept. There were two main classes of authority in Western thought:

Intellectualism—the appeal is to intuition or some faculty of the mind called reason (conceived as an authority—Reason with a capital R). Intellectualism proposes the proving power of the mind.

Empiricism—the rational authority lies somewhere in the deliverances of our senses (sensationalism, or phenomenalism, or presentationalism).

Both types of position identify critical assessment of a claim with the attempt to justify it by the authority. Both these traditional authorities have long been known to be failures. First, both are subject to error—both reason and observation often are fooled, so they cannot be infallible or gold standards against which to judge all possible claims. Second, the standards are simultaneously
too narrow and too wide: reason leads to paradoxes, and sense experience, as David Hume noted, is too narrow to justify any inference to the external world or to its properties. Third, putting together the two requirements—all and only those propositions that can be justified by the rational authority are to be accepted—is contradictory. If you accept the "only" proposition then it is necessary to justify the "all" one—which can never be done (both historically and conceptually). And even if it could be done one would have to presuppose exactly what is at issue here—which is to show that it must be accepted. This second requirement cannot be "justified" by appeal to rational authority, so it asserts its own untenability, and thus if correct must be rejected. Fourth, the tu quoque argument about the limits of rationality cannot be overcome by comprehensive rationalism. The tu quoque claim is responsible for the prevalence of relativism and the abject abandonment of standards that is so characteristic of "postmodern" thinking (so-called postmodern thought, the Frankfurt "critical theory" and hermeneutics, etc., the "anything goes" rejection of social mores). When the power of the tu quoque argument became widely recognized in the World War II era, a series of stopgap proposals for one or another form of "limited" rationality arose. Sadder but wiser, these "sophisticated" relativists say "one should not throw stones since we all are forced to live in the same glass house," because the tu quoque says all discussion must come to a halt at some unjustified final "justifier" or standard. No standard is any more acceptable than any other, because all rest upon a similarly unjustified and unjustifiable final arbiter that someone happens, for totally irrational reasons, to choose to accept as a final stop point. Why is this so? Any view can be challenged: questions such as "Give me a reason to believe that" or "How do you know?" or "Show me a proof" are always present or easily proposed. Whenever one responds to such a question, the response is subject to the same challenges, and so on, forever. Thus there is an infinite regress in the process of justifying standards in response to challenges to any claim. All quests for justification go on forever until an arbitrary, and hence unjustified and therefore irrational, stop point is
proffered and finally, no doubt out of exhaustion, accepted. Here we are led to ultimate relativism as a result—to the position of Rorty in chapter 17—there is no final common standard to settle claims among competing "ultimate" stopgap points. Different ultimate stopping points simply demarcate points of view that are held, in the end, only as a result of a fideistic commitment. This is the retreat to commitment. Since this limitation on rationality is logical—no one can escape this subjective leap of faith—it follows that no one can ever be criticized for having made an irrational commitment. To any and all criticism the irrationalist simply replies "tu quoque." Any irrationalist (or now, "post-rationalist") has an infallible excuse for a purely subjective, fideistic commitment. And so what we find most important—our feelings, beliefs, and positions—cannot be rationally questioned or challenged. The walls of the glass house are vanishingly thin, but infinitely high. So best not throw any stones—no one can say anything against anyone else or their standards. You do your thing, I'll do mine. Your feelings are as good as my facts. Is there a real alternative? Some "sadder but wiser" thinkers have proffered a "neo-" or "less than comprehensive" formulation to replace comprehensive accounts of rationality.
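Before turning to those formulations, the regress that drives the tu quoque can be set out schematically. What follows is a minimal sketch added for illustration, with hypothetical symbols: let $p$ be any claim and $s_1, s_2, \ldots$ the successive standards invoked to answer the challenge "how do you know?":

$$p \;\leftarrow\; s_1 \;\leftarrow\; s_2 \;\leftarrow\; s_3 \;\leftarrow\; \cdots$$

Each arrow marks an appeal to a further justifying standard. The chain never closes of its own accord; it halts only at some $s_n$ accepted without justification: the arbitrary stop point of the fideistic commitment.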
Critical Rationalism Starts with the Failure of Comprehensive Rationality

Critical rationalism holds that:

1. Rationality is limited, because some things (e.g., standards of rationality) cannot be justified by appeal to rational standards;
2. This concession is claimed to be unimportant, in that the skeptic's victory is somehow "bloodless";
3. The ultimate basis for any position is the tu quoque—a retreat to commitment in personal or social standards that are assumed to be beyond challenge.
Two variants of such a position were common from the mid-twentieth century. Most pervasive was the position taken by the logical empiricist philosophers ("sadder but wiser" than their brash logical positivist former selves), and shared by the British "analysis" philosophers and American pragmatist or instrumentalist philosophers. This view argues that while standards are needed to justify everything else, the standards themselves need not be justified, but only require description or explication (Rorty, following Ayer and the Wittgensteinians, took this over). Thus arose the "analysis" of the "language" of a domain, in which justification disappeared unaddressed. This position, being a fideistic commitment to some unjustifiable standard, obviously begs the question at issue with respect to the limits of rationality. This view also abandons the possibility of criticism, as either the attempt to prove a position or even to improve it, since now the task of philosophy is only to explicate the "language game" of the discipline in question in as subject-neutral a way as possible, simply to classify where its set of standards fits into the entire panoply of standards potentially available. This position is found in Ayer (1956), Wittgenstein (1953) and the ordinary language analysts, even Quine (1951, 1960) and Rorty (1979), as well as lesser-known contemporaries such as Hickey (2005). In terms of theories of inference and science, the claim shifts from "knowledge is certain" to "knowledge is probable," and the entire field now relies on probabilistic inductive "logic" as a result. All this shift does is make the problem of justifying inductive inference appear to be one of probability rather than certainty. Hume's centuries-old skeptical critique of induction is not overcome; it is simply ignored.

A second variant of critical rationalism was due to Karl Popper. In The Open Society and Its Enemies (1945), Popper claimed that rationalist identity is a matter of a fideistic binding of one's self to reason. His position was that one should be constantly critical—ready to challenge everything. If this position differs from that of his empiricist contemporaries, it is in the requirement to criticize one's own commitments (standards) rather than just explicate them. Otherwise it is no different from the views of the logical empiricists. This position is equally faced with the problem of trying to criticize one's theory of reason while simultaneously being irrationally bound
to it. And as such, it is incapable of answering the issue of limits to rationality. With respect to the problem of scientific methodology this position became the famous dictum that since confirming "proof" is never possible, science grows by looking for falsifications, data that refute a theory, in order to learn that a theory is wrong when one can never know that a theory is correct. So the logic of scientific discovery is "old-fashioned" deduction rather than any chimerical induction, and it works only through the logical schema modus tollens (set out schematically at the end of this section): when an observational consequence deduced from a theory and its premises turns out to be false, at least one of the stated premises must be false. The obvious refutation of this view is that we often do in fact reject the "refutation" and choose to keep the theory (the premises) instead. The hundred-year odyssey of Prout's hypothesis is a clear example of this situation (see Agassi, 1966).

A genuinely new theory of rationality was proposed by Bartley that does overcome the justification-rationality problem that had scuttled all previous attempts to specify an adequate account of rationalist identity. Bartley relocated the essence of rationality from the quest for justification (or proof or truth) by an ultimate authority to the process of critical assessment. This process of critical assessment need never come to an end—like evolution, it is continually ongoing, and usually involves the clash of competing ideas of many individuals.
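The logic at issue in this section can be set out explicitly. What follows is a minimal schematic sketch added for illustration, with hypothetical symbols that are not Popper's or Bartley's own notation: let $T$ be a theory, $A$ its auxiliary background premises, and $O$ an observational consequence deduced from them.

$$\frac{T \rightarrow O \qquad \neg O}{\therefore\; \neg T} \;\;\text{(modus tollens: valid)} \qquad\qquad \frac{T \rightarrow O \qquad O}{\therefore\; T} \;\;\text{(affirming the consequent: invalid)}$$

A failed prediction deductively refutes, while a successful one proves nothing. And since the prediction is actually deduced from the conjunction $T \wedge A$, a refutation establishes only $\neg(T \wedge A)$, that is, $\neg T \vee \neg A$: one may always, as in the case of Prout's hypothesis, retain $T$ and reject some auxiliary premise in $A$ instead.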
Comprehensively Critical Rationalism

The rationalist is not one who has any guarantees or authorities or theories of justification (whether certain or probable) at all. Rationality is located only in the function of critical scrutiny. A rationalist holds all feelings, beliefs, opinions, and claims of any kind open to the possibility of critical assessment and replacement as a result of such criticism. There is no commitment to anything in the way of a standard because criticism is totally divorced from justification. This is the crucial break from any form of justificationism. Criticism is always from some point of view. For the purpose of any given critical assessment, that critical framework is presupposed as a given. But in any other context that critical framework may in turn be criticized (and perhaps found wanting and then abandoned). But there
is no infinite regress of standards involved. No knowledge claim or position is ever accepted as beyond further criticism. Rationalists reserve the right to criticize any position at any time, but they realize that not all things can be criticized at once. Any conceivable criticism must presuppose a point of view from which that criticism occurs, but that point of view is only exempted from scrutiny when it is used for that particular purpose (or instance) of criticizing another position. Criticism is an infinite process; it is not an infinite regress. Within this position the traditional preoccupation with "how do you know?" becomes unimportant. It is unimportant because we never know anything in the sense of "know for certain" or can "prove" or "probabilify." Our knowledge claims are fallible conjectures, and always remain so. And they are held rationally to the extent that they are (and can in the future be) subjected to sincere attempts at refutation and have thus far survived. It is even rational to hold a theory that is known to be factually incorrect if it succeeds better than any alternative in its domain. An interesting example of this important point was emphasized by Feyerabend (1965). Einstein's theory of relativity is often supposed to be "supported" or "confirmed" because it predicts the precession of the perihelion of Mercury as it orbits the sun. Since Mercury's orbit does indeed precess, and Newtonian mechanics does not say it should (or should not: it has no prediction here at all), this is taken as "proof" of the correctness of Einstein's special theory of relativity. But the observed precession exceeds the precession predicted by the special theory by a factor of 10, which is vastly greater than any possible measurement error in the experimental results. So Einstein's theory is, all things considered, simply false. But since after more than 100 years we have nothing better to offer, it is perfectly rational to continue to hold onto it. But it is not now, nor will it ever be, "proven" true.
Rationality Is Action in Accordance with Reason

Let us refocus upon the tacit dimension of rationality and the complexity of actual behavior. Rationality pertains primarily to the
conduct of human (or other) agents rather than to beliefs or statements or propositions. Rational conduct is of necessity partially tacit and can never be fully explicit. In the complexity of our unpredictable and indeterminate world, no instant rational assessment is possible—rationality emerges only in the long run as a result of following abstract rules that provide a context of constraint in terms of which we cope with the unknown and unforeseen. To repeat: rationality in complex orders is never fully explicit nor instantly specifiable. Rationality is an evolutionary concept: as an unending quest, its achievement is never finalized.
Rationality Does not Directly Relate to Truth or Falsity

Rationality is conduct: behavior or action. It is not statements or propositions, whether they be true or false. Propositions (as the form of statements most often bandied about by philosophers) simply are (in the sense that they just are or exist and nothing more). Take a proposition we assume to be true: human beings cannot survive without oxygen. How does this relate to rationality? The answer is only indirectly, if at all. It entails, for example, that it is impossible for us to live on the moon (without spacesuits, etc.). But is it therefore rational or irrational? What is the "it" in that question? "It" refers not to the propositional content but to statements about human conduct—that, e.g., it would be irrational to travel to the moon without a functioning spacesuit. Propositions can provide information that can guide conduct that is in turn rational or irrational. But rationality pertains to the action, not to its informational basis. We ask (in science and society) if action is rational or irrational, not if it is true or false. Our concern is with conduct—the behavior specified in the statement, not the truth value of its content. What about beliefs? Traditional philosophy assumes that knowledge is "justified true belief." The knowledge we possess (or think we possess) at a given time relates directly to our beliefs and actions, and the latter in turn may be judged to be rational or irrational. The well-known Popperian horror of "belief epistemology" relates to the confusion of belief
(as a psychological state of affairs in the rate-dependent realm) with knowledge (as propositional content in the rate-independent realm of conception). Knowledge is not, and could never conceivably be, justified true belief. Knowledge is a matter of conjectures continually held in check by attempts at their refutation. One should not obscure the distinction between psychological belief and trans-subjective knowledge claims. While both beliefs and knowledge can guide action, that does not make them epistemically equivalent, nor does it relate them in the same way to rationality. This seemingly trivial distinction allows one to assess some claims that have been made about rationality and truth or falsity and belief. Consider this claim: a statement that has been falsified is not rationally acceptable. Is this claim true or false? If it is applied to empirical science then it is false, since all theories are (logically speaking) refuted by recalcitrant facts, often facts known before the theories themselves were proposed: recall the example of the special theory of relativity in connection with the precession of the perihelion of Mercury. There are numerous claims (e.g., that light moves in straight lines) that, although false according to our best corroborated theories, it is rational even for scientists to accept, not only in daily life (as practical shortcuts) but in their conduct of scientific inquiry. The claim above, which would render all empirical science rationally unacceptable, ignores the context sensitivity of both the ascription of truth and falsity, on the one hand, and rational acceptability, on the other. The moral here is obvious: the rational acceptability of a statement as a guide to action does not relate in any determinately specifiable fashion to the truth or falsity of the proposition(s) contained therein. There is no "instant" rational assessment of what constitutes rational assessment. Rationality emerges only over time, as a result of consistently following general rules of conduct. Human action can be perfectly rational even though the participants know neither exactly what they are doing, nor why they are doing it, nor what will result from their conduct—a point the Scottish moralist philosophers emphasized centuries ago in articulating the first evolutionary approach to society.
Action in Accordance with Reason Is a Matter of Evolution within the Spontaneous Social Order

Since it seems counterintuitive or "paradoxical" (to those accustomed to regarding rationality as a matter of explicit and conscious control of conduct) to claim that rationality need not involve conscious awareness or explicit control, let us develop the point in evolutionary perspective. The evolution of society is historically obvious but virtually invisible in any given present time. The evolutionary selection of rules of conduct operates through the viability of the social order that the rules produce—if the resultant order is stable and productive, the rules will be selected for survival. This is a higher-order context of constraint harnessing our behavior through downward causation. Group selection is not guided by any individual's explicit reasoning. The system of rules of conduct must slowly develop as a whole through the interaction of many people, and a given individual in a society will have little knowledge of anything beyond the particulars of which he or she is directly aware. At each stage of development, the overall prevailing order determines what effect, if any, changes in a given individual's conduct will produce in that overall order. We can judge and modify conduct only within a framework which, although the product of gradual prior evolution, remains for us in the present relatively fixed. This framework becomes, very literally, a context of constraints upon action. It is the equivalent of comprehensively critical rationality requiring that we accept, for the moment, as a given the framework from which we operate. In summarizing to ourselves the effects of the social system upon individual conduct we arrive at a set of particular descriptions of conduct, largely in terms of prohibitions to action, that instantiate abstract rules of whose formulation, actual effect, and survival value we are unaware. As Hayek remarked, in current society we are literally like the "primitive" studied by an anthropologist. An individual may have no idea what overall order results from his observing rules such as those concerning kinship and intermarriage, or the succession to property, or any idea of what function this overall order serves.
Yet the individuals of the species which presently exist in that social order will behave in that manner, because those groups of individuals which behaved in that manner have displaced those which did not do so (see Hayek, 1967, p. 70). This makes each individual an inherently social animal. The higher cognition that finds its expression in explicit rationality, while undoubtedly the greatest flowering of humankind's achievement for each individual, is the product of our cultural evolution and not the designer or creator of that evolution. Conscious thought did not make our present civilization. Mind and culture developed (and are continuing to develop) concurrently, not successively. With the development of culture a tradition, in the form of (often implicit) rules of conduct that existed independently of any of the individuals who learned to follow them, began to govern human action. These rules have been embodied in the conduct of previous members of the culture in question. These largely implicit rules are embodied in individuals' conduct and do not exist independently in any disembodied Platonic realm. These rules, learned by prior generations, formed the basis for the prediction of regularity and the anticipation of the consequences of action upon the environment. What we call conscious reason emerged when we became aware of the capacity to model the environment that was provided by that framework of learned rules of conduct. Reason allows the individual to become a more adequate theory of its environment, an increasingly more conscious and explicitly formulated theory, than is available to any individual in isolation. As Hayek put it, "The mind is embedded in a traditional impersonal structure of learnt rules, and its capacity to order experience is an acquired replica of cultural patterns which every individual mind finds given" (1979, p. 157). This means, once again, that rationality emerges only over time as a result of following general rules in the unforeseen welter of particular cases that confront us. Like the primitive following kinship taboos, our action may be rational even when we know explicitly neither what we are doing nor why we do it. Rationality is action in accordance with reason. Reason is an evolutionary result of the interplay of many minds in a traditional or spontaneously grown order, and it is gradually led from its present position into the unknown and unforeseen.
Human beings did not first have reason and then consciously or explicitly decide to act rationally. Instead our reason has evolved with our social and conventional orders, analogously to our language. Adam Ferguson put it this way in An Essay on the History of Civil Society, over two and a half centuries ago:

The artifices of the beaver, the ant, and the bee are ascribed to the wisdom of nature. Those of polished nations are ascribed to themselves, and are supposed to indicate a capacity superior to that of rude minds. But the establishments of men, like those of every animal, are suggested by nature, and are the result of instinct, directed by the variety of situations in which mankind are placed. Those establishments arose from successive improvements that were made, without any sense of their general effect; and they bring human affairs to a state of complication, which the greatest reach of capacity with which human nature was ever adorned, could not have projected; nor even when the whole is carried into execution, can it be comprehended in its full extent (text in public domain, originally p. 267).
Hayek revived a crucial distinction between two types of organizational arrangement that must be understood in order to see what is involved here: the directed or commanded taxis and the spontaneous cosmos. This is his definition: "An arrangement produced by men deliberately putting the elements in their place or assigning them distinctive tasks [the Greeks] called taxis, while an order which existed or formed itself independently of any human will directed to that end they called cosmos" (1978, p. 73). In the domains of complex human interaction that are spontaneously ordered (the social cosmos), no single individual (or computer network or other centralized control structure) can ever be aware of all the particulars involved. Although governed by abstract rules of determination, cosmic orders are inherently unpredictable at the level of their particularity. Genuine novelty is the rule rather than the exception. Rational conduct in such spontaneous orders (such as in science and society) consists in submission to general rules whose origin, exact nature, and consequences we can probably never fully formulate and never foresee in their entirety. As Hayek put it:
Since our whole life consists in facing ever new and unforeseeable circumstances, we cannot make it orderly by deciding in advance all the particular actions we shall take. The only manner in which we can in fact give our lives some order is to adopt certain abstract rules or principles for guidance, and then strictly adhere to the rules we have adopted in dealing with the new situations as they arise (1967, p. 90).
Actions form coherent and rational patterns only because, in each choice, each successive decision, we constrain the range of choice by the same abstract rules.
Rationality and Its Relativity

The growth of knowledge is always "paradoxical" when the members of the triad of theory, background assumptions, and observation are in conflict. Although difficult in ongoing research, we resolve each such "paradox" in similar fashion: by adopting a new conceptual scheme that rejects (or replaces) at least one member of the triad. What we do, that is, is to reject some claim that previously had the status of an analytic truth (an instance of "this is equal to that"). We adopt a new conceptual scheme (in science a new theory) that rejects what the previous scheme did not question. Scientific "revolutions" are revolutionary in the sense that they replace one conceptual system's analytic truths with new ones. The key is this context sensitivity of the ascription of truth status to a claim. What if we apply this context sensitivity to the theory of rationality? As a domain of inquiry rationality has a comparable conceptual framework consisting of data of various sorts (instances of behavior), more or less articulated theories, and unquestioned background assumptions. When a new theory of rationality rejects commonplaces (such as the justificational nature of criticism, or the tacit assumption that rationality should be totally explicit, or the idea that everything of intellectual merit must be transmissible from premises to conclusion in logical derivation), it flies in the face of the heretofore obvious, and is thus "paradoxical." This sort of paradox evaporates when debate within the relevant research
community changes the background assumptions as a result of the critical scrutiny focused upon them: they cease to be analytic truths and are reformulated or rejected. So the charge of paradox against any new theory of rationality is both true and innocuous—every fundamentally new theory in any domain is paradoxical in this fashion. This must be so because novelty is always deviation from (prior) expectation. Any claim can be absolute only relative to a delimited context. The context for the theory of rationality is the practical domain of human action.
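The conflict of the triad described above can be put schematically. The following is a minimal sketch added for illustration, in the same hypothetical notation used earlier: let $T$ be the theory, $B$ the background assumptions, and $O$ the observation. When the three jointly conflict,

$$T \wedge B \wedge O \vdash \bot,$$

consistency can be restored only by rejecting (or replacing) at least one member of the triad. A scientific "revolution" is a resolution that rejects a member which had previously held the status of an analytic truth.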
Rationality Is Neither Instantly Determined Nor Explicit

The desire for instant gratification cannot be met. This has been one of the outstanding results of non-justificational philosophy—a result that is most fully apparent in the analysis of scientific inquiry. Scientific rationality has an intrinsic tacit dimension. Implicit in non-evolutionary or classical conceptions of rationality is the assumption that it must be explicitly and consciously specifiable. Rationality was considered to be restricted to deliberately effected individual actions and their extension in a delimited organization (a taxis) fulfilling conscious aims. Historically, the quest of methodology has been to turn the spontaneous cosmos of science into a taxis. Unless scientific practice could be restricted or impoverished into a taxis structure with its organization specified in advance, neither the comprehensive nor the critical rationalist could regard it as rational. Thus critical rationalists, like their comprehensive precursors, were "explicit" methodologists who told researchers where to go and what to do—as exemplified in the slogans characteristic of the early writings of Popper. These thinkers very explicitly prescribe what one must do in order to be considered "rational," whether in the practice of science or the conduct of daily life. In similar fashion, Kuhn's conception of normal science as paradigm-based puzzle solving is a "quasi-empirical" description of the taxis approach to scientific organization (and an account of how the taxis approach breaks down when
confronted with the intrusion of the inevitable anomalies at the "cosmic" fringe), and the charge of irrationalism against Kuhn by traditional philosophers is really directed at his claim that explicit methodological directives cannot improve upon the spontaneous organization of science as an essential tension between tradition (the normal science taxis framework) and innovation (when a revolutionary sees beyond the taxis to the cosmos through the anomalous fringe). The problem with explicit rational methodological prescriptions (for either science or society) is that they fail to acknowledge the inevitability of our ignorance of particulars in a spontaneous order. Such accounts could work only in a delimited taxis that, being limited to a particular plan specified in advance, could never allow fundamental novelty or creativity to be rational. This fundamental point escaped most theorists. As such, Popper's emphasis upon novelty and revolutions was in conflict with his critical rationalist methodological prescriptivism.1 The general rules regulating conduct in society and science cannot be directed toward particular ends or positive prescriptions of what must be done. We can never specify in advance, in either the court of law or the science lab, what particular results or ends must be achieved, but only note some general rules that must be followed. General rules will be negative or prohibitory of classes of conduct, but will be positive or creative in the sense that, while they forbid certain classes of conduct, they do not tell us what we must do in any particular case. Thus, they allow us to utilize our creativity, to take advantage of our momentary knowledge of particular circumstances, in order to achieve our (often diverse and conflicting) ends. They even permit us to freely employ what are traditionally regarded as irrational or non-rational factors, such as emotional, aesthetic, and even moral considerations. The only methodological directives that are tenable tell us not to commit types of error (e.g., do not block inquiry, do not fabricate data), but not what positive results must be achieved. This is an instance of the superior power of negative rules of general order over positive prescription of particulars (Weimer, 2020). Rationality (the task of being rational) is a means, a never-ending or standing obligation, not an end. It is a matter of following general principles in the constant flux of unanticipated occurrences, not a matter of specifying in advance particular ends to be achieved.
The justificationist “we want instant assessment” sentiment that comprehensively critical rationality is too “vague” to be of any positive use (i.e., that it fails to specify particulars that must be achieved) is a residual aspect of the engineering mentality of the taxis organizer or social planner who wants to plan out everything in advance to be “rational” and explicit. It is an aspect of what Hayek and I have called rationalist constructivism, the desire to remake all cosmic or spontaneous structures into taxis or directed organizations. It remains an attempt to turn the cosmos of science into a taxis in order to avoid vagueness and abstraction, a quest for instant rational assessment fundamentally at variance with evolutionary history and non-justificational philosophy and rationality.
Like the Market Order, Rationality Is a Means, not an End

Rationality must be "vague" rather than prescriptive because criticism is a means to the end of rationality, and thus, like any general-purpose tool, its specific shape and method of employment cannot be prescribed in advance. What counts as effective criticism in a given case cannot be delimited precisely and exhaustively in advance. Like all other specifications of abstract rules and spontaneous complex orders, all we can specify about criticism will be general "negatives." Once one breaks the justificational fusion of criticism with proof, criticism is no longer an end in itself, but a "mere" means. As a mere means to the end of rationality, criticism becomes a problematic (read: empirical) issue rather than something given in advance. The most pressing task facing the theory of rationality and the methodology of science is to provide an adequate conception of what constitutes criticism in the welter of particulars in knowledge claims.
Comprehensively Critical Rationality is Rhetorical (and so Is All Knowledge Claiming)

Like all theoretical claiming, comprehensively critical rationalism is indeed thoroughly rhetorical (as opposed to the traditional categories of logical or dialectical). Theories are cast in an argumentative mode of behavior, and anything in that mode is properly rhetorical and suasory, and indispensably so rather than incidentally. Knowledge is a matter of consensus, depending for its existence upon both an audience and a context. There is no way for social interchange in the scientific cosmos to shape a consensus to the point required for knowledge to exist without a rhetorical and suasory process being intrinsic to the construction. Science is a rhetorical transaction (Scott, 1967; Weimer, 1977) in research praxis during normal science periods (e.g., tuition and communication within a uniform research community), in theoretical claiming, confrontations, and revolutions (so-called paradigm clashes), and in the presentation of science to the wider audience (e.g., to the taxpayers and to funding agencies). In all cases, scientists and the public face arguments (in news presentations, in print, and in experimentation), never "data" or "logic" or proven assertion. The task facing methodology was aptly summarized by Kuhn (1970): "What one has to understand…is the manner in which a particular set of shared values interacts with particular experiences shared by a community of specialists to ensure that most members of the group will ultimately find one set of arguments rather than another decisive" (p. 200). Comprehensively critical rationalism is unique in the extent to which it is compatible with and fosters such a rhetorical analysis of conduct, an analysis sadly lacking in all prior theories of rationality and methodology. Science, like the spontaneous market order, can exist only if its participants are forced to obey the constraints and rules that allow it to function. It does not matter that scientists "freely choose" to engage in science—the point is that their choice requires adherence to the "rules of the game." Spontaneous complex or cosmic orders can exist only if a framework of general rules of conduct, binding equally, and in that sense alone "irrationally," upon all individuals, is observed. The rules of the game of science need not be explicit or conscious, but they must
be relatively consistent over time. As in language, the market, kinship, etc., all that is required is a set of general negatives: prohibitions that prevent one from infringing upon the protected sphere of others, or from breaking accepted rules of practice, and which prevent the recurrence of errors that have been discovered previously. The individual (whether scientist, entrepreneur in society, or whatever) need never know explicitly how his or her behavior functions to maintain the order, or what the outcome of the scientific or market game may eventually be. We have evolved a complicated pattern of behavior that cannot be explicitly taught and that we do not understand much better than the "primitive" understands kinship rules or agricultural techniques. But for the acquisition of knowledge, that pattern of behavior—scientific research—which has evolved out of common-sense inquiry, has given us what little explicit knowledge of the universe we possess.
Rationality in the Complex Social Cosmos

Since we do in fact assess certain actions as irrational, it is obvious that we do so on the basis of a particular theory of rationality: conduct is judged to be rational or irrational. Thus judging some action to be irrational means that it is not rational according to our theory. Until very recently all writers tacitly held a conception of rationality that is both justificationist and rationalist constructivist. It is justificationist in that rationality is defined as action that is in accord with justified (and justifying) standards. It is "rational" for the justificationist to act on what is assumed to be "justified true belief," since that constitutes knowledge (and everyone assumes that conduct should be guided by "positive" knowledge rather than "negative" ignorance). But that received view of rationality is also rationalist constructivist in that it accepts the Cartesian ideal of explicit analysis by a single mind that foresees all consequences as a necessary part of "ideal" rationality. It is assumed that rational conduct is fully self-conscious and designed to fulfil aims that have been clearly delimited in advance—or that rationality increases to the extent that conduct becomes more explicit and subject to conscious criticism
within a taxis framework. The ideal is that of Dodds's (1951) conception of a fully open society as one "whose adaptations were all of them conscious and deliberate" (p. 255). This has been the ideal of the rationalist constructivist from the ancient Greeks through Descartes and Comte, then Marx, then the Bloomsbury intelligentsia such as Russell, through to almost all contemporary "intellectuals" and the intelligentsia of the news media and politics. This chapter criticizes and replaces the "false enlightenment" constructivist aspect of this theory of rationality with a more adequate theory that is consonant with the evolution and functioning of complex spontaneous orders. Science and society must be provided a more tacit and spontaneous conception of action in accordance with reason—one that reflects lessons learned from the study of classical liberalism, the tacit dimension, the social as the result of human action but not design, comprehensively critical rationalism, and more. One can begin to do this indirectly, by criticizing as in fact not rational the stricture of "irrationality" that constructivist accounts impose upon conduct and spontaneous orders that are not fully explicit, conscious, or taxis-purpose oriented. The Cartesian ideal is false because it regards as irrational an enormous amount of our behavior that is rationally adapted to the spontaneous social cosmos of our abstract society. Rationality is an evolutionary by-product of our increasing ability to deal with the novelty of our world of ever-changing and unforeseen particularity. As such, rationality has to do with the means according to which goals are reached in a cosmos rather than the (both actually and potentially infinite) ends of action. Let us summarize this and then apply it.
The Ecology of Rationality

Consider the ecology of rationality. The context in which the theory of rationality becomes an issue is the practical arena of the conduct of human affairs (both individual and social). Rationality is action in the argumentative mode of behavior. Rationality has nothing to do with language "games," statements, or propositions (whether true or false). Instead rationality has to do with claiming and arguing (whether verbally or
behaviorally, consciously or tacitly). Rationality is not concerned with the contents of either the mind or behavior, but rather with the way in which those contents are held by individuals, and the manner in which they are acted upon. Rationality is always context (time and knowledge) dependent. One may rationally hold a false proposition (for example) as a basis for conduct. It was not irrational in the Middle Ages to assert that the world was flat, or that the plague was caused by God's displeasure. There was not enough knowledge to do otherwise. Likewise it is rational for a physicist to use the false proposition that light moves in "straight lines" in terrestrial research and indeed in some aspects of cosmology. While truth or falsity depends upon the correspondence of a belief's (or statement's) propositional content with reality (the actual state of affairs), rationality is not so straightforwardly related. Rationality pertains to action, not to the informational basis of belief, or to the particular language in which we cast discourse. Indeed, it is perfectly rational to act upon ignorance rather than solely upon positive knowledge, as when we consistently follow a general rule prohibiting certain classes of action in ever-changing particular circumstances. If rationality is action, what is reason? In the rate-independent realm reason has to do with such things as the assessment of claims, and the determination of means to given ends. John Locke was correct three centuries ago when he made the twofold claim that reason is "principles of action" and that reason is the source of virtue and morality. What is emphasized in this chapter is that our reason is neither complete and explicit, nor did it precede our conduct. Reason is an evolutionary result of the interplay of many individuals and minds in a framework of traditional order. We did not, as classic accounts would have it, first have reason and then decide consciously to act rationally. Mind and culture, reason and rationality, developed (and continue to develop) concurrently rather than successively: rationality is fundamentally an emergent phenomenon—what we call reason emerged when we became aware of our capacity to model the environment. Ecologically, this occurs within the framework of learned rules of conduct, consisting mainly of general rules that prevent us from making what have in the past been found to be
errors or mistakes. Thus, reason and rationality are evolutionary by-products of our never-ending quest to survive and to improve our position in our econiche. The domain for theories of rationality is pragmatics rather than syntax or semantics.
Science and Our Knowledge Must be Both Personal and Autonomous

Consider what appears to be a tangent: do we create the concepts and knowledge with which we make science, or are we discovering aspects of nature that were always there, hidden all the time? This is a popular and perennial issue. As Bronowski (1978) noted, the world divides into those "who would like to think that our analysis of nature is a personal and highly imaginative creation and those who would like to think that we are simply discovering what is there" (p. 55). How can knowledge disclose an impersonal world that has no human qualities and, in the natural sciences at least, is not concerned with animate things, and yet be said by some to be personal, imaginative, and based on values? This attitude of bifurcation results from an error—the failure to realize that we must include both the knower and the known to have knowledge. When we look at science as both a human activity and as objective, autonomous knowledge products it is clear that it is always and inevitably both at once. We are always discovering things that were there, "hidden" in nature in the sense of not yet recognized or explained, and we are doing so with concepts that are the creation of our imagination and intellect. The knower and the known are complementary, not either-or. All knowledge is the result of the activity of our nervous system: the capacity for classification and the capacity to model are the basic building processes on which any possible knowledge and cognition rest. Science is in the business of modeling the cosmos, and the theories and conjectures with which it attempts to do so are, for the individual, highly personal and imaginative achievements. The same theories and conjectures, when viewed impersonally from the standpoint of the objectivity of the research community, are trans-subjective attempts to portray what the world really is like. Especially during periods of normal science
when our theories are generative conceptual systems that are unknown in their entirety and at best partially known even to the individuals who have created them, we see our discoveries as "uncoveries" of what constitutes a relatively passive natural order that is clearly independent of our inquiry. But while we are realists who argue that there is a realm independent of our inquiring mind (try running through the nearest wall if you doubt that independence), one need only reflect on the conceptual revolutions of the last century to see that what we know of that independent reality is conditioned not only by our physiological makeup but also by the state of our theories. Almost no significant theory survives for more than a century without having its conception of "uncovered nature" refuted and reformulated. What we take to be objective reality is a construction in the minds of humans. And it could not be any other way in a cosmos of infinite particularity and complexity. It is because our reason is evolving, being constantly led into the unknown and unforeseen, that what we perceive to be "really there" must constantly change: knowledge is objective and autonomous, but its creation is personal, a result of the tacit dimensions of cognition and community structure. So long as it is human, knowledge could be achieved in no other way. This essential tension, another inescapable dualism, is part of our existential predicament.
Rationality and The "New" Confusion About Planning in Society

The role of planning for the future in a cosmos engenders a confusion if one is not clear on the difference between individuals and the cosmic structure that their actions produce. One can avoid a prevalent confusion by limiting what "planning" is taken to refer to. The "old" confusion about planning centered around the possibility of planning society's progress directly by the establishment of a centralized control structure, and this quest was abandoned when the liberal arguments against socialism, to the effect that such planning is "impossible" (as Mises, originally in 1922, showed 100 years ago), were grudgingly acknowledged. The liberal argument is that planning of the sort desired by the socialists
can only occur within a delimited taxis, but is literally not possible within the social cosmos as a whole, as a decentralized knowledge- and resource-allocating system. Within a classic liberal social order this distinction involves the desirability of allowing the cosmos to function in an efficient and effective manner—of "planning" as creating the conditions for progress rather than planning specific "progressive" acts themselves. It is impossible to improve the productivity of a society by restricting the market order to a taxis, by utilizing interventionist policies (whether political or economic) to favor a particular end specified in advance. Such an attempt must decrease the overall output of the unfettered cosmic order. It abandons the cosmos for a limited taxis structure. The "new" confusion arises when the valid argument against collectivist planning in society is carried over to the affairs of the individual, and it is assumed that planning is harmful on the individual level also. Collectivists argue that it should be irrational for the individual to plan, since he or she is at the mercy of an impersonal order that is beyond the comprehension or control of any individual (Hamowy, 1987). It is said that one ought to advocate a laissez-faire approach that rules out planning for the future, in exactly the fashion that J. M. Keynes, in his famous dictum that "In the long run we are all dead," advanced a defense of momentary interventionism against the long-term policy of acting consistently according to general rules. But the arguments (from impossibility and inefficiency) against centralization and collectivization say nothing against planning on the part of the individual. It is perfectly rational for individuals to plan, to the fullest extent possible, for their own futures. Individual planning is rational even though the social order is impersonal, autonomous, and "unplanned." What would be irrational would be failing to take advantage of the creative power of the social cosmos—either by not planning for one's self (and/or individuals known to one's self) at all, or by planning for particulars as though the cosmos were a taxis. Planning in a cosmos, like rational action, is a matter of adhering to abstract rules in the face of infinite particularity. Keynes' rule—ignore the long run and live for the moment—is in fact irrational in a cosmos (presumably you remember the moral of the fable of the grasshopper). And it can never be a rule of just conduct in a spontaneous order, for the
well-known reason that justice must always be blind to the particular in order to be just, and must not change in outcome when situations are slightly different. Planning for progress in a spontaneous order is essentially negative. It consists in not disrupting the ongoing order while one learns to take advantage of the creative power contained within that order. Thus, our "positive" plans will consist in following general rules that are essentially negatives—following rules that tell us what kinds of mistakes to avoid while we utilize (or attempt to utilize) our momentary knowledge of particular circumstances to our best advantage. The only positive action we can take with respect to particulars is to intervene in a seemingly double-negative manner to remove an impediment to the proper functioning of the order. Successful intervention in a cosmos removes impediments to proper functioning but never produces particular outcomes planned in advance. What we can rationally do is remove prior constructivist attempts to favor one or another group or outcome. Our planning, to be rational, must allow us to harness the superior capacity of the ongoing spontaneous cosmos without attempting to deflect and limit it to particular results. We can generalize the strategy of "planning for progress" (creating the conditions in which progress might occur) rather than planning particular bits of progress. Consider the domain of the methodology of science, where the tacit dimension of the community structure of research must be allowed to function efficiently if science is to be maximally rational. No theory of scientific methodology may be prescriptive of particulars that one "must" do or rules one must follow in order to be scientific. Or apply this to the domain of therapy and adjustment in the abstract society: it should be obvious that successful clinical intervention must aid the individual in the task of fitting into the abstract order of society, and it is rational if it allows the client to come to be able to plan for his or her progress (no matter what that involves) rather than to plan particular goals to be achieved. In all cases in which behavior is spontaneously ordered and self-constrained, it is of little consequence if one cannot immediately specify whether something is explicitly rational. Rationality emerges only in the long run as a result of consistently following abstract rules of order.
Note

1. About fallibilism in science and fallibilism in philosophy. It is curious that the indispensable connection between evolution and fallibilism was not immediately recognized. The initial evolutionary theory of society, propounded by the Scottish moralists, concentrated upon the unintended consequences of actions in the creation of an econiche as a context of constraints in which society changed over time. The extreme importance of negative prohibitions to action—the "don't do's" and "thou shalt nots"—was clearly recognized and emphasized by the moralists. But even later, Darwin's application of evolution to "the species problem" did not initially show the crucial importance of fallibilism. Here the watchwords have been "blind variation and selective retention." The twin emphases have been that there is no plan or predetermination in the type of change lifeforms exhibit (the thesis that the variation is "blind"), and that retention is a temporary rather than permanent result of the winnowing effects of a competitive and hostile environment ("selective" retention means "not yet extinct"). All species are fallible—just biological conjectures about survival worthiness—and are subject to elimination by those environmental forces (error elimination) at all times. Despite the changes in the Darwinian framework—the addition of Mendelian genetics and the "new synthesis" approach, and current modifications of evolution to become less dependent upon inexorable physical laws—the fallibility of species (their being subject to elimination in the unforeseen and unpredictable future) has never changed. This is in contrast to philosophy, where the ideal of mathematical certainty and the justificationist metatheory of knowledge and rationality have always emphasized a building-block approach in which knowledge claims, once accepted even provisionally, seem not to be capable of being eliminated (in 1914 the instrumentalist philosopher Pierre Duhem [1954] argued convincingly that any theory can be held "come what may" simply by making enough minor adjustments in either the theory or its background assumptions). While there were discussions of the fallible nature of knowledge in some ancient Greek sources, it was not until the latter half of the nineteenth century that Charles Sanders Peirce seriously proposed elements of a fallibilist epistemology. Peirce's philosophy was ignored at the time, and has only gained currency by rediscovery in recent decades, where he is now cited as an historical father figure for the biosemiotics movement rather than for his epistemic views. It remained
for Karl Popper in 1934 (English translation 1959) to propose falsification as the "method" by which science advances, in opposition to positive-instance confirmation. This slight change is all that it takes to make falsification the essential characteristic of evolutionary epistemology:

Hypotheses precede observations psychologically, logically, even genetically: all experience is theory impregnated. Every animal was born with expectations—that is, with something closely parallel to hypotheses, which, if verbalized, express hypotheses or theories. The role of experience is to break expectations: to criticize and to challenge hypotheses. The ability of an animal to learn will depend on the extent to which it can modify expectations contradicted by experience, on the extent to which it is able to invent new expectations or theories to deal with unanticipated situations. (Bartley, 1982, p. 264)

Note carefully the role of experience in the acquisition of knowledge—it is only to break or to falsify previously held expectations. It is never to confirm or enshrine them as true or certain. And the empirical realm is defined negatively, as that which the theory forbids to occur if it is correct; if what the theory does not permit to occur does in fact occur, then the theory is falsified. When expectations are shown to be incorrect, the theory upon which those expectations were based cannot be true. That is all there is—there is no "positive" confirmation anywhere, not in common sense or science. Traditional building-block, confirmation-oriented philosophy, with its quest for somehow justified belief, has done its best either to ignore this view or to dismiss it after presenting a caricature of it. There have been two lines of attack: first, that this is a prescriptive account, specifying how science ought to proceed, and that, since scientists (at least in periods of Kuhnian normal science) uniformly ignore refutations and hold their favorite theories anyway (and steadfastly assume that a failure to refute has positive confirmation value), it is false as a prescriptive methodology for science. As Rosenberg (2012) says, "Such [refuting] experiments are treated not merely as attempts to falsify that fail, but as tests which positively confirm" (p. 209). This is a refutation from the viewpoint of descriptive epistemology—claiming that the empirical data of science practice do not support the prescriptive specifications Popper said constituted "good" science practice. But no "is" statement can refute an "ought" claim—the Popperian retort is merely that most scientists are "bad" or
Traditional building-block, confirmation-oriented philosophy, with its quest for somehow justified belief, has done its best either to ignore this view or to dismiss it after presenting a caricature of it. There have been two lines of attack. The first is that this is a prescriptive account, specifying how science ought to proceed, and that, since scientists (at least in periods of Kuhnian normal science) uniformly ignore refutations and hold their favorite theories anyway (while steadfastly assuming that a failure to refute has positive confirmation value), it is false as a prescriptive methodology for science. As Rosenberg (2012) says, "Such [refuting] experiments are treated not merely as attempts to falsify that fail, but as tests which positively confirm" (p. 209). This is a refutation from the viewpoint of descriptive epistemology: the claim that the empirical data of scientific practice do not support the prescriptions Popper said constitute "good" science. But no "is" statement can refute an "ought" claim; the Popperian retort is merely that most scientists are "bad" or inadequate practitioners of "good" science. No amount of confirming instances changes the logic of modus tollens refutation, and even novel predictions from a theory do not "confirm" it at all. Einstein's "prediction" that the perihelion of Mercury ought to precess (where Newton predicts neither that it should nor that it should not) is a case in point: Einsteinian predictions are in fact falsified by the amount of actually observed precession, so the Einsteinian view is not "true" and was not "confirmed" in this instance at all. It is "better" than Newton only in that it does predict a precession whereas Newton does not. Nevertheless, critics of Popper assume that if they can make this point they can dismiss his views without further comment, after noting that the view "threatens empiricism," which is taken to be enough to show it untenable: "attempts like Popper's to entirely short-circuit the inquiry about how evidence supports theory and replace it with the question of how it falsifies theory, seem to boomerang into the persistent possibility of underdetermination, and the global threat that it may not be observation, experiment and data collection that really control inquiry" (ibid., p. 215). The answer to "How does evidence support theory?" is comparable to the answer to "How does present adaptive success support future adaptation?" in evolution. Note again the constant quest for rigid control. The answer is exactly the same in both cases: it doesn't. Remember our old friend the cockroach: present on earth for at least as long as all mammals have been here, it appears "perfectly" adapted, yet that fact alone does not mean it cannot go extinct tomorrow.

More serious criticism of Popper came from another direction. His students, looking around for philosophical positions worthy of criticism and improvement, found nothing on the philosophical horizon more interesting or worthy of criticism than the views of Popper himself. As one of Popper's most innovative students put it:

It [the Popperian framework] is under attack not only by professional philosophers: there has been self-destructive internal dissent. One Popperian became so alarmed by this that he wrote, in some exaggeration, that "instead of a coherent philosophical position, one finds a lightly disguised squabble of alley cats." (Bartley, 1990, p. 204)

Popper's views were effectively criticized by his best students and followers. Bartley replaced Popper's still-justificationist conception of critical rationalism with comprehensively critical rationalism (examined in this Appendix), and showed that Popper's response to logical positivism, his demarcation of science from metaphysics, was much less important outside of the positivistic framework.
Lakatos took an idea from Popper's 1958 seminar about metaphysical research programs (elaborated beautifully by Watkins, 1958) and added a layer of analysis above Popper's in order to combat Kuhn's increasingly well-received conception of science as alternating periods of tradition-based normal science research and revolutionary episodes. Feyerabend attacked the failures of Popper's prescriptive methodology and argued for abandoning prescriptive approaches altogether in favor of "anything goes," consciously attempting to effect "revolutions in permanence" so that science might progress as fast as possible. The book in which the "lightly disguised squabble" description occurred (Weimer, 1979, p. xi) used Bartley's comprehensively critical (or pancritical) conception as a framework from which to broaden the criticism to the entire justificationist conception of rationality, inference and criticism, and proposed a three-layer account of scientific activity (found in Table 13.1) that incorporates Kuhn's distinction between normal and revolutionary activity as far more adequate than Popper's prescriptive methodology. So while Popper's original formulations have been found wanting and repaired or replaced, the framework itself, fallibilism within an evolutionary approach to epistemology, has thus far resisted all attempts to refute it. No theory can ask for more.
References

Agassi, J. (1966). Sensationalism. Mind, N.S., 75, 1–24.
Ayer, A. J. (1956). The Problem of Knowledge. Penguin Books.
Bartley, W. W. (1982). A Popperian Harvest. In P. Levinson (Ed.), In Pursuit of Truth: Essays in Honour of Karl Popper's 80th Birthday. Humanities Press.
Bartley, W. W., III. (1984). The Retreat to Commitment. Open Court.
Bartley, W. W. (1990). Unfathomed Knowledge, Unmeasured Wealth. Open Court. Now Cricket Media.
Bronowski, J. (1978). The Origins of Knowledge and Imagination. Yale University Press.
Dodds, E. R. (1951/2004). The Greeks and the Irrational. University of California Press.
Duhem, P. (1914/1954). La Théorie Physique: Son Objet, sa Structure. Translated as The Aim and Structure of Physical Theory. Princeton University Press.
Feyerabend, P. K. (1965). Reply to Criticism. In R. S. Cohen & M. W. Wartofsky (Eds.), Boston Studies in the Philosophy of Science (Vol. 2, pp. 223–261). Humanities Press.
Hamowy, R. (1987). The Scottish Enlightenment and the Theory of Spontaneous Order. Journal of the History of Philosophy Monograph Series. Southern Illinois University Press.
Hayek, F. A. (1967). Studies in Philosophy, Politics, Economics, and the History of Ideas. University of Chicago Press.
Hayek, F. A. (1978). New Studies in Philosophy, Politics, Economics and History of Ideas. University of Chicago Press.
Hayek, F. A. (1979). Law, Legislation and Liberty: Vol. 3. The Political Order of a Free People. University of Chicago Press.
Hickey, T. J. (2005). History of Twentieth-Century Philosophy of Science (also revised eds., 2012, 2019). In eLib.at (Hrg.), 9 December 2021. http://elib.at/
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (rev. ed.). University of Chicago Press.
Popper, K. R. (1959). The Logic of Scientific Discovery. Harper & Row.
Quine, W. V. O. (1951). Two Dogmas of Empiricism. Philosophical Review, 60(1), 20–43.
Quine, W. V. O. (1960). Word and Object. MIT Press.
Rorty, R. (1979). Philosophy and the Mirror of Nature. Princeton University Press.
Rosenberg, A. (2012). Philosophy of Science: A Contemporary Introduction. Routledge.
Scott, R. (1967). On Viewing Rhetoric as Epistemic. Central States Speech Journal, 18(1), 9–17. https://doi.org/10.1080/10510976709362856
Watkins, J. W. N. (1958). Confirmable and Influential Metaphysics. Mind, 67(267). https://doi.org/10.1093/mind/LXVII.267.344
Weimer, W. B. (1977b). Science as a Rhetorical Transaction. Philosophy and Rhetoric, 10(Winter), 1–29.
Weimer, W. B. (1979). Notes on the Methodology of Scientific Research. Erlbaum Associates.
Weimer, W. B. (2020). Complex Phenomena and the Superior Power of Negative Rules of Order. Cosmos + Taxis, 8, 39–59.
Wittgenstein, L. (1953). Philosophical Investigations. Basil Blackwell.
References
Aaron, R. I. (1937/2009). John Locke. Oxford University Press. Online by Cambridge University Press.
Abel, D. L. (2010). Constraints Versus Controls. The Open Cybernetics and Systematics Journal, 4, 14–27.
Abel, D. L. (2011). The First Gene: The Birth of Programming, Messaging and Formal Control. Longview Press.
Abel, D. L., & Trevors, J. T. (2005). Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information. Theoretical Biology and Medical Modelling, 2, 29. https://doi.org/10.1186/1742-4682-2-29
Agassi, J. (1966). Sensationalism. Mind, N.S., 75, 1–24.
Alcorn, J. (2019). Markets and the Citizen's Dilemma: The Case of Climate Change. In J. Alcorn (Ed.), Markets and Liberty (pp. 1–27). Shelby Cullom Davis Endowment.
Aune, B. (1967). Knowledge, Mind, and Nature. Random House.
Ayer, A. J. (1936). Language, Truth and Logic. Dover Publications (2nd ed., 1946).
Ayer, A. J. (1956). The Problem of Knowledge. Penguin Books.
Baker, M. (2016). 1,500 Scientists Lift the Lid on Reproducibility. Nature, 533, 452–454.
Bartley, W. W. (1982). A Popperian Harvest. In P. Levinson (Ed.), In Pursuit of Truth: Essays in Honour of Karl Popper's 80th Birthday. Humanities Press.
Bartley, W. W., III. (1983). The Challenge of Evolutionary Epistemology. In Absolute Values and the Creation of the New World (Vol. II, pp. 835–880). The International Cultural Foundation Press.
Bartley, W. W., III. (1984). The Retreat to Commitment. Open Court.
Bartley, W. W. (1990). Unfathomed Knowledge, Unmeasured Wealth. Open Court. Now Cricket Media.
Bekenstein, J. D. (1973). Black Holes and Entropy. Physical Review D, 7, 2333. https://doi.org/10.1103/PhysRevD.7.2333
Begley, C. G., & Ellis, L. M. (2012). Raise Standards for Preclinical Cancer Research. Nature, 483, 531–533. https://doi.org/10.1038/483531a
Bell, J. S. (2004). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press.
Bertalanffy, L. (1967). Robots, Men and Minds. George Braziller.
Bloor, D. (1974). Knowledge and Social Inquiry. Routledge.
Blumenthal, A. (1970). Language and Psychology: Historical Aspects of Psycholinguistics. John Wiley & Sons.
Blumenthal, A. (1977). Wilhelm Wundt and Early American Psychology: A Clash of Two Cultures. Annals of the New York Academy of Sciences, 291(1), 13–20.
Blumer, H. (1969). Symbolic Interactionism: Perspective and Method. Prentice-Hall.
Bohm, D. (1965). The Special Theory of Relativity. W. A. Benjamin Inc.
Bohm, D. (1976). Fragmentation and Wholeness. The Van Leer Jerusalem Foundation.
Born, M. (1969). Symbol and Reality. In Physics in My Generation (pp. 132–146). Springer.
Boltzmann, L. (1960). Theories as Representations. In A. Danto & S. Morgenbesser (Eds.), Philosophy of Science (pp. 245–252). Meridian Books. Originally Die Grundprinzipien und Grundgleichungen der Mechanik, I, in Populäre Schriften. J. A. Barth, 1905.
Boring, E. G. (1950). A History of Experimental Psychology. Appleton-Century-Crofts.
Bransford, J. D., & Franks, J. J. (1971). The Abstraction of Linguistic Ideas. Cognitive Psychology, 2, 331–350.
Brentano, F. (1874/1974). Psychology from an Empirical Standpoint. Routledge & Company.
Bridgman, P. W. (1927). The Logic of Modern Physics. Macmillan.
Broad, C. D. (1925). The Mind and its Place in Nature. Routledge and Kegan Paul.
Broad, C. D. (1949). The "Nature" of a Continuant. In H. Feigl & W. Sellars (Eds.), Readings in Philosophical Analysis. Appleton-Century-Crofts. Originally in Examination of McTaggart's Philosophy (Vol. 1). Cambridge University Press, 1933.
Bronowski, J. (1978). The Origins of Knowledge and Imagination. Yale University Press.
Bühler, K. (1918). Kritische Musterung der neuen Theorien des Satzes. Indogermanisches Jahrbuch, 6, 1–20.
Bühler, K. (1934). Theory of Language: The Representational Function of Language. John Benjamins Publishing Company.
Burke, K. (1945). A Grammar of Motives. University of California Press.
Burke, K. (1966). Language as Symbolic Action. University of California Press.
Burke, K. (1967). On Human Nature: A Gathering While Everything Flows. University of California Press.
Burke, K. (1969). A Rhetoric of Motives. University of California Press.
Butos, W. (2015). Causes and Consequences of the Climate Science Boom. The Independent Review, 20(2), 165–196.
Butos, W. (2019). Reminiscences and Reflections. In J. Alcorn (Ed.), Markets and Liberty (pp. 109–122). Shelby Cullom Davis Endowment.
Butos, W., & McQuade, T. (2015). Government and Science: A Dangerous Liaison. The Independent Review, 11(2), 177–208.
Campbell, D. T. (1974a). "Downward Causation" in Hierarchically Organized Biological Systems. In F. J. Ayala & T. Dobzhansky (Eds.), Studies in the Philosophy of Biology. Macmillan & Company.
Campbell, D. T. (1974b). Evolutionary Epistemology. In P. A. Schilpp (Ed.), The Philosophy of Karl Popper (pp. 413–463). Open Court.
Campbell, D. T. (1975). On the Conflicts Between Biological and Social Evolution and Between Psychology and Moral Tradition. American Psychologist, 30(12), 1103–1126. https://doi.org/10.1037/0003-066X.30.12.1103
Campbell, D. T. (1988). Methodology and Epistemology for Social Science. The University of Chicago Press.
Campbell, N. R. (1920). Physics, the Elements. Cambridge University Press.
Campbell, N. R. (1928). An Account of the Principles of Measurement and Calculation. Longmans.
Cassirer, E. (1923). Substance and Function, and Einstein's Theory of Relativity. Open Court Press.
Cassirer, E. (1953). The Philosophy of Symbolic Forms (Vol. 1). Yale University Press.
Cassirer, E. (1957). The Philosophy of Symbolic Forms (Vol. 3). Yale University Press.
Chamberlin, T. C. (1890). The Method of Multiple Working Hypotheses. Science, 15(366). https://doi.org/10.1126/science.ns-15.366.92
Chomsky, N. (1957/2002). Syntactic Structures. Mouton & Company (Walter de Gruyter GmbH, 2002).
Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.
Chrisman, N. R. (1998). Rethinking Levels of Measurement for Cartography. Cartography and Geographic Information Systems, 25(4), 231–242. https://doi.org/10.1559/152304098782383043
Coleman, J. S. (1990). Rational Organization. Rationality and Society, 2(1), 94–105. https://doi.org/10.1177/1043463190002001005
Collins, R. (1994). Why the Social Sciences Won't Become High-Consensus, Rapid-Discovery Science. Sociological Forum, 9, 155–177. https://doi.org/10.1007/BF01476360
Cronbach, L. J. (1957). The Two Disciplines of Scientific Psychology. American Psychologist, 12(11), 671–684. https://doi.org/10.1037/h0043943
Davidson, D. (1973). On the Very Idea of a Conceptual Scheme. Proceedings and Addresses of the American Philosophical Association, 47 (1973–74), 5–20.
Dewan, E. M. (1976). Consciousness as an Emergent Causal Agent in the Context of Control System Theory. In G. G. Globus, G. Maxwell, & I. Savodnik (Eds.), Consciousness and the Brain (pp. 181–198). Plenum Press.
Dewey, J. (1935). Liberalism and Social Action. Capricorn Books.
Dodds, E. R. (1951/2004). The Greeks and the Irrational. University of California Press.
Duhem, P. (1914/1954). La Théorie Physique: Son Objet, sa Structure. Translated as The Aim and Structure of Physical Theory. Princeton University Press.
Eccles, J. C. (1976). Brain and Free Will. In G. G. Globus, G. Maxwell, & I. Savodnik (Eds.), Consciousness and the Brain (pp. 101–121). Plenum Press.
Eddington, A. S. (1929). Science and the Unseen World. MacMillan.
Einstein, A. (1953). Geometry and Experience. In H. Feigl & M. Brodbeck (Eds.), Readings in the Philosophy of Science (pp. 189–194). Appleton-Century-Crofts. Originally in A. Einstein, Sidelights of Relativity (pp. 27–45). E. P. Dutton, 1923.
Elsasser, W. (1958). The Physical Foundation of Biology. Pergamon Press.
Eronen, M. I., & Bringmann, L. F. (2021). The Theory Crisis in Psychology: How to Move Forward. Perspectives on Psychological Science, 16(4), 779–788. https://doi.org/10.1177/1745691620970586
Ferguson, A. (1767). An Essay on the History of Civil Society. Public domain text, available from the Online Literature of Liberty, Liberty Fund.
Ferguson, A. (1785/2010). Institutes of Moral Philosophy. Gale ECCO print editions.
Feyerabend, P. K. (1965). Reply to Criticism. In R. S. Cohen & M. W. Wartofsky (Eds.), Boston Studies in the Philosophy of Science (Vol. 2, pp. 223–261). Humanities Press.
Friedman, M., & Friedman, R. (1980). Free to Choose. Harcourt.
Fuster, J. M. (2003). Cortex and Mind: Unifying Cognition. Oxford University Press.
Fuster, J. M. (2013). The Neuroscience of Freedom and Creativity. Cambridge University Press.
Galileo, G. (1960). Two Kinds of Properties. In A. Danto & S. Morgenbesser (Eds.), Philosophy of Science. Meridian Books. Originally translated by A. Danto in Introduction to Contemporary Civilization in the West (Vol. 1). Columbia University Press, 1954.
Gamow, G. (1966). Thirty Years that Shook Physics. Doubleday & Company Inc.
Gell-Mann, M. (1994). The Quark and the Jaguar. Henry Holt & Co.
Ghiselli, E. E. (1964). Theory of Psychological Measurement. McGraw-Hill.
Gibson, J. J. (1966). The Senses Considered as Perceptual Systems. Houghton Mifflin Company.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin Company.
Gigerenzer, G., & Selten, R. (2002). Bounded Rationality: The Adaptive Toolbox. MIT Press.
Godfrey-Smith, P. (2003). Theory and Reality: An Introduction to the Philosophy of Science. University of Chicago Press.
Gregg, R. B. (1984). Symbolic Inducement and Knowing: A Study in the Foundations of Rhetoric. University of South Carolina Press.
Gribbin, J. (1984). In Search of Schrödinger's Cat: Quantum Physics and Reality. Bantam Books.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books.
Haidt, J. (2018). Why Do They Vote That Way? Penguin Random House.
Hannon, M., & de Ridder, J. (2021). The Routledge Handbook of Political Epistemology. Routledge.
Hanson, N. R. (1958). Patterns of Discovery. Cambridge University Press.
Hanson, N. R. (1961). Comments on Feyerabend's "Niels Bohr's Interpretation of the Quantum Theory," or Die Feierabendglocke für Copenhagen? In H. Feigl & G. Maxwell (Eds.), Current Issues in the Philosophy of Science (pp. 390–398). Holt, Rinehart and Winston, Inc.
Hanson, N. R. (1970). A Picture Theory of Theory Meaning. In M. Radner & S. Winokur (Eds.), Minnesota Studies in the Philosophy of Science, IV (pp. 131–141). University of Minnesota Press.
Hamowy, R. (1987). The Scottish Enlightenment and the Theory of Spontaneous Order. Journal of the History of Philosophy Monograph Series. Southern Illinois University Press.
Hawking, S. (1975). Particle Creation by Black Holes. Communications in Mathematical Physics, 43, 199–220. https://doi.org/10.1007/BF02345020
Hayek, F. A. (1952). The Sensory Order. University of Chicago Press.
Hayek, F. A. (1967). Studies in Philosophy, Politics, Economics, and the History of Ideas. University of Chicago Press.
Hayek, F. A. (1973/2012). Law, Legislation and Liberty: Vol. 1. Rules and Order. University of Chicago Press. Now Routledge Classics.
Hayek, F. A. (1976). Law, Legislation and Liberty: Vol. 2. The Mirage of Social Justice. University of Chicago Press.
Hayek, F. A. (1978). New Studies in Philosophy, Politics, Economics and History of Ideas. University of Chicago Press.
Hayek, F. A. (1979). Law, Legislation and Liberty: Vol. 3. The Political Order of a Free People. University of Chicago Press.
Hayek, F. A. (1983). Knowledge, Evolution and Society. Adam Smith Institute.
Hayek, F. A. (1989). The Fatal Conceit: The Errors of Socialism. The University of Chicago Press.
Hayek, F. A. (2017). Within Systems and About Systems. In V. Vanberg (Ed.), The Sensory Order and Other Writings on the Foundations of Theoretical Psychology (pp. 1–26). University of Chicago Press.
Hempel, C. G. (1952). Fundamentals of Concept Formation in Empirical Science. Foundations of the Unity of Science (Vol. II, No. 7). University of Chicago Press.
Hempel, C. G. (1965). Aspects of Scientific Explanation. Free Press.
Hertz, H. (1960). Two Systems of Mechanics. In A. Danto & S. Morgenbesser (Eds.), Philosophy of Science (pp. 349–365). Meridian Press. Originally in H. Hertz, The Principles of Mechanics (1894). Translated by D. E. Jones and J. T. Walley, Dover, 1956.
Hesse, M. (1963/1966). Models and Analogies in Science. Notre Dame University Press (rev. ed. 1966).
Hickey, T. J. (2005). History of Twentieth-Century Philosophy of Science (also revised eds., 2012, 2019). In eLib.at (Hrg.), 9 December 2021. http://elib.at/
Hoffmeyer, J. (1998). Surfaces inside Surfaces: On the Origin of Agency and Life. Cybernetics and Human Knowing, 5(1), 33–42.
Hoffmeyer, J. (2000). Code-Duality and the Epistemic Cut. In J. L. R. Chandler & G. Van de Vijver (Eds.), Closure: Emergent Organizations and Their Dynamics (Vol. 901, pp. 175–186). New York Academy of Science.
Hoffmeyer, J. (2003). Origin of Species by Natural Translation. In S. Petrilli (Ed.), Translation, Translation (pp. 329–346). Rodopi.
Hoffmeyer, J. (2006). Genes, Development and Semiosis. In E. M. Neumann-Held & C. Rehmann-Sutter (Eds.), Genes in Development: Re-reading the Molecular Paradigm (pp. 152–174). Duke University Press.
Hoffmeyer, J. (2008). Biosemiotics. University of Scranton Press.
Hollander, M., Wolfe, D. A., & Chicken, E. (2013). Nonparametric Statistical Methods. John Wiley & Sons.
Homans, G. C. (1961). Social Behavior: Its Elementary Forms. Harcourt, Brace.
Horwitz, S. (2015). Hayek's Modern Family: Classical Liberalism and the Evolution of Social Institutions. Palgrave Macmillan.
Howarth, E. (1954). A Note on the Limitations of Externalism. Australian Journal of Psychology, 6, 76–84.
Houle, D., Pelabon, C., Wagner, D. P., & Hanson, T. F. (2011). Measurement and Meaning in Biology. The Quarterly Review of Biology, 86(1). https://doi.org/10.1086/658408
Hull, C. L. (1943). Principles of Behavior. Appleton-Century-Crofts.
Hume, D. (1888). Hume's Treatise of Human Nature (L. A. Selby-Bigge, Ed.). Oxford Clarendon Press.
Idso, C. D., Carter, R. M., & Singer, S. F. (2016). Why Scientists Disagree About Global Warming: The NIPCC Report on Scientific Consensus. The Heartland Institute.
Islami, A., & Longo, G. (2017). Marriages of Mathematics and Physics: A Challenge for Biology. Progress in Biophysics and Molecular Biology, 131. https://doi.org/10.1016/j.pbiomolbio.2017.09.006
Judson, H. F. (2004). The Great Betrayal: Fraud in Science. Harcourt. https://doi.org/10.1172/JC124343
Johnson, H. M. (1936). Pseudo-Mathematics in the Mental and Social Sciences. American Journal of Psychology, 48, 342–351.
Kantor, J. R. (1958). Interbehavioral Psychology. Principia Press.
Kantor, J. R. (1981). Interbehavioral Philosophy. Principia Press.
Kauffman, S. A. (2019). A World Beyond Physics. Oxford University Press.
Kauffman, S. A. (2000). Investigations. Oxford University Press.
Kevles, D. J. (1998). The Baltimore Case: A Trial of Politics, Science, and Character. W. W. Norton.
Kitcher, P. (1981). Explanatory Unification. Philosophy of Science, 48, 507–531.
Kitcher, P. (1989). Explanatory Unification and the Causal Structure of the World (pp. 410–504). University of Minnesota Press.
Koestler, A. (1964). The Act of Creation. Macmillan.
Körner, S. (1966). Experience and Theory. Humanities Press.
Koutstaal, W. (2012). The Agile Mind. Oxford University Press.
Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of Measurement: Vol. 1. Additive and Polynomial Representations. Academic Press.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions (rev. ed.). University of Chicago Press.
Kuhn, T. S. (1977). The Essential Tension: Selected Studies in Scientific Tradition and Change. University of Chicago Press.
Lakatos, I. (1976). Proofs and Refutations: The Logic of Mathematical Discovery. Cambridge University Press.
Lashley, K. S. (1951). The Problem of Serial Order in Behavior. In L. A. Jeffress (Ed.), Cerebral Mechanisms in Behavior (pp. 112–135). John Wiley & Sons.
Levendis, J., Eckhardt, R. B., & Block, W. Evolutionary Psychology, Economic Freedom, Trade and Benevolence. Review of Economic Perspectives/Narodohospodarsky, 19(2), 73–94.
Lomborg, B. (1998/2001). The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge University Press. https://doi.org/10.1017/CBO9781139626378
Lomborg, B. (2020). False Alarm: How Climate Change Panic Costs Us Trillions, Hurts the Poor, and Fails to Fix the Planet. Basic Books.
Longo, G., & Montévil, M. (2017). From Logic to Biology via Physics: A Survey. Logical Methods in Computer Science, 13(4:21), 1–15.
Longo, G., Montévil, M., & Kauffman, S. (2012). No Entailing Laws, but Enablement in the Evolution of the Biosphere. Genetic and Evolutionary Computation Conference (pp. 1379–1392). ACM. https://doi.org/10.1145/2330784.2330946
Louie, A. (2012). Robert Rosen's Anticipatory Systems. Foresight, 12, 18–29.
Mach, E. (1959/2002). The Analysis of Sensations, and the Relation of the Physical to the Psychical. Forgotten Books Reprint Series.
Maxwell, G. (1968). Scientific Realism and the Causal Theory of Perception. In I. Lakatos & A. Musgrave (Eds.), Problems in the Philosophy of Science. North-Holland Publishing Company.
Maxwell, J. C. (1890/2011). The Scientific Papers of James Clerk Maxwell. Reprinted by Cambridge University Press, 2011. https://doi.org/10.1017/CBO9780511698095
Mayr, E. (1949). Discussion: Footnotes on the Philosophy of Biology. Philosophy of Science, 36, 197–202.
Mayr, E. (2001). What Evolution Is. Basic Books.
Meehl, P. E. (1954). Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. University of Minnesota Press. https://doi.org/10.1037/11281-000
Meehl, P. E. (1956). Wanted—A Good Cookbook. American Psychologist, 11(6), 263–272. https://doi.org/10.1037/h0044164
Meehl, P. E., & Sellars, W. (1956). The Concept of Emergence. In H. Feigl & M. Scriven (Eds.), Minnesota Studies in the Philosophy of Science (Vol. 1, pp. 239–252).
Meehl, P. E. (1967). Theory Testing in Psychology and Physics: A Methodological Paradox. Philosophy of Science, 34(2), 103–115.
Menger, C. (1871/1950). Principles of Economics (J. Dingwall & B. F. Hoselitz, Trans.). The Free Press. Now Mises Institute.
Mill, J. S. (1843). A System of Logic: Ratiocinative and Inductive. Text in the public domain; the University of Toronto Press publication of 1974 is the current reference text.
Miller, N. E. (1957). Experiments on Motivation: Studies Combining Psychological, Physiological, and Pharmacological Techniques. Science, 126, 1271–1278.
Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. Henry Holt and Company.
Miller, W. A., & Wheeler, J. A. (1984). Delayed-Choice Experiments and Bohr's Elementary Quantum Phenomenon. In S. Kamefuchi et al. (Eds.), Proceedings of the International Symposium on Foundations of Quantum Mechanics in the Light of New Technology, Tokyo, 1983 (pp. 140–151).
Mises, L. (1922). Socialism: An Economic and Sociological Analysis. English translation, Yale University Press, 1951; Liberty Press/Liberty Classics, 1981.
Mises, L. (1966). Human Action (3rd ed.). Contemporary Books. Now Liberty Fund.
Mises, L. (1978). The Ultimate Foundations of Economic Science—An Essay on Method (2nd ed.). Sheed Andrews and McMeel.
Mises, L. (1981). Epistemological Problems of Economics. New York University Press.
Mises, L. (1990). Money, Method, and the Market Process (essays selected by Margit von Mises; R. M. Ebeling, Ed.). Kluwer Academic Publishers.
Moore, P. (2010). Trees are the Answer. Beatty Street Publishing Inc.
Moore, P. (2021). Fake Invisible Catastrophes and Threats of Doom. Independently published.
Montévil, M. (2019). Measurement in Biology is Methodized by Theory. Biology and Philosophy, 34(3), Article 35, 1–14. https://doi.org/10.1007/s10539-019-9687-x
Montévil, M., & Mossio, M. (2020). The Identity of Organisms in Scientific Practice: Integrating Historical and Relational Conceptions. Frontiers in Physiology, 11, 611. https://doi.org/10.3389/fphys.2020.00611
Morgan, J. E., & Ricker, J. H. (2018). Textbook of Clinical Neuropsychology. Taylor and Francis.
Moruzzi, G., & Magoun, H. W. (1949). Brain Stem Reticular Formation and Activation of the EEG. Electroencephalography and Clinical Neurophysiology, 1(1–4), 455–473. https://doi.org/10.1016/0013-4694(49)90219-9
Mossio, M., & Moreno, A. (2010). Organizational Closure in Biological Organisms. History and Philosophy of the Life Sciences, 32, 269–288.
Mossio, M., & Bich, L. (2017). What Makes Biological Organization Teleological? Synthese, 194, 1089–1114. https://doi.org/10.1007/s11229-014-0594-z
Ohm, G. S. (1826). Bestimmung des Gesetzes, nach welchem Metalle die Contaktelektricität leiten, nebst einem Entwurfe zu einer Theorie des voltaischen Apparats und des Schweiggerschen Multiplicators [Determination of the Law in Accordance with Which Metals Conduct Contact Electricity, Together with an Outline of a Theory of the Voltaic Apparatus and of Schweigger's Multiplier]. Journal für Chemie und Physik, 46, 137–166.
Pattee, H. H. (1981). Symbol-Structure Complementarity in Biological Evolution. In E. Jantsch (Ed.), The Evolutionary Vision (pp. 117–128). Westview Press.
Pattee, H. H. (2001). The Physics of Symbols: Bridging the Epistemic Cut. BioSystems, 60, 5–21.
Pattee, H. H. (2006). The Physics of Autonomous Biological Information. Biological Theory, 1(3), 224–226.
Pattee, H. H. (2007). Laws, Constraints and the Modeling Relation—History and Interpretation. Chemistry and Biodiversity, 4, 2272–2295.
Pattee, H. H. (2012). Laws, Language and Life. Springer.
Pattee, H. H. (2013). Epistemic, Evolutionary, and Physical Conditions for Biological Information. Biosemiotics, 6(1), 9–31.
Peirce, C. S. (1898/1992). Reasoning and the Logic of Things: The Cambridge Conference Lectures of 1898 (K. L. Ketner, Ed.). Harvard University Press.
Phillips, J. N. (1972). Degrees of Interpretation. Philosophy of Science, 39(3), 315–321. https://doi.org/10.1086/288453
Polanyi, M. (1951). The Logic of Liberty. The University of Chicago Press.
Polanyi, M. (1958). Personal Knowledge. Harper and Row.
Polanyi, M. (1966). The Tacit Dimension. Doubleday (Penguin Random House).
Polanyi, M. (1969). Knowing and Being. University of Chicago Press.
Popper, K. R. (1945). The Open Society and Its Enemies (2 vols.). Revised edition, Harper and Row, 1962.
Popper, K. R. (1959). The Logic of Scientific Discovery. Harper & Row.
Popper, K. R. (1963/2014). Conjectures and Refutations. Harper & Row. Now Routledge Classics.
Popper, K. R. (1972). Objective Knowledge: An Evolutionary Approach. Oxford University Press.
Popper, K. R. (1974). Intellectual Autobiography, and Replies to My Critics. In P. A. Schilpp (Ed.), The Philosophy of Karl Popper (pp. 3–181, 961–1198). Open Court.
Popper, K. R. (1977). Part 1. In K. R. Popper & J. C. Eccles, The Self and Its Brain. Springer International.
Popper, K. R. (1982). Unended Quest: An Intellectual Autobiography. Open Court Press.
Porges, S. W. (2011). The Polyvagal Theory: Neurophysiological Foundations of Emotions, Attachment, Communication, and Self-Regulation. W. W. Norton.
Post, E. L. (1943). Formal Reductions of the General Combinatorial Decision Problem. American Journal of Mathematics, 65, 197–215.
Post, E. L. (1965). Absolutely Unsolvable Problems and Relatively Undecidable Propositions—Account of an Anticipation. In M. Davis (Ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions (pp. 340–344). Raven Press.
Premack, D. (1959). Toward Empirical Behavior Laws: 1. Positive Reinforcement. Psychological Review, 66(4), 219–233.
Pribram, K. H. (1971). Languages of the Brain. Prentice-Hall.
Quine, W. V. O. (1951). Two Dogmas of Empiricism. Philosophical Review, 60(1), 20–43.
Quine, W. V. O. (1960). Word and Object. MIT Press.
Radnitzky, G., & Bartley, W. W. (Eds.). (1988). Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Open Court.
Ramsey, F. P. (1931). The Foundations of Mathematics and Other Logical Essays. Macmillan.
Rand, A. (1982/1984). The Stimulus and the Response: A Critique of B. F. Skinner. In Philosophy: Who Needs It. Bobbs-Merrill. Now Signet (Penguin Classics). ISBN 9780451138934.
Rasch, G. (1980). Probabilistic Models for Some Intelligence and Attainment Tests. Nielsen & Lydiche.
Reichenbach, H. (1938). Experience and Prediction: An Analysis of the Foundations and the Structure of Knowledge. University of Chicago Press.
Rescher, N. (1970). Scientific Explanation. Free Press.
Risjord, M. (2014). Philosophy of Social Science: A Contemporary Introduction. Routledge.
Rorty, R. (1979). Philosophy and the Mirror of Nature. Princeton University Press.
Rosen, R. (1985/2012). Anticipatory Systems: Philosophical, Mathematical, and Methodological Foundations. Springer.
Rosen, R. (1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press.
Rosenberg, A. (2012). Philosophy of Science: A Contemporary Introduction. Routledge.
Rosenberg, A. (2016). Philosophy of Social Science. Westview Press.
Rosenblueth, A., & Wiener, N. (1945). The Role of Models in Science. Philosophy of Science, 12(4), 316–321.
Russell, B. (1912). The Problems of Philosophy. H. Holt and Company.
Russell, B. (1927). The Analysis of Matter. Kegan Paul.
Russell, B. (1934/2009). Freedom and Organization, 1814–1914. George Allen & Unwin. Now Routledge Classics.
Russell, B. (1944). Reply to Criticisms. In P. A. Schilpp (Ed.), The Philosophy of Bertrand Russell (pp. 681–741). Northwestern University Press.
Russell, B. (1948). Human Knowledge: Its Scope and Limits. Simon and Schuster.
Schlick, M. (1925/1974). General Theory of Knowledge. Open Court. Reprint 1974.
Schumpeter, J. (1954). History of Economic Analysis (E. B. Schumpeter, Ed.). Routledge and Kegan Paul.
Shrier, A. (2020). Irreversible Damage. Regnery Publishing.
Schrödinger, E. (1956). Science and the Human Temperament. W. W. Norton and Co.
Scott, R. (1967). On Viewing Rhetoric as Epistemic. Central States Speech Journal, 18(1), 9–17. https://doi.org/10.1080/10510976709362856
Sellars, W. (1963). Science, Perception and Reality. Routledge & Kegan Paul.
Selten, R. (1988). Models of Strategic Rationality. Springer.
Selten, R. (1999). Game Theory and Economic Behaviour (2 vols.). Edward Elgar Publishing.
Shannon, C. E. (1948). A Mathematical Theory of Communication. The Bell System Technical Journal, 27, 379–423, 623–656.
Siegel, S. (1956). Nonparametric Statistics. McGraw-Hill.
Simon, H. (1969/2019). The Sciences of the Artificial (3rd ed.). MIT Press, 2019.
Simons, H. W. (1980). Are Scientists Rhetors in Disguise? An Analysis of Discursive Processes Within Scientific Communities. In E. F. White (Ed.), Rhetoric in Transition: Studies in the Nature and Uses of Rhetoric. University Park: The Pennsylvania State University Press.
Singer, S. F., Legates, D. R., & Lupo, A. R. (2021). Hot Talk, Cold Science. Independent Institute.
Skinner, B. F. (1938/1976). The Behavior of Organisms. Appleton-Century-Crofts. Now B. F. Skinner Foundation.
Skinner, B. F. (1957). Verbal Behavior. Appleton-Century-Crofts. Now B. F. Skinner Foundation.
Smith, V. (1976). Experimental Economics: Induced Value Theory. American Economic Review, 66(2), 274–279.
Smith, V. (1982). Microeconomic Systems as an Experimental Science. American Economic Review, 72(5), 923–955.
Smith, V. (1992). Constructivist and Ecological Rationality in Economics. www.nobelprize.org/prizes/economic-sciences/2002/smith/lecture/
Spencer Brown, G. (1969). Laws of Form. George Allen and Unwin Ltd.
Stevens, S. S. (1946). On the Theory of Scales of Measurement. Science, 103(2684), 677–680.
Susskind, L. (1995). The World as a Hologram. Journal of Mathematical Physics, 36(11), 6377–6396. https://doi.org/10.1063/1.531249
't Hooft, G. (1991). The Black Hole Horizon as a Quantum Surface. Physica Scripta, T36, 247–252.
Tavabi, A. H., Boothroyd, C. B., Yucelen, E., Gazzadi, G. C., Dunin-Borkowski, R. E., & Pozzi, G. (2019). The Young-Feynman Controlled Double-Slit Electron Interference Experiment. Scientific Reports, 9, 10458. https://doi.org/10.1038/s41598-019-43323-2
Thomas, L. (1974). The Lives of a Cell. Penguin Random House.
Thurstone, L. L. (1927). A Law of Comparative Judgment. Psychological Review, 34(4), 273–286.
Tolman, E. C. (1932). Purposive Behavior in Animals and Men. Century/Random House UK.
Tolman, E. C. (1959). Principles of Purposive Behavior. In S. Koch (Ed.), Psychology: A Study of a Science (Vol. 2, pp. 92–157). McGraw-Hill.
Trendler, G. (2009). Measurement Theory, Psychology and the Revolution That Cannot Happen. Theory and Psychology, 19(5), 579–599.
Trendler, G. (2013). Measurement in Psychology: A Case of Ignoramus et Ignorabimus? A Rejoinder. Theory and Psychology, 1–25. https://doi.org/10.1177/0959354313490451
Vanberg, V. (2004). Austrian Economics, Evolutionary Psychology and Methodological Dualism: Subjectivism Reconsidered. Freiburger Diskussionspapiere zur Ordnungsökonomik, No. 04/3, Albert-Ludwigs-Universität Freiburg, Institut für Allgemeine Wirtschaftsforschung, Abteilung für Wirtschaftspolitik, Freiburg i.Br.
Vigier, J.-P. (1989). Two Problems (discussion comments on Wheeler, J. A., Information, Physics, Quantum; reference 1989 below), 323.
von Neumann, J. (1951). The General and Logical Theory of Automata. In L. A. Jeffress (Ed.), Cerebral Mechanisms in Behavior: The Hixon Symposium (pp. 1–31). J. Wiley & Sons.
von Neumann, J. (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press.
von Neumann, J. (1958). The Computer and the Brain. Yale University Press.
von Neumann, J. (1966). Theory of Self-Reproducing Automata (A. W. Burks, Ed.). University of Illinois Press.
Watkins, J. W. N. (1958). Confirmable and Influential Metaphysics. Mind, 67(267). https://doi.org/10.1093/mind/LXVII.267.344
Watson, N. V., & Breedlove, S. M. (2020). The Mind's Machine: Foundations of Brain and Behavior. Sinauer Associates (Oxford University Press).
Wasserman, L. (2006). All of Nonparametric Statistics. Springer Science.
Weaver, W. (1949). The Mathematics of Communication. Scientific American, 181(1), 11–15.
Weaver, W. (1959). A Scientist Ponders Faith. Saturday Review, XLII(1), 3.
Weimer, W. B. (1974). The History of Psychology and Its Retrieval from Historiography: Part 1. Science Studies, 4, 235–258.
Weimer, W. B. (1977a). A Conceptual Framework for Cognitive Psychology: Motor Theories of the Mind. In R. Shaw & J. D. Bransford (Eds.), Perceiving, Acting, and Knowing. Erlbaum Associates.
Weimer, W. B. (1977b). Science as a Rhetorical Transaction. Philosophy and Rhetoric, 10(Winter), 1–29.
Weimer, W. B. (1979). Notes on the Methodology of Scientific Research. Erlbaum Associates.
Weimer, W. B. (1984). Limitations of the Dispositional Analysis of Behavior. In J. R. Royce & L. P. Mos (Eds.), Annals of Theoretical Psychology (Vol. 1, pp. 161–198). Plenum Press.
Weimer, W. B. (2021). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century, Part 1. Cosmos + Taxis, 9(11+12), 1–29.
Weimer, W. B. (2022a). Problems of a Causal Theory of Functional Behavior: What the Hayek-Popper Controversy Illustrates for the 21st Century, Part 2. Cosmos + Taxis (in press).
Weimer, W. B. (2022b). Retrieving Liberalism from Rationalist Constructivism: History and its Betrayal (Vol. I). Palgrave Macmillan.
Weimer, W. B. (2022c). Retrieving Liberalism from Rationalist Constructivism: Basics of a Liberal Psychological, Social and Moral Order (Vol. II). Palgrave Macmillan.
Weyl, H. (1949). Philosophy of Mathematics and Natural Science. Princeton University Press.
Wheeler, J. A. (1978). The "Past" and the "Delayed-Choice" Experiment. In A. R. Marlow (Ed.), Mathematical Foundations of Quantum Theory (pp. 9–48). Academic Press. https://doi.org/10.1016/B978-0-12-473250-6.50006-6
Wheeler, J. A. (1989). Information, Physics, Quantum: The Search for Links. In Proceedings III International Symposium on Foundations of Quantum Mechanics (pp. 354–358). https://doi.org/10.1201/9780429500450-19
Wheeler, J. A. (1990). Information, Physics, Quantum: The Search for Links. In W. H. Zurek (Ed.), Complexity, Entropy, and the Physics of Information. Addison-Wesley.
Whewell, W. (1840/2014). The Philosophy of the Inductive Sciences. Online publication 2014, Cambridge University Press. https://doi.org/10.1017/CBO9781139644662
Wiener, N. (1961). Cybernetics. MIT Press.
Wigner, E. (1961). Remarks on the Mind-Body Problem. In I. J. Good (Ed.), The Scientist Speculates (pp. 284–302). Heinemann.
Wittgenstein, L. (1953). Philosophical Investigations. Basil Blackwell.
Wundt, W. (1874). Principles of Physiological Psychology. Engelmann.
Young, N., & Weimer, W. (2021). The Constraining Influence of the Revolutionary on the Growth of the Field. Axiomathes. https://doi.org/10.1007/s10516-021-09584-1
Name Index
A
Aaron, R.I. 140 Abel, D.L. 177, 178, 188, 200, 201, 237 Agassi, J. 329, 354 Alcorn, J. 305 Aristotle 43, 45, 53, 216, 221, 341 Aune, B. 83, 143 Austin, John 330 Ayer, A.J. 328, 330, 353
B
Baker, M. 58 Baltimore, David 307 Bartley III, William 9, 27, 63, 82, 138, 168, 332, 338, 340, 350, 354, 374–376 Begley, C.G. 58 Bekenstein, J.D. 125, 283
Bellarmino, Cardinal 322, 333 Bell, J.S. 49, 185 Bell, Stewart 244 Bentham, Jeremy 314 Berkeley, Bishop George 48, 50, 83, 140–142, 167, 168, 174 Berkson, Joseph 95, 96, 122 Bernal, J.D. 206 Bertalanffy, L. 73 Bich, L. 216–219 Block, Walter E. 309 Bloor, David 334 Blumenthal, A. 75 Blumer, Herbert 313 Bohm, David 9, 16, 29, 44, 49 Bohr, Niels 41, 48–50, 63, 67, 80, 158, 240, 332 Boltzmann, L. 16, 285 Boring, E.G. 75 Born, Max 126, 168, 170
Brandeis, Louis 295 Breedlove, S.M. 89 Brentano, F. 75, 162, 170 Bridgman, P.W. 85 Bringmann, L.F. 112 Broad, Charles Dunbar 72, 78, 79, 322, 324 Bronowski, Jacob 29, 30, 237, 276, 279, 369 Bühler, Karl 48, 339 Burke, Edmund 103, 113 Burks, A.W. 251 Butos, William N. 9, 306, 307
C
Campbell, Donald T. 9, 17, 27, 162, 183, 185, 197, 228, 272, 290, 345 Campbell, N.R. 114 Carnap, Rudolf 86, 323 Casimir, Hendrik 49 Cassirer, E. 48, 169–172, 174, 187 Chamberlain, T.C. 114 Chicken, E. 109 Chomsky, N. 75, 272–274, 288 Chrisman, N.R. 55 Coleman, J.S. 314 Collins, R. 313 Comte, Auguste 79, 117, 124, 173, 253, 296, 367 Cronbach, L.J. 98–100
D
Davidson, D. 263 Democritus 160, 161 de Ridder, J. 298
Descartes, René 5, 78, 141, 211, 296, 367 Dewan, E.M. 214 Dewey, J. 68, 297 Diderot, Denis 19 Dodds, E.R. 367 Duhem, Pierre 17, 67, 189, 326, 338, 373 Durkheim, Emile 313, 314 E
Eccles, John Carew 163, 189 Eckhardt, R.B. 309 Eddington, A.S. 289 Edge, David 334 Einstein, Albert 13, 47–49, 66, 103, 125, 127, 128, 355, 375 Ellis, L.M. 58 Elsasser, W. 209 Eronen, M.I. 112 Escher, Maurits 289 Euclid 16, 113, 120, 202 Euler, Leonhard 19 F
Feigl, Herbert 9, 80 Ferguson, Adam 17, 61, 232, 259, 300, 313, 315, 360 Feyerabend, P.K. 334, 355, 376 Feynman, Richard 96, 228 Fisher, R.A. 112, 113 Follesdal, Dagfinn 66 Friedman, M. & R. 243 Fuster, J.M. 89 G
Galanter, E. 234
Gallilei, Galileo 34, 45, 47, 78, 95, 124, 139–141, 143, 145, 146, 148, 160, 180, 211, 297, 323, 333 Gamow, George 48 Geigerenger, G. 122 Gell-Mann, M. 289 Ghiselli, Edwin E. 106 Gibson, J.J. 73, 87 Godfrey-Smith, P. 3 Goethe, Johann Wolfgang von 18 Gregg, Richard 344 Gribbin, J. 49
Homans, G.C. 314 Horwitz, S. 310–312, 315 Houle, D. 54, 55, 57, 88 Howarth, E. 74 Hull, Clark L. 85, 86 Hume, David 17, 142, 167, 259, 315, 327, 351, 353 Huygens, Christiaan 213 I
Idso, C.D. 305 Islami , A. 61 J
H
Haidt, Jonathan 302 Hamowy, R. 371 Hannon, M. 298 Hanson, Russell 16, 50, 237, 274, 275 Hawking, S. 125, 283 Hayek, Friedrich A. 9, 10, 24, 48, 62, 78, 84, 89, 120, 148, 160, 161, 185, 196, 199, 210, 216, 220, 227, 233, 235, 238, 239, 241, 249, 250, 252, 253, 260, 296, 297, 300, 308, 309, 311, 358–360, 364 Heisenberg, Werner 41, 44, 49, 50, 244, 332 Hempel, Carl G. 105, 261, 264 Heraclitus 211 Hertz, Heinrich 16, 67, 150 Hesse, M. 237 Hickey, Thomas 68, 332, 353 Hilbert, David 211 Hoffmeyer, J. 183, 211 Hollander, M. 109
James, William 68, 141–143, 145, 175 Johnson, John Anthony 9, 78, 114 Judson, H.F. 307 K
Kant, Immanuel 140–143, 145, 146, 149, 153, 169 Kantor, J.R. 86 Kauffman, S.A. 61, 131, 216, 230–232 Kepler, Johannes 274 Kevles, D.J. 307 Keynes, J.M. 128, 371 Kitcher, P. 265 Koestler, A. 73 Körner, S. 143, 338 Koutstall, W. 89 Kranz, D.H. 76 Kuhn, Thomas S. 9, 16, 30, 95, 113, 187, 237, 258, 261–263, 265, 266, 274, 334, 345, 362, 363, 365, 374, 376
Külpe, Oswald 48
L
Lakatos, Imre 93, 376 Laplace, Pierre-Simon 114, 277 Lashley, K.S. 277 Legates, D.R. 305 Leibniz, Gottfried Wilhelm 86, 158 Lemaitre, George 189 Levendis, J. 309 Lincoln, Don 206 Locke, John 140, 368 Lomborg, B. 306 Longo, G. 61, 88, 180, 210, 230, 232 Lord Rutherford 206 Lorenz, Konrad 48 Louie, A. 235, 236 Luce, R.D. 76 Lupo, A.R. 305
Meehl, Paul 9, 100–103, 111, 112, 184, 207, 250 Menger, C. 260 Miller, D.S. 41, 100, 212 Miller, George A. 234 Mill, John Stuart 74–76, 99, 314 Mises, Ludwig 118–120, 122, 127, 128, 130, 173, 272, 370 Montevil, M. 230 Moore, P. 305 Moreno, A. 217 Morgan, J.E. 89 Moruzzi, G. 77 Mossio, M. 60, 216–219 N
Newton, Isaac 45, 78, 99, 117, 189, 196, 211, 244, 355, 375 Novalis 46 O
M
MacCorquodale, Kenneth 101–103 Mach, Ernst 48–50, 63, 67, 80–82, 84, 85, 142, 143, 145, 240, 323, 332 Magoun, H.W. 77 Malcolm, Norman 174 Marsh, Leslie 9 Marx, Karl 296, 313, 367 Maxwell, Grover 9 Maxwell, James Clerk 42–44, 72, 148, 152, 154, 277 Mayr, Ernst 59, 63, 197 McIntyre, Alasdair 321 McQuade, T. 306 Medawar, Sir Peter B. 63
Occam, William 281 Ohm, G.S. 76 P
Pattee, Howard 9, 29, 127, 168, 171, 182, 188, 197, 201, 205, 211, 218, 221, 289 Patterson, Donald G. 102 Peirce, C.S. 258, 373 Penfield, Wilder 189 Petty, William 117 Phillips, J.N. 338 Pinker, Steven 47 Planck, Max 48, 287 Plato 176, 186, 188, 202, 298, 359 Poincare, Henri 67, 189
Name Index
Polanyi, Michael 61, 88, 128, 178–180, 186, 187, 197, 202, 206, 226, 228, 238, 241, 258, 300, 345 Popper, Karl 9, 17, 32, 48, 63, 137, 165, 168, 186, 187, 189, 200, 253, 255, 260, 261, 265, 266, 276, 324, 339, 353, 356, 362, 363, 374–376 Porges, S.W. 89 Post, Emile 275 Premack, D. 209, 210 Pribram, Karl 234, 235, 279, 284 Prigogine, Ilya 266 Prout, William 354
Q
Quine, W.V.O. 171, 263, 338, 353
R
Radnitzky, Gerard 9, 332 Ramsey, F.P. 166 Rand, Ayn 79 Rasch, G. 57 Reichenbach, Hans 327 Rescher, N. 341 Ricker, J.H. 89 Risjord, M. 3 Rorty, Richard 68, 328, 330–332, 352, 353 Rosenberg, A. 9, 10, 374 Rosenblueth, Arturo 251 Rosen, R. 217, 218, 221, 235, 237, 241 Rousseau, Jean-Jacques 124
399
Russell, Bertrand 50, 67, 144–148, 150, 151, 154, 155, 186, 263, 288, 296, 322, 323, 328, 367 Ryle, Gilbert 330
S
Scheffler, Israel 261 Schlick, M. 154 Schrödinger, Erwin 41, 49, 62, 87, 161, 205 Scott, R. 345, 365 Sellars, Wilfrid 9, 143, 158, 184, 207, 326 Selten, R. 122 Shakespere, William 270 Shannon, C.E. 64, 66, 88, 177, 182, 183, 199, 200, 210, 211, 221, 284 Shaw, Robert 9, 159 Shelley, Percy B. 63 Sherrington, Charles S. 189 Shrirer, A. 304 Siegel, Sidney 109 Simon, H.W. 86, 253, 345 Singer, S.F. 305 Skinner, B.F. 59, 60, 72–74, 85–88, 101, 103, 121, 122, 240, 253, 297 Smith, Adam 314, 315 Smith, V. 122 Spencer Brown, G. 345 Spinoza, Baruch 158 Stevens, S.S. 55, 72, 76, 105, 106, 114 Suppes, P. 76
T
Tavabi, A.H. 96 Tegmark, Max 67 Thomas, Lewis 231, 232 ’tHooft, G. 125, 283 Thurstone, L.L. 57 Tolkein, J.R.R 169 Tolman, E.C. 49, 86, 160 Trendler, Gunter 76, 78, 107, 114 Trevors, J.T. 237 Turing, Alan 210 Tycho (Brahe) 274 V
Vanberg, Viktor J. 128, 185, 220 Vigier, J.-P. 66 von Neumann, John 62, 159, 201, 221, 234, 241, 250, 251, 266, 276
Watkins, J.W.N. 376 Watson, J.B. 253 Watson, N.V. 85, 86, 89 Weimer, Walter B. 10, 17, 26, 27, 49, 75, 89, 98, 113, 130, 185, 220, 254, 262, 290, 296, 297, 309, 345, 350, 363, 365, 376 Weyl, Hermann 158, 159, 187, 225 Wheeler, John Archibald 41, 63–68, 125, 182, 240, 285 Whewell, W. 114 Wible, James 9 Wiener, Norbert 213, 251, 253 Wigner, E. 41 Williams, Robin 137 Wittgenstein, L. 174, 330, 331, 353 Wolfe, D.A. 109 Wundt, W. 48, 75–77, 158
Y W
Wasserman, L. 109
Young, Neil P. 9 Young, T. 49, 95, 96, 113
Subject Index
A
abstract order, modern society 309 abstract society, as impersonal cooperation 311 acquaintance vs. description (Russell) 144 action, always ambiguous 205 action, always context sensitive 207 action, as both physical and formal 203 action, as deep structurally ambiguous 130 action, as functional, not physical 118 adjunctive conditional, model for science 343 adjunctive proposition 342 agency 29, 215 agency, harnesses inexorability 177 agents, as functional 6
agnosticism 189 algorist 159 ambiguity, as fundamental problem 28 ambiguity, increases with more dimensions 287 ambiguous figures 262 anthropic principle (Wheeler) 240 apodictic, rate-independent only 93 ARAS, as functional localization 77 argumentative claiming 14 autonomy, of knowledge 186
B
behavior, argumentative form 339 behaviorism, as phenomenalism 84 biochemical information (Hoffmeyer on) 183
biological models, as poorer than the phenomena 61 biological organisms, as dynamically changing constraints 60 biological systems defined (Mossio and Bich) 216 Bloomsbury Group 297 boundary conditions 42 boundary conditions, as physical local accidents 180 boundary conditions, defined 179
C
capitalism, freed us from materialistic marriage 312 Cartesian common sense 296 Cartesian explicit rationality 132 causality, as “push” not “pull” 241 causality, as anticipatory models (Rosen) 221 causality, as arrow in time 163 causality, as theoretical 42 causal theory of perception 147 CFF (critical fusion frequency) 282 choice, as functional and conceptual 177 choice, as harnessing physicality 5 choice, as internal or semiotic constraint 215 choice, as requiring alternative possibilities 215 choice, cannot be physical 196 choice contingency 159 choice determination 199 choice is deterministic 178 circle of knowledge and ignorance (Weaver) 31 classification, as judgmental 28
classification vs. measurement 106 climate change 305 CNS, as anticipatory modeling system 238 collapsing the wave function 41 commensurability of subjects (Montevil) 59 communication, of differences in qualities 161 competition, as producing cooperation 231 complexity 249 complex vs. simple phenomena 24 complimentarity 3 comprehensively critical rationalism, definition 354 comprehensive rationality, definition 350 computer based theories of life, as symbol manipulation systems 278 consciousness, as aid to memory 165 consciousness, as evolutionary fail safe 165 consciousness, as rate-independent 163 constraint, defined 176 constraints 218 context of constraints 358 continuity, existing only in rate-independence 286 conventionalism 327 cooccurrence relations 290 co-occurrence relationships 98 Copenhagen quantum interpretation 49 correlation techniques 103 cosmos (organization) 360 critical rationalism, definition 352
criticism, as essential negatively 339 criticism, as means, not end 364
D
Darwinian “New Synthesis” 373 deep structure ambiguity 263 deep vs. surface structure ambiguity 129 degrees of freedom (non-holonomic) 289 degrees of interpretation (Phillips) 338 demonstration studies 97 derivational history 8 Diderot vs. Euler 19 dimension, definition 281 directional hypothesis testing 112 dissipative structures (Prigogine) 266 diversity, in politics 302 division of knowledge 298 division of labor 298 downward causation 7, 164, 185, 228, 272, 290 downward causation (Campbell) 162 dualisms, epistemic versus ontological 5 duality of descriptions 4, 158, 196, 204, 270 dual system of causal controlled 221
E
econiche, as organismic environment 26 economic experiments, as demonstrations 121 economics, as disparity of values (Mises) 119
efficient cause (Aristotle) 216 electrical power grid 213 emergence 7 emergence (Meehl and Sellars) 207 emergence, requires historical explanation 229 empirical basis, defined negatively 374 entailing laws, cannot explain biology 131 entailing laws vs. enabling relations 61 epistemic cut (Pattee) 197 epistemic subjectivism (Berkeley) 140 epistemology, as life science 6 epistemology, nature of 2 epistemology, without a subjective 187 erkennen vs. kennen (Schlick) 154 error detection 200 error, no physical concept of 198 error, role in modeling 237 essential complexity 3 essential tension, of science (Kuhn) 345 evolutionary “translation” (Hoffmeyer) 211 evolutionary epistemology, falsification as essential 373 evolved institutions, as knowledge structures 308 existence determined by measurement 50 experience and theory (Korner) 169 experimental design, as constraint on boundary conditions 45 experimentation, as active intervention 43
experiments, as artifacts 74 explanation, as deduction 338 explanation, as increasing ambiguity 29 explanation, as statement of equivalence 264 explanations, as rhetorical and argumentative 14 explanations of the principle (Hayek) 127 explicit analysis, as Cartesian ideal 366
F
face-to-face, as tribalism 311 face-to-face society 296 fallacy of four terms 265, 338 fallibilism, in science and philosophy 373 family, as epistemic ecology 311 family vs. village 310 feedforward control 236 final cause (Aristotle) 216 frozen accidents 215, 287 functional, as based on error and probability 242 functionalism, in sociology (Durkheim) 313 functionality, as ambiguous 185 functionality, as intentional 162 functionality, no laws of 242 functionality, not determined by causal laws 131 functional phenomena 3
G
Galilean Revolution 78, 117
Galilean revolutionary 301 generalizability 110 genetic language 201 genetic program 217 genotype vs. phenotype 231 gentrification 303 government, as suppressing knowledge 8 Grain Objection (Sellars) 158
H
habituation 40 harnessing laws (Polanyi) 197 H-D explanation 17 “helicopter” parenting 311 hermeneutic philosophy of science 330 hierarchical functions of language (Bühler) 338 high complexity, definition 251 historiography, cumulative record approach 325 holography 279 homeostasis 217 homunculus 164 human sciences, as empirical but not experimental 7
I
idealism, contradicted by science 138 “identity” politics 310 ignorance, as indispensable 299 illusion of duration 287 impersonal morality 309 indeterminism 114, 212 individual differences 98
induction 255 induction, as glory of science and scandal of philosophy 322 inductive logic 324 information, meaningful only in conceptions 183 initial conditions, choices of subjects 180 initial conditions depend on observer 197 “instant” rational assessment 357 instrumentalism 80 instrumentalism (Bellarmino) 322 intentional “aboutness” (Brentano) 204 interocular traumatic test (Berkson) 96 intersubjectivity 169 intervening variables 82 intrinsic properties 151, 186 it from bit hypothesis 182 it from bit (Wheeler) 63
J
justificationism 298 justificationist philosophy 167 justified true belief 167, 321, 366
K
knower, never identical with the known 14 knowledge, as arising from symbolic inducement 345 knowledge, as capacity for classification and modeling 369
knowledge as classical, never quantal 44 knowledge, as diachronic 198 knowledge, as fallible and non-justifiable (Popper) 168 knowledge, as justified true belief 4, 26 knowledge, as structurally 138 knowledge, determined by proper scaling 124 knowledge, growth in open contexts 30
L
Lamarckian inheritance 217 language as socialists 169 law of large numbers 24 “law” of reinforcement, Premack on 209 laws, as compressed algorithms 212 laws, as invariant and inexorable 242 laws as invariants vs. life as enablements 226 laws, as inviolate and unbreakable 25 laws in psychology, impossible (J.S. Mill) 74 laws, not dependent on observer 196 learning, as trial and error elimination 17 liberalism, as theory of society 298 liberty, as freedom from unnecessary restraint 315 life, as prescribed instruction 200 life, as self produced 200 life, constrained by higher order rules 34 life, dependant on records 226 linear strings, in behavior 276
linguistic productivity or creativity 129 linguistics, transformational revolution 273 living causality, as rules, not laws 209 localization of function 89 logic of discovery 337
M
man as simple (Simon) 86 market order, as “simple” complexity 267 markets, have no social or political goals 299 markets, lacking functionality 238 Marx, conflict theory 313 material implication 341 material model, definition (Rosenblueth) 251 mathematics as only existents (Tegmark) 67 mathematics, as rate-independent only 127 mathematics, as structure and symbolizing it 149 mathematics, as symbol manipulation systems 21 mathematics (defined) 20 mathematics, pure vs. applied 20 meaning and ambiguity 27 meaning, as intrinsically rhetorical 346 meaning, as not physical or temporal 209 meaning, as only when things could have been otherwise 181
meaning, as predication, not relation 21
meaning, determined by context 289
meaning indeterminacy (Quine) 172
measurement, as classification and judgment 23
measurement, as constraining statistics 57
measurement, as many-to-one mapping 205
measurement, as problem of meaning and record keeping 243
measurement, as scale dependent 72
measurement scales 105
membrane, as locus of life (Hoffmeyer) 283
memory, interpreted by symbols 81
mensuration 4, 10
mensuration, as first functional activity 40
mensuration, defined 71
mensuration, theory of 53
metaphysical research programs 376
methodological directives, as negative rules 363
methodological individualism 173
methodological rules, as negative prohibitions 258
migration or immigration? 303
modeling, as fundamental CNS activity 233
modeling, properties of 236
modus ponens 255
modus tollens 255
Moral Foundations Theory (Haidt) 302
Mister Rogers’ Neighborhood 311
multiple working hypotheses (Chamberlin) 114
mutual entrainment 213
N
naive realism, defined 140
naive vs. representational realism 147
negative rules, as allowing creativity 363
negative rules of order 32, 254
neo-justificationism 326
nervous system, as delimiting ambiguity 199
nervous system, as polycentric or coalitional 277
neural hologram (Pribram) 284
New Deal 297
NOIR scaling 55
nomological necessity 340
nonparametric statistics 107
noumenal (Kant) 141
O
objects, follow laws 120
Occam’s Razor 281
ontological agnostic (Hayek) 220
ontological idealism 163
ontological phenomenalism 80
operationalist’s fallacy 65
ordinal and cardinal (scaling) 119
ordinary language analysis 330
organism definition, as requiring historical dimensions 48
organisms, as theories of their environment 232
orienting response 39, 181
P
parametric tests 107
perceptual senses, as vicars 43
perihelion of Mercury, precession 357
phenomenalism 48, 323
phenomenalism (James, Mach) 142
phenomenal qualia 149
physical, before and after life 208
physical constraint vs. functional choice control 202
physicality and functionality, as realms of existence 157
physical records, as non-holonomic 243
physical vs. life or soft science 3
physics, just a beginning 206
planning, “new” confusion 371
planning, as creating conditions for progress 371
planning, for progress 372
planning, the “old” confusion about 370
political epistemology 298
polycentric and coalitional control 238
polycentric or coalitional vs. central control 300
Post Languages 275
postulation (Russell on) 67
Praxeology (Mises) 128
precision vs. prediction 250
presentationalism, defined 80
primary vs. secondary qualities 139
Principle of Acquaintance (Russell) 145
private languages 171
probabilistic inductive logic 327
probability, limitations 126
probability, scaling requirements of 57
progress, cannot be planned 300
progressivist cultural Marxism 8
pure intervening variable 103
Q
qualitative vs. quantitative 106
R
rate and time 166
rate-dependent (dynamical) vs. rate-independent (functional) 4
rate independence vs. rate dependence 94
rate-independent realm 163
ratio 93
rational expectations, not a guide to behavior 314
rationalism 323
rationalist constructivism, definition 364
rationality, as action in accordance with reason 359
rationality, as emergent property 368
rationality, need not be conscious 357
rationality, refers to conduct of agents 355
realism, as metaphysical thesis 137
realism, defined 5, 137
realism, has explanatory power 333
reason, as evolutionary result 367
reason, refinement of passion 22
record keeping 205
reference vs. meaning 151
replication “crisis” 97
result of human action, not design 300
results of action, but not design (Ferguson) 215
results of action, not design (Ferguson) 232
retreat to commitment 352
rhetoric, as epistemic process 344
robustness 110
role of experience, as breaking expectations 374
rule of impartial law 315
S
sausage machine science 101
saving the appearances (Bellarmino) 333
science as essential tension (Kuhn) 363
science, as knowledge by description 147
science, as self-correcting 307
science, levels of analysis for 261
science, normal vs. revolutionary 261
scientific entities, as abstract and unobservable 153
scientific psychology, as laboratory study (Boring) 75
scientific revolutions, as always scale dependent 47
Scottish moralists 9
self-determination, as efficient causation 218
self-reproducing automaton (von Neumann) 234
semantic closure 7
semiotic closure (Pattee) 188
semiotic fitness (Hoffmeyer) 211
semiotics, as syntax, semantics, and pragmatics 270
sense datum 145
senses as vicars 83, 174
sex vs. gender 304
Shannon information, as differences 284
“shifty split” (Bell) 244
skepticism, defined 326
Skinner box, as external physical constraint 87
social cosmos, as unanticipated consequences 239
socialism, defined 297
“social physics” 72, 296
social physics model 124
sociology of science, strong program (Bloor) 334
species, as biological conjectures 373
specious present 165
specious present moment 286
spontaneous order 3
spontaneous orders, regulatory principles 256
spontaneous organization 219
“stamp collecting” (Rutherford) 206
statistics, descriptive vs. inferential 94
Stoic logic, as theory of inference schemas 341
structural realism 5
subjective theory of value 173
subjectivity, as “raw” acquaintance 170
subjects, as free to choose 243
subjects, follow rules 120
subjects (social) vs. objects (physical) 22
surfaces, as boundaries 283
surface vs. deep ambiguity 274
symbolic interactionism (Blumer) 313
symbolization, as inducement (Burke) 344
symbols, as quiescent 201
T
tacit dimension, of scientific rationality 362
taxis (organization) 360
teleology, as compatible with downward causation 221
test of significance 98
theists and atheists 189
theories, as arguments 340
theories, as structural representations 149
theory of rationality 349
theory testing paradox (Meehl) 111
thermodynamic work 218
third world conception (Plato) 176
TOTE unit 235
transmissibility assumption (Bartley) 265, 340
transmissibility assumption, defined 346
truth “ex vi terminorum” 18
tu quoque argument 347, 351
twin miracles of progress 308
two disciplines of psychology (Cronbach) 99
U
“ultimate” conceptual scheme (Davidson) 263
unintended consequences 308
unitary origin of objectivity and phenomenal awareness 174
utilitarianism 314
V
voluntary behavior, controlled by rules 201
W
“weight” of propositions (Reichenbach) 326
world 1 & 2 (Popper) 186
world 3 (Popper) 187
Y
Young two-slit experiment 96