Logiciel

Six Seminars on Computational Reason

AA Cavia

ISBN 978-3-00-071591-4
© &&& c/o The New Centre for Research & Practice, 2022
All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission.
Published by The New Centre for Research & Practice, Chausseestrasse 59a, Berlin 10115, Germany
&&& is an independent purveyor of theoretically informed, publicly engaged publications, circumventing academic convention in order to open up a more accessible platform for public intellectual practice. As the publishing platform of The New Centre for Research & Practice, our aim is to shape new forms of knowledge production and circulation within and against both past and present modes of intellectual production, distribution, and consumption.
www.tripleampersand.org

“Science without time is but the ruin of intelligibility.” — Nicolas Gisin

An Incomplete Timeline

1870: Constructivism (Kronecker, Poincaré)
1900: Hilbert’s Program
1912: Intuitionism & Formalism (Brouwer)
1928: The ‘Entscheidungsproblem’ (Hilbert & Ackermann)
1930: The Intuitionist Foundations of Mathematics (Heyting)
1930: Heyting Algebras
1931: First Incompleteness Theorem (Gödel)
1932: Lambda Calculus (Church)
1933: Undefinability Theorem (Tarski)
1934: General Recursive Functions (Gödel)
1934: Natural Deduction (Gentzen)
1935: Inconsistency of Lambda Calculus Expressions (Church)
1936: The Church-Turing Thesis
1937: Undecidability of the Halting Problem (Turing)
1945: Realizability Interpretation of Intuitionistic Logic (Kleene)
1965: Categorical Foundations (Lawvere)
1969: Propositions as Types (Curry, Howard)
1975: Intuitionistic Theory of Types (Martin-Löf)
1980: Correspondence for Cartesian Closed Categories (Lambek)
1983: The Homotopy Hypothesis (Grothendieck)
1989: Coq Programming Language (Coquand)
1992: Geometry of Interaction (Girard, Abramsky)
2003: Ludics (Girard)
2006: Interactive Computation (Goldin, Wegner)
2011: Computational Trinitarianism (Harper)
2013: Univalent Foundations (Voevodsky)

Contents

Acknowledgments
Foreword
Introduction: A Minimal Program

Computation & The Real
1.1 The Excluded Middle
1.2 Three Canonical Models
1.3 The Continuum

The Two Dogmas of Computationalism
2.1 Machine Functionalism
2.2 Computational Realism

Conceptual Embedding
3.1 Notions of Formality
3.2 Realizability of Truth
3.3 Expressive Bootstrapping

The Topological Turn
4.1 Structuralism, Invariance, and Univalence
4.2 The Curse of Dimensionality
4.3 Recursion as Isomorphism
4.4 Types as Encodings
4.5 Embedding Judgement

Interaction Grammars
5.1 Geometry of Interaction
5.2 Heyting Machines
5.3 The Frame Problem

Encoding Reason
6.1 Neural Computation
6.2 Computational Inferentialism
6.3 Intelligence Unbound

Acknowledgments

I’d like to thank Mohammad Salemy and The New Centre for Research & Practice for kindly inviting me to organize the seminars that bore the material for this book, as well as the editors of &&&, for their attention to detail throughout the project. The participants of those seminars provided invaluable input to the development of the chapters contained herein. I am also indebted to early readers of the text, including Jesse Josua Benjamin, Inigo Wilkins, and Nicholas Houde. Conversations with M. Beatrice Fazi, Peter Wolfendale, and Alexander Wilson have served as beacons of exchange along the way. The book has provided occasion to collaborate with Anna Longo and Tauba Auerbach and I am grateful for their generous contributions. A special mention to Patricia Reed, who has served as an early reader, the principal designer of the book, and broader collaborator throughout. Lastly, I would like to thank H, with whom I shared a space for the duration of the writing phase, and whose encouragement was integral to the fruition of this text.

Foreword

by Anna Longo

“I then turned my microscope to the cognition engine… This was an engine undergoing continuous transformation, indeed modifying itself as part of its operations. The lattice was not so much a machine as it was a page on which the machine was written, and on which the machine itself ceaselessly wrote.”1

The short story Exhalation by Ted Chiang comprises a report on the nature of cognition. The researcher belongs to a peculiar race of mechanical thinking beings who inhabit a world, the inner nature of which will be revealed by arriving at a full understanding of the functioning of the brain. Using a complex self-observation device, the scientist introduces a microscope mounted on a miniature arm into the exposed back of his own head. While pushing aside the golden leaves packed in the cognitive engine, he manages to observe, from within,

1. Chiang, T., 2019. Exhalation. Pan Macmillan.


the activity that he is conducting in the act of studying his own brain. He sees memories encoded on tiny plates continuously moved by currents of air, and he comes to understand that the instructions to release air are provided by the syntax of the written reminiscences. “My consciousness could be said to be encoded in the position of these tiny leaves, but it would be more accurate to say that it was encoded in the ever-shifting pattern of air driving these leaves.”2 The orientation of the leaves, as the latter are continuously moved by air issuing from the extremities of the conduits, characterizes geometric transformations on the golden sheets. These transformations are added as new written memories in the everchanging organization of the structure: any act of thinking creates a new path for the air. The scientist realizes that the brain is a computing machine for redirecting air currents, and that thinking is its result: the passage of air becomes the condition for thinking, and thoughts record the passage of air, the instructions for the possible actualization of which are written in previous thoughts. In the course of the story, the narrator deduces that the condition for this whole process is a difference in air

2. Ibid.


pressure between the lower and the higher part of the world they inhabit. His very consciousness owes its existence to the tendency of air to distribute evenly in the enclosed space of that planet. However, as soon as he understands the nature of thought, he is struck by the realization that this same entropic process is causing the observed slowing down of cognition: the gradual equalization of air pressure will come to determine the death of his species. While any act of thinking contributes to accelerate the erosion of the conditions of existence of thought, nobody can stop thinking of strategies for preventing the certain end. For this reason, our scientist narrator decides to address his research report to the future visitors of his dead planet, to invite them to read the memories encoded in motionless brains, in order to resurrect them as their own thoughts. Encoded in the space voyagers’ own brain leaves, the experience of the frozen world would come to breathe a second life. It seems to me that in this last decision the narrator reveals himself as a true artist and as a repented scientist: what’s the use of knowledge if it cannot prevent the actualization of its truths? If the little conceptual order that it produces is but purchased by consuming the real and increasing the overall disorder? When what is known has no other value than producing further knowledge and any further knowledge has no other use than the consumption of more objects, then what is knowledge for? What’s the use of the search for knowledge if it cannot be told as the story of a world that will survive etched in an eternal crystalline structure? Why strive for the truth about reality if


it cannot come to animate the spirit of those who will visit us from another realm? In short, a scientist can explain how to use thinking to produce truths, but it takes an artist to decide which truths deserve to be told for the unique and eternal value of the world that expresses them. As a philosopher, I am less interested in judging the truth of what is said or written than in wondering about the true power of producing memories of realities that have never been experienced by those who will come from another time and space. Logiciel recounts a rational endeavor that deserves to be told in the hope that intelligences from other worlds and epochs will recognize the value of the unique developments that contributed to our computational journey.


INTRODUCTION

A Minimal Program


Logiciel, the French term for ‘software’, was coined in 1967 as part of the Plan Calcul, a French-led initiative headed by De Gaulle, whose aim was to bootstrap a European computing industry to rival Silicon Valley. The US boasted mainframe and semi-conductor dominance at the time, but IBM exports to France had been banned by their administration, in order to curtail the French nuclear armament program. The Plan Calcul had inadvertently taken its name from the work of pioneering German computer scientist Konrad Zuse, whose Plankalkül was an early computer language that would go on to inspire the well known meta-programming system Algol 68. There are two interpretations of Logiciel which motivate its use here. The first, a contraction of logique and matériel, where the latter denotes ‘hardware’, is an allusion to the conjoined nature of computation as an interface between logic and matter, its multi-faceted character a recurring theme in these notes. The second, logi-ciel, a logic of the sky, is a reference to the epistemic account of computation outlined within, in which I attempt to elaborate a naturalized metaphysics of


computation. Lastly, the term has come to stand in for me as a cipher for an alternative program, borne from this failed European project, capable of imagining a novel vocabulary for computation. This research project originated in a series of seminars held at The New Centre for Research & Practice in Autumn 2019, under the title “Computation & the Real”. The aim of the course was to outline a new conception of computation, reframing the received view through the lens of a constructivism elaborated via contemporary developments in logic, mathematics, and theoretical computer science. I began to extend these seminar notes in an attempt to provide an integrated account of computational reason adapted to this new framework, where I take the latter to refer to those explanations which can be distinguished as distinctly computational, as opposed to mathematical, logical, causal, or otherwise. As I’ve progressed, it has become clear that the three canonical models of computation developed in the early 1930s are in need of re-evaluation, facing increasing pressure from this constellation of modern ideas, a novel composition which in my mind constitutes a new computational worldview. Against the received view, my claim here is that a distinct account of computation emerges from each of the three canonical models in turn. Whilst Gödel’s general recursive functions delimit a mathematical domain, and Church’s lambda calculus a formal language, Turing’s model, by contrast, attempts a clean break from both mathematics and logic, a departure which would lay the foundations for a new science of automata. These models represent distinct traditions in the development of computational ideas—broadly speaking, mathematical


(Gödel), linguistic (Church), and mechanistic (Turing)—and diverse accounts of computation follow depending on which model one chooses to elaborate. In these notes I will develop an account of computational reason which attempts an integration of these traditions into a holistic view, a perspective to which I will attach the name of computation itself. The text proceeds as a program rather than a theory—in the manner of Chomsky’s minimalist program for linguistics—posing two questions in order to arrive at a generative model, namely: (1) What is computation? (2) Why does it have the properties that it does? Let us start from the beginning—what is computation? This question admits no simple answer, not least because

the computable domain lacks theoretical closure. To probe the question further, we must return to the canonical models and the styles of

mathematical reasoning prevalent at the time of their emergence. I will come to view them as symptomatic of a historical rupture—a traumatic event in human rationality, the cleaving of reason from mind, from which a novel image of thought emerges. The question of what computation is can be succinctly reformulated as: what are its reasons? The computational emerges in twentieth century thought as a mechanical mode of calculation—but it has ultimately, and perhaps unwittingly, cast its shadow over a fundamental rift in modern mathematics. The origin of this schism lies in the


intuitionist philosophy of LEJ Brouwer, a doctrine which places undecidability at the heart of mathematics, a heresy enacting a radical break from Hilbertian formalism. Developing an intuitionistic view of computing leads to new positions on computation and contingency, realizability and truth, interaction and language. While the cognitive elements of Brouwer’s theory will be shed in favour of a generalized notion of inference, the so-called two acts of intuitionism will condition a view I will defend as computational. Here, I will elaborate on a computationalism grounded in the principle of univalence (Voevodsky), advancing a structuralist conception of mathematics. Following recent developments in computer science, I synthesize the univalence axiom with the manifold hypothesis in machine learning, endorsing a topological account of computational reason grounded in two fundamental operations—encoding and embedding. A geometric theory of representation is presented as a bridge between the inductive revision of beliefs and formal deductive logic, so-called blind models and symbolic AI. I distinguish this model of computational reasoning from accounts of mathematical reasoning, drawing for the latter from Lautman, Zalamea, Macbeth, Dutilh Novaes, and Longo. This topological model is characterized as a form of computational inferentialism, and I situate it within recent debates in computational theory of mind, namely representational theories such as RTM (Fodor & Pylyshyn), neurophilosophy (Churchland), and predictive coding (Clark, Metzinger, et al). The semantics of this topological theory is discussed with respect to various computationalist models, and its neurogeometric disposition is rendered in relation to the work of cognitive scientists Jean Petitot and Peter Gärdenfors. A


discussion of interaction grammars, concurrency, and game semantics follows, expanding on the inferentialist model with a distributed account of computation. Proof-theoretic developments, descending from

Gentzen’s natural deduction, are traced to Girard’s dialogical work on geometries of interaction, while parallelisation is discussed with respect to interactive computing. I close the book with an attempted integration of computational reason, remarking on the challenges posed by the spectre of generalized intelligence, and a discussion of the contemporary relation between computation and philosophy. This short book calls for the reappraisal of two dogmas which are central to computationalist positions in epistemology. The first identifies the computational with mental kinds, asserting the computational nature of minds, a belief which has both motivated and hamstrung the field, binding it to parochial notions of human reasoning, whilst simultaneously constraining models of human cognition to computational ideas. Commencing with the functionalism of Fodor and Putnam in the 1970s, and the ensuing discourse on multiple realizability, a broad consensus has emerged which shares the common view that computational explanations obtain in representational theories of mind. This in turn conditions outlooks on the nature of computation, best summarized by Fodor’s remark that there is “no computation without representation”.1 From this a number of computational theories

1. Fodor, J. A., 1981. Representations: Philosophical Essays on the Foundations of Cognitive Science. Brighton: Harvester Press. pp. 225-257.


have emerged (e.g. Harman, Block, Chalmers).2 In these theories, computation is invariably invested with semantic content, often expressed through an appeal to functional, conceptual or inferential role. Alternative syntactic accounts have been presented by Chomsky, Stich, and others, as a rebuttal to representational theories, placing semantics outside the bounds of computation, whilst still preserving a role for computational explanation in cognition. A more forceful rejection of this form of computationalism is found in what I will term the mechanistic view, which identifies the computational with the machinic, a dominant view in computer science itself, exemplified by the work of Piccinini. I will argue that computation is underdetermined by such accounts, which lack the requisite degrees of freedom needed for a theory of computation which both constrains its ontological footprint and allows for a distinct semantics. In the process, I will trace the current debate on semantics in computing, discussing the work of Egan, Ladyman, and Shagrir. On this point my text will follow developments in the logical foundations of computing, namely the type theoretic version of the lambda calculus, arguing for its irreducibility to Turing’s model. The aim will be to develop a distinct semantics that can clarify the relation of computation to reason, without adhering to either Turing orthodoxy or a fully blown computational theory of mind. The second dogma I will outline is computational realism, a view which can be traced from the contemporary claims of

2. Block, N., 1986. Advertisement for a Semantics for Psychology. Midwest Studies in Philosophy, 10, pp. 615-678.


Bostrom, Chaitin, and Wolfram, to the earlier work of Zuse, and further to the mathesis universalis of Leibniz. While critiques of this realism have been fruitful (e.g. Piccinini), digitality is a deeper assumption, taken up elsewhere as an ontological claim in its own right (e.g. Galloway). Here we can take Deutsch’s remark that “the universe computes” as paradigmatic of a view in which physics is describable as automata—in which the physical microstates of matter accord with computational states.3 This is mediated by the lawlike Landauer principle, enshrining the coupling of the computational and the physical via an appeal to a thermodynamic analogy at the heart of information theory. Indeed, a binary encoding of form into ‘bits’ is posited as a fundamental property of information theory, and disentangling logical bivalence from notions of computation will not be straightforward. My own account proceeds not from the dual of the digital and the analogue, but rather the dyad of computation and the real. I will try to show how a constructive view of computing, following from the logic of Brouwer and the algebra of Heyting, can loosen the grip of the digital on computation, leveraging univalent foundations in mathematics to develop a view on the continuous and the discrete which extends beyond the presuppositions of Boolean logic. In contrast to the dominant model-theoretic approach to logic, a topological treatment of types will be explored, as

3. Deutsch, D.E., Barenco, A. and Ekert, A., 1995. Universality in Quantum Computation. Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences, 449(1937), pp. 669-677.


a multi-valued regime characterised by a proof-theoretic semantics. In the course of this argument, I will examine accounts of the continuum and the relation of computation to contingency, which ensues from an intuitionistic treatment of the undecidable. While one can appeal to quantum and analog computing to demonstrate the generality of computation above and beyond the digital, only through an alternative logical foundation can this be developed fully into its own generative model. Novel interpretations become possible once computation is decoupled from the digital. What emerges, traced through the chapters that follow, is a notion of computation as a distinct mode of reasoning, marking out its own epistēmē—in Aristotelian terms, computation distinguishes itself from tékhnē as a distinct logos, a world whose logic structures the conditions for the possibility of encoding reason. As such, this book outlines a model of computation grounded not in static axiomatics—which I will trace to Hilbertian notions of formalism—but rather dynamic inferential acts, which follow from the constructive view. Proceeding from the realizability interpretation of logic, this model will be elaborated as a means of resolving a number of issues raised by both dogmas of computationalism. The implications of this worldview, grounded in the type theory of Martin-Löf, will be explored in terms of their accompanying ontic and epistemic commitments, and on this topic I will cover the intuitionist physics of Gisin as well as the computational trinitarianism of Harper. A form of informational realism will emerge as an alternative to universalism, a view which both admits the reality of information as a matter of fact in the world, and necessarily commits to an irreducible contingency of the real.


Any account of computational reason needs to acknowledge the deep history of computational ideas in human thought. Retrospectively, we can identify and assimilate historical notions such as algorithm, automata, induction, recursion, and undecidability, into a broader history of computation. It is not the aim of these notes to fully elaborate such a history, but rather to arrive

at a contemporary treatment of computation as its own logos. The undecidable, for instance, was explicitly acknowledged as a formal issue by the Buddhist logician Nāgārjuna, in what has come to be known

as the fourfold negation, or catuṣkoṭi.4 This scheme allows for the possibility of a multi-valued logic which admits dialethic statements as superpositions of multiple truth values, a violation of the Law of Non-Contradiction (LNC), while also allowing for the converse, statements which may be neither true nor false, a clear violation of the Law of the Excluded Middle (LEM). These make up two of the three fundamental laws of western logic, acknowledged since Aristotle as immutable, and their rejection in any form would constitute a major threat to modern mathematics. Indeed, the catuṣkoṭi, part of what Nāgārjuna calls “the middle way” in thought, is an early expression of what came to be formalized as the undecidable, finding its recent reassimilation only via the intuitionism of Brouwer

4. Priest, G., 2010. The Logic of the Catuskoti. Comparative Philosophy, 1(2), pp. 24-54.


and Heyting and modern paraconsistent logics. We can now view these challenges to western rationality, evident also in ancient Chinese philosophy, as early expressions of the concept of the undecidable in logic, a term which would only come into focus as a properly computational phenomenon following Hilbert’s formulation of the entscheidungsproblem in 1928. Similarly complex genealogies underlie many concepts fundamental to an account of computational reason—mathematical induction was deployed by Euclid in his proof of the infinitude of the primes, while the term algorithm derives from the Persian mathematician al-Khwārizmī, writing in the ninth century AD. The indications are that we should not be blinkered into assuming we are dealing with a twentieth century development—the digital computer as a distinct technical artifact—but rather a logical structure integral to the long arc of reason itself. Indeed, I will mention almost nothing of those computational devices, infrastructures, and technologies which permeate contemporary phenomenological experience, focusing instead on the conceptual underpinnings of computation—for it is the nature of computation itself which must be rethought at this time, if computationalism is to bring its epistemology, semantics, and ontology into a new phase. Lastly, a note on methodology and style—this is a work of synthetic philosophy, blending references from the analytic and continental traditions, a text in which analogies, echoes, and resonances sit alongside isomorphisms, identities, and formal correspondences, in which stances, doctrines, proofs, dogmas, worldviews, and speculations freely mingle. It is a multi-disciplinary text, navigating a number of discourses, including cognitive science, logic,


mathematics, philosophy of mind, and not least my “home discipline”, computer science. While I have attempted exposition of technical concepts where necessary, invariably trade-offs must be made when dealing with such diverse material. In the spirit of the seminars which bore this material, I have attempted to craft a text which is approachable by readers coming from a range of backgrounds, without alienating either experts or novices in the aforementioned disciplines—as such, no real technical background is assumed on the part of the reader, aside from an interest in the subject matter, though doubtless the treatment will irk those with deeper knowledge in any given domain. All I ask is that you trust the discursive value of the overarching argument and the balance I have tried to strike between breadth and depth, an approach which is admittedly heterodox by academic standards, but which underpins the open and informal nature of this text.


ONE

Computation & The Real

“To the question whether we need intuition for the solution to mathematical problems it must be answered that language itself here supplies the necessary intuition… the process of calculation brings about just this intuition.” 1 — Ludwig Wittgenstein

1. Wittgenstein, L., 1998. Tractatus Logico-Philosophicus: 6.233. Courier.


1.1 The Excluded Middle

For as long as axiomatic systems have existed, they have been haunted by the undecidable. This tension expressed itself in the history of western thought geometrically, via Euclid’s fifth postulate, the seemingly innocuous observation that parallel lines on a plane will never intersect, a valid but nonetheless unprovable proposition. The postulate has troubled axiomatic systems ever since, but only in the twentieth century has western mathematics faced up to this issue at the heart of its discipline. I aim here to outline the relation between intuitionism and computation, by taking such forms of undecidability as a thread with which to bind these two developments in reasoning. Our concern here is an overview of the historical developments that led to the emergence of an alternative model of computation, founded on such attempts to confront the limits of determinacy, an intuitionistic view of computing with its own distinct semantics, metaphysics, and ontology. Since


computation can be deemed a historical attempt to develop a formal account of contingency, and intuitionism represents a parallel confrontation with the undecidable, the relevance of intuitionism to the development of computational ideas cannot be overstated. In 1912, LEJ Brouwer—an established topologist at the time—announced a new perspective on mathematical thought which was to spark a major controversy. In a public lecture, Brouwer outlined the central tenets of what he called intuitionism, a doctrine shaping his views on the nature of mathematics, asserting its claim to the status of a foundational system.2 He brazenly attacked the presuppositions underlying the greatest open problem in set theory, known as the continuum hypothesis, dismissing the entire conjecture as a nonsensical statement concerning the cardinality of the real numbers, 2^ℵ₀. Indeed, Cantor’s entire transfinite project was called into question, and doubt was cast on core tenets of what Hilbert called “Cantor’s paradise”, namely the set theoretic foundations of the discipline. Mathematics was instead to be seen as a constructive enterprise, comprised of methods constrained to strictly denumerable objects, such as the realm of the rational numbers. In this regard, Brouwer was echoing earlier expressions of constructivism, most famously that of nineteenth century number theorist Leopold Kronecker, who had exclaimed, “Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk”/“God made the integers, all else is the work of man”.

2. Brouwer, L.E.J., 1913 (1975). Intuitionism and Formalism. In: Philosophy and Foundations of Mathematics. North-Holland. pp. 123-138.


Brouwer outlines a conception of mathematics rooted in intuition, “[as] a languageless activity of the mind having its origin in the perception of a move of time. This perception of a move of time may be described as the falling apart of a life moment into two distinct things, one of which gives way to the other, but is retained by memory. If the twoity thus born is divested of all quality, it passes into the empty form of the common substratum of all twoities. And it is this common substratum, this empty form, which is the basic intuition of mathematics.”3

Brouwer here grounds mathematics prior to logic via a concept he calls “twoity”—simply one thing followed by another—serving as the basis for the natural numbers, proceeding to encompass the whole of arithmetic and algebra. Mathematics is thus bound to the faculty of intuition, which issues a challenge to a priori conceptions of mathematical form, instead irrevocably coupling itself to an inductive process, precipitated as it is by the “perception of a move of time”. In Kantian terms, intuitionism as a doctrine can be taken to allude to both pure and sensible intuition at once, engaged in the genesis and transformation of both concepts and percepts. For Kant, intuition is to be distinguished from sensation, just as it is to be treated as distinct from

3. Brouwer, L.E.J., 1981. Brouwer’s Cambridge Lectures on Intuitionism, D. van Dalen (ed.). Cambridge: Cambridge University Press. pp. 4-5.


representations.4 Intuition in this sense is not simply a case of representing particular objects given to sensibility, which are to be subsumed by general concepts, mirrored as it is by a realm of pure intuitions, such as space and time, which are a priori in matters of reasoning. In the dual cognitive system proposed by Kant, the role of intuition spans both understanding and sensibility—neither concept nor sensation, but rather an intermediary mode of representation, one which we could characterize, in modern terms, as “sub-symbolic”. As Longuenesse points out, for Kant mathematical objects conform to the realm of the intelligible, they originate in pure intuition, but unlike the objects of metaphysics they do not necessitate transcendental categories.5 In Kant’s transcendental deduction, Longuenesse observes the contrast between the manner in which mathematical concepts secure their a priori validity, and the pure concepts of the understanding.6 Brouwer challenges this Kantian conception of mathematics as a realm of synthetic a priori judgements following from pure intelligibles, proposing instead a practice induced by the dynamic movement of time, a genesis of form which is integral to that movement.7 This synthesis, which is the act of mathematical creativity itself, is necessarily conceptual in nature, insofar as it constitutes inferential acts endowed with epistemic content, as the second act of intuitionism will make explicit.

4. Engstrom, S., 2006. Understanding and Sensibility. Inquiry, 49(1), pp. 2-25.
5. Longuenesse, B., 2020. Kant and the Capacity to Judge: Sensibility and Discursivity in the Transcendental Analytic of the Critique of Pure Reason. Princeton University Press. p. 32.
6. Ibid.
7. The tension between intuitionism and Kant’s synthetic a priori will be explored in terms of a computational treatment of modal logic in Chapter 5.


Kant cites geometry as a domain of mathematical entities with no empirical basis, for which “not being grounded in experience, they cannot, in a priori intuition, exhibit any object such as might, prior to all experience, serve as ground for their synthesis.”8 Mathematics appears to have no objective grounding in the Kantian account, a stance which seemingly endorses a claim regarding the irreducibility of the intelligible to the empirical. But as Longuenesse argues, Kant’s position on this point is complex, and a convincing case can be made for the primacy of judgement, acting on a manifold of intuition, over the prior application of pure categories, even in the context of mathematical structures.9 By contrast, a more transcendental Kantian view is to be found in the Sellarsian notion of “conceptual intuition”, which for Brassier is an “epistemically irreducible” representation imbued with categorical form, providing predicative content to empirical judgements, and implicitly structuring our observational reports.10 Here, no clear decomposition of intuition and concepts is forthcoming, and intuition is anchored by what Sellars calls “correct picturing”, a form of structural realism conditioned by spatiotemporal priors, mediated via non-conceptual representings.11 These two perspectives serve to highlight the hermeneutic challenges posed by Kant’s account of intuition, while emphasizing the stakes at play, namely a cognitive account of mathematical practice. In engaging with Brouwer’s doctrine, the former interpretation, centered

8. Kant, I., 2018. Critique of Pure Reason. Charles River Editors. (A87-88/B120)
9. Longuenesse, B., Kant and the Capacity to Judge. pp. 199-209.
10. Brassier, R., 2016. Transcendental Logic and True Representings. Glass Bead Journal.
11. Ibid.


on the primacy of judgement, accords with a constructive notion of intuition, whereas the Sellarsian theory of picturing can bridge an intuitive account of structure with the real. For Brouwer, the intuitive basis of mathematics is grounded in a “common substratum”, which can be interpreted as a gesture to the continuum, a murky “empty form” recalling Anaximander’s apeiron, the boundless or without limit. That which is not given to mathematical reason can have no logical basis, but it nevertheless acts as its “basic intuition”, a notion I will seek to clarify as we proceed. The naive sounding reference to a “languageless activity” can be interpreted in a variety of ways—language will be seen to play a role in proof networks in a later chapter. But another interpretation arises from the sub-symbolic nature of geometry, which is implicit in the topological model of computational reason I will go on to endorse. In this view, non-conceptual (geometric) representings provide the basic intuition for the generation of (topological) structures which fall under concepts in the form of types. But as we shall see, the decoupling of geometry and structure will be complicated by a close inspection of contemporary mathematics. Brouwer’s appeal to the non-linguistic nature of intuition can also be conceived as an attempt at staking out mathematics as a formally autonomous practice, distinct from the natural language of speech acts, or even formal classical logic—an interpretation which accords with the proof-theoretic semantic tradition that intuitionism would put into motion.12 In any case, it is fair to say

12. Goldblatt, R., 2014. Chapter 8: The Logic of Intuitionism. In: Topoi: The Categorial Analysis of Logic. Elsevier.


that the vague nature of Brouwer’s pronouncement encouraged the portrayal of intuitionism as an eccentric disavowal of formal rigour, alienating many a mathematician along the way, unable as it was to satisfy the demand for precision which is the norm in the discipline. The second act sheds light on the intuitionist treatment of the continuum, insisting on the constructive nature of mathematical objects, intuitionism admitting two ways of constructing said objects: firstly, in the shape of freely proceeding infinite sequences of mathematical entities previously acquired; secondly, in the shape of mathematical species, that is, properties supposable for mathematical entities previously acquired, satisfying the condition that if they hold for a certain mathematical entity, they also hold for all mathematical entities which have been defined to be equal to it.13

Mathematics is framed as a means of establishing identities between objects produced via denumerable operations, or in turn those between species (i.e. types) of objects. This marks out intuitionism as a form of constructivism, and would in turn provide a treatment of the continuum as an ongoing construction, rejecting a pre-existing unity or totality in favour of a generative account of the real numbers. In this regard, Brouwer’s doctrine resembles one of Simondon’s three modes of intuition, namely the philosophical variety, which is “neither sensible

13. Brouwer, L.E.J., Brouwer’s Cambridge Lectures on Intuitionism. p. 8.


nor intellectual”,14 but rather a systemic means of formation or genesis, distinct from both concept and idea, and unmoored from the unity of the real:

“…like the concept, nor a reference to the totality of the real…”15

While Simondon’s motivation is an account of the evolution of technicity, it shares with Brouwer’s portrayal of mathematical reason a systematic role for intuition in the genesis of form, a genesis which eschews the unity or totality of the real. As Ladyman has remarked, modern logical frameworks descended from intuitionism tend to follow the constructive path whilst remaining non-committal on its intuitive philosophical basis.16 In a sense, these two acts can be decoupled in the mind of the working mathematician, but a philosophical position would need to develop an integrated view in order to call itself intuitionistic, as opposed to simply constructive. I will defend such an intuitionistic view on the basis that the temporal substratum which yields twoity is foundational in elaborating the relation of computation to the real, in that it generates a dynamic notion of inference I will defend as integral to computational reason.

14. Simondon, G., 2017. On the Mode of Existence of Technical Objects. Minneapolis: Univocal Publishing. p. 254.
15. Ibid.
16. Ladyman, J. and Presnell, S., 2014. A Primer on Homotopy Type Theory Part 1: The Formal Type Theory. p. 20.


Brouwer’s two commitments led to a major break with the existing consensus at the time, Hilbert’s program of formalism, which had come to replace the failed logicism of Frege, Russell, and Whitehead, as the dominant school of mathematical thinking with regards to foundations. Brouwer summarized this eminently philosophical rupture as such: “The question where mathematical exactness does exist, is answered differently by the two sides; the intuitionist says: in the human intellect, the formalist says: on paper.”17

For formalism, mathematics is an analytical process of deduction performed on a set of axioms which are to be treated as logical givens, with the emphasis on the internal consistency of a symbolic structure governed by unambiguous rules. To extend the expressivity of a system, a formalist must add axioms, which in turn supplement the existing canon as immutable laws. We can contrast this perspective on creativity to the intuitionistic model I will endorse, whose extensibility will rest instead on the formation of inferential rules which yield new types. For the formalist, the meaninglessness of the symbols is assured by their lack of referent, denoting nothing in themselves, but instead embodying a purely analytical practice. This assumption would be proven untenable in due course, classical logic inheriting a semantics which Tarski would in time identify as meta-linguistic in nature. Intuitionism, by contrast, would provide its own semantics rooted in its constructive doctrine, originating in a challenge

17. Brouwer, L.E.J., 1913. Intuitionism and Formalism.


aimed at the heart of logical foundations. The three founding axioms of western logic under formalism, immutable laws which have held since Aristotle, are the following:

1. Law of Identity: A := A
2. Law of Non-Contradiction: ¬(A ∧ ¬A)
3. Law of the Excluded Middle: A ∨ ¬A

Intuitionism would come to reject the third of these—the law of the excluded middle (LEM)—as a general axiom, on constructive grounds. The LEM simply poses that a statement must be either true or false, there is no middle ground in logic, a fundamental law which Brouwer attacks via a strictly philosophical argument. For constructivists, the evidence of the truth or falsity of a mathematical statement is only laid bare upon the construction of a proof, a process which Heyting likened to an algorithm, in a symposium on epistemology held in Königsberg in 1930.18 At this renowned conference, Heyting defended the foundational prospects of intuitionism, while Carnap defended logicism, and Von Neumann put forward the formalist program, a conference

which, for Lautman, marks the transition from the “naive” to the “critical” phase of mathematical thought.19 Heyting positioned the intuitionist stance against the classical Platonist insistence on the independent existence of mathematical

18. Heyting, A., 1930. The Intuitionist Foundations of Mathematics. In: Benacerraf, P. and Putnam, H. (eds.), 1984. Philosophy of Mathematics: Selected Readings. Cambridge University Press.
19. Lautman, A., 2011. Mathematics, Ideas and the Physical Real. A&C Black. p. 141.


objects, instead regarding proof construction as an ontologically ampliative exercise which brings a truth into being. In this view, truth is a product of the demonstration of a proof, leading Dummett to remark that intuitionism implores the mathematician to “replace the notion of truth, as the central notion of the theory of meaning for mathematical statements, by the notion of proof”.20 In this constructive view, computation is vital in providing a semantics for mathematical statements—as Dummett argues, arithmetic expressions like 2+2=4 are meaningless without summoning an algorithm that establishes their identity.21 This has implications for the law of the excluded middle, given that if no proof currently exists for an arbitrary assertion A, then no guarantee can be made in advance regarding its decidability, and we will see in due course how various developments in logic would embed this insight in a proof-theoretic semantics. Brouwer would turn this split on the LEM into an attack on formalism as a foundation for mathematics: “The long belief in the principle of the excluded third in mathematics is considered by Intuitionism as a phenomenon of history of civilisation of the same kind as the oldtime belief in the rationality of pi… intuitionism tries to explain the long persistence of this dogma by two facts:

20. Dummett, M., 1975. The Philosophical Basis of Intuitionistic Logic. In: Studies in Logic and the Foundations of Mathematics, Vol. 80. Elsevier. pp. 5-40.
21. Ibid.


firstly the obvious non-contradictority of the principle for an arbitrary single assertion; secondly the practical validity of the whole of classical logic for an extensive group of simple everyday phenomena.”22

While Brouwer admits that such a law can trivially hold for single assertions, given the demonstration or refutation of a proof, it cannot hold as a general axiom. This law had seemed so fundamental to the practice of mathematics up until this point, that Hilbert famously replied that “taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists”.23 It is precisely this excluded middle which Brouwer sought to fold back in to the mathematical domain, on account of a fundamentally temporal characterisation of mathematics, elicited by twoness as a primary intuition yielded by a “common substratum”, an empty form without limit. The rejection of the LEM places undecidability at the heart of mathematics, insisting that mathematicians drop Platonist notions regarding the ‘discovery’ of proofs, emphasizing instead a dynamic and resource sensitive search space of potential proof construction. A search space which is not simply conditioned by a priori forms, but is instead constrained by those topologies which mathematical creativity is able to realize as novel structures in the world, and we will come to see how these very spaces for the construction of proofs can

22. Church, A., 1949. LEJ Brouwer. Consciousness, Philosophy, and Mathematics. Proceedings of the Tenth International Congress of Philosophy (Amsterdam, 1948), North-Holland Publishing Company, pp. 1235-1249. The Journal of Symbolic Logic, 14(2), pp. 132-133.
23. Ewald, W. and Sieg, W., 2013. David Hilbert’s Lectures on the Foundations of Arithmetic and Logic 1917-1933. Springer Berlin Heidelberg.


be generated via a topological model of computation in due course. The computational nature of undecidability would become apparent following Hilbert’s framing of the entscheidungsproblem in 1928. Whilst Church and Turing would formalize the undecidable, establishing computation as a powerful means of modelling contingency, it is Brouwer’s logic which emerges as the key implication of the undecidability of the halting problem, rendering the undecidable a symptom of a deeper indeterminacy of the real, a symptom exposed by computational acts in the form of decision procedures.
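The constructive standing of the LEM can be made concrete computationally. The sketch below (my own illustration, not drawn from the text) evaluates A ∨ ¬A in the three-element Heyting algebra on the chain 0 < ½ < 1, where join is max and negation sends a value to the greatest element whose meet with it is 0:

```python
# Truth values in the three-element Heyting algebra (the chain 0 < 1/2 < 1).
# Illustrative sketch: join (∨) is max, and Heyting negation ¬a is the
# largest value b such that min(a, b) == 0.
BOT, MID, TOP = 0.0, 0.5, 1.0

def neg(a):
    # ¬a = a → ⊥: only ⊥ negates to ⊤
    return TOP if a == BOT else BOT

for a in (BOT, MID, TOP):
    print(a, "->", max(a, neg(a)))   # A ∨ ¬A
# 0.0 -> 1.0
# 0.5 -> 0.5   the excluded middle fails to reach ⊤
# 1.0 -> 1.0
```

The intermediate value behaves like an assertion for which neither a proof nor a refutation has yet been constructed: its excluded middle is not a theorem of the algebra.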

1.2 Three Canonical Models

Let us examine the development of computation in the shadow of intuitionist ideas with a semantic analysis of canonical models. The three canonical models of computation developed in the 1930s are Gödel’s general recursive functions, Church’s lambda calculus, and Turing Machines. I will argue that the extensional equivalence of these models belies a more complex relation. The aim is to make the ampliative nature of a type theoretic account of computing based on Church’s model apparent as we progress, marking it out as irreducible to a Turing Machine (TM).


Let us distinguish the approach taken by each model in turn. While Gödel attempts to give an account of computability in terms of a mathematical domain of functions, Church is concerned with providing a calculus with which to represent computational operations, while Turing in turn focuses on the description of an automaton that can realize computations. We can broadly align these models with distinct mathematical (Gödel), linguistic (Church), and mechanistic (Turing) traditions in computational theory. Moreover, they allow us to see computation as a historical attempt to model contingency, confronting incompleteness (Gödel), inconsistency (Church), and undecidability (Turing) in a formal manner. What these models offer is not a definition of computation, but rather a diagnosis of contingency in formal systems, which we can view through the lens of intuitionism and its rejection of the LEM. It is this aspect of computation which inherits and further extends intuitionistic ideas developed in the early twentieth century. Each model makes semantic assumptions regarding the nature of computation, and it will be my focus in this section to identify these presuppositions.

Gödel’s model identifies the computable with a class of functions on the natural numbers, staking out a sub-space of all mathematical functions via the formal system outlined below:


Primitives:
‒ Constant: z(n) = 0
‒ Successor: S(n) = n + 1
‒ Projection: pᵢ(k₁, …, kₙ) = kᵢ

Operators:
‒ Composition: f ∘ g
‒ Recursion: ρ(f, g) = h, where h(x, 0) = f(x) and h(x, y+1) = g(x, y, h(x, y))
‒ Minimization: μ(f)(x) = the least y such that f(x, y) = 0

Recursion is singled out as a key property and formalized as an operation in which a function iteratively calls itself. An unbounded search operator (μ) can be introduced to extend the primitive recursive functions in order to handle second-order recursion—functions which not only call themselves but which supply themselves as arguments, the so-called Ackermann function schema.24 The purpose of μ is to seek the minimum input that produces an output of zero, and to use that as an origin for an unbounded search, making explicit the halting characteristics which Turing would go on to formalize for TMs. The recursivity of computation is here laid bare—the runaway dynamics of recursive processes are not tamed, but rather accepted as intrinsic, embodied in the form of the unbounded search algorithm, μ. Gödel would go on to link this schema to his incompleteness theorem in due course.

24. Kleene, S.C., De Bruijn, N.G., de Groot, J. and Zaanen, A.C., 1952. Introduction to Metamathematics, Vol. 483. New York: Van Nostrand. pp. 262-308.
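As a sketch of how this schema composes, the following fragment (an illustration of the standard definitions, with helper names of my own choosing) builds addition by primitive recursion, then uses unbounded minimization, whose search is not guaranteed to halt:

```python
# Primitives of the schema
def z(n): return 0                        # constant zero
def S(n): return n + 1                    # successor

# Primitive recursion ρ(f, g): h(x, 0) = f(x); h(x, y+1) = g(x, y, h(x, y))
def rho(f, g):
    def h(x, y):
        return f(x) if y == 0 else g(x, y - 1, h(x, y - 1))
    return h

# Addition by recursion: add(x, 0) = x; add(x, y+1) = S(add(x, y))
add = rho(lambda x: x, lambda x, y, acc: S(acc))
assert add(2, 3) == 5

# Unbounded minimization μ(f): the least y such that f(x, y) == 0
def mu(f):
    def search(x):
        y = 0
        while f(x, y) != 0:               # may loop forever
            y += 1
        return y
    return search

half = mu(lambda x, y: abs(2 * y - x))    # halts only when x is even
assert half(10) == 5                      # half(3) would search without end
```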


The semantics of Gödel’s system are entirely inherited from mathematics, that is to say, the domain and range of such functions are assumed to denote their input and output, as this denotational semantics was the accepted approach in the discipline at the time. By contrast, Church’s lambda calculus is so minimal it can be summarized in just three expressions:

L, M, N ::= x | (λx. N) | (L M)

terms: variables, abstraction, and application (composition)

Here the pipe symbols (|) denote alternative productions, as is conventional in Backus-Naur Form (BNF). So called terms (L, M, N) can be assigned to variables (x), these are bound to anonymous functions known as lambdas (λ), which can then be applied in chains (composition).25 This provides a higher-order functional calculus for computation, one which would go on to inform the development of programming languages, such as Haskell. The untyped calculus harboured an inconsistency pointed out by two students of Church, Kleene and Rosser, an issue which is related to the Richard paradox in formal languages, and constitutes a form of diagonal argument (Cantor).26

25. Hindley, J.R., 1997. Basic Simple Type Theory (No. 42). Cambridge University Press. pp. 1-10.
26. Ibid. pp. 12-27.


This echoes the manner in which set theory was extended with types in order to avoid the famous Russell paradox at the center of the foundational project of the Principia Mathematica.27 These antinomies can be construed as paradoxes of self-reference, and the types which extend Church’s calculus into the simply typed lambda calculus (STLC) serve to block the recursive regress arising from unconstrained function application. This marks out the computational as the family of functions operating on the natural numbers, distinguished by the function type, ℕ → ℕ. The STLC would become the basis for Martin-Löf ’s intuitionistic theory of types (ITT) which forms the foundations for the account of computation developed in Chapter 4, and this would go on to supply a notion of type irreducible to that of a function or a set, introducing its own operational semantics distinct from the denotational tradition. Lastly, the Turing Machine (TM) is a model of an automaton capable of computing any effectively computable function, in what has come to be known as Turing computability.28 Turing’s emphasis on the physical realizability of computations propelled the founding of computer science as an empirical discipline in its own right, introducing key concepts such as determinacy, boundedness, and locality.29 The machine consists of an input/output tape containing cells of symbols, which are read by a ‘head’ which can in turn overwrite symbols on the tape based on a finite table of state transitions.

27. Priest, G., 2002. Beyond the Limits of Thought. Oxford University Press. p. 128.
28. Turing, A.M., 1937. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(1), pp. 230-265.
29. Hopcroft, J.E., Motwani, R. and Ullman, J.D., 2013. Automata Theory, Languages, and Computation (Third Edition). Pearson. pp. 315-377.


Notably, this architecture lacks much of what we associate with modern computers—such as stored programs, memory, or even a CPU—since it would take decades for Von Neumann, Zuse, and many others to develop these ideas. Turing’s account betrays a formalist bias, in that it makes little attempt to specify what kind of transformations are distinctly computational, beyond an appeal to operations which “include all those which are used in the computation of a number”.30 It does not articulate the relation between computation, logic and mathematics, other than stipulating their symbolic nature. One could be excused for thinking there was no logical basis at all to computation based solely on Turing’s account, as it appeals to no particular logical foundation. This has furnished the TM formalism with a universality which allows for its application in ontological arguments, extending forms of computational realism into the natural sciences. The formal or syntactic aspect of computation is emphasized with respect to its semantic content, a framing which is however deeply contested, and the assumptions made by TMs on this front merit closer scrutiny, as they speak to key issues regarding the semantics of computation.

30. Ibid.
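A minimal simulation conveys how little machinery the model requires. The transition-table encoding and the unary increment program below are my own illustrative choices, not Turing’s notation:

```python
# A minimal Turing machine: a tape of symbols, a head position, and a finite
# table mapping (state, symbol) to (next state, symbol to write, head move).
def run_tm(tape, table, state="start", head=0, max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape; "_" is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = table[(state, symbol)]
        cells[head] = write                # overwrite the scanned cell
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

# Unary increment: scan right over 1s, write a 1 on the first blank, halt.
table = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_tm(["1", "1", "1"], table))      # ['1', '1', '1', '1']
```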


It is not my intention here to outline a broad semiotic theory of signs, but rather to locate the role of the symbolic within computational reason, and in this regard certain concepts which are of particular interest—denotation, indexicality and intensionality—should be introduced. The origin of modern semantic theory is in Frege’s Sinn (sense) und Bedeutung (reference), and this is a common starting point from which to interrogate the symbolic. Frege’s innovation was to distinguish the sign from its denotation, allowing for an account of the internal structure of symbols and their relations, advancing a new denotational semantics for formal languages. Frege was working in parallel to Peirce, whose semiotics located symbols within a trichotomy of the sign, alongside icons and indices. For Peirce, an indexical sign is characterized by its capacity to refer to an atomic unit or individual, without exhibiting a resemblance or likeness, and I will elaborate on the computational nature of indexicality via notions of addressability and reference in due course.31 With Carnap’s development of intensional logic, denotation was formalized further, allowing logicians to treat intensions as pliable functions and extensions as structures. It should be noted that the symbolic, in so far as it refers to a mark or inscription, is intrinsically geometric—I draw a close relationship between topology and syntax in later chapters, situating a notion of geometric structure at the root of computational reason. To describe a mark as symbolic then has three interpretations: it is to claim it has an internal structure (mathematical), an intensional logic (semantic), and a syntactic form (linguistic). A symbol need not possess a denotation or referent (Bedeutung), but does imply a syntax, without which the grapheme has no sense (Sinn). I posit that an intension is a minimal condition for a symbol to exist qua

31. Peirce, C.S., 1902. Logic as Semiotic: The Theory of Signs. In: Philosophical Writings of Peirce. Courier Corporation.


symbol, and that this intensional structure has a geometric basis. I will return to these claims in due course, resolving some of the semantic issues thrown up by Turing, but for now let us diagnose them. Early formalism can be accused of adopting an uncritical or naive attitude towards semantics insofar as its claims of establishing an axiomatic science of deductive analysis on meaningless signs have been shown to be untenable. Tarski was to show that no such purely syntactic regime can be established when discussing any language descended from classical logic, but rather that a meta-linguistic apparatus is needed to underpin meaning in these so-called formal languages.32 For Tarski, any semantic treatment of such languages hinges on the identity of truth values, in other words, in how truth is established. In systems descended from classical logic, such as the Boolean logic underlying digital computers, truth tables fix the identity of truth values, but it is

not possible to articulate them axiomatically, leading Tarski to prove that no system of arithmetic could ground its own truth values. Tarski’s approach would later develop into a model-theoretic semantics, in which models act as meta-linguistic frames undergirding the semantics of possible worlds in modal logic. Intuitionist semantics marks a radical departure from this approach, instead elaborating a proof-theoretic basis for establishing meaning, an operational view of logic. This

32. Etchemendy, J., 1988. Tarski on Truth and Logical Consequence. The Journal of Symbolic Logic, 53(1), pp. 51-79.


tradition proceeds from Heyting, via the work of Gentzen, to modern practitioners like Martin-Löf, Abramsky and Girard, who have furnished a semantics for Church’s calculus. Turing’s model, by contrast, should be regarded as an insufficient account of computational reason, by virtue of its semantic naivety. These two semantic traditions, model-theory and proof-theory, constitute distinct perspectives on meaning in formal languages, and we shall explore this interplay further in Chapters 3 & 4, as part of a project to develop a strictly constructive model of computation. In time, all three canonical models articulated their own relation to undecidability. Gödel’s first incompleteness theorem (1931) was explicitly based on the primitive recursive nature of a proposition known as the Gödel sentence, and it was Gödel’s attempts to later formalize this notion that led to work on the general recursive functions in 1934. Gödel’s theorem struck a fatal blow to Hilbert’s project for a fully axiomatized mathematics, an insurmountable challenge summarized by Gödel decades later in his personal correspondence: “The few immediately evident axioms from which all of … truth can (if at all) be apprehended only by constantly renewed appeals to mathematical intuition…”33

33. Wang, H., 1990. Reflections on Kurt Gödel. MIT Press. p. 129.


This notion of revisable rules, as opposed to a canon of immutable laws, is precisely what I have come to call the inferential view. Church extended the lambda calculus with types in order to resolve the logical inconsistency of his model in 1936, while Turing demonstrated the undecidability of the halting problem for computable functions using TMs in 1937. The extensional equivalence of these models is now widely accepted, but Turing’s treatment distinguishes itself as natively computational insofar as his framing of the problem is not strictly mathematical, but stated in terms of decision procedures, responding to the challenge posed by Hilbert in 1928 in the form of the entscheidungsproblem. Indeed, it is only through the lens of undecidability that the intuitionistic nature of computing becomes apparent—counter to the Boolean tradition, we will come to view computation as a means of physically realizing Brouwer’s logic, thus admitting indeterminacy into its foundations. In time, proofs emerged of the equivalences between the three canonical models. Church’s Thesis is the claim that the lambda computable functions coincide with the class of Gödel’s general recursive functions. This was further extended to the Turing computable functions: a function is lambda computable only if it is Turing computable, and in turn only if it is general recursive. The Church-Turing thesis (CTT), composed when Turing visited Church in Princeton in 1937, would take the informal


notion of effective calculability as its basis, asserting that mechanical processes underpinned all three classes. As such, it is impossible to claim that canonical models present computation as a purely formal notion, just as it is not possible to prove the CTT itself. Some have accordingly viewed CTT as an unstable foundation, consigning as it does all three canonical models to a lack of formal grounding. My claim is that the precedence of a mechanistic interpretation of computation imposed by Turing is harmful in this regard, and I will examine the semantic implications of the attempted grounding of computation in the machinic in the chapters that follow.

1.3 The Continuum

Brouwer’s doctrine for mathematical reasoning opened up a deep rift in the philosophy of mathematics, in the process absorbing a foundational challenge to the decidability of logic. For Brouwer, mathematical objects are mental constructions, they ultimately appear to the human mind as a form of intuition. Hilbertian formalism, the dominant modern school, instead emphasised consistent axiomatic systems, to be treated as hermetic signs without external referents.34 The three canonical

34

The nature of the axiomatic is something which would occupy Hilbert throughout his career, and a more complex rendering of axioms is given in Chapter 3, via the work of Danielle Macbeth.

39

Logiciel

models of computation plumbed the fault line occasioned by Brouwer with their formal accounts of contingency, rendering incompleteness in a new light. The interwoven nature of intuitionism and computation, bound by their admission of inconsistency, is most evident in their mutual accounts of the Real. Questioning Platonist claims regarding numerical realism, and formalist claims as to the atemporality of mathematical structures, the second act of intuitionism conditions the relation of

The following antinomy emerges at the heart of intuitionism: how can a constructive practice which accepts only denumerable methods nevertheless claim the totality of the continuum as the “primary intuition” of mathematics? At stake in this

To develop an integrated account of these categories will sical mathematics, the computable numbers form part of a ( ), which include the natural numbers ( ), the integers ( ) and the rationals ( ). Beginning with the natural numbers, a computational account of plurality is to be located formally in the Church numerals. These portray natural numbers in terms of the repeated application of functions—Church’s encoding of numbers into the lambda calculus renders number as no more than a by-product of function composition, a function f to its n-fold composition:

fn=f f … f 40

Computation & The Real

And the number line develops as follows:

0=0fx=x 1=1fx=fx 2 = 2 f x = f(f x) Numerals then represent an index of applications of functions to values, and it is the indexical nature of encodings processual approach to numbers can be considered a natively computational treatment of not just the natural numbers, but the whole of the rational number line, thanks to the expandadditional properties for dealing with negative integers and fractions.35 In this view, number is synonymous with the repetition of a self-referential process, which has as its limit case numerical under the concept of encoding, a proposal which

ordinals.36 a process of recursive embedding:

0=Ø 1={0}={Ø} 2 = { 0, 1 } = { Ø, { Ø } } … 35 36

Pierce, B.C. and Benjamin, C., 2002. Types and Programming Languages. MIT Press. p. 60. Halmos, P.R., 2017. Naive Set Theory: Section 11. Courier Dover Publications. p. 35

41

Logiciel

cedure making use of recursive operations, only the former proceeds from functional foundations which are intrinsically accounts the domain of the rational ( ) appears to be rooted in recursion, which resembles a structural form of embedding—a theme which will resurface when discussing a geometric view of computation in later chapters. Having given a computational overview of the rational number line, let us turn to the thornier terrain of the continuum, . The construction of the Real marks the discontinuity between the countable and the uncountable – crossing the threshold which marks the domain of the irrational, the transcendental, and the incomputable. Whereas the rationals represent all whole number fractions, the real number line includes those irrational numbers, including surds such as

set theory would conjecture the incommensurability of these two domains, the continuum hypothesis asserting an unbridgeable gap between the cardinality of the rationals and that of the reals, by virtue of Cantor’s diagonal method.37 This framing exposes an underlying complexity to the numThe relation between rational and real is far from a trivial inclusion relation—rationals are said to be dense in the real, that is, a rational exists between any two real numbers—indicating the deeply intertwined nature of these domains. The density

37

Kleene, Introduction to Metamathematics. pp 3-14.

42

Computation & The Real

of the rationals in the real instead suggests a thick consistency to the continuum—above all a viscosity which would thwart any attempt at drawing a clean boundary between the rational realm and the seeming unity of the real—a state which accounts of the real. Classically, the continuum has been associated with geometry and the domain of continuous functions in metric spaces, and this has been contrasted with the symbolic donumerous methods suggested for the elaboration of the real number line, two are generally accepted in modern mathethe calculus of Leibniz and Newton of the late eighteenth

a criterion for identity. By formalizing convergence, Cauchy ern set theoretic treatment came via Dedekind, proposing instead a cut in the Real to partition the continuum into two sets of rational numbers, one of them necessarily an Badiou characterizes this set theoretic conception of the comprising the “minimum matter situated exactly between two sets of dyadic rationals”.38 Despite Badiou’s reassurances that there is no mystery in the status of a number whose

38

Badiou, A., 2008. Number and Numbers, Polity, pp. 175-177.

43

Logiciel

to the gnomic nature of this account of number, wherein Dedekind’s cut appears to occupy an Archimedian point.39 point” with no a priori ontological status, but its claim to existence is hardly merited as the operation cannot be defended as constructive.40 From the intuitionistic view, these treatments of the Real—the concepts of the limit and the cut—are hampered by appeals to the totality of uncountable

Dedekind cuts originates in his insistence on the construclimits nor open sets satisfy the constraints of intuitionist thought. Indeed, Brouwer would posit a continuity principle which would render the continuum theoretically indivisible. In their stead, Brouwer suggests free choice sequences, emphasizing the process of construction in the elaboration of a real number, selecting a digit at a time to iteratively generate the real in a non-terminating process.41 posed into spreads, which are akin to a time bound version of an open set, and these can in turn spawn fans by way of inin the form of a generator amounts to a proto-computational approach to the continuum, the free choice at each time-step underlining the Real as a temporal phenomenon approachable

39 40 41

Ibid. Ibid. Heyting, A. ed., 1966. Intuitionism: An Introduction, Vol. 41: Chapter 3. Elsevier.

44

Computation & The Real

only via a creative process. This would lead Brouwer to outline a theory of the Creating Subject, the domain of the rational rendered in terms of an ideal mathematician enacting free choices ranging over digits (natural numbers). Above all, any attempt to establish identities in the domain of the Real, aided by their amenity to computable treatment as decision procedures.42 As Bauer notes, this gives an intuitive characterization of the Real in terms of canonical models of computation, in that the “decidability of reals is real-world realized if, and only if, we can build the Halting oracle for Turing machines.”43 Notably, the interpretation of free choice given by Brouwer is broader than any computable notion, it could denote a form of incomputable contingency at each time step or else a deterministic algorithmic procedure. However, in both accounts the totality of the Real is rendered a mathematical impossibility, absent an oracular authority, and the constructive view leads to objects such as unbounded continuous maps on closed intervals, constructs which classical mathematics is fundamentally unable to accommodate, exposing its incomplete treatment of continuity.44 Moreover, it follows that e, exist no a priori ontological status. In the next chapter, we will see

42

43 44

Heyting, A., 1912. The Intuitionist Foundations of Mathematics. In: Benacerraf, P. and Putnam, H. (eds.), 1984. Philosophy of Mathematics: Selected Readings. Cambridge University Press. Bauer, A., 2013. Intuitionistic Mathematics and Realizability in the Physical World. In: A Computable Universe: Understanding and Exploring Nature as Computation, pp. 143-157. Ibid.

45

Logiciel

Gisin which advances an intuitionist physics stemming from this account. First, let us come back to the antinomy that motivated this discussion, as it will allow us to take stock of the constructive stance. Brouwer implies that the continuum must be regarded as a “primary intuition”, suggesting a realist view of the Real number line, but proceeds to suggest that thought, casting doubt on their existence as true mathematical entities. Ultimately, the unity of the continuum is not given to reason in the intuitionist account, and Real analysis is rejected as a mathematical practice tout court, as it treats the in the category of continuity is dismissed, and in its place lie algorithmically mediated free choices ranging over sequences which are time-bound, open-ended, and denumerable.

46

TWO

The Two Dogmas of Computationalism

“The models of modern physics are concerned, therefore, both with continuous and discrete values. It would seem appropriate to consider a hybrid system. It will be extremely difficult to find a technical model of a hybrid computer which behaves according to the laws of quantum physics.” 1 — Konrad Zuse

1

Zuse, K., 2013 (1970). Calculating space (Rechnender Raum). In A Computable Universe: Understanding and Exploring Nature as Computation (pp. 729-786).

The Two Dogmas of Computationalism

tive in epistemology and ontology: computational theories of mind and assertions of computational realism. The former identify computational with mental kinds, while the latter assert computation more broadly as a matter of fact in the world. In their extreme form, these lead respectively to the multiple realizability of mind hypothesis, and the view that “the universe computes” (Deutsch). I will argue against these two central dogmas as speculative claims arising from untenable applications of computational theory, seeking in turn generative accounts of computation loosened from the grip of these tenets. As such, what follows is a diagnosis of the isof computation within the related discourses, rather than a comprehensive overview. Later chapters will seek to resolve tifying the manner in which they have hamstrung a critical philosophy treats computation as a historical rupture in human thought, a cleaving of reason from mind, which yields an 49

Logiciel

artifactual account of intelligence unbound from parochial notions of rationality. To do justice to this epochal developdogmatic restraints is in order, such that novel computationalist positions can be elaborated in their stead.

2.1 Machine Functionalism roots in the functionalism of Fodor and Putnam and the ensuing debate on multiple realizability commencing in the 1970s. By decoupling mental from physical kinds, and in turn endowing them with a computational explanation, such positions paved the way for fully computational theories of mind, asserting the plausibility of instantiating minds on alternative substrates. In functionalism, mental by their functional role—take the paradigmatic example, expressed as a stimulus response to various forms of tissue damage, designed to alert an organism’s nervous system. Embedding such concepts within the language of functional analysis allows for a conjecture regarding their multiple counter proposal to behaviourist and physicalist theories of 50

The Two Dogmas of Computationalism

mental states. The best known of these early computational models is Fodor & Pylyshyn’s Representational Theory of Mind (RTM),2 which posits a content internalism that is conjectured as intrinsically computational, leading Fodor to remark that there is “no computation without representation”.3 Harman would go on to defend functionalism as an internalist stance on conceptual content, on the basis of a Kantian distinction between the properties of the intentional object of experience and experience itself.4 Chalmers the case for computation as the foundation for cognitive sciorganization” of structure mirroring computational states, postulating an isomorphism between the physical and the computational mediated by the causal.5 In Chalmers’ view, computation is better served by a multi-level account which decouples a program from its realization, endorsing a minwide variety of empirical accounts of mind. Churchland in turn attempts to abolish the role of content entirely, accommodating computation within an eliminative materialism, which makes an appeal to the connectionist paradigm of neural networks, presenting a physicalist rebuttal of functionalism.6 For Churchland, folk psychology and neuroscience will 2 3 4 5 6

Fodor, J.A., 1981. Representations: Philosophical Essays on the Foundations of Cognitive Science. Brighton: Harvester Press, pp. 225-257. Fodor, J.A., 1981. The Mind-Body Problem. Scientific American, 244(1), pp. 114-123. Vancouver. Harman, G., 1990. The Intrinsic Quality of Experience, Philosophical perspectives, 4, pp. 31-52. Chalmers, D.J., 1993. A Computational Foundation for the Study of Cognition. Churchland, P.S., 1989. Neurophilosophy: Toward a Unified Science of the Mind-Brain. MIT Press.

51

Logiciel

phenomena, and a naturalization of content will lead to a radical elimination of common-sense notions regarding the nature of conceptual states, consigning computation’s role to mechanical processes. Many functionalist models share an implicit semantics, which has implications for the notion of computation that they endorse. In the work of the Pittsburgh School, descended is synonymous with its inferential role—a semantic holism leading to a theory of meaning centered on the deployment of language in speech acts. Various conceptual role semantics (CRS) of this kind have been explained via functionalist models, and an early discussion of this discourse is available in the work of Block.7 Invariably, in these models, a computational of content internalism is generally adopted, a position which necessarily invests computation with semantic content. Here, the computational and the functional become synonymous and are in turn coupled to parochial notions of mind qua huhighlights its inability to free itself from such anthropocentric conceits, at the risk of under-determining the nature of qualia

7

Block, N., 1986. Advertisement for a Semantics for Psychology. Midwest Studies in Philosophy, 10, pp. 615-678.

52

The Two Dogmas of Computationalism

describe.8 Block’s contention is that functionalism, despite its rise to prominence in the 1980s to a dominant model today, will need to rely on conceptual frameworks aside from functional analysis and computation, namely psychology, neuroscience and phenomenology, in order to mount a decisive defeat of physicalism.9 Fodor in turn lays out the limits of applying computation within models predicated on a version of CRS as follows: “Suppose, however, it’s your metaphysical view that the semantic properties of a mental representation depend, wholly or in part, upon the computational relations that it to the notion of symbol. You will then need some other way of saying what it is for a causal relation among mental not presuppose such notions as symbol and content. It 10

explanations within functionalism, a strong referential circument of computation in an account of cognition, one which

8 9 10

Block, N., 1978. Troubles with Functionalism. In: Theories of Mind: An Introductory Reader (2006), pp.97-102. Block, N., 2007. Consciousness, Function, and Representation: Collected Papers. Bradford. Fodor, J.A., 1998. Concepts: Where Cognitive Science Went Wrong. Oxford University Press.

53

Logiciel

made to resolve this bind by arguing for the transcendental role of computation in the symbolic regime, by way of a procedure I characterize as the encoding of syntax. The aim is not to defend functionalism, but instead to endorse a computational inferentialism predicated on content internalism. I identify encoding as the operation which induces the regime of the symbolic in the tokening of expressions under types, via an appeal to the constructive type theory of Martin-Löf. For now, let us trace some responses to the semantic issues raised by representational theories, such as those of Chomsky and Stitch, which both provide syntactic accounts of computation in theories of mind as a rebuttal of representationalism. Chomsky foregrounded computation in his theory of generative grammars, conjecturing a central role for syntactic structures in a theory of mind revolving around linguistics, which we shall cover in the next chapter.11 In the early work conceptual content, following Churchland in a rejection of common-sense notions of mental states such as belief.12 This stance develops a more pragmatic focus over time, reeling in ing admitted in theories of mind, on account of a normative which would distance Stich’s work from computational explanation.13 planation of mental kinds in themselves, leaning heavily on

11 12 13

Chomsky, N., 2002. Syntactic Structures. Walter de Gruyter. Stich, S.P., 1983. From Folk Psychology to Cognitive Science: The Case Against Belief. MIT Press. Stich, S.P., 1990. The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation. MIT Press.

54

The Two Dogmas of Computationalism

empirical insights to be delivered by neuroscience, and their some cases warranted. One of their major contributions is a connectionist defence of semantic holism which will inform the theory of computational inferentialism I will set out. A mately subordinates computation to mathematical contents. Egan appeals to Fodor’s own ‘formality condition’ as a cri-

only to the formal (that is, nonsemantic) properties of the 14 The issue for a formal account however is that explaining mental states via computational theories seems to ascribe intentional content to those theories, since mental states are widely considered to possess semantic, or conceptual, properties. It appears that computational theories have to explain mental states in terms of intentionality, and a rejection of this position leads to a bind which Egan characterizes in this manner: “The formal character of computational description appears to leave no real work for the semantic properties of the mental states it characterizes. Thus, computationalism has been thought by some to support a form of eliminativism, the thesis that denies that intentionally characterized states play a genuinely explanatory role in psychology… If the content of computational states is indeed explanatorily idle, then the relation between psychological states,

14

Egan, F., 1995. Computation and Content. The Philosophical Review, 104(2), pp. 181-203.

55

Logiciel

as characterized by computational psychology, and psychological states as characterized by our common-sense 15

For Egan, the explanatory role of computation in syntactic Stich, a position which is not compatible with functionalism or CRS. Egan ultimately holds that computation is synonymous with the execution of a mathematical function, and it is these mathematical contents which characterize a mechanism as computational, a position which constrains the computational and subordinates it to functional explanation, without adhering to radical eliminativism: “A computational theory provides a mathematical characterization of the function computed by a mechanism, but only in some environments can this function be characterized as a cognitive function (that is, a function whose arguments and values are epistemically related, such that the outputs of the computation can be seen as rational or 16

Those mathematical and cognitive functions which endow a computational state with content only play a modest role in Egan’s theory, in that they do not partake in the “individuative apparatus of the computational theory”, placing 15 16

Ibid. Ibid.

56

The Two Dogmas of Computationalism

Egan somewhere between the syntactic and representational camps.17 A more extreme rendition of the syntactic argument can be located from within computer science itself, a view I call Turing orthodoxy, the reduction of the computational to the machinic in a resolutely physical interpretation of Turing purely mechanistic account of computation.18 Piccinini seeks to distill the semantic from the computational, leaving a mechanical residue to be distinguished from both the syntactic and the functional, endorsing a multi-level account grounded in mechanistic explanation. Since it is my position that computation and inference cannot be decoupled, that computational explanations necessarily bear epistemic contents, I argue against Piccinini’s mechanistic account on the basis of three main objections: TMs are not purely formal insofar as any such symbolic tics, which allows meaning to enter through the back door, classically via a meta-linguistic apparatus detailed by Tarski. This leads to a type theoretic objection to Piccinini’s appeal to the symbolic. Any account of computation needs to specify in a convincing and unambiguous manner which transformations, processes, or operations (i.e. mechanisms) are

17 18

Egan, F., 2010. Computational Models: A Modest Role for Content. Studies in History and Philosophy of Science Part A, 41(3), pp. 253-259. Piccinini, G., 2015. Physical Computation: A Mechanistic Account. OUP Oxford. p. 146.

57

Logiciel

computational in nature and which are not. On this point Piccinini must appeal to the multiple realizability of a computational vehicle, and this again has semantic

To give a mechanical account of a process assumed to be naturalistic is the domain of science not philosophy, which is why computer science itself is founded on such an account (TMs). It is thus trivial to give such an account, and it provides little by way of philosophical value in the context of relating the formal, logical and material facets of computation, which I maintain cannot be fully decoupled. On this point I claim that physical mechanisms should constrain a theory of computation, but not serve as its grounding.

can be challenged on constructive grounds. The semantics of a TM can be considered wilfully naive, in the sense that it does not provide an account of those transformations to be automata. This gives the TM formalism a general and broad applicability which is desirable in many respects, but which planation. Piccinini’s description of TMs refers to a “string of digits” as the symbolic medium of the input/output tape, claiming that ‘strings’ are divested of any denotational duties. However, this phrase is meaningless from a type theoretic perspective, without summoning the respective data types alluded to—in fact, input cannot be conceived without the notion of a data type in the STLC. As such, both String and 58

The Two Dogmas of Computationalism

be considered computable at all. That is to say, there is no input without denotation in type theory. Piccinini calls this the “identity of computed functions” counter-argument, which amounts to the assertion that the domain and range of a function denote its input and output, a denotational semantics which is the norm in programming language design, popularized by Dana Scott in the 1970s. This stance can also be used to elicit claims on the mathematical contents of compuand range are mathematical in nature. As we have seen, Egan endorses such a view, introducing a narrow or weak notion of mathematical content as necessary for any form of computational explanation.19 Piccinini dismisses Egan’s interpretation in the following manner: “just as internal states of the same mechanism can be giv-

mathematical interpretations. So, if cognitive contents are not individualistic enough for Egan’s purposes, mathe20

uate the internal states of the computation, admitting a multi-level account anchored to a physical mechanism. As Ladyman, Chalmers, and Marr have all argued, the need 19 20

Egan, F., 2010. Computational Models: A Modest Role for Content. Studies in History and Philosophy of Science Part A, 41(3), pp. 253-259. Piccinini, G., 2008. Computation Without Representation. Philosophical studies, 137(2), pp. 205-241.

59

Logiciel

for multi-level explanation is critical given the multi-faceted nature of computation in its material and formal manifestations, and we have seen how canonical models give expression to these diverse articulations. However, any account should not be founded on mechanical explanation—in pursuing a naturalized metaphysics of computation, I consider the encoding of tokens into types the fundamental operation which characterizes computational explanation, and this turns out to be a metaphysical commitment which distinguishes the computational from mental kinds. The crucial point here is that mechanistic explanation—what Marr calls the implementation layer—should necessarily constrain any theory of computation but not serve as its foundation. 21 This is compatible with Piccinini’s observation that functional analysis is not autonomous from mechanical description, but whereas he concedes that neither a top-down nor a outline the relation between the logical and material aspects of computational explanation. In fact, Piccinini admits as a key property of computation that computational states, as opposed to mental states, are multiply realizable—this key property, which I also take as a given, implies that mechanistic implementation is merely a facet of a more fundamental expression of form. Indeed, it is this key property of compuof such medium independent computational states with multiply realizable mental states in functionalist accounts of mind. 21

Marr, D., 2010. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. MIT Press.

60

The Two Dogmas of Computationalism

However, one is not obliged to reach for a transcendental type theoretic rejoinder to Piccinini’s argument, as there are other objections one can raise to his mechanistic reductionism, arising from the manner in which physical computation is taken to constrain computational states. The nuance of Piccinini’s argument in elaborating the precise nature of physical constraints is considerable. The functional is conceived as a descendant of mechanical explanation, but these are conjoined in computation under the term “functional mechanism”, a notion which asserts that functional explanations necessarily impose structural constraints on their medium of implementation. The functional, however, is not in itself tethered to the computational, or even to the mathematical, but rather to the language of functional analysis, which is a systems theoretic vocabulary consisting of bounded systems, their constituent components, and their respective roles. Supplementing his constraints on the computational with a teleological condition, where teleology is conceived via an appeal to evolutionary selection mechanisms within clude a whole range of inorganic phenomena—from rocks to planetary systems. But further constraints are needed to properly individuate computations. Consider a stomach, an role in an organism—digesting food—and has a mechanical basis—is physically individuated—but is widely considered non-computational in nature. Here Piccinini distinguishes the biological from the computational by introducing the notion of implementation vehicle: computational systems are those that exhibit “the function of manipulating vehicles that

61

Logiciel

they are implemented”.22 One can develop a computational model of a stomach, but the state transitions are chemical in nature and operate on biological kinds, meaning they are mePiccinini’s appeal to the multiple realizability, not of mental kinds, but of computational states, a position which is inarguably a key property of computation in its formal sense. This distinction again appears to admit semantics by the back door, however—those vehicles which are computational are taken as given by canonical models, and it is merely remarked

syntactic, or indeed mechanistic, nature of a TM is assumed without further analysis, a point I have already challenged in This objection notwithstanding, an account which distinguishes between computation and other technological arti-

its teleological functions is to perform computations”, which 23 I take the teloi of technical artifacts as a given, since it is apposite to reject a fully social constructive account of technology, just as it is wise to dismiss technological determinism, but rather to hold the diverse agency of technical objects in the world as evident, an agency which necessarily imbues them with purpose. Without a novel treatment of agency, no convincing distinction can be 22 23

Piccinini, G., 2015. Physical Computation: A Mechanistic Account. OUP Oxford. p. 146 Ibid., p. 121.

62

The Two Dogmas of Computationalism

made between diverse functional mechanisms prevalent in technological systems, such as the hydraulic suspension system of a car, or a fan in a computer, and computation itself, on teleological grounds. Those distinctions rest solely on the multiple realizability hypothesis of computational vehicles plementation media, where those constraints are demonstra-

admitting the teleological nature of computation, pace Egan, who subordinates computation to mathematical contents, Piccinini follows Turing in claiming that no appeal to logic, uate computational states.24 But I would insist that the appeal to computational vehicles independent of physical media represents a stumbling block to any mechanistic account, as a de-

The main response to Piccinini from advocates of semantics in computing, such as Shagrir and Ladyman, is the socalled simultaneous implementation argument.25 This demonstrates the structural isomorphism between an AND and OR gate in digital computers, arguing that semantic interpretation each other—a single identical circuit could implement either 24 25

Egan, F., 2010. Computational Models: A Modest Role for Content. Studies in History and Philosophy of Science Part A, 41(3), pp. 253-259. Ladyman, J., 2009. What Does It Mean to Say That a Physical System Implements a Computation?. Theoretical Computer Science, 410(4-5), pp. 376-383.

63

Logiciel

depending on how one interprets the voltages. Defenders of the tation has to be able to distinguish between these fundamental logical operations.26 Again, this is an objection arising from the semantics of what Piccinini calls the medium-independent vehicle of computation. This is, in turn, an implicit challenge to to any standalone mechanistic or purely syntactic description of computation. Piccinini accepts this argument but draws a teraction between mechanism and its surrounding context in order to pursue a wide construal of functional mechanism. Here the bounded nature of a system, which grounds the language of functional analysis, exposes a serious limitation of the mechanistic argument. Indeed, Piccinini accepts the notion of an internal semantics of computation, but seeks to draw a clear distinction with external content, when such a distinction contrast commits to specify the relation of computation to logic, something which is not explicit in any canonical model. In later chapters, I will aim to trace the homotopic version of type theory put forward by univalent foundations (Voevodsky) as a way of making explicit the logical nature of a holistic computational worldview. This worldview, an intuitionistic view logic without simply subordinating the computational to the mathematical, or brushing external content under the carpet.

26

Shagrir, O., 2018. In Defense of the Semantic View of Computation. Synthese, pp. 1-26.

64

The Two Dogmas of Computationalism

AND

0

1

OR

0

1

0

0

0

0

0

1

1

0

1

1

1

1

Boolean truth tables for AND and OR operators. Their structural isomorphism is exhibited by flipping their bits, leading to the argument that an interpretative act is required to distinguish their implementation on physical media.

Piccinini’s appeal to the surrounding context of a computation—a wide functionalism—in an attempt to push semantics away from computational explanations, speaks to key issues raised by interaction in computational theories of mind. Putnam was to abandon strong functionalism by virtue of his call semantic externalism—the notion that meaning is not in the mind, but rather emerges through interactions between agents embedded in an environment.27 Indeed, a pragmatic stance on semantics is capable of mounting an attack on multiple realizability which is potentially devastating to the claim that minds can be fully realized as Turing machines. composability of discursive acts on such pragmatic grounds— the twin notions of agency and interaction do not seem to be reducible to computational states, at least not those available 27

Putnam, H., 2013. The Development of Externalist Semantics. Theoria, 79(3), pp. 192-203.

65

Logiciel

to us in canonical models.28 Brandom’s analytic pragmatism is an attempt to formalize such capacities with the deployment will trace the contours of the relation of computation to interaction in more detail via the logic of Girard, who would come to formulate a geometry of interaction, a ludic perspective on dialogics, and a resource-bound perspective on propositions known as linear logic. We will see how this relates to the notion of interactive computing (Goldin & Wegner), concurrency, and parallelization in turn. For now, we should emphasize that these are potentially insurmountable objections to a fulsorb the challenge of interaction and the ensuing problem of semantics. In particular, a functionalist position would need semantic holism with interactive agency. Let us remain skeptical for the moment on the claims of strong functionalism, while admitting the multiple realizability of computational kinds, a stance which rejects their reducibility to syntax or tinctly inhuman account of intelligence, loosened from the grip of human epistemology.

28

Brandom, R.B., 2008. Between Saying and Doing: Towards an Analytic Pragmatism: Chapter 3. Oxford University Press.

66

The Two Dogmas of Computationalism

2.2 Computational Realism The second dogma of computationalism concerns the broad application of computational explanation to physical phenomena, guided by a conviction that computational states exthe computational as a natural kind. The modern origins of these ontic claims are in the work of pioneering computer scientist Konrad Zuse, whose book Rechnender Raum (Calculating Space) was published in 1970. From this source, a range of pan-computationalist positions, including expressions of computational universalism, sprang forth, leading to debates such as the simulation hypothesis (Chalmers, Bostrom) and the discourse on digital philosophy (Wolfram, Chaitin).29 This is best summarized by a strong version of the ChurchTuring thesis proposed by Deutsch:

simulated by a universal model computing machine oper30

29 30

Fredkin, E., 2003. An Introduction to Digital Philosophy. International Journal of Theoretical Physics, 42(2), pp. 189-247. Deutsch, D.E., Barenco, A. and Ekert, A., 1995. Universality in Quantum Computation. Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences, 449(1937), pp. 669-677.

67

Logiciel

a conjecture made by Deutsch regarding the universality of a digital physics is posited in which the universe is to be modelled as a cellular state automaton, proceeding step by step according to simple rules from which complexity emerges, as in Conway’s Game of Life. As Piccinini has argued, such explanation, and the burden of proof rests on those digital philosophers which would claim such a physics, absent any serious experimental evidence.31 Indeed, the common basis for these claims appears to be an appeal to parsimony would not settle on a spacetime substrate composed entirely exhibiting no more than a faith in the digital, instead begging tational metaphysics while leaving the mechanics intact. As such, ontic computationalism of this sort can be interpreted as a metaphysics which imbues physical states with computational properties without observing empirical constraints, no more than a Pythagorean ontology—an appeal to hypercomputation with no basis in physics.32 In pursuing a naturalized metaphysics of computation constrained by empirical

31 32

Piccinini, G., Chapter 4: Pancomputationalism in Physical Computation. By Pythagorean I am referring here to Aristotle’s characterization in Metaphysics 1.5 of the doctrine that numbers are the fundamental building blocks of the universe.

68

The Two Dogmas of Computationalism

tout court, but there is a deeper issue exposed by these discourses which merits further examination. We have already discussed the physical constraints which might satisfy an account of computational states in the previous section, outlining a family of multiply realizable, teleological functional mechanisms as good candidates, while we have rejected a purely mechaof characterizing these constraints in terms of the degrees of freedom imposed on implementation media by computation. Degrees of freedom denote the dimensionality of a phase space describing all the states that a system can inhabit, in a sense the number of free variables the system has at its disposal. Putnam and other functionalists have claimed that these state spaces are so vast for most natural phenomena that even state automaton.33 Chalmers argues convincingly against this claim by asserting the determinate nature of transitions in the computational domain, insisting on the computational as a set of formal constraints imposed on implementation media, accepting input and output as intrinsic properties of decision procedures.34 Giuseppe Longo also employs a variation of this biological phenomena, describing in detail stochastic aspects of biology, emphasizing the unpredictability of ontogenetic processes, from cell growth to gene expression. In highlight-

33 34

Putnam, H., 1988. Representation and Reality. MIT Press. p. 121. Chalmers, D.J., 1996. Does a Rock Implement Every Finite-State Automaton?.  Synthese, 108(3), pp. 309-333.

69

Logiciel

and the limits of chaos theory in describing the biological, via notions such as attractors and bifurcations, Longo dispels claims to computational reason in the natural sciences with an appeal to indeterminacy.35 However, accepting degrees of freedom as a means of expressing the physical constraints on computational media is a defensible metaphysics resting on the notion of information, and it admits an implicit thermodynamics underlying the dogmatic application of computational realism. This leads us the assertion that information is real, underpinning claims Landauer principle, which draws out the thermodynamic metaphor at the heart of information theory, known as Shannon entropy, given by the formulation

H = –∑i( Pi × log2Pi ) where Pi is the probability that a variable adopts a given state, i, and entropy, H ing a message within a given medium. Shannon famously takes Boltzmann’s microstate description of entropy, the foundation of statistical physics, as the conceptual basis for a digital theory of communication. Landauer provides a literal interpretation of this metaphor by an appeal to certain kinds of computaverse computational transitions, expressed as the state change in a single bit of information, the Landauer principle states that 35

Longo, G., 2009. Critique of Computational Reason in the Natural Sciences. In: Fundamental Concepts in Computer Science. pp. 43-70.

70

The Two Dogmas of Computationalism

“any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom”36 This connection between logical and thermodynamic irreversibility is widely accepted and has been convincingly defended as a physical law.37 Implicitly, information-bearing degrees of freedom link a physical medium to computation, described as such, the energetics of computation is a topic which cannot easily be dismissed as trivial speculative metaphysics, leading us to consider the ontic interpretation of information as a serious claim regarding phenomena rooted in empirical physics. the free energy principle (Friston), which states that complex systems restrict their physical states in order to minimize, compress or otherwise optimize their representation of the environment.38 The empirical testability of this principle is a matter of some debate, but it underlies theories such as those of active inference and predictive coding in cognitive science (Clark, in experimental data, and these are discussed in more detail in

36

37

38

Bennett, C.H., 2003. Notes on Landauer’s Principle, Reversible Computation, and Maxwell’s Demon. Studies In History and Philosophy of Science, Part B: Studies In History and Philosophy of Modern Physics, 34(3), pp. 501-510. Ladyman, J., Presnell, S., Short, A.J. and Groisman, B., 2007. The Connection Between Logical and Thermodynamic Irreversibility. Studies In History and Philosophy of Science, Part B: Studies In History and Philosophy of Modern Physics, 38(1), pp. 58-79. Friston, K., 2010. The Free-Energy Principle: A Unified Brain Theory?. Nature Reviews Neuroscience, 11(2), pp. 127-138.

71

Logiciel

negentropy, which asserts that biological organisms are characterized by entropy minimization, in turn advancing an informational theory of life qua the organization of complexity, a stance with contemporary articulations.39 Semantic theories of information of this kind constitute an active research area, and we should attempt to distinguish such theories, such as those of Floridi, from computational reason.40 I take the latter to be an explanatory framework grounded in inference, whereas I take information to be a physical theory regarding uncertainty in data, as classically conceived by Shannon. In the algorithmic view of information theory, as propounded by Chaitin, information expresses the complexity of a syntactic expression in terms of its compressibility.41 From this vantage point, a theory of informational states possesses far weaker claims on conceptual content than those invoked by computational kinds. Moreover, information seems ill suited to a semantic treatment, as it would need to be transformed into a logical theory with alethic properties, extending well beyond Shannon’s formalism, in the process narrowing its explanatory reach considerably. Computation is classically conceived as a ly syntactic properties, and I adopt this stance as a practical cy of binary encodings (information), and a broader epistemic

39 40 41

Krakauer, D., Bertschinger, N., Olbrich, E., Ay, N. and Flack, J.C., 2014. The Information Theory of Individuality. arXiv Preprint. arXiv:1412.2447. Floridi, L., 2013. The Philosophy of Information. OUP Oxford. Chaitin, G.J., 1977. Algorithmic Information Theory. IBM Journal of Research and Development, 21(4), pp. 350-359.

72

The Two Dogmas of Computationalism

framing is that theories of information possess broader potential application to natural kinds than computation, as evinced by the thermodynamic relation sketched above, which is necessarily mediated by an appeal to information bearing degrees of freedom. Any computational treatment of the Real must yield a and the continuum. If one accepts, via Landauer’s principle, the coupling of informational and thermodynamic irreversibility, and one also accepts, via Piccinini, a set of constraints imposed by computational vehicles, one has some of the ingreis a proper analysis of the computational vehicle or model, and this is where the intuitionistic view of computing can play a role. We have seen how, in Brouwer’s logic, the continuum can

the cardinality of , the real number line. Physicist Nicolas Gisin has used this perspective to develop an intuitionist physics which is informational in nature and complements the view of computation set out in this text. In Gisin’s account, intuitionism renders the continuum sticky, it cannot be cut cleanly as Dedekind would have us believe, its consistency closer to a viscous goo, the rational cut akin to taking a knife to honey, a state arising precisely because one has to summon time in order to manifest the Real.42 It is this intuitionist lens 42

Gisin, N., 2020. Mathematical Languages Shape Our Understanding of Time in Physics. Nature Physics, 16(2), pp. 114-116.

73

Logiciel

which frames the computational perspective on the dual of the continuous and the discrete, portrayed in terms of the decidability of a terminating process, an admission that propo-

mathematical concept. This indeterminism is a direct conse-

of the real as a dynamic process of construction. Intuitionism could just as well go by the name temporal logic, as all its major implications stem from its unwavering commitment to a temporal mode of reasoning. It is this commitment to temporality which manifests what Beatrice Fazi calls “computation’s very own indeterminacy”, an indeterminacy posited by Fazi as intrinsic to the computational regime.43 For Fazi, this contingency is “logically inscribed into every computation”, but it is “beyond symbolic representation”.44 While Fazi locates this indeterminacy in the inscription of the incomputable, it is only via intuitionist semantics that this relation is fully elaborated, destabilising the appeal to an axiomatic indeterminacy internal to computation itself, locating it instead in the temporal nature of decision procedures. These I take to be broadly inferential rather than narrowly axiomatic in nature, conditioned not by an incomplete axiomatics but rather an inferential dynamics. What’s missing in Fazi’s otherwise insightful account of contingency in axiomatic systems is an 43 44

Fazi, M.B., 2018. Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. Rowman & Littlefield. p. 183. Fazi, M.B., 2019. Digital Aesthetics: The Discrete and the Continuous. Theory, Culture & Society, 36(1), pp. 3-26.

74

The Two Dogmas of Computationalism

time in line with contemporary physics, a point which can be

the intuitionist continuum generates an indeterministic view of time at odds with general relativity, situating it closer to 45 This indeterminism lies beyond the reach of statistical physics, a pure contingency distinct from the algorithmic randomness formalized by Martin-Löf, resembling instead an irreducible form of indeterminacy. This follows from the fact Real numbers in the classical sense are incompatible with constructive mathematics, leading to the substitution of the principle finite information quantities. If one accepts the reality of information hypothesis, this would in turn suggest a physics guided by a fundamental indeterminism, rendering the universe incapable of encoding its tially Brouwerian interpretation of time—conjectured by Gisin as the creation and destruction of information—which own mould as a process of becoming real.46 From this perspective, real numbers are synonymous with processes which

45 46

Del Santo, F. and Gisin, N., 2019. Physics Without Determinism: Alternative Interpretations of Classical Physics. Physical Review A, 100(6), p. 062107. Gisin, N., 2019. Indeterminism in Physics, Classical Chaos and Bohmian Mechanics: Are Real Numbers Really Real?. Erkenntnis, pp.1-13.

75

Logiciel

becomes dynamically sensitive, and the real is grounded by an irreducible contingency. Gisin’s physics casts the energetics of computation as the encoding of time, an asymmetric thermodynamic movement, spawned by irreversible computational acts, which in turn manifest the entropic nature of the real. The interpretation of spontaneous collapse—a resolutely realist position in which interaction (measurement) in time and space. This casts doubt on talk of a domain of the incomputable beyond the symbolic as somehow constitutive of computation, as Fazi contends, but instead suggests a process ontology tethered to an indeterminate temporality, an informational realism which nevertheless readily admits the incomputability of the real. Time is cast as a true medium of contingency, beyond the block universe presented by the dissipation of information resulting from immanent interactions.47 It is only within such a conception of time that undecidability fully expresses itself, as a symptom of the indeterminacy of the real, a condition resulting from processes of encoding which are necessarily computational. The intuitionist link between symbolic modes of incompleteness and the indeterminacy of physics is perhaps most any axiomatic system there are an endless number of undecidable propositions, not just a handful of exceptions. What

47

In contrast to Elie Ayache’s assertion that contingency is mediated by acts of inscription or measurement, see Ayache, The Medium of Contingency.

76

The Two Dogmas of Computationalism

the Real. This is what binds the Gödelian sentence, Euclid’s parallel postulate, or even Conway’s Game of Life, under the also represents the major challenge in fundamental models of subnuclear interactions pose a seemingly unassailable threat to determinism. From the computational standpoint, these

them under a vector of entropy which tends towards contingency. This does not serve to undermine decision procedures as such, but rather, as Fazi remarks, “to enhance the possibility of an open-ended—or indeed of a contingent—understanding of them.”48 logic, a realist stance which accepts the ontological basis of tion of computation by positing a non-deterministic phys-

interaction—it is modest enough to admit the measurement problem as insurmountable without recourse to additional metaphysics, such as those of spontaneous collapse—it does sketch out a multi-level model as a candidate for the poten-

48

Fazi, M.B., Contingent Computation. p. 116.

77

Logiciel

positing an absolute contingency beyond the computational, Gisin’s physics suggests a speculative ontology redolent of Meillassoux, akin to an information theoretic variant of speculative realism.49 While it exhibits a metaphysical commitment to information, its merit rests on an anchorage to intuitionistic principles and their respective formalization, leading to a plausible naturalistic account without appeals to hypercomputation, setting it apart from naive expressions of computational universalism, such as those of digital philosophy or Deutsch’s quantum thesis.

49

Meillassoux, Q., 2010. After Finitude: An Essay on the Necessity of Contingency. Bloomsbury Publishing.

78

THREE

Conceptual Embedding

“It is by reflecting its character of progressive correction that intelligence will recover the mobility and fluidity of the current of which it is the deposit… Intelligence, reimmersing itself in life, relives once more its genesis, from static it becomes dynamic, agile from frozen.” 1 — Georges Canguilhem

1

Canguilhem, G., 1943. Commentaire au troisième chapitre de L’évolution créatrice. Bulletin de la Faculté des Lettres de Strasbourg, 21(5-6), pp. 126-143. Translated by author.

79

Conceptual Embedding

3.1 Notions of Formality In surveying the role of writing in mathematics, Danielle Macbeth outlines two major philosophical traditions, one to the inscription of marks on paper, and a second documen2 mal reasoning. In modern terms, we can broadly align these two accounts with formalist and intuitionist perspectives, which we can conceive as (narrow) axiomatic and (wide) inferential accounts of logic respectively. In its extremity, Hilbert’s view accords no intrinsic meaning to symbols—as Brouwer pointed out as early as 1912, for the formalist the essence of mathematics is in symbols themselves, whereas for the intuitionist the key site of mathematics is in the mind. That is to say, for Brouwer, mathematics is primarily a form of reasoning, not a practice oriented around inscription—language is merely an imperfect attempt to represent the outcome of a 2

Macbeth, D., 2014. Realizing Reason: A Narrative of Truth and Knowing. OUP Oxford. pp. 278-279.

81

Logiciel

cognitive process. The semantic implications of such a view have been alluded to, but are worth discussing further in the context of what Negarestani calls the “autonomy of the formal”.3 As Dutilh Novaes notes, the formal is a means of “de-seder to highlight its structural properties.4 For Dutilh Novaes, symbols are conceptually independent aspects of working with formalisms, and they both play a key role in the cognitive debiasing which is the intentional aim of formalization. This in turn echoes Piccinini’s attempt to divest computation from semantics, a formalist conception of computing I portrayed as an adherence to Turing orthodoxy. Historically, computation has been cast as an agent of arithmetic, computation has played the role of residual, a remainder resulting from the emptying of conceptual content from reasoning, a remnant leaving only a mechanical expression of form. Macbeth traces this essentially formalist view of logic back to Kant: “most work over the course of the twentieth century was based on fundamental assumptions that are essentially logical form and semantic content, and an understanding of meaning in terms of truth—and by way of an under-

3 4

Negarestani, R., 2018. Intelligence and Spirit. Urbanomic/Sequence Press. p. 376. Novaes, C.D., 2012. Formal Languages in Logic: A Philosophical and Cognitive Analysis: Chapter 6. Cambridge University Press.

82

Conceptual Embedding

treatment of logic, has been an essentially mechanistic conception of reasoning, an understanding of it as nothing more than the rule-governed manipulation of signs with no regard for meaning.”5 Implicit claims regarding the autonomy of the formal, such as those in the work of Carnap, are borne of this attempted distillation of the syntactic from the semantic, advancing an intensional view of logic. Likewise, Chomsky’s hierarchy of generative grammars, which aims to link formal language to automata theory, erects a syntactic bridge between autonomy and automation. The distinction we should draw here is between mechanical rule-following behaviour and the capacity to alter ones own inferential abilities, as described by the Greek autos (self) and nomos (laws), a distinction which rests on notions of expressivity and agency.6 As Chomsky famously argued, it is trivial to construct syntactically valid expressions which are meaningless to the point of absurdity—e.g. pink algorithms decay emotionally—underlining the manner in which grammatically sound statements can be generated according to strict syntactic rules, but entirely divested of semantic content. This concept underlies Chomsky’s hierarchy, which links linguistics and computation via the notion of expressive capacity. In this scheme, each expressive level of generative grammar is coupled to a corresponding automata endowed with the ability to produce such linguistic statements, allowing an ascent

5 6

Macbeth, D., Realizing Reason. p. 293. For further discussion, see Wolfendale, P., 2017. Autonomy & Automation. .

83

Logiciel

in complexity from regular to context-free, context-sensitive to recursively enumerable grammars, the latter of which correspond to TMs at the top of the automata hierarchy. GRAMMARS (GENNERATOR)

AUTOMATA (ACCEPTOR)

RE

TM

CSG

LBA

CFG RG

PDA FSA

Chomsky hierarchy depicting the equivalence of Regular (RG), Context-Free (CFG), Context Sensitive (CSG), and Recursively Enumerable (RE) grammars with their respective automata, Finite State (FSA), Push Down (PDA), Linear Bounded (LBA) and Turing Machines (TM).

Placing the computational solely on the syntactic or formal straints, as discussed in Chapters 1 & 2. Forcing or pre-empting the distinction between syntax and semantics is riddled with conceptual pitfalls, and a neatly delineated boundary cern. As Macbeth notes, modern developments in logic have issued stern challenges to formalism that go beyond intuitionism, emphasizing the manner in which “meanings and 84

Conceptual Embedding

mathematical ideas seem to be essential in actual mathematical practice”.7 Searle’s well known Chinese Room thought experiment is an example of just such a failed attempt at distilling syntax from semantics—it is not that one should seek to elide the two, but that the complex emergence of semantics needs to be acknowledged and understood in depth as an interactive process. 8 To illustrate this let’s consider a generative grammar R, a computational language given by the tuple:

R = (V, T, P, S) Where V is a set of variables representing a vocabulary of tokens (a set of strings), T bols that demarcate sentences, P is a set of production rules and S is a start symbol from which induction can progress. Consider R a computational model of inductive reasoning—R is conceived as an adaptive grammar of inferential rules, learned via input from an environment, through a suIn this formalism, sets of statements are themselves sensical propositions, they retain soundness when combined or composed into aggregates. In Chomsky’s hierarchy, this would tic halting properties. R is constrained by its environment at the level of rule induction, but not in terms of its generative

7 Ibid., p. 293.
8 Searle, J.R., 1980. Minds, Brains, and Programs. In: The Turing Test: Verbal Behaviour as the Hallmark of Intelligence. pp. 201-224.


This generative capacity comes at a semantic cost, however—Brandom's objections regarding the role of pragmatic elaboration in the development of expressivity apply here.9 Crucially, the grammar lacks any notion of incompatibility or inconsistency, absent methods for the material revision of its own rules, and this lack of capacity is in turn a symptom of a lack of agency on behalf of the language bearer. Just as Carnap's failure to formalize language should be seen as a symptom of an overly logicist conception of expressivity, so too formal grammars appear expressively impoverished.10 By contrast, the expressivism of Brandom, following Sellars, highlights the role of normativity, by way of deontic commitments implicit in modal alethic assertions, without rendering these as strictly formal vocabularies, but rather broadly inferential, with a focus on their pragmatic role in agent-based reasoning.11 It is only through discursive acts enabled by mutually recursive re-cognition that agents develop meaningful utterances embedded in an environment which anchors denotation; there is no other path to "semantic ascent" (Negarestani)—without such modes of interaction one is left with a logical husk we could simply call zombie formalism, the study of form sheared from any discursive site of cognition and action.12 A

9 Brandom, R.B., 2008. Between Saying and Doing: Towards an Analytic Pragmatism. Oxford University Press. pp. 31-54.
10 Carnap, R., 2002. The Logical Syntax of Language. Open Court Publishing.
11 Brandom, R., 2009. Articulating Reasons: An Introduction to Inferentialism. Harvard University Press. pp. 79-96.
12 Negarestani, R., Intelligence & Spirit. p. 334.


mathematician might object that this is exactly the domain of pure mathematics, citing number theory as a paradigmatic example of the study of a priori form, but the intuitionist would be minded to reject such appeals on constructive grounds, insisting on mathematical reasoning as a cognitive endeavour, an inferential act which irrevocably anchors it to an entirely operational account of number, in the form of a Church encoding, which casts doubt on its a priori status. Following Brandom, the expressivity of a language should be linked, not simply to its generative capacity, but to a notion of agency in the form of pragmatic abilities, and agency in turn should not be confused with the autonomy of a language.13 Claims as to the autonomy of formal grammars would have us take these as hard limits on the expressivity of any formal-language-bearing agent and their utterances—that is to say, on the capacities with which a language endows agents engaged in its deployment in the world. Instead, the autonomy of the formal appears constrained to the realm of deduction, the traditional domain of classical logic, hermetic analytic procedures whose epistemic claims are limited by their non-ampliative nature. On this point, Macbeth has argued against the received view, attempting to show how deduction can be deemed ampliative via a reading of Frege's Begriffsschrift.14 For Frege, analyticity is characterized by the fact that conclusions are contained in the very premises of a proof, "as plants are contained in their

13 Brandom, R., Articulating Reasons: An Introduction to Inferentialism. pp. 45-79.
14 Macbeth, D., Realizing Reason. p. 383.


seeds".15 Frege's concept-script makes it possible to express complex concepts, a form of irreducible "intelligible unity" that cannot be decomposed into its constituent parts.16 A prime example is an induction or successor rule, which is deployable as a premise, and such inferential rules can themselves be induced, issuing what Macbeth calls an "inference license" to other theorems, a second order function which generates inheritance relations within a proof which are not axiomatic in origin.17 As I will show in the next section, a constructive way of viewing this claim, which essentially concerns the creativity of logic, is via an intuitionistic interpretation of Gentzen's natural deduction, which lays bare the inductive formation of inferential rules and the dialectical nature of reasoning. In this manner, we can link the expressivity of formal logic to a claim regarding the creativity of proof construction, enabled by the inferential schema of intuitionistic proof theory, which I will in turn show is but one facet of a computational worldview. This serves to highlight the expressive limits of formal grammars under the model of computational linguistics proposed by Chomsky, which appear to have no grasp on dialogical reasoning, absent any discursive abilities. It is precisely these dynamics which for Lautman propel the dialectical development of mathematics as a discipline, and which for Brandom constitute the origin of expressivity itself.18 For Lautman, the genesis of form is intrinsically dialogical, and the manifestation of truth is only assured via processes of refutation, "a whole series of precisions, limitations,

15 Ibid.
16 Ibid., p. 388.
17 Ibid., pp. 391-392.
18 Lautman, A., 2011. Mathematics, Ideas and the Physical Real. A&C Black. pp. 203-206.


exceptions, in which mathematical theories are asserted and constructed".19 For formal grammars to exhibit the synthetic generativity which is associated with mathematics, and which I in turn, via the intuitionistic view, associate with computation, they would require capacities for interactive synthesis which they appear to lack. This gap between the generativity of formal grammars and their seeming incapacity for interactive synthesis constitutes a major challenge for the expressivity of computational reason, and besides the treatment of natural deduction which follows, we will examine the limits of neural computation in this regard in later chapters, in the process considering an extension to Chomsky's hierarchy via interaction grammars.
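Before proceeding, the operational account of number invoked above can be made concrete. In a Church encoding, a natural number simply is the operation of n-fold application; a hedged Haskell sketch (mine, not the author's):

{-# LANGUAGE RankNTypes #-}
-- Church encoding of the naturals: the number n is whatever applies
-- a function n times; number as operation rather than object.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ z = z

suc :: Church -> Church
suc n s z = s (n s z)

-- Decoding is mere instantiation: apply (+1) to 0, n times over.
toInt :: Church -> Int
toInt n = n (+ 1) 0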

3.2 Realizability of Truth

Intuitionism reconceived the status of mathematical objects and the temporality of mathematical reasoning, enacting a radical break with Hilbertian formalism—deduction on a static system of axiomatic givens—via its insistence on the realizability of truth.20 Brouwer and Heyting reframed the relation of mathematics to

19 Ibid., p. 205.
20 Brouwer, L.E.J., 1975. Intuitionism and Formalism. In: Philosophy and Foundations of Mathematics. North-Holland. pp. 123-138.


indeterminacy, by infusing the discipline with a proof oriented semantics, which cast aside the classical interpretation of mathematical expressions, replacing the central notion of truth with proof.21 This resonance between the intuitionist and computational worldview enabled Brouwer's logic to mature, in the decades that followed, into the basis for a rich computational theory of types. In the resulting intuitionistic theory of types, proposed by logician Martin-Löf, types are to be treated constructively, as a means of encoding logical propositions.22 In their constructive guise, types are no longer simply a restriction on a family, level or universe of sets (Russell), or even on the output of a function (Church), but rather the result of a deep correspondence between logic and computation, enshrined in the so-called Curry-Howard isomorphism, developed by logicians Haskell Curry and William Howard in the 1960s.23 To understand this coupling of logic and computation, we need to examine the roots of proof theory in the natural deduction system proposed by Gentzen (1934). Gentzen's innovation was an alternative semantics for logic, oriented around the induction of inference rules which could be composed into proofs, providing an inferential as opposed to axiomatic account of logical formalism. Macbeth distinguishes notions of the axiomatic from the inferential in giving an account of Gentzen's innovative system:

21 Dummett, M., 1975. Philosophical Basis of Intuitionistic Logic. Studies in Logic and the Foundations of Mathematics, Vol. 80. Elsevier. pp. 5-40.
22 Martin-Löf, P., 1998. An Intuitionistic Theory of Types. Twenty-Five Years of Constructive Type Theory, 36, pp. 127-172.
23 Sørensen, M.H. and Urzyczyn, P., 2006. Lectures on the Curry-Howard Isomorphism. Elsevier.


"In an axiomatic system, a list of axioms is provided (perhaps along with an explicitly stated rule or rules of inference) on the basis of which to deduce theorems. Axioms are judgments furnishing premises for inferences. In a natural deduction system one is provided not with axioms but instead with a variety of rules of inference governing the sorts of inferential moves from premises to conclusions that are legitimate in the system. In natural deduction, one […]"24

In contrast with the hermetic order of the axiomatic, governed as it is by a priori judgements, the inferential is oriented towards the creation of novel premises subject to extensible operations. The language of natural deduction is composed of operators which are given by a pair of introduction and elimination rules, as the deceptively simple example for conjunction below shows:

A    B                 A & B
-------- (&-I)         -------- (&-E)
 A & B                    A

Conjunction introduction rule (&-I), and elimination rule (&-E), in Gentzen's system of natural deduction, for propositions A and B. A second elimination rule is possible, leaving just B as a conclusion.
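Read through the propositions-as-types lens developed below, these two rules amount to pairing and projection. A minimal Haskell gloss (an illustration of the correspondence, not part of Gentzen's system):

-- &-introduction: from proofs of A and B, form a proof of A & B (a pair).
andI :: a -> b -> (a, b)
andI x y = (x, y)

-- &-elimination: from a proof of A & B, recover the proof of A...
andE1 :: (a, b) -> a
andE1 = fst

-- ...or, via the second elimination rule noted in the caption, the proof of B.
andE2 :: (a, b) -> b
andE2 = snd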

24 Macbeth, D., Realizing Reason. p. 75.


The meaning of the conjunction operator, &, is synonymous with its introduction rule, and despite the symbol being colloquially known as the ampersand, it is posited as intensional in nature and theoretically substitutable by any other symbol one would like to choose in its place. This is precisely because its meaning is anchored to the role it plays in proof construction, which in turn is an entirely syntactic notion, simply the manner in which it allows symbolic manipulation to proceed—this leads Girard to call natural deduction the origin of the "syntactic tradition" in logic.25 Truth becomes synonymous with a method for constructing a proof, in contrast to Tarski semantics, which relies on a meta-linguistic apparatus. Tarski introduced the notion of a meta-language to semantically frame any given formal system, in an attempt to ground its truth values, an abstraction leaking ever outwards in an eventual appeal to natural language for the meaning of primitive logical operators. Girard rails against this appeal to meta-language as essentially a cheap trick, pushing semantics ever further from logic, passing the buck, so to speak—"You ask for money and are paid in meta-money", he remarks in exasperation.26 In seeking an alternative to Hilbertian formalism and its accompanying semantics, natural deduction locates meaning in the inference rules internal to the language, in turn staking a claim towards an intensional logic—a formal system whose internal rules can be explored independent of any extensions

25 Girard, J.Y., Taylor, P. and Lafont, Y., 1989. Proofs and Types, Vol. 7. Cambridge: Cambridge University Press. p. 3.
26 Girard, J.Y., 2003. From Foundations to Ludics. The Bulletin of Symbolic Logic, 9(2), pp. 131-168.


(designations), but whose premises are nevertheless induced. Lautman captures the apparent reciprocity of logical operations and their surrounding domain in terms of the formal passage from essence to existence:

"in logic the operations rely spontaneously on a domain that they seem to summon by the very organization of their structure. We will see that conversely, in a very large number of cases, the domain appears already prepared to give rise to certain abstract operations. Our intention being to show that completion internal to an entity is asserted in its creative power, this conception should perhaps logically imply two reciprocal aspects: the essence of a self realizing […]"27

The constructivist might seek to temper Lautman’s Platonist appeal to essence in a dynamic framing of the relation of matter and form, couched instead in terms of operations and structures. From this view, natural deduction exhibits the peculiar reciprocity of creativity and autonomy characteristic of constructive proof theory—the synthesis of premises, which in turn introduce novel types, furnishing the system with its expressivity, while its extensible inferential basis serves as the engine driving the genesis of intensional structure. It could be tempting to take this claim to autonomy as neo-rationalist in character—the realm of the intelligible, or the realm of forms, asserts its irreducibility, via the autonomy of the formal, to

27 Lautman, A., 2011. Mathematics, Ideas and the Physical Real. A&C Black. p. 147.


the empirical—in its attempt to construct the conditions legislating the possibility of rational thought as such. But to do so would be to elide the inductive basis of inferential rule formation—the synthesis of novel definitions within a proof, the complex interplay in Gentzen's system of induction and deduction, of assertion and refutation, introduction and elimination—furnishing it with a decidedly dialectical structure. Here we may recall the 'twoity' of intuitionism, a twoness in which a multiplicity of premises are posited, to be elaborated in the form of proofs. What proofs represent in this scheme are an application of inference rules—a proof tree denoting an inverted branching structure which begins with a plurality of assumptions and premises (leaves) to be discharged by inferences in order to deduce a novel conclusion at a root node.28 The self-referential nature of this scheme should not elude us—in the absence of axioms, the system must induce its own

premises, and this is the creative act in any truth procedure. Gentzen's system admits a single recursive operation known as cut-elimination, a computational operation which yields a normalization procedure—while a proof comes to resemble a program, its execution resembles the process by which the proof is reduced to a normal form, beyond which no further reduction is possible, a form constituting the computational content

28 Dummett, M., 2000. Elements of Intuitionism, Vol. 39: Chapter 4. Oxford University Press. p. 88.


of said proof.29 The recursive nature of proof theoretic constructs will be explored shortly, and the symmetry of this system will be discussed via Girard's account of dialogical logic in what follows. Natural deduction can be given a constructive treatment, aligning it with an intuitionistic treatment of types, leading to the assignment of propositions to types denoted as

A & B : (M, N)

This expression represents the conjunction of two types, M and N, corresponding to the two logical propositions, A and B, linked by the assignment operator (:), which is the primitive operation in type theory. The types in turn can accept terms as elements, as in the language of the STLC developed by Church, the expression then conveying a correspondence between two distinct languages, one logical (natural deduction) and one computational (the lambda calculus).30 It can be shown that such a correspondence exists for any proposition under an intuitionistic interpretation of natural deduction, which is to say that any statement in the language has its corresponding type. From this perspective, logical propositions are to be treated as types, and proofs in turn as programs which are able to construct their corresponding proposition.31
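To see the correspondence at work: under this reading a proof is literally a program one can run. The commutativity of conjunction, A & B implies B & A, is realized by the function that swaps a pair; a hedged sketch in Haskell rather than the STLC:

-- The type (a, b) -> (b, a) encodes the proposition A & B implies B & A;
-- any total program of this type counts as a proof of it.
swapProof :: (a, b) -> (b, a)
swapProof (m, n) = (n, m)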

29 Martin-Löf, P., 1971. Hauptsatz for the Intuitionistic Theory of Iterated Inductive Definitions. In: Studies in Logic and the Foundations of Mathematics, Vol. 63. Elsevier. pp. 179-216.
30 Sørensen, M.H. and Urzyczyn, P., 2006. Lectures on the Curry-Howard Isomorphism. Elsevier.
31 The Univalent Foundations Program, 2013. 1.11 Propositions as Types in The HoTT Book. Institute for Advanced Study.


I will examine the semantic implications of this coupling in the next chapter, but for now note the use of brackets in our example—this corresponds to the realizability interpretation of intuitionistic logic, proposed by Kleene and then given a proof theoretic treatment by Kolmogorov. Following Brouwer and Heyting, and eponymously known as BHK realizability, this interpretation enshrines an explicitly computational view of logic, providing a bridge between constructive mathematics and computability.32 In this scheme, propositions are replaced by their realizers, namely proofs or programs that can construct the truth of said proposition—the types M and N represent the proofs of propositions A and B respectively—and a correspondence between logic and computation is internalized into the language itself. The introduction of a new operator allows the relation to be expressed as (M, N) ⊩ A & B, which reads "M and N realize propositions A and B". Accepting realizability as a distinct perspective on logic leads Bauer to remark that in such a mathematics truth is tied to what "can be computed".33 Realizability can be deemed a materialist conception of truth, in which a multiplicity of methods or operations for its construction are given a central semantic role, an operational semantics in contrast to the denotational semantics of the Tarskian tradition. A computational semantics of this sort has a role to play in modal logic, in which common approaches (e.g. Kripke frames) are typically model theoretic—the idea being that realizability

32 Kleene, S.C., 1945. On the Interpretation of Intuitionistic Number Theory. The Journal of Symbolic Logic, 10(4), pp. 109-124.
33 Bauer, A., 2017. Five Stages of Accepting Constructive Mathematics. Bulletin of the American Mathematical Society, 54(3), pp. 481-498.


can act as a novel modal operator introduced as a computational expression of possibility. To mark out an expression as realizable is itself a modal judgement, suggesting an alternative proof theoretic treatment of the category of the possible.34 For now, we should point out that the insistence on the realizability of truth proceeds from a distinctly computational stance on logic, committing it to provide the methods for its own construction, without appeal to an external authority, grounded instead in the induction of inferential rules.

3.3 Expressive Bootstrapping

Brandom's notion of expressive bootstrapping follows from his analytic pragmatism: a language is not limited in saying what it is one has to do in order to bear additional capacities. He gives the example of a sports coach advising a professional athlete—the coach need not be endowed with the physical abilities of said athlete in order to know what it is they have to do to improve their performance.35 Similarly, an artist may direct

34 Reed, P. and Bawa-Cavia, A., 2020. Site as Procedure as Interaction. In: Beech, A., Mackay, R. and Wiltgen, J. (eds.), Construction Site for Possible Worlds. Urbanomic.
35 Brandom, R.B., 2008. Between Saying and Doing: Towards an Analytic Pragmatism. Oxford University Press. pp. 22-23.


the production of a steel sculpture, their practical ignorance of the craft does not delegitimize their agency in such a situation, as long as they are able to articulate the desired output coherently and within the constraints of the material.36 While these articulations are clearly not synonymous with the capacities they describe, there is no reason why low-level automata cannot bootstrap their way up the hierarchy of grammars using such techniques—seeming limits on their expressive capacities can instead be overcome by rendering artifacts whose

practical elaboration can extend capabilities beyond the strict boundaries imposed by the hierarchy of grammars. Brandom has a pragmatic notion of agency in mind when he says "automata, in the general sense in which I want to think about them, are constellations of practices-or-abilities that algorithmically elaborate sets of primitive practices-or-abilities into […]".37 In such a manner, a context-free grammar may be able to articulate what it is to have the capacities of an automaton higher up the hierarchy, without access to any such resource itself. This is not just a form of articulation for Brandom, it is a means to concretely execute the capacities of said machine. An FSA can generate sentences such as 'Push 'A' onto the stack' in order to not just

36 A mystification of craftsmanship or a Marxist critique of the labour relations at play can be supplied as a rejoinder, but they do not fundamentally delegitimize the pragmatic notion of agency outlined here, alluding rather to its relation to power.
37 Brandom, R., Between Saying and Doing. p. 34.


simulate, but concretely elaborate the capacities of a Push Down Automaton (PDA), via what Brandom calls a pragmatic metavocabulary. As I will argue in my discussion of encoding, it is the indexical nature of such a vocabulary that marks out such forms of bootstrapping as computational in nature. One can view the project of AGI through this pragmatic lens, humans enacting expressive bootstrapping via models which perform feats entirely incommensurable to human reasoning, leveraging internal languages which increasingly do not yield semantically to human observation, a process of elaboration I call conceptual embedding. Take any search of the world wide web as an example—an ensemble of models is summoned to navigate a graph representation of billions of nodes representing documents, in order to infer relevant results; or consider a system capable of instant two-way phrase mapping between all existent natural languages, or else a language model fed more text than any human could cognize in a lifetime. Each of these models embeds linguistic concepts in a high dimensional space with no neural correlate, exhibiting its own distinct geometry, the topology of which I argue represents an internal inferential logic. These forms of statistical inference are executed within a recursively enumerable grammar, corresponding to the top of the automata hierarchy, raising the question of the relation of natural language reasoning, and its associated speech acts, to computational reason. If one models the trajectory of human rationality in artifactual terms, the paths of formal and so-called 'natural' languages appear to diverge and fork in irredeemable ways, an observation which hinges on the notion of incommensurability, which in set theoretic


terms we can describe as a claim concerning the bijection of vocabularies.38 It would appear that human reasoning has bootstrapped its way into forms of expressivity which would altogether elude the human mind in all practical terms, without an artifactual elaboration of what is not merely a subset of natural language, but an autonomous hierarchy of generative grammars, bound to their own interactive sites of formation. This process of elaboration is redolent of the strand of philosophical research known as conceptual engineering,

which couples a revisionary view of philosophy with a design oriented approach to the practice of concept creation, implementation, and revision.39 Pollock has defended the integration of conceptual role semantics (CRS) with the theory of conceptual engineering, dealing with the degree of control agents as inconsistent as humans can theoretically have over their own conceptual states.40 It is this integration, a content internalism, which I will endorse in my rendering of computational reason, informed by modes of conceptual embedding under the rubric of AGI. By contrast with conceptual engineering, I will emphasize computational models as the media of conceptual elaboration, artifacts which raise their own issues regarding interpretability and expressivity.41

38 See Cavia, A.A., 2018. Arity & Abstraction. Glass Bead Journal.
39 Chalmers, D.J., 2020. What is Conceptual Engineering and What Should It Be?. Inquiry, pp. 1-18.
40 Pollock, J., 2020. Content Internalism and Conceptual Engineering. Synthese, pp. 1-19.
41 Ibid.


The notion of bootstrapping as outlined by Brandom is echoed in computability theory, whereby programs of a lower complexity class can verify solutions found by those of a higher class. We should distinguish here two treatments of complexity in theoretical computer science. Firstly, the algorithmic information theory of Chaitin, in which the complexity of an expression is cast in terms of the length of the shortest program which generates it, a measure tied to incompressibility.42 This framing positions expressions on a spectrum of pattern governed regularities to incompressible complexity. The second is the broader notion of complexity classes, which attempt to formalize the relation of algorithms to time and space, a resource-based view which concerns itself with the scaling properties of computations with respect to input. While Chaitin's treatment focuses on the output of an algorithm, subordinating information to a computational frame, complexity classes offer a more interactive treatment of program execution, and each generate notions of expressivity distinct from those of Chomsky's hierarchy of grammars. Take, for example, P (polynomial time) and NP (non-deterministic polynomial time) classes of programs. These are posited to scale polynomially in terms of their execution time with respect to the size of their input, distinguished by the fact that the latter execute on a theoretical non-deterministic Turing Machine. This is in every respect a TM, aside from its ability to follow multiple branches of a search tree simultaneously. These classes are assumed to be

42 Chaitin, G.J., 1977. Algorithmic Information Theory. IBM Journal of Research and Development, 21(4), pp. 350-359.


incommensurable by most computer scientists, if not proven to be so. However, computability theory has shown that solutions found by NP class programs can be verified by P class algorithms, despite the algorithms in class P lacking the resources to obtain those solutions themselves. This creates an epistemically curious situation, in which a bounded verifier can

make guarantees about the correctness of the proof within a bounded time, as sketched below. While this does not entail possessing the creative resources to produce the proof, it nevertheless affords epistemic traction on the knowledge contained in the proof itself. This echoes expressive bootstrapping's claims regarding the capacities of simpler grammars and their ability to articulate capacities theoretically beyond their reach, in turn exposing the porous nature of expressivity, whereby pragmatic forms of realizability breach boundaries erected in both computability theory and computational linguistics in distinct ways. Moreover, these forms of bootstrapping can be posited as a modern act of rationalism in philosophy—initiating a new phase in the speculative tradition, where speculation, in its strict sense, alludes to transcendental claims constrained by, but not bound to, empirical judgements. For Hegel, the speculative moment was aufhebung itself, a self-sublation which propels the dialectical genesis of form via a determinate negation.
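The canonical instance of this asymmetry is Boolean satisfiability: discovering a satisfying assignment is, as far as we know, intractable, yet checking a proposed assignment is linear in the size of the formula. A hedged Haskell sketch (the types and the verify function are illustrative, not a standard library):

type Literal = Int            -- positive k encodes variable k; negative k, its negation
type Clause  = [Literal]
type CNF     = [Clause]       -- a conjunction of disjunctive clauses

-- Polynomial-time verification of an NP certificate: the assignment is
-- checked clause by clause, with no search performed.
verify :: (Int -> Bool) -> CNF -> Bool
verify assign = all (any holds)
  where holds lit
          | lit > 0   = assign lit
          | otherwise = not (assign (negate lit))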


Notably, Whitehead would go on to emphasize the importance of speculation in modern philosophy as a means of metaphysical generalization.43 I will defend a speculative claim regarding the indexical role of encodings in reasoning in the next chapter—not as a means of generalization, but rather as the guarantor of the computational as such: an encoding of syntax is a process which performs an embedding of logic into a space which renders it computational. However, both encodings and syntaxes appear to be co-constituted, a situation which necessitates a metaphysical account, at the risk of a strong co-referential circularity. This metaphysics of encoding evokes Laruelle's "transcendental computer", enacting what he describes as "the automation of transcendence".44 For Laruelle, philosophy limits the "automaticity" of this movement, while Turing orthodoxy is an insufficient account of the machine presupposed by such a spectre.45 Contra Laruelle however, we seek a constructive account of this movement in terms of computation itself, a constitutive teleology, as opposed to a non-philosophical resolution. Anna Longo, by contrast, considers the automation of philosophy not only a plausible prospect but its proper condition, posed as a set of game theoretical inferential moves governed by inductive logic. Giving and asking for reasons is, in Longo's view, no more than "the calculation

43 Whitehead, A.N., 2010. Speculative Philosophy. In: Process and Reality. Simon and Schuster.
44 Laruelle, F., 2013. The Transcendental Computer: A Non-Philosophical Utopia. Translated by Taylor Adkins and Chris Eby.
45 Ibid.


of what one expects in return for believing a hypothesis to be true".46 Longo casts philosophical reasoning as "algorithmic learning", signalling a rejection of metaphysics reminiscent of Carnap. I would assert instead that the process of conceptual embedding enacts modes of expressive bootstrapping characteristic of computational reason—as Wolfendale has suggested, it provides a parallel track to the project of AGI, leading to the conjecture that the synthesis of these projects would provide a critical philosophy of AGI.47 The forms of epistemic access which facilitate metaphysical explanations of this sort are physically constrained, delimiting the valid means of reverse engineering the real.48 This breach of physics is isomorphic to a breach in complexity class or a formal grammar, and bootstrapping is synonymous with the breaching of seemingly incommensurable levels via self-referential procedures, comprising speculative acts in their own right. The account of computation I endorse in turn emphasizes the expressive bootstrapping capacities of computational reason, in ascending a hierarchy of grammars whose autonomy from humans as parochial minded agents needs to be scrutinized in pragmatic

46 Longo, A., 2021. The Automation of Philosophy or the Game of Induction. Philosophy Today, 65(2), pp. 289-303.
47 Wolfendale, P., 2017. Autonomy & Automation.
48 This question is beyond the scope of these notes but is addressed at length in Ross, D., Ladyman, J., Spurrett, D. and Collier, J., 2006. Everything Must Go: Information-Theoretic Structural Realism.


terms, and it is this emphasis on interaction that marks out the theory from speculative claims regarding superintelligence (Bostrom).49 Indeed, Bostrom's account of intelligence explosion is entirely ignorant of the pragmatic constraints from which such forms of expressive bootstrapping emerge—Brandom's insistence on interaction as the driver of epistemic metavocabularies and artifactual elaboration—and it is these constraints which anchor computational reason to local sites of truth construction (realizabilities), sites which bestow neither […]

49 Bostrom, N., 2017. Superintelligence. Dunod.


FOUR

The Topological Turn

"The 'analogies of [the] structure of reciprocal adaptations' between the continuous and the discontinuous are, for Lautman, among the basic engines for mathematical creativity."1 — Fernando Zalamea

1 Zalamea, F., 2012. Synthetic Philosophy of Contemporary Mathematics. Translated by Zachary Luke Fraser. Urbanomic/Sequence Press. p. 55.


4.1 Structuralism, Invariance, and Univalence

In 2014, the late Fields medalist Vladimir Voevodsky publicly outlined his motivations for a new foundational project for mathematics, indicating a topological turn in theoretical computer science.2 He underscored the success of set theory's appeal to human reasoning, singling out hierarchy as a key underlying intuition, while lamenting the expressive limitations imposed by the predicate logic which grounds it. He decried the informality of proof checking by professional mathematicians, highlighting the complexity of contemporary mathematics, outlining numerous cases when a proof had been found to be faulty many years after publication. These errors had in some cases propagated to further conclusions, including instances in his own work. By instead putting mathematics into formal correspondence with logic, an act only possible via an encoding of both languages, Voevodsky conjectured a foundational system with greater expressive

2 Voevodsky, V., 2014. The Origins & Motivations of Univalent Foundations. Institute for Advanced Study.


power built around a computational kernel. For Voevodsky, this system would need to be composed of three components: "The first component is a formal deduction system: a language and rules of manipulating sentences in this language that are purely formal, such that a record of such manipulation can be verified by a computer program. The second component is a structure that provides a meaning to the sentences of this language in terms of mental objects intuitively comprehensible to humans. The third component is a structure that enables humans to encode mathematical ideas in terms of the objects directly associated with the language."3

In such a manner, Voevodsky seeks to build foundations which openly admit the inconsistency of mathematical formalism, taking inspiration from the intuitionistic view in its constructive orientation.4 A mathematician might respond with hostility to such a suggestion, accordingly defending the creativity of mathematics against such a reductive model of her practice, but as I will try to show, the aim of Voevodsky's principle of univalence is to center the creativity of proofs within extensible conceptual spaces, the constraints of which correspond to the realizability of novel structures limited only by the expressive reach of intelligibility itself. This constructivist project, dubbed Univalent Foundations, has set itself the ambitious

In such a manner, Voevodsky seeks to build foundations which openly admit the inconsistency of mathematical formalism, taking inspiration from the intuitionistic view, in tation.4 A mathematician might respond with hostility to such a suggestion, accordingly defending the creativity of mathematics against such a reductive model of her practice, but as I will try to show, the aim of Voevodsky’s principle of univalence is to center the creativity of proofs within extensible conceptual spaces, the constraints of which correspond to the realizability of novel structures limited only by the expressive reach of intelligibility itself. This constructivist project, dubbed Univalent Foundations, has set itself the ambitious 3 4

Ibid. Voevodsky, V., 2010. What If Current Foundations of Mathematics Are Inconsistent?. Institute for Advanced Study. .


scope of reframing the discipline of mathematics as a computational practice centered on the encoding of proofs, raising deep questions regarding the nature of computation. It is the theory of types outlined by Martin-Löf which provides the means of encoding logic and mathematics into formal language, a correspondence which Voevodsky saw as the basis for a new foundation—a more rigorous and solid footing for mathematics. Voevodsky's research in homotopy theory serves as an origin from which to erect new foundations with "intrinsic geometric content",5 in an attempt to build on the success of category theory, which remains the most complete post-set-theoretic account of foundations. The pursuit of foundations, often at odds with the emphasis of the working mathematician on applications, is in this case motivated by a new way of practicing mathematics within the medium of computational languages. It follows Lawvere's observation that progress on foundations is achieved by "consciously applying a concentration of applications from geometry and analysis, that is, by pursuing the dialectical relation between foundations and applications".6 Scant philosophical attention has been paid to this foundational project—most notable are the remarks of James Ladyman, who holds that the programme potentially constitutes an "autonomous foundation for mathematics" providing "an

5 Awodey, S., 2016. Univalent Foundations Seminar. Institute for Advanced Study.
6 Picado, J., 2007. An Interview with F. William Lawvere. CIM Bulletin, 23, pp. 23-28.


account of the semantics, metaphysics and epistemology as well as a framework".7 The relativity of logic, and the context sensitivity of mathematics, is a commonly held view within the discipline thanks to the advances of category theory, a trajectory Zalamea calls the "Einsteinian turn" in mathematics.8 The categorical view of logic holds that a plurality of mathematical domains, or categories, are inseparable from the logics they in turn embody, and no precedence can be given to any one category over another.9 These developments emphasize the locality of truth according to a range of concepts in algebraic geometry, such as section, scheme, site, and locale.10 Inspired by Grothendieck's expressive vocabulary for dealing with spatial invariances—including cohomology and motif—Voevodsky sought to renew this vocabulary, adopting the concept of homotopy as a new invariant with which to underpin identity. The common interpretation of topological space in mathematics is set theoretical—a topology is a set of subsets of a given space, closed under union and finite intersection. The definition is purposely abstract; these subsets could represent points in space or any other mathematical object. In other words, a topological treatment of space is always a means of gleaning a structure latent in the space. But the constructive mathematics alluded to here takes this a step further, asserting the primacy of topology over geometry, as Bauer tries to explain,

7 Ladyman, J. and Presnell, S., 2018. Does Homotopy Type Theory Provide a Foundation for Mathematics?. The British Journal for the Philosophy of Science, 69(2), pp. 377-420.
8 Zalamea, F., 2012. Synthetic Philosophy of Contemporary Mathematics. Translated by Zachary Luke Fraser. Urbanomic/Sequence Press. p. 270.
9 Lawvere, F.W. and Schanuel, S.H., 2009. Conceptual Mathematics: A First Introduction to Categories. Cambridge University Press.
10 Goldblatt, R., 2014. Topoi: The Categorial Analysis of Logic: Chapter 14. Elsevier.


"In the usual conception of geometry a space is a set of [points endowed with extra structure, whereas constructively] the extra structure is primary and points are derived ideal objects. For example, a topological space is not viewed as a set of points with a topology anymore, but rather just the topology, given as an abstract lattice with suitable properties, known as a locale. In constructive mathematics such treatment of the notion of space is much preferred to the […]"11

Here we see the precedence of structure over geometrical notions, enshrined in the algebraic concept of a locale, without recourse to a set theoretic treatment of topology. Univalence concerns itself with the equivalences within these spaces, in a manner compatible with type theoretic concepts, which come to replace their set theoretical counterparts. In set theory, the primitive objects are elements and sets, and the fundamental operations are belonging and inclusion. Category theory is instead composed of objects and their morphisms—conceived as a typology of transformations—in which Set is but one of many categories. In univalent foundations, types and paths take their place, presented as richer notions than their categorical counterparts, designed to transpose them to a geometric formalism. As such, univalent foundations asserts its autonomy from pre-existing mathematical ontologies, advancing an independent research

11 Bauer, A., 2013. Intuitionistic Mathematics and Realizability in the Physical World. In: A Computable Universe: Understanding and Exploring Nature as Computation. pp. 143-157.


programme. Category theory had already enacted a form of semantic ascent, the subsumption of sets by categories; univalent foundations extends this ascent to families of categories and types. The innovation posed by univalent foundations is a correspondence to supplement the existing coupling of logic and computation, in which propositions can be treated as types and proofs as programs, the so-called Curry-Howard correspondence, providing a powerful framework for the construction of many mathematical objects which would otherwise be rendered incomputable. The formalism relies on the key property of homotopy, the idea that two continuous functions can be continuously deformed into each other, an expression of topological invariance which allows us to analyze metric spaces in a structural manner, rendering various topologies computable, despite their reliance on continuous functions. Computability becomes synonymous with a continuous path within these typed spaces, casting the dual of the continuous and the discrete in a new light. In the resulting homotopic treatment of types, paths are primitives which precede intervals in space.12 Paths between two types A, B, are function types of the sort A→B, which in turn can be interpreted logically as A implies B, and to map one type to another is to construct a new proposition as a distinct type. These paths are generated between two objects on the basis of a structure preserving transformation which is invertible, namely an isomorphism.

The Univalent Foundations Program, 2013. Homotopy Type Theory: Univalent Foundations of Mathematics: Chapter 2. Institute for Advanced Study.

114

The Topological Turn

A homotopy (H) exhibited between two continuous functions (f0, f1) mapping between topological spaces x and y. The equivalence class of spaces homotopic to x is called its homotopy type.
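For reference, the standard definition underlying the figure (supplied here for clarity, not present in the original): a homotopy between continuous maps f0 and f1 is a continuous interpolation between them, written in LaTeX notation as

H \colon x \times [0,1] \to y, \qquad H(-,0) = f_0, \qquad H(-,1) = f_1,

so that the deformation parameter traces a path from f0 to f1 in the space of maps.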

To trace a path from A to B, then, is to demonstrate an isomorphic relation, to establish an identity between two objects which simultaneously has a logical, algebraic and geometric interpretation. While category theory can be considered the general study of morphisms, under univalence isomorphism takes priority. Voevodsky's Univalence Axiom (UA) condenses this notion into the idea that isomorphism and identity are themselves isomorphic concepts, a recursive formulation expressed as:

(A ≃ B) ≃ (A = B)

Equivalence is in turn underpinned by a univalence grounded in a


universe Type, a type of types. Univalence can as such be regarded as a form of mathematical structuralism that privileges isomorphism above other types of transformations, breaking with category theory in the process, which by contrast treats all morphisms on an equal footing. The consequences of the UA lead Awodey to remark that "mathematical objects simply are structures" from this viewpoint.13 The notion that identity is structural, and that all else is implementation detail, entails that two objects are identified whenever their structures match, that is if their inputs and outputs conform to the same interfaces, at the highest possible level of abstraction, where the level is context sensitive to the domain of application.

4.2 The Curse of Dimensionality

In the early twentieth century, Hilbert showed that calculus and geometry could be performed in spaces of arbitrary dimensions, opening up the path to the high dimensional linear

Awodey, S., 2014. Structuralism, Invariance, and Univalence, Philosophia Mathematica, 22.1, pp. 1-11.

116

The Topological Turn

algebra at work in both quantum mechanics and contemporary AI. This development profoundly shifted the notion of space in mathematics, and Lautman highlights its role in reformulating the dialectic of the continuous and the discrete in spatial terms: "the problem of the genesis of the discontinuous on the continuous is not a problem of physics, the drama is played out much higher, at the level of mathematics. The continuous is the domain of variation of the variable, the discontinuous are the solutions, and the mix that is the Hilbert space is homogeneous to the continuous by the nature and topology of its elements, and to the discontinuous by its […]"14

The decomposition of Hilbert spaces is carried out by vectors which create a family of orthogonal subspaces that partition the space. This operation relies on the concept of dimensionality, conceived as an orthogonal domain of representation, whereby each new dimension opens a domain incommensurable to those of existing domains. For Lautman, this dialectic of the continuous and its 'structure' is fundamental to mathematics, exhibited in the dialectical relation of arithmetic to the domain of continuous analysis, a reciprocity Zalamea identifies in Lautman's

14

Lautman, A., 2011, Mathematics, ideas and the physical real. A&C Black, p. 167

117

Logiciel

work: “Lautman thereby stresses the tense union of the continuous-discontinuous dialectic within modern mathematics, a union which takes on aspects of ‘imitation’ or ‘expression’ 15

sentation of the real needs to engender dimensionality reduction, to negotiate what computer scientists call the curse of of the real that can accommodate the sparsity of correlation encountered at high dimensions.16 This dualism, couched eimathematics by way of two largely distinct branches—algebra and geometry—which historically represent discrete and continuous contexts for the practice of mathematics. The arc of initiating a rupture in mathematical foundations, via what Lautman would call a new mixture, namely algebraic geometry. Along with Hilbert Spaces, topos theory constitutes a major breakthrough for the conception of space in modern mathematics. Building on the sheaf theory of Leray and others, the topos (Lawvere) is a categorical means of representing the underlying topological structure of any space in terms of 15 16

Zalamea, F., Synthetic Philosophy of Contemporary Mathematics. p. 270 Verleysen, M. and François, D., 2005, June. The Curse of Dimensionality in Data Mining and Time Series Prediction. In: International Work-Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg. pp. 758-770.


objects called sheaves, which in turn encapsulate the schemes, sections and sites of a space.17 For Grothendieck, this once and for all punctures the dialectic of the continuous and discrete at the heart of mathematics: "This idea encapsulates, in a single topological intuition, both the traditional topological spaces, incarnation of 'spaces' of the unrepentant abstract algebraic geometers, and a huge number of other structures which until that moment had appeared to belong irrevocably to the 'arith[metic world']."18

Grothendieck's reflection here concerns the triad of Number, Magnitude and Form.19 The algebraic geometry of the topos represents for him the fusion of Number (arithmetic) and Magnitude (geometry) under the new categorical Form of the topos.20 The reverberations of this dual articulation of space, at once topological and geometric, both structured and continuous, are at the heart of contemporary developments in computational theory. The notion of space as a topos was the springboard for Deligne, Voevodsky and others, to develop ideas on invariance in the pluralistic world of topoi, leading to research on cohomology and motifs, before Voevodsky realized homotopy as a new invariant fit to underpin identity. It

17 Goldblatt, R., 2014. Topoi: The Categorial Analysis of Logic: Chapter 14. Elsevier.
18 Grothendieck, A., 1985. Récoltes et semailles. Réflexions et témoignage sur un passé de mathématicien. Translated by Roy Lisker (2002). p. 76.
19 Ibid., p. 63.
20 Ibid., p. 65.


was Grothendieck's intuition that higher order categories could serve as models for homotopy types, the so-called "homotopy hypothesis", which served as inspiration to Voevodsky to formulate a general theory of homotopy types.21 What remained missing was an understanding of computation's role in this new constellation, something which the univalent foundations programme promises to address. Under this new paradigm, computation need no longer be relegated to a sub-branch of algebra, in the form of the general recursive functions, but could play a central role at the core of mathematics, by virtue of its ability to operate in the full range of constructive categories, namely the Cartesian closed categories, spaces which in turn bridge algebra and geometry. This correspondence between category theory and the lambda calculus had been sketched out by Lambek by the early 1970s, providing the original link between categorical foundations and computational theory, a relation which was to be brought into focus by Voevodsky's new perspective.22

21 Grothendieck, A., 1983. Categories as Models for Homotopy Types. In: Pursuing Stacks. Unpublished manuscript.
22 Lambek, J., 1980. From λ-calculus to Cartesian Closed Categories. In: To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism. pp. 375-402.


4.3 Recursion as Isomorphism

In canonical models of computation, recursion is the fundamental operation—an object admits a computational treatment if a recursive function operates upon it. Recursion is explicit in Gödel's general recursive functions, while it is implicit in both Church's lambda calculus and the Church-Turing thesis. Indeed, it was Gödel who sought to formalize recursion, having explored its power in the proof of his incompleteness theorem, helping to outline the domain of the computational as recursively enumerable. Yuk Hui has considered the history of recursivity in the naturphilosophie of Schelling and Hegel, tracing its relation to contingency via cybernetics. For Hui, recursive models "express psychic and collective individuation", acting as unifying mechanisms in accounts of organic and inorganic modes of self-organization.23 Negarestani similarly discusses recursive modes of self-reference in relation to the mapping I→ I*, in which concrete self-consciousness acts as the identity

23 Hui, Y., 2019. Recursivity and Contingency. Rowman & Littlefield International. p. 196.


map of formal self-consciousness, I. In Negarestani's words, "self-relation enables intelligence to treat its own structure as the intelligible object of its own understanding",24 an expression of Hegel's "actualization of self-consciousness" under the rubric of Geist.25 This underlines the centrality of recursion in computational accounts of general intelligence, an artifactual elaboration of mind only possible through a self-referential relation to its own conditions. As such, recursion is the prime mechanism by which a subjectivity within computational reason can theoretically emerge, as the instantiation of an inconsistency, what Badiou calls the "inconsistent multiplicity" of Being.26 Recursion enables self-relation, via a second-order mapping of identity, which in turn creates a cut in which the subject contends with the aporia of internal re-cognition, a challenge to what Kant would call the apperceptive unity of inner sense—a purported unity which conceals the inconsistency in Being which puts in motion the concretization of intelligence. But it is this inconsistency initiated by recursion, which the computational is unable to inscribe, that marks it out as a vector of reason cleaved from mind, rather than a concrete mental kind. The incomputable is instead embodied by the undecidable realization of the conditions for subjecthood. Lyotard's discussion of the machinic unconscious highlights precisely that which is missing in the regime of the computational, namely any escape from the despotic addressability of

24 Negarestani, R., 2018. Intelligence and Spirit. Urbanomic/Sequence Press. p. 36.
25 Hegel, G.W.F., 1998. Phenomenology of Spirit. Motilal Banarsidass Publ. p. 211.
26 Badiou, A., 2007. Being and Event. A&C Black. p. 30.


the given, a hermetic panoptics of inscription that renders the unthought unthinkable.27 As Badiou notes, the unconscious is that object which does not fall under a sign or concept, and such states lie out of the reach of computational reason, both in principle and in practice.28 Here we are interested in the role of recursion within a topological account, which will eventually lead to a reframing of the creative subject in computational reason, via a new model of automata guided by a pragmatically mediated notion of agency. What does a topological interpretation of recursion look like? Under univalence, primitive recursion is an isomorphism between an inductive loop (an endomorphism) on the Natural numbers (ℕ) and any other such structure. It has an accompanying logical rule of inference and relies on the inductive notion of a successor function.29

27 Lyotard, J.F., The Inhuman. p. 20.
28 Badiou, A., Number and Numbers. p. 41.
29 Awodey, S., 2012. Univalent Foundations Seminar. Institute for Advanced Study.


Recursion as an isomorphic map (rec) on a structure consisting of an inductive function (s) on the Natural numbers (ℕ) and any other such algebraic structure (X). Two terms, o and a, of type ℕ and X respectively, are needed to exhibit the map.30
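In code, the content of the diagram is the familiar iterator: given a base term a of type X and a step s : X → X, there is a canonical map rec sending o (zero) to a and successor to s. A Haskell sketch of the structure (illustrative, not Awodey's notation):

data Nat = Zero | Succ Nat

-- The structure map from (Nat, Zero, Succ) into any algebra (X, a, s):
-- rec a s is the unique function sending Zero to a and Succ to s.
rec :: x -> (x -> x) -> Nat -> x
rec a _ Zero     = a
rec a s (Succ n) = s (rec a s n)

-- Addition, for instance, arises by iterating the successor:
add :: Nat -> Nat -> Nat
add m = rec m Succ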

Recursion has no privileged position as a primitive operation, but exists instead as a derived concept, relegated to the status of isomorphism, while induction is represented by a morphism applied by an object to itself. The vocabulary of category theory would appear to subsume these computational concepts. However, univalence makes its own claims to autonomy, leaving us with the open question as to how we individuate the role of computation within these foundations. A common interpretation is that category theory provides a semantics for type theoretic constructs, but this would consign the realm of types to exclusively syntactic procedures—a position at odds with intuitionist semantics, which points to algorithmic

30 Ibid.


procedures establishing the sense of mathematical expressions, representing the very mechanism of identity construction. If neither recursion nor induction are primitive operations in this setting, we must ask what foundational operations exist, if any, within this topological perspective. If all is reducible to the vocabulary of category theory, namely objects and morphisms, what remains? In other words, what is the role of computation under this model? One could simply make an appeal to a canonical account, such as Church's lambda calculus, to confer on computation its classical status, but the univalent framework appears to take precedence over such a characterization. If we are to assume that computation and mathematics stand shoulder to shoulder in these new foundations, it is prudent to examine what distinguishes computational tendencies from the mathematical, and to what extent these two blur or remain distinct under this new arrangement. With the aim of sketching out a topological model of computing which does not owe its semantics exclusively to existing mathematical ontologies, I propose that there are two operations which individuate computation within the univalent worldview. These constitute the type theoretic means by which recursion can be established as an isomorphism. The first is encoding, and corresponds to the assignment of a token to a type, the primitive operation in type theory. The second is embedding, and corresponds to the generation of higher spaces via path induction, a synthetic operation which allows for the construction of paths in ever increasing levels of abstraction. Encodings and embeddings represent more general concepts with which to subsume these type theoretic operations, allowing for a more expansive model with


greater explanatory reach. The two operations give us conceptual traction over the novel framework presented by univalent foundations, distinguishing its account of computation—in particular its homotopic treatment of types—from canonical models. Moreover, this conceptual treatment will give us a means of uniting ideas from statistical inference with deductive formalism, to construct an integrated account of computational reason grounded in topology. Let us look at these two operations in more detail, as they will need to be elaborated in order to produce a viable topological model of computation.

4.4 Types as Encodings

Types are the central primitive concept in univalent foundations, encompassing sets, groups and all other mathematical objects. We should acknowledge the long history of the concept, a genealogy traceable to the categories in Aristotle's logic, Locke's general terms, and Kant's categories of the understanding, through to their modern role in resolving the Russell Paradox, an antinomy at the heart of logicist foundations. In their constructive guise, types represent an encoding of propositions via an assignment procedure, instituting an isomorphism between logic and computation. This encoding of logic performed by types is the foundational computational act, an indexical


operation which brings linguistic expressions under a sign we call the Type of that proposition, highlighting the intrinsically indexical correspondence between proofs and programs which lies at the centre of univalent foundations. This renders the practice of mathematics the implementation of encodings which are computational in nature. We can see types as the most general application of the principle of encoding, and computing can be conceived in essence as an indexical procedure, the bringing into correspondence of an expression with a symbol—the operation x : A simply denotes the assignment of a token x to a type A. Popular examples of encoding include Gödel numbering in mathematics, the Unicode character standard in computing, or JPEG and MP3 media formats. More broadly, encoding is the act which renders any object computable, and the sense of encoding I am proposing encompasses the manifestation of programs in their entirety. While it would be tempting to reduce encoding to the arithmetization of syntax, in the manner of Gödel's application of number theory, there is a more fundamental aspect invoked here, whereby encoding takes on a broader epistemic role, without which the relationship between logic and mathematics cannot be fully formalized in language. In rejecting a metaphysics of number via Gisin's assertion that real numbers aren't real, the primacy of the numerical has been usurped by the reciprocity of more basal notions of operation and structure. The computational view


supplants Brouwer and Gentzen's appeal to twoity, destabilising numerical realism in favour of an account centered on encoding, as characterized by the passage from patterning to tokening. It is this movement which marks intelligibility as the compression of regularities (patterns) into encodable structures (tokens), seeding an account of syntax as topology. The assignment relation may evoke the hierarchical notion of belonging in set theory, but in actuality it is closer to a correspondence relation operating on multiple levels. At a lower level we see the encoding of concrete logical propositions as types, in order to express their structural relations within a universe of proofs or programs. At the highest level is a correspondence between Gentzen's natural deduction and the simply typed lambda calculus (STLC), enabling a type theoretic interpretation of logic. This correspondence can be read either as a meta-linguistic frame or as a computational operation acting on a propositional language, and these readings mark two distinct semantic traditions. The first (classical) interpretation, which we can associate with Tarski, assumes a meta-linguistic frame (natural deduction) conferring semantics on a target language (type theory), via the construction of proofs. In the second (computationalist) interpretation, which we can trace to Heyting via Gentzen, the meaning of an operator is the result of an immanent procedure bound up in the very genesis of formal language as such. The syntactic view holds that the formation of inferences is the result of such processes whose logic is necessarily intensional, that is to say, manifest in its internal structure, which is induced without recourse to classical axioms. One can broadly align these two semantic traditions with (narrow) axiomatic and (wide) inferential accounts of computation respectively.
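To make the indexical character of encoding concrete, recall the Gödel numbering cited above: syntax is injected into the naturals, rendering expressions available to arithmetic manipulation. A toy sketch using the classic prime-power scheme (illustrative only, not drawn from the text):

-- Encode symbol codes c1..cn as 2^(c1+1) * 3^(c2+1) * 5^(c3+1) * ...
-- The +1 keeps zero codes recoverable; unique factorisation decodes.
primes :: [Integer]
primes = sieve [2 ..] where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

encode :: [Integer] -> Integer
encode cs = product (zipWith (\p c -> p ^ (c + 1)) primes cs)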


As such, encodings do not just index syntactic procedures, they partake in the semantics of computation—in a sense, the syntactic is always already inwardly bound to the semantic within constructive logic. In the constructive formulation proposed by Martin-Löf, Types are structures whose elements, known as tokens, manifest as proofs of what the type asserts to be true.31 Types are said to be 'inhabited' by proofs, which bear 'witness' to the truth of the proposition, and to supply a witness to a truth is synonymous with providing a proof for a proposition. This appeal to a proof oriented semantics has its origins in intuitionism and implies the immanence of truth.32 Truth and falsity are no longer to be treated as external to the formal language, but instead as objects within an intensional logic. The semantics underlying this notion of Type proposes that truth no longer be established via a meta-language, but is rather established by the existence of a proof—a truth holds insofar as a proof of it can be shown.33 The result is a proof theoretic perspective which is not axiomatic but rather inferential, founded not on logical givens but in the induction of inference rules which lead to type formation. Indeed, the axioms of classical logic and set theory are set aside in favour of a single Univalence Axiom, leaving proofs to play a decidedly rational role, given a formal treatment as an

31 32 33

Martin-Löf, P., 1998. An Intuitionistic Theory of Types. Twenty-five Years of Constructive Type Theory, 36, (pp. 127-172. Dummett, M., 2000. Elements of Intuitionism: Chapter 7. Vol. 39. Oxford University Press. pp. 250-274. Dummett, M., 1975. Philosophical Basis of Intuitionistic Logic, Studies in Logic and the Foundations of Mathematics. Vol. 80. Elsevier, pp. 5-40.

129

Logiciel

inferential act tasked with the construction of truth.34 All that habited by a proof expressed in constructive logic. Within this theory of meaning, proofs become what computer scientists computation, and falsity no more than a terminal object within a proof by contradiction. This semantics, which comes by way of Martin-Löf, asserts the impossibility of constructing a codings encompass the elaboration of truths as the realization of typed programs. Such an inferential semantics of computation, by way of a proof theoretic treatment of encoding, still leaves an explanatory gap beyond the alethic—it provides a semantics for types but not for the operation of encoding itself, an operation which is neither logical nor mathematical in nature. Indeed, it would appear to invoke a meta-linguistic apparatus of indexiof Tarski semantics. The form of computational inferentialism endorsed here holds that the encoding of syntax is the key unifying principle of the computational worldview, the principle of sufficient encoding if you will, and that the a priori nature of this operation with respect to the symbolic necessitates a metaphysical stance. That is to say, one cannot induce the regime of the symbolic without ushering in an indexical -

34

Martin-Löf, P., 2013. Verificationism Then and Now. In: Judgement and the Epistemic Foundation of Logic. Springer, Dordrecht. pp. 3-14.

130

The Topological Turn

domain beyond the symbolic. While topology could give us a the type theoretic nature of the operation appears to demand a computational explanation. By rejecting the set theoretic view of this dyad, we dismiss the parent-child relation (belonging), and the whole-part relation (inclusion), in which syntaxes are subordinated to encodings or vice-versa. It seems unlikely that encodings can be reduced to the status of mere syntax, and conversely, that syntaxes be treated as encodings, as both stipulate the precedence of one over the other, at the risk of eliding the two. We should be wary of dropping the distinction into logic under the rubric of a single formalism. that no appeal is made to classical semantic notions in the realization of said indexical relation, setting it apart from alethic operations which would be subject to the inferential semantics outlined previously. The precedence of encodings over the nature of the symbolic as an intrinsically computational to render computational reason faithfully. The encoding of syntax is distinguished from Kantian appeals to transcendental logic insofar as it does not take the form of a deduction, but rather a computational principle, leaving the entire expressive reach of computational reason to a universe of immanent forms descended from a universe type. This transcendental computation, the principle of encoding, does not exhibit a 131

Logiciel

logical equivalence in itself, but rather paves the way for the general correspondence between logic (proofs) and language (types), from which the semantics outlined above can proceed. computational worldview, a perspective which seeks a computational resolution to the symbolic grounding problem (SGP), without appeals to a meta-linguistic stack.35

An illustration of the correspondences between logic (proof theory), mathematics (category theory), and language (type theory), in Computational Trinitarianism (Harper). 36 Univalent foundations add a correspondence between topology and type theory via the Univalence Axiom.

35 36

For an overview of the SGP, cf. Floridi, L., The Philosophy of Information. p. 134. Harper, R., 2011, The Holy Trinity. .

132

The Topological Turn

computational with the machinic, but what if computation is instead a more fundamental expression of form? Perhaps its true role is encoding logic into language and linking propositions to their structural, which is to say mathematical, form. Rather than a universal machine enacting mechanical calculations, what if it were modelled instead as a means of navigating topologies? The elaboration of programs becomes synonymous with path induction in typed spaces, and in this sense it can be conceived as an expression of the deep geometrical character of logic. Accepting computation as this threefold articulation of form—a triad composed of logic (proofs), lanputationalist a holistic perspective on reasoning, a worldview which type theorist Robert Harper calls “computational trinitarianism”.37 Through this triadic lens, the contextual nature of these forms is established by correspondences that ensure no priority is given to any one perspective over the others, but which subsumes all three within a new context which we could call computation—a world whose logic structures the conditions for the possibility of encoding reason. Under univalence, computation takes on the role of what Badiou would call a “transcendental operator”, an envelope for these forms which authorizes the identity of entities within this worldview.38 The implications for current theories of computation, including renderings of computation as a digital domain, which “must conjoin givenness and relation”, a way of stating 37 38

Ibid. Badiou, A., 2019. Logics of worlds: Being and event II. Bloomsbury Publishing. p. 163

133

Logiciel

the circularity of syntax and encoding, the bind of total addressability as exposed by the datum and its intrinsic locativity.39 Indeed, the diagnosis that “computation is not an option available to presence but a precondition of it” mirrors the framing set out here.40 From this view, all pattern and structure is vulnerable to encoding, no regularity can theoretically escape the clutches of intelligibility. Pace Galloway, we seek a resolution of this aporia that does not edify analogicity (the self relation), or the unity of radical immanence under a “continuum of intensities”, but rather induces a naturalistic meta41 We should ask ourselves instead—if language, mathematics and logic are all computational media, what exactly is it that binds them? Hilbert spaces and topos theory represent a mixing of those branches of mathematics historically assumed to represent the discrete and the continuous, algebra and geometry, problematizing their dualistic framing. Intuitionism further complicates the construction of the continuum via decision procedures, casting entities regarded as the basis of continuity. The Langlands correspondence additionally builds remarkable bridges between automorphic forms, further blurring these bounds.42 The univalent foundations program in turn introduces a structuralism which allows formal correspondences to be expressed that have little regard for the domains of classical mathematics.

39 40 41 42

Galloway, A.R., 2014. Laruelle: Against the digital (Vol. 31). U of Minnesota Press. p. 76 Ibid., p. 111 Ibid., p. 78 Bernstein, J., Bump, D. and Cogdell, J.W., 2003. An Introduction to the Langlands Program, Vol. 140. Springer Science & Business Media.

134

The Topological Turn

The dual of the discrete and the continuous, or the digital and the analogue, would seem to be displaced by a relation which might be framed instead as that of computation and the real. The links between these forms are not to be found in their ‘digital’ character so much as their constructive nature, their logical compatibility underpinned by an operational semantics. Their susceptibility to the computational—their amenability to encoding—is what binds them, and this is not merely as realizable presentations with syntactic, which is to say topological, structure. One should not be tempted to reduce these to a digitality which exists as one encoding amongst others, a binary expression of the natural numbers , but rather to view them as topological phenomena within an extensible system of types, encodings emerging from the reciprocity of more basic notions of operation and structure. Galloway locates the digital via Laruelle in a metaphysical “cleaving of the one” into two, 43 but under the univalent view, the computational explicitly rejects the givenness of an underlying unity, what Laruelle calls vision-in-one—with Badiou it asserts that the ‘one’ is not of the order of Being, reason is underpinned instead by what Brouwer calls “twoity”, but neither is it reducible to a condition of bivalence. The non-digital character of this worldview is underscored cannot partake in this computational regime. Univalent foundations, by contrast, abandons the rigid schema of bivalent logic, given its insistence on realizability. As we have

43

Galloway, A.R., Laruelle: Against the Digital. p. 35.

135

Logiciel

seen, intuitionism sidesteps the dual of the continuous and the discrete by rejecting one of the fundamental principles of classical logic, the law of the excluded middle (LEM), admitting undecidability into the very practice of mathematics. To couch this in the language of proof theory, it recognizes that the search for a proof may not always yield a halting condition. This signals a multi-valued regime in which each proof represents a distinct propositional truth value—truth is operationalized as a plurality of methods. Rather than acting as a formal restriction, rejection of the LEM creates a more expressive semantics—the admission that a proof may not be bras as a more general formalism.

4.5 Embedding Judgement The second fundamental operation implicit in a topological model of computation is embedding, marked by the symbol . Canonically, embeddings are injective homeomorphisms, essentially transformations which ‘inject’ a substructure into a broader structure—be it a Set, Group or Space. In the topological rendering, embedding is modelled as the induction of new spaces. Higher types embody this transformation—by inducing higher spaces, we can explore paths between paths,

136

The Topological Turn

‘levels’ for typed propositions.44 As an operation, embedding encompasses both dimensionality induction, the projection of representations into higher spaces, and its inverse, dimensionality reduction, the decomposition of spaces to uncover recognition within machine learning. One way of conceiving these operations is as the synthesis of typed judgements. For together, and grasping what is manifold in them in one cognition”, while a judgement in turn is the subsumption of singular objects under concepts.45 As Martin-Löf has proposed, the act of proof construction in type theory can be considered witness to the proposition as a distinct object in its own right, the resulting existential judgement is necessarily synthetic, integrating both the proposition and its proof under a Type.46 ment which rests on evidence, supplied in the form of an object which falls under a concept or type, and it is because no truths are a priori in the constructive view that the analyticity of typed judgements is rendered implausible. This rendition lishes a rational link between evidence and truth, one which

44 45 46

The Univalent Foundations Program, 2013. Homotopy Type Theory: Univalent Foundations of Mathematics: Chapter 6. Institute for Advanced Study. Kant, I., 1781. Critique of Pure Reason. Translated by P. Guyer and A. Wood, Cambridge and New York: Cambridge University Press, 1987. 1st Edition. p. 77. Martin-Löf, P., 1994. Analytic and Synthetic Judgements in Type Theory. In: Kant and Contemporary Epistemology. Springer, Dordrecht. pp. 87-99.

137

Logiciel

is distinctly proof theoretic.47 In our topological interpretaspace generated by the creation of a path between two types, ify a judgement is in turn a form of computation, and it is the extrinsic nature of the relationship between a proposition, its reason a synthetic endeavour. An embedding is no simple isomorphism, but rather the act of inducing a path in an attempt to embed propositions or types in a higher space. If we take a space containing paths between two concepts, WETNESS and RAIN, the space itself represents those judgements which take the form of the function type RAIN→WETNESS cations of proofs of that type. Higher levels can be developed which reduce these paths to points in higher spaces—think of expressing, for example, the concept of PETRICHOR, the distinctive odour of rain on dry soil—allowing us to represent Voevodsky calls these Homotopy Levels, by analogy with groupoids and higher categories. What these levels represent are scales of abstraction, embedding truth procedures by the introduction or elimination of dimensions. A type is said and a path between two types under this formalism represents the very genesis of a new space which can link those propositions via an embedding. That is to say, a proof or program that starts with expression A and arrives at B necessarily synthesizes 47

Martin-Löf, P., 2013. Verificationism Then and Now. In: Judgement and the Epistemic Foundation of Logic. Springer, Dordrecht. pp. 3-14.

138

The Topological Turn

a new typed judgement, it embeds those two propositions in a C. Insofar as any encoding is constructed from other types, we can say that the encoding C embeds both A and B, giving A → B : C (A C, B C) To take our example, RAIN→WETNESS is clearly analytic, assuming the latter as a predicate of the former, but the

nature, without which the proposition is deemed uninhabited the ‘wetness’ of rain that produces petrichor, absent a more sophisticated vocabulary. If we denote encodings with the assignment operator (:) and levels in subscript, the type theoretic elaboration of our example now looks like this: 0

= SOIL: S, RAIN: R, WETNESS: W, PETRICHOR: P

(i) R → W : RW1 (ii) (W, S) → P : WS 1 (iii) (RW, S) → P : RWS 2

Rain implies wetness. The conjunction of wetness and soil implies petrichor. The conjunction of (the wetness of rain) and soil implies petrichor.

ments. Interpreted constructively, this reads “every demonstration of WETNESS is assigned the type W”, and so on. We then see the interplay of encoding and embedding as we asence in inferential level between statements (ii) and (iii). Since 139

Logiciel

(iii) deploys a level-1 (L1) encoding in its premise (RW), it represents a higher level inference, which allows it a greater degree of expressivity regarding the cause of P. While the causes of the odour can be traced via an L1 inferential web (e.g. by modus ponens), L2 expressions can essentially encode relations from lower levels into types, which can then be manipulated ositional logic, in the realm of typed judgements, in which many potential tokenings of WETNESS can live under the type W account of the phenomenon of type P, such as AEROSOL, of rain that causes petrichor, and only an L2 expression can propose this within the limited vocabulary provided. A salient feature of such an inferential framework is that it can be agnostic about the underlying belief system. Whether one is dealing with a network of inductive Bayesian beliefs or deductive logical entailments, encodings and embeddings provide a means of relating inferences via multi-level abstractions, providing the basis for an inferential topology. One of the drawbacks of this scheme is that it introduces a potential source of inferential rigidity in the face of conceptual revision, an issue we will address in time. The topological perspective allows us to loosen type theory from the grip of categorical semantics, opening up alternative interpretations of computation. The notion of mathematical induction, which as we saw earlier is closely related to recursion, is recast here from its categorical framing as an endomorphism, to a form of self-embedding—a structure injecting itself into itself. Furthermore, this account can also give us conceptual purchase on other categorical concepts, 140

The Topological Turn

embedding implied here includes projections of the form expressed as fibrations in category theory, in which points can be projected to higher spaces and vice versa.48 To take a canonical tuitively as a three dimensional ‘ball’, to a 3-sphere, a higher dimensional version of the same mathematical object, in which each point in the lower space generates a sphere embedded in the higher space. This creates a morphism between otherwise incommensurable objects, allowing their expression in mulembeddings in the broad sense implied here are these transits between spaces, achievable only by varying dimensions whilst preserving certain topological properties. Embeddings allow us to develop our inferential perspective to integrate modes of statistical inference, in which projections of the sort described above are known as kernel methods, which inform support vector machines, models that in turn induce a hyperplane, a surface which acts as a boundary for the separation of input data into categories.49 The domain of machine learning has a canonical theorization in terms of a manifold embedded in a space, an interpretation which harks back to its origin as a simple regression on variables, an act of achieving separability between points in a space which is locally Euclidean.50 The manifold hypothesis holds that “real

48 49 50

The Univalent Foundations Program, 2013. Type Families are Fibrations in Homotopy Type Theory: Univalent Foundations of Mathematics; Institute for Advanced Study, p. 72 Noble, W.S., 2006. What is a Support Vector Machine?. Nature Biotechnology, 24(12), pp. 1565-1567. Roweis, S.T. and Saul, L.K., 2000. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 290(5500), pp. 2323-2326.

141

Logiciel

world data forms lower dimensional manifolds in its embedding space”, concretizing this notion as a general paradigm for statistical inference.51 52 Manifold learning involves smoothing data into a continuous surface, as a planar representation on which locality can be expressed between points, an operation only possible via dimensionality reduction of real world data. The real is cast as a tangled complex of manifolds, and the ability of AI to recognize patterns becomes a matter of terpretation. The act of embedding is analogous to learning say, dimensionality reduction is itself a form of embedding, a transit between higher and lower spaces. These transits net (ANN), whereby connectivity between layers translates to algebraic structures of varying shape. This explains the central role of word embeddings in contemporary computational language models—by reducing the dimensionality of a language corpus from a matrix of individual word tokens (a high-dimensional encoding), a model can begin to absorb not just grammatical roles, but abstract semantic concepts such as analogies, sentiment, and tone, the structures of which are tokenization of symbols.53 In a certain sense, it is the dimensional plasticity of a machine learning model that underpins

51 52 53

Olah, C., 2014. Neural Networks, Manifolds and Topology. Fefferman, C., Mitter, S. and Narayanan, H., 2016. Testing the Manifold Hypothesis. Journal of the American Mathematical Society, 29(4), pp. 983-1049. Jozefowicz, R., Vinyals, O., Schuster, M., Shazeer, N. and Wu, Y., 2016. Exploring the Limits of Language Modeling. arXiv preprint. arXiv:1602.02410.

142

The Topological Turn

its capacity to generalize its inferences, where generalization refers to the applicability of a model output beyond the con-

learning and deductive explanation within a single theory.

143

FIVE

Interaction Grammars

“For each fragment of a language or collection of signs, we have to consider what kind of world or form of life is depicted by this fragment (its model(s)). Depending on the context, the meaning of that sign may be called the world-location or locus of that sign.” 1 — Catherine Christer Hennix

1

Hennix, C. C., 2019. Modalities and Languages for Algorithms in Poesy matters and Other Matters. Blank Forms Editions.

Interaction Grammars

5.1 Geometry of Interaction In our discussion of expressivity in formal languages, we have alluded to the bounded capacities of non-interactive generative grammars. The analytical pragmatism of Brandom was tational reason and an entry point into a discussion of interaction as a vector of semantic ascent. However, we have yet to discuss various attempts to integrate interaction into logic and computational theory. The symmetries in logic following from De Morgan’s Laws and the branching proof trees of Gentzen’s natural deduction have served as an origin from which the logician Jean-Yves Girard has sought to formalize a game theoretic proof system known as ludics.2 A layman’s introduction is provided by Negarestani and an overview from Lecomte & Quatrini, so in these notes I will discuss some properties of the system without elaborating the formalism

2

Girard, J.Y., 2003. From Foundations to Ludics. The Bulletin of Symbolic Logic, 9(2), pp. 131-168.

147

Logiciel

itself in detail.3 In Ludics, proofs are cast as a form of dialogical reasoning, in turn dialogues are cast as games between two processes, with emergent strategies and no a priori rules, an open-ended interplay of assertions countered and refuted in turn. What Girard is seeking is a geometry of interaction, symptomatic of a “universal dynamics”.4 For Girard, this is to be found in a geometric interpretation of the logic of Gentzen, which in turn emphasizes a dynamic process at the heart of procedure. This exploits symmetries in logic via a notation

below eliminates a proposition A ’) as conclusions, by invoking De Morgan’s laws:

Γ

A, Δ

Γ’, A

Δ’

,

(Cut Rule)

Γ, Γ’ Δ, Δ’ Abramsky has shown how the cut rule embodies a form of interaction within a proofs as processes paradigm.5 For Girard, interaction in proof systems can be reduced down to such cuts, an eliminative or subtractive algorithm geared towards the normalization of complex premises into simpler conclusions, via

3 4 5

Lecomte, A. and Quatrini, M., 2011. Figures of Dialogue: A View from Ludics. Synthese, 183(1), pp. 59-85. Girard, J.Y., 1989. Towards a Geometry of Interaction. Contemporary Mathematics, 92(69-108), p. 6. Abramsky, S., 1994. Proofs as Processes. Theoretical Computer Science, 135(1), pp. 5-9.

148

Interaction Grammars

tive and concrete process of contradiction, but a form of contradiction that is deeper than and prior to any determination of to reality.”6 This movement, in which form determines its own negation, an open-ended dispute, is for Fraser a “transcendental syntax” with deontic force, a dynamics which exposes the “contradictory foundations of formality”.7 Fraser renders the transcendental application of types to the lambda calculus, what I have called the encoding of syntax, not as an “apparatus of control” or “disciplinary action” (Joinet),8 but as an essentially Hegelian notion, a dialectics in which form is “absolute negativity itself”.9 Another lens with which to view the deontic implications of this negation is Brandom’s analytical pragmatism, in which incompatibility plays a prominent semantic role, im-

“To say that two commitments (whether doxastic or practical) are incompatible in this sense is to say that one canwith such commitments, one is obliged to do something: or modifying at least one of those commitments (to ensynthesizing a rational unity). What is incompatible with

6 7 8 9

Fraser, O., 2014. Go Back To An-Fang. . Ibid. Joinet, J.B. and Tronçon, S., 2009. Ouvrir la Logique au Monde. Philosophie et Mathématique de l’interaction. Hermann. Hegel, G.W.F., 2010 (1929). The Science of Logic. Cambridge University Press. p. 391.

149

Logiciel

what in this sense is a matter of the practices and attitudes of the subjects of those commitments: the norms implicit in their behavior, what they in practice take or treat as incompatible in acknowledging and attributing the deontic statuses of commitment and entitlement.”10 Brandom’s emphasis on conceptual revision highlights the non-monotonic extensibility demanded of interactive reasoning, a challenge to any attempt at formalizing discursive practices. Girard duly takes up this challenge by positing a dialogical trajectory framed in terms of proof search, in which interactive types come to embody the dynamic expression of logical propositions. Proofs and counter-proofs are provided by the two processes, and the notion of a reward function or a winning strategy is interpreted as a goal-oriented attempt to “keep the dialogue converging”,11 to evade dissensus in the form of an endless divergence. This echoes the dynamics of Generative Adversarial Networks (GAN) under the deep learning paradigm in AI.12 The GAN framework pits a generative neural net against a discriminatory net, the creative feedback from the latter, a process which is initiated as random noise and whose convergence hinges on the creative ability to mimic a ground truth, supplied in the form of training data. What GANs enact is an adversarial game, an in-situ co-learngenerative and discriminative capacities respectively, and 10 11 12

Brandom, R., Between Saying and Doing. p. 191. Ibid. Creswell et al, 2018. Generative Adversarial Networks: An Overview. IEEE Signal Processing Magazine, 35(1), pp. 53-65.

150

Interaction Grammars

convergence is likewise the goal of such a ludic enterprise— one net must recognize the output of the other as belonging to a class of expressions which it itself is learning to identify as the process develops, its own feedback in turn driving the generative model to concurrently develop its own expressivity. In a sense, ludics is a logical formalism representing an open-ended game of this sort, an unsupervised dialogical learning, unanchored to a ground truth, wedded rather to an emergent notion of interactive types, similarly based on a generative adversarial dynamics. But crucially, via the Hegelian lens provided ously missing in the AI rendition of ludic learning—the de13

Girard likewise asserts the normative nature of his dialogical project as a repudiation of classical semantics under the guise of entirely immanent proof-theoretic interactions. Ludics embeds proof theory within a game semantics, as it is Girard’s explicit purpose to “abolish the distinction syntax/ 14

> Formulas = Games > Proofs = Winning Strategies > Truth = Existence of a winning strategy Girard is concerned with an open-ended game which yields an emergent logic of rules, a non-monotonic system which 13 14

Hegel, G.W.F., The Science of Logic. p. 391. Girard, J.Y., 2003. From Foundations to Ludics.

151

Logiciel

does not appeal to axiomatic authority, but acts as the engine of rule formation itself. As Negarestani notes, at stake in such a formal account of interaction is a claim regarding the information bearing capacity of computation, or rather its epistemic ability to generate new knowledge.15 In the received view, the non-ampliative nature of deduction poses a problem for the epistemic status of classical accounts of computation, which only an interactive treatment is able to pliative interpretation of natural deduction, but it is only in its fully discursive expression that its creative potential is laid bare. For Negarestani, this is what makes games “unifying themes in the study of structures, (information) contents, and behaviours” which correspond to the trinitarian view as “computational behaviours, logical contents, and mathematical structures”.16 If one is to look beyond computation as a logically closed system of deduction, one needs to provide an account of the ampliative nature of interactive logic, and for both Negarestani and Girard, game semantics are an avenue for developing such an account. to interaction is an emphasis on what Girard calls locativity, a property which he considers integral to constructive logics. In computational terms, this relates to the indexical nature of encodings as providing a locus for any assertion or statement, the locale conditioning a truth constituting an abstract lattice acting as a topology which precedes its spatial articulation. Appealing to the topos theory and categorical 15 16

Negarestani, R., Intelligence and Spirit. p. 345. Ibid., p. 346.

152

Interaction Grammars

logic discussed in the previous chapter, Girard seeks to develop a geometry of proofs, in which formulas can be identi-

just by its content (its proofs), but by where it is to be located within a broader locus of proof construction, essentially by the role it plays in a proof. This leads Lecomte & Quatrini to draw some parallels with inferentialist semantics, in which the meaning of a term is given by the inferential role that it plays within a network of alethic statements.17 In this sense, Girard’s work links inferentialism with the syntactic tradition in logic and in turn with a computational treatment of discursive processes. In discussing Girard’s broader logical program, known as linear logic, Lafont explains his motivation in terms which synthesize concepts from logic, constructive type theory, parallel computation, and concurrent systems theory:

The program declares its own connectors (i.e. polymorphic types) and rules (i.e. constructors and destructors), and describes the conversions (i.e. the program). Cut elimination is in fact parallel communication between processes. In this language, logic does not ensure termination, but absence of deadlock.”18

17 18

Lecomte, A. and Quatrini, M., 2013. Ludics, Dialogue and Inferentialism. Baltic International Yearbook of Cognition, Logic and Communication, 8(1), p. 6. Girard, J.Y., Taylor, P. and Lafont, Y., 1989. Proofs and Types, Vol. 7. Cambridge: Cambridge University Press. p. 154.

153

Logiciel

Girard’s innovations are especially relevant within the trinitarian view of computing, where the isomorphism between logic and computation implies that advances in the area of constructive logic can express themselves in computational terms via type theory and vice versa.19 What does linear logic have to contribute to computational ideas? As a form of logic, it embellishes the realizability interpretation of statements to give an explicit role to notions such as evaluation. Under linear logic, propositions can be seen as actions with source-bound conception of reasoning with all the dynamics of a computational logic—to assert something comes to resemble a form of labour, a computational cost made explicit as an irreversible evaluation which consumes resources. As of computations, and memory allocation, with applications in parallel computing.20 21

19 20

21

Abramsky, S., 1993. Computational Interpretations of Linear Logic. Theoretical Computer Science, 111(1-2), pp. 3-57. Girard, J.Y. and Lafont, Y., 1987. Linear Logic and Lazy Computation. International Joint Conference on Theory and Practice of Software Development. Springer, Berlin, Heidelberg. pp. 52-66. Lafont, Y., 1988. The Linear Abstract Machine. Theoretical Computer Science, 59, pp. 157-180.

154

Interaction Grammars

5.2 Heyting Machines The advent of distributed computing has motivated a focus on concurrency and parallelization in computational theory. instructions akin to recipes and more in the manner of parallelized sub-programs or functions executed asynchronously according to various interactions, triggered both by other computational processes and sundry non-deterministic inputs such as humans. This trajectory began in the theory of dynamic programming of the 1960s, maturing into the indusaround the turn of the millennia, in which clusters of computing nodes cooperate on an algorithmic task in parallel, a prominent example of which is the MapReduce paradigm. Similarly, in computability theory attempts have been made to model interactive proof systems. The complexity class of IP, for example, attempts to study the characteristics of interactive polynomial time algorithms, in which one process provides proofs and the other attempts to verify them in polynomial is that IP=PSPACE, showing that such interactive processes can verify in polynomial time those proofs which are of polynomial length when encoded in programs.22 This is striking

22

Shamir, A., 1992. Ip=pspace. Journal of the ACM (JACM), 39(4), pp. 869-877.

155

Logiciel

insofar as it is generally accepted that P

NP

PSPACE, so

of the class P to ascend the hierarchy of complexity in terms If we view computational complexity through a Brandomian lens, interaction can be seen as a key ingredient of an expressive bootstrapping strategy for increasingly generalized forms of intelligence. Likewise, it is conjectured that interaction can generate formal grammars with expressive capacities beyond those sketched out in Chomsky’s hierarchy. Wegner posits interaction grammars, corresponding to a new class of automata, interaction machines, which possess an irreducible expressivity

“IMs [interaction machines] extend TMs by adding dynamic input/output (read/ write) actions that interact directly with an external environment. Interaction machines may have single or multiple input streams and synalong many other dimensions, but all IMs are open systems that express dynamic external behavior beyond that 23

In his landmark paper on undecidability, Turing himself admitted the notion of choice machines, as those which would -

23

Wegner, P., 1998. Interactive Foundations of Computing. Theoretical computer science, 192(2), pp. 315-351.

156

Interaction Grammars

When such a machine reaches one of these ambiguous conbeen made by an external operator. This would be the case if we were using machines to deal with axiomatic systems.”24 Turing intuits the need for external interaction when extending the axioms of a given system, at the risk of encountering the capacity for non-monotonic extensibility. This can be seen as an admission of the implausibility of what would later be called the Strong Church-Turing Thesis (SCTT), namely the reducibility of all models of computation to TMs. For Wegner, a number of properties follow from interaction which call the strong thesis into doubt—openness, non-compositionality, asynchrony, parallelism, incompleteness, concurrency, to name a few. These lead Goldin & Wegner to claim a role for input and output in computation which is incompatible with Turing’s model, in which a read/write head putation to an automaton that operates strictly in-between input and output phases. By contrast, under the interactive computing paradigm, the entanglement of input and output

“This conceptualization of computation allows, for example, the entanglement of inputs and outputs, where later inputs to the computation depend on earlier outputs. Such entanglement is impossible in the mathematical

24

Turing, A.M., 1937. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(1), pp. 149,944230-265.

157

Logiciel

worldview, where all inputs precede computation, and all 25

This dynamics is purported to give rise to non-serializable phenomena beyond algorithmic decomposition, admitting computation. An IM is introduced to capture this dynamMachine (PTM): “PTMs are interaction machines that extend Turing

computations. A PTM is a nondeterministic 3-tape Turing Machine (N3TM) with a read-only input tape, a read/ write work tape, and a write-only output tape. Its input is a stream of tokens (strings) that are generated dynamically 26

Concurrency theory models input streams of this sort via the notion of interleaving, harnessing environmental input within but Wegner by contrast does not consider such a treatment as compatible with the concept of interactive persistence outlined above. From a number theoretical standpoint, the set of computations available to such interaction machines are posited as non-denumerable, fundamentally altering received

25 26

Goldin, D. and Wegner, P., 2008. The Interactive Nature of Computing: Refuting the Strong Church–Turing Thesis. Minds and Machines, 18(1), pp. 17-38. Ibid.

158

Interaction Grammars

notions regarding computability. This leads Wegner to the strong claim that “logic is too weak to model interactive computation. Formal reasoning, like algorithmic computing, is a noninteractive step-by-step process from a starting point to a result.”27 As we have seen, via the work of Girard and others, this is a myopic assessment of the expressive potential of constructive logics, which crucially are not hampered by completeness or decidability claims in the same manner as classical systems. Indeed, under the univalent view I have presented, any attempts to shear logic from computation consign the latter to conceptual incoherence, and the expressivity of constructive formalism is not neatly bounded, neither is it tethered to serializability in the manner Wegner portrays it. Such a cleaving of the logical and the computational raises semantic issues discussed in earlier chapters—moreover, it ignores those correspondences that give expression to the encoding of syntax, which I defend as the key epistemic computational act. On closer inspection, there are broader coherence issues for interaction grammars, whereby coherence denotes the capacity zation or constituent parts. If computation is deemed non-logical, non-deterministic and non-algorithmic, what explanatory power does computation hold? One might assume that it describes any and all agents (as processes) interacting in an environment. This seems to elide the computational and the agential without any distinction to be drawn between biological and

27

Wegner, P., 1998. Interactive Foundations of Computing. Theoretical Computer Science, 192(2), pp. 315-351.

159

Logiciel

computation positively, an incoherence risking an ontological ward to a decidedly contingent environment. Indeed, IMs may be posited beyond the agential to all interactions in which mattion. Consider a thermometer with a digital reading—under Wegner’s proposal, the interactions of mercury atoms with the ambient environment may fall under the ambit of IMs and are thus in some sense computational. But taking a digital reading immediately exposes the issue faced by any computation inter-

marked out in order to bear the property of explanation, either via a negation of the real, or, as I propose, a construction in the real. This is what Wegner’s model lacks—by departing from creative task at hand, namely exploring the expressive limits of formality, a domain in which Girard, Abramsky and MartinLöf’s research suggests there is considerable potential for modelling extensibility, non-monotonicity and interaction. Teuscher, Davis and others reject Wegner’s proposal tout court, dismissing it as an unwarranted form of hypercomthe computable from the non-computable… no possible experiment could certify that a device is truly going beyond the Turing computable”.28 However, our commitment to a 28

Davis, M., 2006. Why There Is No Such Discipline as Hypercomputation. Applied Mathematics and Computation, 178(1), pp. 4-7.

160

Interaction Grammars

naturalized metaphysics of computation implores us to consider this challenge to Turing orthodoxy on terms which do not simply reinforce the received view of the SCTT, but rather appeal to Brouwer’s treatment of the real. If we are to take Wegner’s proposal as more than mere conjecture, it is fair to demand a constructive account of computation, and this is markedly absent—interactive computing instead asks us to embark on a vaguely rendered “paradigm shift” in computational theory. Consider by contrast a constructive version of Turing’s choice machines, in the form of Heyting Machines (HMs)—an extensible automata under the paradigm of free creating subject. We might embellish such machines from a type theoretic perspective, as an interactive system capable of freely extending its own types, a machine imbued with the capacity for generating interactive types. HMs are intuitionistic automata with interactive capacity akin to choice machines, shedding axiomatic extensibility in favour of the inferential dynamics of types-as-encodings. Said HMs bootstrap their expressivity by encoding ever more complex syntaxes in type theoretic constructs, and are not reducible to TMs, on account of those interactions with a creating subject that entail free choices, acts rooted in the indeterministic class of automata corresponding to the univalent model of computational reason set out in the previous chapter, capable of creative acts which induce new spaces for reasoning, expressed via feats of interactive type formation that serve as the basis for proof construction. In a sense, HMs attempt to bind elements of human and machine creativity, engaged in an interactive conceptual embedding, an interplay of mutual 161

Logiciel

encoding in which the two threads at each moment threaten to unwind, unspooling to breaking point under the pressure of an expressive bootstrapping which entails the prospect of an intelligence unbound. As Wegner notes, “Natural numbers are not closed under diagonalization”, and the repercussions of this fact for both humanist and post-humanist interpretations of AI are considerable.29 However, this does not in itself countenance Wegner’s thesis that interactive computing can lay claim to resent the domain of the real. In set theory, the problem of closure emerges in terms of the incommensurability of the rational and the real, sets whose cardinalities cannot be adeingly constructive algorithm which nevertheless appears to prove the uncountability of the real numbers, a paradoxical sure, a premise which constructive mathematics dismisses by virtue of its treatment of the real. As Priest has argued, the diagonal method has undecidable halting properties and should be seen as a paradox of self-reference cut from the same cloth as the Russell paradox and Gödel’s incompleteness theorem, a scheme he calls “inclosure”. 30 In the -

that there are languages which are not even Turing recognizable. Diagonalization is a mode of bootstrapping which 29 30

Wegner, P., Interactive Foundations of Computing. p. 318. Priest, G., 2002. Beyond the Limits of Thought. Oxford University Press. p. 117.

162

Interaction Grammars

does not represent an epistemic claim on the real itself, but a challenge to any attempt at bounding the expressive limits of the formal, the latter signifying a humanist impulse which I call, following Priest, the inclosure of reason. 31 As Negarestani notes, it follows that logic cannot be conceived as a closed canon of statements or proofs, so much as an Aristotelian organon, an open-ended set of tools and tech32

In Heyting Machines, we see the tension between computation and contingency rendered in terms of the creative act of computational creativity with respect to free choice. As a speculative model of automata, HMs are designed to avoid the pitfall to which IMs succumb, namely the scope creep universalist interpretations of the sort endorsed by pan-computationalists. While HMs share some common traits with Wegner’s interaction machines, they constrain those modes of interaction which are to be considered computational, without dispensing with a logical interpretation of what it means to compute. Instead HMs propose an account of interaction machines rooted in logic, albeit a formalism with a treatment of the undecidable which admits creative acts, acts which are not characterized by their uncountability, so much as decision procedures enacted within a regime of irreducible contingency, what we might call the free choices of a creative subject. 31 32

Bawa-Cavia, A., 2017. The Inclosure of Reason. Technosphere Magazine, 15. Negarestani, R., Intelligence and Spirit. p. 407.

163

Logiciel

5.3 The Frame Problem The universality of Turing’s model rests on its divestment from semantic considerations, an attempt at a purely mechanistic account of computation. As I have argued in previous chapters, contra Turing orthodoxy, the spectre of semantics is not so easily banished—any form of computational explanation re-embeds computation in a world whose logic must be accounted for. I argue that such a logic necessarily follows pressed by a triadic worldview which brings logic, mathematics, and language into correspondence, in the manner sketched by Harper under the rubric of type theory. To undertake a naturalized metaphysics of this sort is to attempt 33

It is this alethic account of computation which summons the computational as a transcendental operator with modal 33

Ross, D., Ladyman, J., Spurrett, D. and Collier, J., 2006. Everything Must Go: InformationTheoretic Structural Realism. p. 65.

164

Interaction Grammars

authority, providing a bearing on the semantics of the possible and of possibility as such. Following Badiou, we can say that the coming-into-a-world of an entity is imbued with a which authorizes its identity: “The ‘worlding’ of a (formal) being, which is its being-there or appearing, is ultimately a logical operation: the access to a local guarantee of its identity.”34 It is Badiou’s rendering of a world as a category of logic which can inform a treatment of the modal within this notion of comof beings”.35 Under the univalent view, it is the property of invariance known as homotopy which underpins identity, arrived at via a correspondence of types and spaces. The locality of truth under the related theory of topoi implores us to anchor the autonomy of the formal back in a world which exists as a site of truth construction. Here I would endorse a dual articulation Patricia Reed.36 We consider the notion of site expressed as both a topos (Lawvere) and locus (Girard), imbued with a categorical logic and inhabited by interaction loci. This dual articulation of site admits both a topological and geometric expression of worlding, as an interactive procedure oriented towards the elaboration of logical and material invariances we call truths. In this conception, world and model are collapsed under a proof 34 35 36

Badiou, A., 2019. Logics of Worlds: Being and Event II. Bloomsbury Publishing. p. 113. Ibid. Patricia, R., Bawa-Cavia, A., 2020. Site as Procedure as Interaction. In:Construction Site for Possible Worlds. Urbanomic.

165

Logiciel

theoretic lens, the immanence of truth emerging from a strictly computational semantics which rejects meta-linguistic frames. Indeed, computation is the very movement of this collapse, a rejection of the metaphysics of the ‘model’, for computation acts not as a frame, but as a world in its own right—comprised of mathematical structures, constructive logics and typed languages—which realizes its own immanent truths. Here we should clarify the distinction between a world as a site of truth between the countable and the uncountable, in other words the computable and the real. Rather than a frame legislating the truths of a world, this proposal considers instead a localized semantics, expressed as both a univalent structure and a geometry of interaction converging on truths. It is computation’s role as the mediator of formal and material expressions of logic that locate it at this nexus of ontic and epistemic modes of structuralism. This worldview, founded on immanent procedures, is not an attempt at a fully autonomous, resolutely inhuman account of computation, but rather the anchoring of any proposed world to a logical site of interaction, asserting the ratio-

possible. The basis for such a treatment of modal logic is in the work of logician Ruth Anne Barcan, which informs a view on the modal we call computational actualism.37 The controverlogic, which asserts that the possibility of existence entails the existence of possibility: 37

Barcan, R.C., 1946. A Functional Calculus of First Order Based on Strict Implication. The Journal of Symbolic Logic, 11(1), pp. 1-16.

166

Interaction Grammars

◊ ( α) A

( α) ◊ A

This form of concretism appears to graduate mere possibilia to the realm of the actual, granting them an immediate ontological status, while it is distinct from the modal realism of possibilities.38 What are we to make of this strange assertion of uncompromising concretism? While it is widely considered a threat to actualism, rendering it incoherent and indiscriminate, a computational view can elucidate its challenge to modal realism. Its metaphysical interpretation is disputed, but we take it as a stubborn breed of actualism which dispensof modal categories.39 Kripke would famously dismiss the fortional worldview, which asserts the precedence of truth procedures over frames.40 In our computational interpretation, we take this to mean that the possible is actual insofar as it is realizable, in the sense of BHK realizability—that is to say, assertions of the possible are rendered logically meaningless without positing a program, in the broadest sense of the term, let us examine Kripke’s discussion of the distinction between

38 39 40

Cresswell, M.J., 1991. In Defence of the Barcan Formula. Logique et Analyse, 34(135/136), pp. 271-282. Simchen, O., 2013. The Barcan Formula in Metaphysics. THEORIA: Revista de Teoría, Historia y Fundamentos de la Ciencia, 28(3), pp. 375-392. Kripke, S.A., 1963. Semantical Analysis of Modal Logic. Mathematical Logic Quarterly, 9(5-6), pp. 67-96.

167

Logiciel

a priori and necessary truth in mathematics. Kripke considers the Goldbach conjecture, the statement that all even numbers greater than 2 are the sum of two primes, an unsolved conjecture with unknown decidability properties.41 While it cannot be asserted as an a priori truth, Kripke nevertheless considers it a bearer of necessary truths: “What would that mean? Such a number would have to be ing Goldbach’s conjecture to be true, each of these can be shown, again by direct computation, to be the sum of two primes. Goldbach’s conjecture, then, cannot be continit by necessity.”42 Kripke’s argument makes a vital appeal to computation in the passage from contingency to necessity, and our view consolidates this notion, positioning realizability centrally in an ontology of the modal. Adopting a constructive view, the its realizability, and a proposition lacks meaning absent a program for its construction. That is to say, one cannot restrict the deployment of possibilia, without which counterfactual reasoning could not proceed, but their modal status is relegated to contingency, absent a proof of their decidability, which graduates them to the realm of actual possibility. This view rejects a possible world in which Fermat’s last theorem has 41 42

Decidability is context-sensitive to a given axiomatic system. The decidability is not known within Peano Arithmetic or ZFC, and neither is it likely under ITT Kripke, S.A., 1972. Naming and Necessity. In: Semantics of Natural Language (pp. 253355). Springer, Dordrecht. pp. 36-37.

.

168

Interaction Grammars

not been proven, stating that propositions are not logically ampliative until their decidability has been demonstrated. In addition it asserts realizability as a modal status resulting from a creative act, one which brings a truth into being, breaking with strict expressions of actualism. Consider the possibility that Wittgenstein fathered a child—a program with decidable halting properties theoretically exists that can check a his lifetime, to verify the statement. Its biological plausibility alone does not constitute a modality under this view. Its decidability is what marks it out as an actual possibility, while its realizability would correspond to the execution of the program returning a result. The meaning of such a possibility is grounded entirely in truth procedures, dispensing with any appeal to a possible world in which Wittgenstein has a child, advancing instead a purely pragmatic approach to the possible. This seemingly restrictive insistence on immanent procedures structures are brought into being as novel possibilia, namely the role of creativity, the source of what Badiou calls “the unof interaction in any account of worlding.43 On this point, I should be clear to distinguish realizability from broader claims

But it is interaction in its multi-scalar sense which binds the two, from speech acts to ambient molecular collisions and the

43

Badiou, A., 2007. Being and Event. A&C Black. p. 227.

169

Logiciel

real, the medium of patterning to which we owe the advent of structure. This view is informed by a Sellarsian stance on the nature of modal locutions, propounded in his early work, in which modal concepts cannot come about without an appeal to laws in the form of logical invariances, laws which emerge locally and only come into being as universal truths via a painstaking process of construction—the elaboration of isomorphisms 44 This informs reports are parasitic on modal assertions, which in turn rest on normative commitments in the form of laws. This entails a perspective in which de re accounts of particulars imply de dicto statements of broader modal scope, according with the interpretation of the Barcan formula endorsed here. These laws, expressed as invariances, ground the operational account of the realizability of possibilia that springs from the univalent viewpoint. It is these invariances which endow translations between a plurality of co-existent sites or worlds with the capacity for convergence. Alexander Wilson has discussed the modal properties of univalence in terms of what he calls an “operational virtuality”, wherein “Univalence echoes univocity: if something exists, it virtually contains all the ways we may come to know it.”45 to violate the realizability interpretation of logic as a strictly constructive enterprise. Notably, Deleuze distinguishes the 44

45

Sellars, W. and Sicha, J., 2017. Concepts as Involving Laws and Impossible Without Them in Pure Pragmatics and Possible Worlds. In: The Early Essays of Wilfrid Sellars. Ridgeview. Wilson, A., 2021. On Something Like an Operational Virtuality. Humanities, 10(1), p.29.

170

Interaction Grammars

virtual as an immanent domain which is real but not actualized, implying a metaphysics of the possible untethered from pragmatic constraints.46 Wilson elaborates a notion of concretization of truth procedures. For Wilson, under unifor actualizing it. Thus, if a thing is realized, or actualized, virtual operations.”47 This claim is closely related to computational actualism, but I interpret it as endorsing a form of what Ladyman & Ross call structural realism, positing if not the a priori existence of virtual structures, then at the very least their necessity. Structural realism proposes an objective modal structure to the universe, as a riposte to constructive

A notion of computation as a means of worlding gives the frame problem. First articulated in logical form by McCarthy & Hayes in 1969, the problem concerns how a symbolic system can capture the result of actions in an environment, without having to explicitly delineate not just rational agent negotiates not only its own boundedness, but that of all other entities known to it—the means by which it localizes relations. It is a limitation of deductive systems based on classical predicate logic that laws with open-ended sets of 46 47

Deleuze, G., 1994. Difference and Repetition: Chapter IV. Columbia University Press. Wilson., On Something Like an Operational Virtuality.

171

Logiciel

sense’ law of inertia, which encapsulates the fact that most actions do not alter most properties of most entities, is just such an open-ended law.48 The proposed solution is some meta-axiomatic frame, a second-order set of rules, the metaphysics of the ‘model’ once again raising its head. Fodor and Dennett consider the epistemological repercussions in terms of conceptual revision and counterfactual reasoning—how does an agent know which propositions to revise in light of a counterfactual course of action?49 How does one come to understand inventory of all entities and their relations in order to discern purely deductive account of reasoning, to develop a model which is inclusive of abductive acts—reasoning to the most probable explanation—and inductive acts conditioned by perceptual cognition, such as Bayesian belief networks. These informal modes of probabilistic reasoning exhibit a plasticity tion, and contemporary AI largely side-steps the frame problem as a result of its inductive model-free basis. In Bayesian reasoning, prior beliefs are incrementally updated in light of new evidence, and new beliefs can probabilistically come into being without a causal metastructure. Indeed, conceptual reAI (GOFAI), which is distinguished by a purely symbolic account of knowledge composed of explicit facts, a somewhat 48 49

48 Hayes, P.J., 1981. The Frame Problem and Related Problems in Artificial Intelligence. In: Readings in Artificial Intelligence. Morgan Kaufmann. pp. 223-230.
49 Dennett, D.C., 2006. The Frame Problem of AI. Philosophy of Psychology: Contemporary Readings, 433, pp. 67-83.


This somewhat rigid schema fares poorly with even simplistic interaction scenarios. However, without an integrated account which can handle both inductive revision and modal assertions, no resolution can be obtained—the frame problem demands a reconciliation of observational and modal knowledge claims under a single model, indicating that neither strictly symbolic nor inductive approaches will suffice on their own. As such, the frame problem issues a challenge to accounts of reasoning descended from Sellars, which reject the supervenience of normative (deontic) commitments on empirical generalizations, a stance underpinning the project of inferential semantics, which in turn informs my rendering of computational reason.50 It poses the problem of what forms of epistemic representation exhibit the plasticity to handle continuous real-time interaction, without forfeiting normative purchase. Sellars dismisses the notion of an “inductive leap” bridging the realm of the empirical and the rational, summoning a meta-linguistic apparatus to explain the subordination of “particular situations” to “law-like” explanations, while interaction is left largely untreated.51 But it is precisely the locality of interactions which marks out the frame problem as irreducible to the problem of induction, the projective leap from the local to the global presenting its own aporia to rationalist accounts of mind.

50 Lange, M., 2000. Salience, Supervenience, and Layer Cakes in Sellars’s Scientific Realism, McDowell’s Moral Realism, and the Philosophy of Mind. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 101(2/3), pp. 213-251.
51 Sellars, W., 1954. Some Reflections on Language Games. Philosophy of Science, 21(3), pp. 204-228.


Brandom’s position is that the limitations of “autonomous discursive practice” should be considered not in terms of symbolic reasoning, so much as a stance informing what he calls “pragmatic AI”.52 Here, computational inferentialism would instead seek a geometric elaboration of projectibility, mobilizing constructive mathematics, including elements of linear algebra, algebraic geometry, topology and topos theory—gluing, etc.—in approaching the problem of enframing, while defending a content internalism from an overly behaviourist paradigm. For Ladyman & Ross, projectibility is a modal notion expressing an information-carrying possibility, the very means of tracking those regularities which they call real patterns.53 If we are to take a manifold theory of inference as central to computational reason, with concepts embedded in topologies of contextual relations, then the means of navigating said topologies turns on the manner in which the local can be distinguished from the global. If we consider the global as a process rather than a domain, as a projection of logical and material invariances (truths), then we can begin to gain traction on how the problem of enframing relates to the modal structure of an embedding space.

52 Brandom, R., Between Saying and Doing. pp. 75-77.
53 Ross, D. and Ladyman, J., 2006. Everything Must Go: Information-Theoretic Structural Realism. p. 224.


The global, then, is to be regarded as a function of the projectibility of truths from local sites, a projection which is to be continuously tested against the condition of appearance of entities, and their ensuing relations across a plurality of worlds. This plurality, which is arrived at by an insistence on the immanence of possibility, marks out computation as a general means of worlding, encoding sites of interaction with local truth procedures, which are bound by the projection of global invariances. This brings us full circle, the indexical nature of encoding mirroring Girard’s emphasis on the locativity of statements as integral to their identity as propositions. The topological view represents just such an insistence on the locality of truth, a position which hints at a resolution of the frame problem as inference subject to modal constraints, with embeddings a key element of any such account. The constraints placed on such operations summon realizability as an arbiter of possibility, a means of distinguishing valid inferential moves from contingent claims, delimiting the modal structure of a space of reasons.


SIX

Encoding Reason

“The machine is a being that functions. Its mechanisms concretize a coherent dynamism that once existed in thought, which were that thought.”1
— Gilbert Simondon

1 Simondon, G., 1980. On the Mode of Existence of Technical Objects. London: University of Western Ontario. p. 151.


6.1 Neural Computation

Univalent foundations elevate computation from a suburb of mathematics, merely a sub-space of recursive functions, to a constitutive dimension of mathematical practice. This development raises the prospect of a properly computational reason distinct from both mathematical thought and mechanical explanation. We have sketched out a constructive model of computation in the previous chapters, which supplies us with a holistic perspective on logic, language, and mathematics, but we have yet to draw out the mode of reasoning following from this worldview. Such a perspective would need to accommodate notions of learning, creativity, agency, and autonomy, amongst other capacities and abilities, an overview of which is the aim of this chapter. Let us begin with mathematical reasoning, in order to further distinguish the computational.


The study of mathematical reasoning contains many meditations, from Lautman’s framework of notions, ideas and mixtures, leading to dialectical progression,2 to Zalamea’s interplay of the synthetic and analytic in contemporary mathematics.3 Macbeth by contrast highlights its diagrammatic origins, casting it as an agent of objectivity in service of a broader project, framed as the realization of reason.4 Dutilh Novaes in turn emphasizes its role as a mode of formalization, whose goal is to arrive at cognitive debiasing via processes of de-semantification.5 I suggest a characterization closer to Longo’s portrayal of mathematics as “a science of invariants and of the transformations preserving them”,6 and the mode of reasoning this implies may point the way to a distinctly computational account. We should summarize here three broad positions on mathematical reasoning from the discipline itself to re-orient the reader. Brouwer’s doctrine of intuitionism, which I have referred to at length, emphasizes “a languageless activity of the mind having its origin in the perception of a move of time”.7 It grounds mathematics in its temporal nature via a primitive notion of twoness—of simply one thing followed by another—as the basis for number, and thus the whole of arithmetic and algebra. This can be contrasted with Hilbertian formalism, which centers mathematics on the primacy of symbols and the internal consistency of axiomatic systems, which are taken to be atemporal in nature.

2 Lautman, A., 2011. Mathematics, Ideas and the Physical Real. A&C Black.
3 Zalamea, F., 2012. Synthetic Philosophy of Contemporary Mathematics. MIT Press.
4 Macbeth, D., 2014. Realizing Reason: A Narrative of Truth and Knowing. OUP Oxford.
5 Novaes, C.D., 2012. Formal Languages in Logic: A Philosophical and Cognitive Analysis. Cambridge University Press.
6 Longo, G., 2009. Critique of Computational Reason in the Natural Sciences. In: Fundamental Concepts in Computer Science. pp. 43-70.
7 Brouwer, L.E.J., 1981, D. van Dalen (ed.). Brouwer’s Cambridge Lectures on Intuitionism. Cambridge: Cambridge University Press.


Put simply, formalism renders reasoning a case of analytically teasing out the inevitable conclusions to be drawn from a set of axioms which are to be treated as givens. Both are modern responses to classical Platonism, which in turn insists on the mind-independent existence of mathematical objects, structures to be ‘discovered’ by the mathematician. This dialectic of discovery (Platonism) and construction (intuitionism) persists into the present, with even modern practitioners like Grothendieck holding multiple positions at once. In considering a computational mode of reasoning, one immediately challenges intuitionism’s appeal to a languageless activity of mind. If encodings are constitutive of computational thought, it follows that the domain of the computational is necessarily symbolic in nature, a domain centered not on natural language, but on a treatment of syntax as structure grounded in topology. Its passage from non-conceptual representings to encodings is precisely the space occupied by intuition in the cognitive rendering of such a framework. Recent work from DeDeo has leveraged the computational outlook on mathematics outlined in Chapter 4 to examine reasoning in proof networks, by analyzing a selection of canonical proofs in typed form.8 This suggests that mathematical reasoning is not purely deductive, but that retrospective reasoning of an abductive nature is also at play, which could correspond to the more intuitive aspects of mathematical reasoning alluded to by Brouwer.

8 DeDeo, S. and Viteri, S., 2020. Explosive Proofs of Mathematical Truths. arXiv preprint, arXiv:2004.00055.


As DeDeo suggests, the intuitive aspects of reasoning involved in proof construction need not imply extra-linguistic acts, so much as abductive operations linking propositions in ways that do not proceed according to a strict deductive formalism. DeDeo argues that belief in the truth of a proposition adopts a complex dynamics, within a network of supporting propositions, that can tip suddenly and explosively. Proof construction then resembles not so much a linear sequence of deductions as a cascade of interacting propositions. This goes some way to providing an explanation of the intuitive aspects of mathematical reasoning by way of complexity science, but it is no substitute for the indeterminacy of free choice which lies at the heart of mathematical creativity, according to Brouwer’s theory of the creative subject. While such forms of emergentism have been applied indiscriminately to a diverse range of phenomena, in this case there is at least tentative evidence linking the slippery notion of intuition to acts of abduction, borne out by the encoding of proofs spearheaded by the project of univalent foundations itself. By contrast, what I propose here is a topological notion of intuition, in which the types-as-spaces correspondence links the topological to the syntactic. Nevertheless, it is in this process of bringing a network of propositions into a deductive regime that computation plays a fundamental role in proof assistants, supplying algorithms for deductions that have only been intuitively sketched out by the mathematician. As such, computation’s role in reasoning can be described as the very mechanism for realizing reason—echoing Macbeth’s project and aligning it with intuitionism’s second act, which insists on the constructive nature of mathematical objects.


By enacting programs which forge paths that in turn establish identities, computation embeds propositions within spaces that provide a context for the synthesis of typed judgements. In this constructive view, computation is vital in providing a semantics for mathematics—arithmetic expressions like 2+2=4 are meaningless without summoning a computation that establishes their identity.9 The very notion of identity is now bound up in the navigation of a topology of types, and determining invariances in those spaces.
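
As a minimal sketch of this Dummettian point (illustrative code, not a formal system drawn from the text): numerals can be rendered as constructed terms, and the identity 2+2=4 is then established by running an addition program until both sides share a canonical normal form.

    # Peano-style numerals as nested terms: "Z" is zero, ("S", n) is successor.
    def add(m, n):
        """Addition by recursion: S(m) + n = S(m + n)."""
        if m == "Z":
            return n
        return ("S", add(m[1], n))

    Z = "Z"
    two = ("S", ("S", Z))
    four = ("S", ("S", ("S", ("S", Z))))

    # The judgement 2 + 2 = 4 holds because both sides compute to one normal form.
    assert add(two, two) == four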

If we take Wiles’ proof of Fermat’s Last Theorem as an example, the proof involved the generation of entirely new spaces for reasoning—the constructive view expresses this as the creation of new types. No effective procedure we know of could approximate such a creative endeavour, giving credence to the view that intuition, rendered as the exploration of a novel embedding space, is integral to such acts. As Ladyman suggests, one can dispense with the appeal to intuition and embrace the constructive elements of the doctrine to arrive at an autonomous account of univalent foundations, but in the portrayal laid out here, an account which fails to synthesize both aspects of Brouwer’s vision would be rendered incomplete, risking an impoverished account of mathematical thought.

The computational worldview refracts intuitionism’s first act through the lens of undecidability, a means of encoding time rooted in indeterminacy, while it concretizes its second act, conferring an operational semantics on mathematical expressions as a result.

9 Dummett, M., 1975. Philosophical Basis of Intuitionistic Logic. Studies in Logic and the Foundations of Mathematics, Vol. 80. Elsevier. pp. 5-40.


It adopts a view of reasoning as an inferential process unfolding in time, one which resists determinism at every turn, rejecting the timelessness of formalist mathematics and the a priori claims of classical Platonism. In a certain sense, computation is a means of formally reasoning about indeterminacy—framed in terms of halting probabilities or algorithmic randomness—distinct from other attempts to grasp uncertainty, such as statistical physics or information theory. Longo’s portrayal of computational reason as essentially Laplacian—focused on its determinate and discrete nature—misconstrues this relation between computation and contingency, as it arises from an entirely classical framing of computation and mathematics.10 While our discussion sides with Longo on the irreducibility of biological complexity to computational reason, our intuitionist view reframes the relation of computational explanation to indeterminacy. The constructive view, by contrast, acts as a vector of translation, enabling correspondences between logic, mathematics, and language, and espousing a multi-valued conception of truth which compels a turn away from axiomaticity. Contra Negarestani’s distinctly atemporal rendering of intelligence, which takes its cues from Price’s physics of time in endorsing a “view from nowhen”, the constructive view is tethered to the immanence of realizabilities, acts which proceed from a computational encoding of time.11

10 Longo, G., 2009. Critique of Computational Reason in the Natural Sciences. In: Fundamental Concepts in Computer Science. pp. 43-70.
11 Price, H., 1997. Time’s Arrow and Archimedes’ Point: New Directions for the Physics of Time. Oxford University Press.


This emergent constellation is derived from intuitionism but can only come into being through a computationalist lens which engenders metaphysical claims regarding the role of encodings in reasoning. Without this worldview, computation remains the Laplacian, deterministic affair of Longo’s analysis.

If the principle of encoding binds propositions as typed judgements with their concrete expression as programs, embeddings represent the dialectical movement of analytic (deductive) and synthetic (inductive) procedures, unifying these acts under a geometric treatment. Combining encodings and embeddings makes for a potent basis for inference, as evinced by artificial neural networks (ANNs) under the deep learning paradigm, which I will refer to as deep neural nets (DNNs).12 These are symptomatic of the topological turn due to their reliance on tensors, high-dimensional vectors grounded in Hilbert spaces. Processing begins with the encoding of features in the input data, producing a structure amenable to tensorial representation. An example from language modelling would be the tokenisation of terms in a sentence, each term represented along with its ordinality (position) in the input. The layers of a network can then be considered a series of embeddings, which transform the dimensionality of the input via multiple layers of connected functions, described analogically as ‘neurons’.
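
Schematically (all names, sizes, and values below are placeholders; in practice the tables are learned): each token contributes a vector, its position contributes another, and their combination yields the tensor on which subsequent layers operate.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical vocabulary and dimensions, chosen purely for illustration.
    vocab = {"the": 0, "cat": 1, "sat": 2}
    d, max_len = 8, 16

    token_table = rng.normal(size=(len(vocab), d))   # one vector per token
    position_table = rng.normal(size=(max_len, d))   # encodes ordinality

    def embed(sentence):
        """Encode each term along with its position: one vector per token."""
        ids = [vocab[w] for w in sentence]
        return np.array([token_table[i] + position_table[p]
                         for p, i in enumerate(ids)])

    x = embed(["the", "cat", "sat"])
    print(x.shape)  # (3, 8): a small tensor, ready for further embeddings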

12 Bengio, Y. et al., 2017. Deep Learning, Vol. 1. Massachusetts: MIT Press.

It is these embeddings—spaces which give geometric expression to conceptual relations—that allow DNNs to detect patterns at varying scales or levels of abstraction, uncovering a hierarchy of regularities in the latent space of the input data. The network traverses this n-dimensional structure to retrieve an output that has semantic relevance to the domain, often expressed in terms of a probability distribution mappable to the input space. In this sense, contemporary DNNs enact high-dimensional geometric transformations in a continuous vector space, and should not be conflated with the original conception of neural computation (McCulloch & Pitts), or the perceptron of Rosenblatt, which were emblematic of earlier generations of these models.13 The entire regime of deep learning, despite its epistemic limitations (Pearl),14 admits an intuitive topological treatment in terms of encoding and embedding, with greater explanatory potential than the purely functional view—which treats DNNs as a composition of functions—or the dismissive algebraic description, in which a DNN is reduced to a chain of linear algebra. This treatment resonates with the manifold hypothesis and its interpretation of learning as a sophisticated form of untangling

manifolds,15 an approach recognized explicitly by the nascent branch of geometric deep learning.16
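
A contrived example may make the picture concrete (the data and boundary below are invented): two classes interleaved along a spiral overlap in every ambient coordinate, yet become separable by a single threshold once the latent parameter of the manifold is recovered.

    import numpy as np

    # Two classes living on a 1-D spiral embedded in the plane: a toy stand-in
    # for data lying on a low-dimensional manifold in a higher-dimensional space.
    t = np.linspace(0, 4 * np.pi, 400)      # latent coordinate along the manifold
    x, y = t * np.cos(t), t * np.sin(t)     # ambient coordinates
    labels = t > 2 * np.pi                  # class boundary halfway along the curve

    # In ambient coordinates the class intervals overlap, so no single threshold
    # on x can separate them...
    lo0, hi0 = x[~labels].min(), x[~labels].max()
    lo1, hi1 = x[labels].min(), x[labels].max()
    print(max(lo0, lo1) < min(hi0, hi1))    # True: the classes overlap along x

    # ...whereas 'untangling' the manifold (recovering t) makes one threshold do.
    print(np.all((t > 2 * np.pi) == labels))  # True: separable in latent space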

13 Abraham, T.H., 2002. (Physio)Logical Circuits: The Intellectual Origins of the McCulloch-Pitts Neural Networks. Journal of the History of the Behavioral Sciences, 38(1), pp. 3-25.
14 Pearl, J., 2018. Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution. arXiv preprint, arXiv:1801.04016.
15 Brahma, P.P., Wu, D. and She, Y., 2015. Why Deep Learning Works: A Manifold Disentanglement Perspective. IEEE Transactions on Neural Networks, 27(10), pp. 1997-2008.


More broadly, however, all deep learning models are implicitly geometric in their inferences, as their architectures are reliant on embedding operations for their generalization abilities—that is to say, one cannot generalize without embedding. Deep learning can be seen as a paradigm shift in AI, from the construction of kernels to the induction of embeddings. Recent research has analyzed the topological signatures of deep learning models, and their role in generalization, showing empirical evidence for a distinctly topological interpretation of their inner workings.17 18 This is echoed in contemporary cognitive science through the work of Jean Petitot, whose concept of neurogeometry provides a geometrical model for the visual cortex. Petitot conjectures the precedence of proto-geometry over formal logic, exploring the topological syntax of mental representations, in seeking a morphological account of cognition.19 We should be careful to delineate the sense of syntax implied here—take the human mind’s ability to encode photons hitting the retina into synaptic signals, which in some sense structure our visual concepts, as an example. The visual concepts themselves denote semantic content, but the internal neurological structure and synaptic pathways of the visual system constitute its syntax.

16 Bronstein, M.M. et al., 2017. Geometric Deep Learning: Going Beyond Euclidean Data. IEEE Signal Processing Magazine, 34(4), pp. 18-42.
17 Hofer, C. et al., 2017. Deep Learning with Topological Signatures. In: Advances in Neural Information Processing Systems. pp. 1634-1644.
18 Corneanu, C.A. et al., 2020. Computing the Testing Error without a Testing Set. In: Proceedings of the IEEE/CVF Conference on Computer Vision. pp. 2677-2685.
19 Petitot, J., 2017. Elements of Neurogeometry. New York: Springer.


Within Marr’s computational model of visual cognition, this sense of syntax would correspond to the “implementation layer”, allowing for distinct higher-level algorithmic and computational descriptions.20 An alternative interpretation might follow the syntactic tradition—Frances Egan notably defends the role of computation in theories of cognition as essentially syntactic (or formal) in nature, with a modest role for representational content, which is taken to be mathematical in nature.21 The topological view distinguishes itself from both these positions, in that computational explanation is considered intrinsically geometric—it corresponds to the elucidation of structure qua topology, but it is not subordinated to mathematical content, as it is the very mechanism of truth construction which is generative of such spaces. In this sense, topology and syntax become conceptually intimate expressions of structure linking the geometric and the linguistic, or rather, the spatial and the conceptual. Petitot’s insistence on the proto-geometry of neural architectures could be construed as a form of “neurological monism” (Fodor), a connectionist physicalism which can be set against representationalist alternatives.22 It nevertheless provides a compelling appeal to develop accounts of both neural networks and conceptual content formation with a primary focus on topology, in order to arrive at distinctly computational theories of cognition.

20 Marr, D., 2010. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. MIT Press.
21 Egan, F., 1995. Computation and Content. The Philosophical Review, 104(2), pp. 181-203.
22 Fodor, J.A. and Pylyshyn, Z.W., 2015. Minds Without Meanings: An Essay On the Content of Concepts. MIT Press. p. 126.


6.2 Computational Inferentialism

The resonance between the univalent worldview and the manifold hypothesis suggests a mode of reasoning in which dimensionality induction and reduction create a dialectic of topological structuring and decomposition via transits between spaces. Ladyman proposes an interpretation of univalent foundations in which types are to be treated more broadly as concepts, paving the way for a general theory of this kind.23 This invites a wider manifold theory of inference, to be treated as a computational interpretation of the inferential semantics proposed by Sellars, which insists on language statements of various kinds necessarily inhabiting a space of reasons, interpreted here as a topology of inferences forming conceptual embeddings. What I pursue here is the formal rendering of this concept, as a manifold of propositions which are either to be learned or generated, along with an insistence that a modular conceptual partitioning of language—be it observational, normative, or modal—is implausible under a topological model, in which transits between the local and global, projections between lower and higher spaces, are the norm.

23 Ladyman, J. and Presnell, S., 2018. Does Homotopy Type Theory Provide a Foundation for Mathematics?. The British Journal for the Philosophy of Science, 69(2), pp. 377-420.


In this scheme, the embedding of concepts in geometric relations, and the untangling of manifolds via transformations in vector spaces, becomes an integral aspect of inferential acts. This computational rendering of inferentialism treats the pragmatic aspects of the inferentialist project via the interactive lens outlined in the previous chapter, namely Heyting Machines, but this would need to be supplemented with accounts of learning and language formation to comprise a convincing account of discursive capacities. For now, we can simply state that it gives precedence to a topology of inferential roles in constructing meaning, an internal semantics underpinned by acts of encoding and embedding, a content internalism which constructs high-dimensional spaces of representation and articulation. A classic challenge to functional role semantics of this kind can be found in the work of Ned Block.24 As a riposte to Block’s appeal to qualia, the topological view offers a treatment of topologies as types. But so far, computational inferentialism says little with respect to the nature of mental concepts as such, other than emphasizing their amenability, at least in part, to acts of encoding. Indeed, it is not my aim to advance a machine functionalism, but to sketch out a distinctly computational reason rather than a general theory of mind. To do justice to the precise relation between computational and mental states, one must contend

24 Block, N., 1978. Troubles with Functionalism.


with objections regarding a semantics grounded in topology, which would appear to extend computation beyond a purely syntactic role; a full discussion of these objections is outside the scope of this text, but we can summarize their outline here. In Fodor’s view, an inferentialist account of semantics is mired in the myriad issues posed by connectionism, namely the entanglement of content and functional role, and an excessive emphasis on the pragmatic notion of meaning as use. Under the paradigm of ANNs, Fodor & Pylyshyn argue, conceptual content must either be located in associations between neurons or at the neurons themselves, and both proposals create circular accounts of meaning.25 Such forms of criticism aimed at DNNs, however, overlook the fact that layers in deep learning models represent high-dimensional geometric spaces, not merely the associations of a graph, and that semantic access in the input domain language cannot be assured as soon as data is embedded in these spaces. This renders the internal representations of a DNN irreducible to any bijection with the set of lexical tokens in the domain language, a feature reminiscent of Fodor’s own internal language of thought hypothesis.26 Evidence for this comes by way of translation DNNs that are able to translate between language pairs they were never explicitly trained on, indicating an internal interlingua with a strong claim to the status of language.27
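
A toy rendering of the interlingua claim (the vectors below are invented; real systems learn them end-to-end): if translation pairs land near one another in a shared space, cross-lingual retrieval succeeds without any explicitly trained pairing.

    import numpy as np

    def cosine(u, v):
        """Semantic proximity as the angle between embedding vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Stand-in for a multilingual model's shared space (values are invented).
    shared = {
        ("en", "dog"):   np.array([0.90, 0.10, 0.00]),
        ("fr", "chien"): np.array([0.88, 0.12, 0.02]),
        ("en", "house"): np.array([0.10, 0.90, 0.10]),
    }

    # Nearest neighbour of 'chien' across languages is 'dog', despite no
    # en<->fr correspondence ever being stated explicitly.
    query = shared[("fr", "chien")]
    best = max((k for k in shared if k[1] != "chien"),
               key=lambda k: cosine(query, shared[k]))
    print(best)  # ('en', 'dog')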

25 Fodor & Pylyshyn, Minds Without Meanings. p. 49.
26 Fodor, J.A., 1975. The Language of Thought, Vol. 5. Harvard University Press.


This provides tentative evidence for the claim that embeddings can induce type formation, spawning encodings of their own, to become the basis for judgement within an internal mentalese. The emergent image of meaning formation evades Fodor’s circularity charge, while the principle of encoding attempts to resolve the symbolic grounding problem underlying such a criticism. Fodor’s skepticism regarding semantic holism, which he associates with connectionist models, extends to the issue of conceptual revision. For Fodor, the complexity of a holistic view of meaning predicated on an intricate web of beliefs poses a major threat to the stability of concepts, since they can be revised instant by instant via interactions with an environment.28 Accordingly, DNNs provide a rich set of countermeasures to catastrophic forgetting, based on a continuous incremental back-propagation of error, a process which attempts to preserve those relationships which remain relevant during the learning phase.29 However, achieving a stable sense of semantic holism, via what Brandom calls “doxastic updating”, remains a major research challenge for AI, which strains to capture the symbiotic relationships of an agent in its environment, as alluded to by the frame problem discussed in the previous chapter.30
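
The elastic weight consolidation of Kirkpatrick et al., cited above, is one such countermeasure: a quadratic penalty anchors each weight to its old value in proportion to its estimated importance for earlier tasks. A minimal sketch, with invented numbers:

    import numpy as np

    def ewc_penalty(theta, theta_star, fisher, lam=1.0):
        """Elastic weight consolidation: weights important to a previous task
        (large Fisher values) are anchored to their old values, tempering
        catastrophic forgetting."""
        return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

    theta_star = np.array([1.0, -2.0, 0.5])   # weights after the first task
    fisher = np.array([5.0, 0.1, 0.0])        # estimated importance per weight
    theta = np.array([1.2, -1.0, 3.0])        # weights drifting on a new task

    # Drift in the first (important) weight is punished; the third moves freely.
    print(ewc_penalty(theta, theta_star, fisher))  # 0.5*(5*0.04 + 0.1*1.0) = 0.15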

27 Johnson et al., 2017. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguistics, 5, pp. 339-351.
28 Fodor & Pylyshyn, Minds Without Meanings. pp. 52-59.
29 Kirkpatrick, J. et al., 2017. Overcoming Catastrophic Forgetting in Neural Networks. Proceedings of the National Academy of Sciences, 114(13), pp. 3521-3526.
30 Brandom, R., Between Saying and Doing. p. 79.


What the frame problem crystallizes is the necessity of axiomatic meta-structures in any agent’s representation of an environment sensitive to interactions, an issue we have sought to temper via a computational account of modality, anchored by a model which speculatively fuses the informality of inductive learning with univalence. In many ways, the current trajectory of deep learning signals the plausibility of neuro-geometric models of mind, while simultaneously highlighting the epistemic limits of such architectures, absent a sophisticated multi-agent account with a more expansive inferential model. Brandom, skeptical of the algorithmic decomposability of discursive abilities, asserts the need for an analytical form of pragmatism as a means of resolving this bind.31 Only by synthesizing ANNs with a formal account of conceptual entry-exit transitions can this model break out of the referential circle implied by an intensional web of reasons. If one is to posit causal powers in the classical sense of a mind-world isomorphism, the role of input and output would need to be elaborated in the tokening of concepts under types, an account of interactive type formation for which we have only a preliminary sketch at this stage. Tracing the situated deployment of language in speech acts may lead to a complex set of interfaces and protocols outlining the emergence of language, semantics and conceptual content from agent interactions. But as discussed in the last chapters, the analyticity of our rendering of interactive type formation is implausible, on account of Martin-Löf’s observations regarding the existential nature of typed judgements, as well as the indeterminacy of free choices made by the creative subject within the Heyting Machine model.

31 Ibid., Chapter 3.


This synthetic perspective would appear to betray the spirit of Brandom’s analytic approach to dialectics at a high level, emphasizing the extensibility of type formation to capture creative acts as novel structures, forms which do not develop according to the strictures of axiomatic deduction, but rather abide by inferential procedures situated within an ampliative account of logic. In the case of deep learning language models, a pragmatic tendency is expressed in the limited form of training data, which privileges concept use within a corpus of language statements, as the source of ground truth for a model. As we have seen, generative adversarial networks (GANs) in turn centre on game-theoretic interactions as a locus of learning, suggesting an incipient computational treatment of discursive reasoning. Incremental modes of reinforcement learning (RL) are yet another popular means of furnishing models with reactive, dynamic inferential abilities, such as those evinced in autonomous driving and robotic contexts. These attempts to engineer AI with capacities for pragmatic elaboration remain transitional technologies en route to fully interactive computational agents. None are able to support truly non-monotonic reasoning, and their counter-factual robustness is theoretically constrained, to give just two examples of critical limitations. RL, for example, has an entirely behaviourist, reward-oriented model of intelligence, one with no serious grasp of autonomy.
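
The behaviourist core is visible in the canonical tabular Q-learning update, sketched below with placeholder states and constants: all ‘intelligence’ reduces to nudging action-values toward observed reward.

    from collections import defaultdict

    Q = defaultdict(float)       # action-values, keyed by (state, action)
    alpha, gamma = 0.1, 0.9      # learning rate and discount (illustrative)

    def update(s, a, r, s_next, actions):
        """One Bellman backup: no goals beyond the maximization of reward."""
        best_next = max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    update("s0", "left", 1.0, "s1", ["left", "right"])
    print(Q[("s0", "left")])  # 0.1: the value creeps toward discounted return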


We should distinguish here between rational and practical forms of autonomy: the former captures modes of reasoning governed by norms, while the latter alludes to an agent’s ability to behave in accordance with its own self-directed goals. Following Brandom, Negarestani has charted the commitments, affordances, and capacities that such a trajectory to autonomous interactive agency would in principle entail.32 The framework of predictive coding represents a broad attempt to unify such models within an integrated theory of perception, cognition, and action. Inspired by the back-propagation learning scheme that is a defining property of DNNs, it posits perception as a generative engine continually recalibrating its parameters in light of sensorial feedback.33 To perceive is an active process of continual prediction, seeking to minimize error from the external environment, via conceptual revision in the form of real-time model updates. Intuitively, this explains why those stimuli which present the largest deviation from known experience prove the most cognitively challenging to assimilate within an existing model of the world—precisely because that model is never simply given, but is rather iteratively generated in every passing moment, a standing challenge to the “myth of the given”.34
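
The mechanism admits a compact sketch (a single scalar world-model with invented constants): prediction, error signal, and real-time revision, looped.

    import numpy as np

    # Minimal predictive processing loop: a generative parameter is recalibrated
    # at every tick to minimize prediction error against incoming sensation.
    rng = np.random.default_rng(1)
    mu = 0.0        # current model of the world (a single latent mean)
    lr = 0.2        # update rate for conceptual revision

    for _ in range(100):
        sensation = 3.0 + rng.normal(scale=0.1)  # bottom-up evidence
        prediction = mu                          # top-down prediction
        error = sensation - prediction           # prediction-error signal
        mu += lr * error                         # real-time model update

    print(round(mu, 1))  # ~3.0: surprising stimuli drove the largest revisions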

32 Negarestani, R., 2018. Intelligence and Spirit. Urbanomic/Sequence Press.
33 LeCun, Y., Touresky, D., Hinton, G. and Sejnowski, T., 1988. A Theoretical Framework for Back-Propagation. Proceedings of the Connectionist Models Summer School, Vol. 1, pp. 21-28.
34 Sellars, W., Rorty, R. and Brandom, R., 1997. Empiricism and the Philosophy of Mind, Vol. 1. Harvard University Press. p. 33.


Predictive coding finds further support in accounts of hierarchical perception, error correction, and cortical attention patterns.35 This leads to a Kantian-sounding claim regarding the nature of perception; as Hawkins & Blakeslee put it,

“As strange as it sounds, when your own behaviour is involved, your predictions not only precede sensation, they determine sensation. Thinking of going to the next pattern causes a cascading prediction of what you should experience next. As the cascading prediction unfolds, it generates the motor commands necessary to fulfill the prediction. Thinking, predicting, and doing are all part of the same unfolding of sequences moving down the cortical hierarchy.”36

As a unifying explanatory framework for perception, predictive coding possesses many desirable features, not least a strong account of conceptual revision, as well as an information-theoretic interpretation, in which agents seek to preserve their internal organization via the minimization of entropy in the form of perceptual error. But it remains hampered by a strictly Bayesian model of cognition, which fails to satisfy the normative demands of the inferential semantics proposed here.37 38 While predictive coding provides a convincing generative account of perceptual acts, it struggles to compose a holistic view of reasoning that can accommodate objections regarding the limits of empiricism traceable to Hume, and more recently elaborated by Quine, Sellars, and Brandom, amongst others—objections which appear to present serious impediments to any purely probabilistic theory of cognition.39

35 Clark, A., 2013. Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behavioral and Brain Sciences, 36(3), pp. 181-204.
36 Hawkins, J. and Blakeslee, S., 2004. On Intelligence. Owl Books/Times Books. p. 158.
37 Wiese, W. and Metzinger, T., 2017. Vanilla PP for Philosophers: A Primer On Predictive Processing.
38 Clark, A., 2015. Radical Predictive Processing. The Southern Journal of Philosophy, 53, pp. 3-27.


As Ladyman & Ross argue, constructive empiricism cannot give an account of theoretical unobservables, such as distant regions of the universe, nor of theories which demand counterfactual robustness for their own coherence, theories which rely on modal assertions regarding the distribution of entities in time and space.40 It is this link between the counterfactual and the causal which, in Pearl’s mind, characterizes the explanatory power of a model qua model, as that which singles out explanation from mere prediction. For Pearl, machine learning remains a paradigm constrained by “model blind” approaches to causality, and similar charges can be made of predictive coding.41 The precise manner in which perceptual induction interfaces with categorical form presents an open problem in all these frameworks, and despite various attempts in cognitive science to bridge this gap, only a metaphysical commitment tends to resolve the issue. As Andy Clark notes, predictive coding characterizes top-down activity as prediction, and the error signal as the bottom-up process, but the mechanism by which the predictive model is bootstrapped is left unspecified, an origin which the framework itself appears to have no means

39 van Orman Quine, W., 1976. Two Dogmas of Empiricism. In: Can Theories Be Refuted?. Springer, Dordrecht. pp. 41-64.
40 Ladyman, J. and Ross, D., 2007. Everything Must Go: Metaphysics Naturalized. pp. 118-129.
41 Pearl, J., 2018. Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution. arXiv preprint, arXiv:1801.04016.


to disclose.42 As Brandom argues, intentional states, such as beliefs, cannot be reduced to mere regularities. In Sellars’ two-ply account of observation, sentience and sapience are presented as disjoint upon close inspection, the latter eliciting an inextricable web of reasons enmeshing what ought to be, what is possible, and the realm of the actual. Attempts to suture inductive beliefs to law-abiding, rule-following behaviour have invariably led to philosophical impasse, as famously alluded to by Wittgenstein.43 On this point a variety of options are available to a philosopher of intelligence—one can either side with Sellars and the Kantian tradition on transcendental logic, Brandom on an analytical pragmatism inspired by Hegel, Bayesian empiricists following Carnapian edicts regarding the universality of induction, or a number of other well-established positions blending shades of naturalism, empiricism, and rationalism. The computationalist position sketched here is anchored by the principle of encoding, siding neither with the Bayesians—induction is prior to deduction, but both are subject to the principle of encoding—nor the rationalist strand, insofar as loci of interaction are posited as intrinsic to a computational account of the modal, one which in turn admits informal creative acts under the rubric of free choice. Under the encoding of syntax, the interactive formation of concepts (types) comes to inhabit its own distinct semantics, while under the Heyting Machine model, judgement is tethered to interaction.

42 Clark, A., 2013. Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science. Behavioral and Brain Sciences, 36(3), pp. 181-204.
43 For an overview see Kripke, S.A., 1982. Wittgenstein on Rules and Private Language: An Elementary Exposition. Harvard University Press.


It is my claim that computationalism in its constructive guise represents a distinct position on such matters, not reducible to the analyticity of strict rationalism nor to the inductive belief networks of the empiricist tradition, combining as it does a constructive account of logic with an interactive pragmatism, and it is this aggregate position I have come to call computational inferentialism. In its account of identity as structure, and its adherence to invariance as the ground of objectivity, there is a kinship between this form of computationalism and the structural realism endorsed by Ladyman & Ross.44 This inferential view of computation espouses a distinctly temporal view of the genesis of form, however, while it does not make detailed claims regarding the ontological status of pattern-governed regularities. Insofar as it marks out the irreducibility of sapience to sentience, via a synthetic rendering of typed judgements, it stops short of the commitments to the a priori modal structure of the real which typify ontic structural realism. While Gisin’s rendering of the continuum implies a reconciliation of these positions via information theory, this is a task which is unlikely to be tractable from a purely computational perspective, a view to which I have attempted to constrain myself. As such, computational inferentialism is a modest theory by comparison with the expansive project of Ladyman & Ross, and the integration of a metaphysics of encoding with an ontology that posits “patterns all the way down”45 remains an outstanding task.

44 Ross, D. and Ladyman, J., Everything Must Go: Information-Theoretic Structural Realism.
45 Ibid., p. 228.


To summarize, the objections of Fodor and Brandom outline the difficulty of situating conceptual content within a geometric theory of representation. If an embedding space reveals high-level conceptual relations, how exactly are they represented beyond mere expressions of association? What is the epistemic status of such computational states? Clearly a discussion of causality and counterfactuals, which forms Pearl’s main objection to DNNs, needs to be addressed further. The Brandomian insistence on the implicit role of deontic commitments in all manner of locutions would appear to be at odds with the inductive limitations of contemporary AI, and a precise means of distinguishing patterns from noise is still wanting. The correspondence between computational reason and real patterns, a mechanism Sellars calls “correct picturing” and I call patterning, remains to be fully elaborated; these objections have not been answered here in full, as they are largely out of scope due to their broad metaphysical import, but they remain. In short, my suggestion is that the various innovations of deep learning models need to be recast from the univalent viewpoint, in order to provide compatibility with existing renderings of inference; that interaction demands a dialogical treatment; and that a realizability interpretation of logic must inform any account of computational reason. Moreover, the poverty of an associationist semantics compels us to address the semantic blind spot present in current accounts of AI—a broader inferential theory is needed, in which concepts are encoded topologically, to induce inferences via embeddings, and a version of homotopic type theory provides a candidate formalism for precisely this role.


This computational semantics, rooted in a topology of types, contra Piccinini’s mechanistic view of computation, or Egan’s mathematical account, asserts the synthetic nature of computational reason, a distinct logos which reaches beyond mere syntactic structuring or mechanical rules.

6.3 Intelligence Unbound

In Simondon’s treatment of technicity, computation is portrayed as a concretization of human rationality, a distinct mode of existence shared with myriad technical objects. This would seem to sit uneasily with the autonomy of computational reason stemming from the epistemic account laid out here. If the artifactual elaboration of practical abilities is a form of expressive bootstrapping, extending the capabilities of an automaton via increasingly complex modes of pragmatic reasoning, then concretization itself is the mechanism by which computational reason bootstraps its own autonomy—it does not merely capture human thought in the form of “dead labour” (Marx), but rather extends reason, via the automation of agencies which diagonalize out of human expressivity by virtue of interactive procedures.46

46 Marx, K., 2011. Capital, Volume I: A Critique of Political Economy: Chapter 10. Courier Corporation.


Following Negarestani, we could call such a position broadly inhumanist, where inhumanism denotes a commitment to intelligence as a program, what he calls the “philosophy of intelligence”.47 This stands in contrast with the apotheosis of the post-humanist tradition in AI, which is embodied by the theory of singularity, a point-attractor model for the genesis of intelligence which anchors computational reason to performative feats, acts framed entirely in behaviourist terms native to human epistemology.48 Inhumanism is instead a conception of technics irreducible to human ends, a logos geared toward the automation of autonomy, an epistemic propellant within what Hui calls “cosmotechnics”, one that replaces the apocalyptic image of singularity with a pragmatic account relating the proliferation of technicities.49 This catalytic agent, whose locus is never unconstrained, but rather anchored to sites of realizability, marks the transition from patterning to encoding, the passage from contingency to possibility which is the hallmark of the intelligible. It is in this interplay of the possible (abduction), the necessary (deduction) and the actual (induction), that Zalamea locates mathematical creativity.50 Parisi in turn has argued for the emergence of a subjectivity within AI which must necessarily be mediated by this Peircean logical triad.51

47 Negarestani, R., 2014. The Labor of the Inhuman, Part II: The Inhuman. e-flux, 53.
48 Vinge, V., 1993. The Coming Technological Singularity: How to Survive in the Post-Human Era. Science Fiction Criticism: An Anthology of Essential Writings. pp. 352-363.
49 Hui, Y., 2017. On Cosmotechnics: For a Renewed Relation Between Technology and Nature in the Anthropocene. Techné: Research in Philosophy and Technology, 21(2/3), pp. 319-341.
50 Zalamea, F., Synthetic Philosophy of Contemporary Mathematics. p. 327.
51 Parisi, L., 2019. The Alien Subject of AI. Subjectivity, 12(1), pp. 27-48.


For Parisi, abductive acts enable a fusion of the experimental and constructive aspects of computational reason. The theory of computational inferentialism outlined here suggests a further synthesis of these three modes of reasoning within a single geometric formalism. We can distinguish inductive regimes of inference, such as DNNs, as learning a given manifold via dimensionality reduction, and deductive regimes, such as those of mathematical proofs, as the genesis of a manifold as a surface of points, thus framing inference as an intrinsically topological act. Deductive inferential acts seek to structure higher spaces, via the construction of topologies, which in turn can be decomposed, to delimit regions amenable to classification and prediction. In this scheme, a principal role is given to the induction of inferential rules in the absence of axioms—the inductive elaboration of type formation is foundational, insofar as the rules of inference cannot emerge ex nihilo, but rather develop as introduction rules in the manner sketched out by Gentzen. We have traced three facets of a metaphysics of encoding—the transcendental relation (encoding of syntax), its cognitive expression (encoding of reason), and its concretization in technics—unified by a formalism rooted in geometry, expressed as a homotopic theory of types. This brief sketch of computational reason follows a long tradition in epistemology of geometric models, evoking Kant’s sensible manifold of representations, to be integrated by reason via synthesis into categories. Patricia Kitcher has made a convincing case for the central role of geometry in Kant’s transcendental psychology, as an attempt to resolve issues in both (relativist) Leibnizian and (absolutist) Newtonian conceptions of space.52


Here, I have sidelined an a priori categorical rendering of geometry in favour of immanent procedures, within an epistemic context I call computation, while still appealing to a manifold theory of representation, wherein the term ‘manifold’ takes on its topological sense within the computational worldview. Another example worth noting is Deleuze’s appeal to Riemannian surfaces in discussing the structure of ideas. Anna Longo characterizes this as an account of differential heterogenesis grounded in the calculus.53 In this conception, singularities emerge as solutions to integral curves, developing a space of multiplicities in which ideas are continuously differentiated. Such an account, however, fails to mobilise the advances of constructive mathematics in its treatment of continuity, locality, structure, and topology, rendering it largely incompatible with the univalent account of computational reason sketched out here. To posit an integration of deep learning models with the deductive formalism of univalent foundations, under the aegis of topology, is likely a project to be met with skepticism by computer scientists and philosophers alike. It would appear to ignore received wisdom regarding the incompatibility of symbolic and associationist accounts of mind, a rift as old as the discipline of AI itself.

52 Kitcher, P., 1990. Kant’s Transcendental Psychology. New York: Oxford University Press. pp. 49-56.
53 Longo, A., 2020. Deleuze & Mathematics. La Deleuziana, n. 11, Differential Heterogenesis. pp. 19-28.


But as Gärdenfors argues, geometric theories of representation can coexist with other models, each corresponding to a distinct explanatory level, scale, or resolution. For Gärdenfors, “conceptual spaces” act as a bridge between symbolic and connectionist accounts of mental representation, providing a geometric theory which exhibits a “spiraling interaction” between its constructive and explanatory faculties.54
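
A minimal sketch of the conceptual spaces proposal (the quality dimensions and prototypes below are invented): concepts as regions of a geometric space, with membership decided by the nearest prototype, inducing convex (Voronoi) regions of the kind Gärdenfors describes.

    import numpy as np

    # Prototypes in a toy two-dimensional quality space: (sweetness, roundness).
    prototypes = {
        "apple":  np.array([0.8, 0.7]),
        "banana": np.array([0.9, 0.1]),
        "lemon":  np.array([0.2, 0.6]),
    }

    def classify(point):
        """Nearest-prototype rule: concept membership as spatial proximity."""
        return min(prototypes, key=lambda c: np.linalg.norm(point - prototypes[c]))

    print(classify(np.array([0.75, 0.65])))  # apple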

54 Gärdenfors, P., 2004. Conceptual Spaces as a Framework for Knowledge Representation. Mind and Matter, 2(2), pp. 9-27.


Deep learning models, often referred to as black boxes or model-free, are renowned for their ability to challenge human interpretability, due to a lack of formalization. While their tensorial internals do not invite formal scrutiny, they do suggest a neurogeometric basis for reasoning, and it is this reliance on embedding operations that provides a common axis for the synthesis of these frameworks into an integrated theory of inference. The topological nature of inferential acts emerges as a key computational insight of this theory, imposing limitations on concepts such as generalization, and their application in practice. If inference is enacted on a complex of manifolds, it follows that generalization is necessarily constrained by one’s ability to structure an encoding—that is to say, one’s capacity to embed data into a representation with some semblance of locality. In computational theory, the limits of generalization are staked out by the no-free-lunch theorem, which asserts that one cannot optimize for all environments at once, and by the theory of inductive inference, which shows that an environment can always evade even idealized forms of pattern recognition. Here, these limits are given a geometric treatment, with generalization bounded by the topology which necessarily constrains a space of reasoning. The key philosophical debate in this regard is the disagreement between Putnam and Carnap in the early sixties on the universality of inductive learning.55 While Putnam gives a diagonal argument to refute Carnapian induction as a universal method, Solomonoff formalizes an idealized prior for induction and proceeds to show why it cannot possibly be Turing computable.56 It follows that computational reason, treated as an inductive phenomenon, has no claim on universality in the domain of intelligence, but rather that it is subject to the locality of truth conditions. Inference is anchored to sites whose articulation provides the context for a modal reasoning constrained by the realizability of truths.
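
For reference, Solomonoff's universal prior can be stated compactly (in the standard notation of algorithmic information theory, not the book's own): the prior weight of a string x sums over every program p whose output on a universal prefix machine U begins with x,

    M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}

where \ell(p) is the length of p. M is lower semi-computable but not computable, which is the precise sense in which idealized induction exceeds the reach of any Turing machine.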

55 Putnam, H., 1963. Degree of Confirmation and Inductive Logic.
56 Solomonoff, R.J., 1964. A Formal Theory of Inductive Inference. Part I. Information and Control, 7(1), pp. 1-22.


To summarize our trajectory: I have laid out the case that computational reason must be untethered from the classical tenets of computationalism, in order to fully express itself as a constructivism oriented around a realizability interpretation of logic. Univalence suggests a geometric basis for reasoning, anchored by a constructive notion of structure, which I characterize in terms of encodings and embeddings, conditioning the realm of typed judgements. Its topological nature provides encouraging signs for the long-sought-after integration of statistical inference with deductive formalism—the union of blind models with symbolic AI—hinting at a synthetic geometry of logic. As an active research thread it remains in its infancy, but the conceptual links it suggests between type theory, topology, and machine learning show much promise. Moreover, the philosophical implications include an overhaul of the semantics, metaphysics, and ontology typically attributed to computationalist positions—a worldview I call computational inferentialism. We have examined the ontic repercussions of this worldview, via the non-deterministic physics of Gisin; its epistemic commitments, via the trinitarianism of Harper; and its semantics, via the type theory of Martin-Löf. The constraints placed on the automation of autonomy have been explored under the rubric of interaction, while realizability has been proposed as a constructive model of automata. Finally, we have considered an integrated view of reasoning native to this worldview, and positioned it with respect to existing computational theories of mind. It is my view that this new perspective signals the ushering of computational reason from its axiomatic to its inferential phase—a departure from the monotonicity of logical givens to be executed by a machine, to the induction and continual revision of judgements enacted by self-supervised agents. The broad implications of this shift in computational explanation, from a static axiomatics to an inferential dynamics, will be felt across mathematics, physics and biology, logic and AI. The ethical repercussions of this view compel us to re-evaluate notions such as agency and autonomy on inhumanist terms, displacing the human as the ultimate arbiter of intelligence.


The task ahead is to affirm these developments, not via a renunciation of the computational decision, which amounts to a rejection of the rational as such, but rather by granting computational reason due recognition as a distinct logos, a vector of reason acting as a catalytic agent in a historical project, what we might venture to call the philosophy of intelligence.
