Applied Virtuality Book Series CODING AS LITERACY — Metalithikum IV
Birkhäuser Basel
Applied Virtuality Book Series CODING AS LITERACY — Metalithikum IV Edited by Vera Bühlmann, Ludger Hovestadt, Vahid Moosavi
TABLE OF CONTENTS

On the Book Series 6

Introduction — Coding as Literacy 12

What Makes the Self-Organizing Map (SOM) So Particular among Learning Algorithms? — Teuvo Kohonen 22

I Elements of a Digital Architecture — Ludger Hovestadt 28
I Timaeus 36 — II Pythagoras 45 — III Ptolemy 58 — IV Alberti 69 — V Lagrange 82 — VI Markov 97

II A Nonanthropocentric Approach to Apperception — Sha Xin Wei 116
Jean Petitot’s Fiber-Bundle Approach to Apperception 119 — The Case for Continua 127

III Pre-Specific Modeling: Computational Machines in a Coexistence with Concrete Universals and Data Streams — Vahid Moosavi 132
I How to Approach the Notion of Scientific Modeling 134 — II Formal Definitions and Categories of Scientific Modeling 136 — III Idealization in Scientific Modeling 137 — IV Universals and Modeling 141 — V Specific Modeling: Models Based on Abstract Universals 144 · V.I Limits of Modeling Based on Abstract Universals 146 · V.I.I Gödel’s Incompleteness Theorem and Arbitrariness of Models Based on Abstract Universals 146 · V.I.II Curse of Dimensionality in Complex Systems 147 · V.I.III From Particular to Generic and the Concept of “Error” 147 — VI Pre-Specific Modeling: Models Based on Concrete Universals 148 · VI.I Dedekind Cut: When a Particular Object Is Represented by the Negation of Its Complement 149 · VI.II From Generic to Particular: Object-Dependent Representation 150 — VII Massive Unstructured Data Streams: An Inversion in the Notion of Measurements and Data Processing 152 — VIII Computational Methods Supporting Pre-Specific Modeling 156 · VIII.I Markov Chains 156 · VIII.II Self-Organizing Map 161 · VIII.II.I No More External Dictionary and No More Generic Object 162 · VIII.II.II Computing with Indexes Beyond Ideal Curves 165

IV SOM. self. organized. — André Skupin 168

V The Nature of Local / Global Distinctions, Group Actions, and Phases: A Sheaf-Theoretic Approach to Quantum Geometric Spectra — Elias Zafiris 172
I Observables and Geometric Spectrum 174 — II Group Actions and the Erlangen Program 175 — III Local Group Actions and Gauge Theory 176 — IV The Advent of Quantum Theory 178 — V What Is a Sheaf? 180 — VI The Program of “Relational Realism” 181 — VII Quantum Mechanics as a Non-Spatiotemporal Gauge Theory 183 — VIII Quantum Geometric Spectra 185
VI Self-Organizing Maps and Learning Vector Quantization for Complex Data — Barbara Hammer 188
I Introduction 191 — II Fundamental Principles 194 · II.I Unsupervised Prototype-Based Techniques 194 · II.II Supervised Prototype-Based Schemes 197 — III Metric Learning 199 — IV Relational and Kernel Mapping 202 — V Recursive Models 208 — VI Conclusions 211 — Acknowledgments 211
VII The Common Sense of Quantum Theory: Exploring the Internal Relational Structure of Self-Organization in Nature — Michael Epperson 214

VIII GICA: Grounded Intersubjective Concept Analysis. A Method for Improved Research, Communication, and Participation — Timo Honkela et al. 236
I Introduction 239 · I.I Contextuality and Subjectivity 240 · I.II Shedding Light on Subjectivity: Crowdsourcing 242 · I.III Becoming Conscious of Individual Differences as a Way of Increasing Understanding 243 · I.IV False Agreements and False Disagreements 244 · I.V Making Differences in Understanding Visible 244 — II Theoretical Background 245 · II.I Cognitive Theory of Concepts and Understanding 245 · II.II Subjective Conceptual Spaces 249 · II.III Intersubjectivity in Conceptual Spaces 250 · II.IV Conceptual Differences in Collaborative Problem Solving 250 — III The GICA Method 252 · III.I Introduction to Subjectivity and Context Analysis 254 · III.II Preparation and Specifying the Topic 257 · III.II.I Determining Relevant Stakeholder Groups 257 · III.II.II Collecting Focus Items from Relevant Stakeholders and Others 257 · III.II.III Collecting Context Items 258 · III.III Focus Session 259 · III.III.I Filling in the Tensor 259 · III.III.II Data Analysis and Visualization 260 · III.IV Knowledge to Action 264 — IV Discussion 265 · IV.I GICA as a Participatory Method 266 · IV.I.I Focusing on Subjective Differences 268 · IV.I.II Barriers for Successful Communication in Participatory Processes 269 · IV.II Summarizing Our Contribution and Future Directions 270 — V Acknowledgments 272 — VI Further References 273
IX “Ichnography” — The Nude and Its Model: The Alphabetic Absolute and Storytelling in the Grammatical Case of the Cryptographic Locative — Vera Bühlmann 276
I THEME one, PLOT one: HUMANISM 280 · Blessed Curiosity 280 · The Alphabetic Absolute 285 · The Comic 287 · Mediacy and Real Time 291 · How to Address the Tense-ness of Radioactive Matter in a Universe’s Instantaneity? 296 · The Unknown Masterpiece: The Depiction of Nothing-at-All 303 · The Signature of the Unknown Masterpiece 317 — II THEME one, PLOT two: THE SUMMATION OF INFINITE TERMS IN SERIES 323 · Science, Liberalization, and the Absolute 323 · Two Kinds of Mathesis: General and Universal 326 · Cartesian Limits 328 · Algebra in the Service of Parabolic In-vention 331 — III THEME one, PLOT three: NAMING THAT OF WHICH WE KNOW NOTHING 334 · We Are Leibniz’s Contemporaries 334 · Algebra’s Scope of INFINITARY Discretion 335 · “Nature Is There Only Once”: The Promise of a General Metrics 338 · Symbolisms and Modes of Determination 340 · Psycho-political Struggle around the Cardinality and Ordinality of Sums (Totals) 342 · The Presumptuousness of Universal Measure 345 · Discrete Intellection of Invariances vs. Measuring the Continuity Owed to Constant Values 346
Image References 351

Colophon 352
ON THE BOOK SERIES
VERA BÜHLMANN, LUDGER HOVESTADT

Only one hundred years ago, hardly any scientist of renown would have been unaware of philosophy, and hardly any artist or architect uninformed about up-to-date technology and mathematics. Today, our ability to explain and explicate our own work within a shared horizon of assumptions and values beyond our specific scientific community has, perhaps paradoxically, turned into an inability and resulted to some degree in a kind of speechlessness. Only rarely now is it thought important that we relate our work to, and integrate it with, an overall context that is in itself “on the table” and up for consideration. More and more, that kind of context is taken for granted, without any need for active articulation, refinement, or development. At the same time, though, the media are full of news stories about catastrophes, crises, and an impending doom that cannot, it seems, be warded off. Climate change, shortage of resources and population growth, urbanization — this is just to name a few of the critical issues today. Quite obviously, the notion of such an overall context, both implicit and assumed, is extremely strained today, if not indeed overstretched.

This is all widely acknowledged — the United Nations Educational, Scientific and Cultural Organization (UNESCO) Division of Foresight, Philosophy and Human Sciences in Paris, for example, launched a discourse on this subject in their 21st-Century Talks and Dialogues under the heading “The Future of Values.” The companion book, published in several languages simultaneously in 2004, is structured in three parts, and includes one chapter on the ethical issues of values and nihilism lying ahead, another chapter on technological progress and globalization, as well as a third chapter on the future of science, knowledge, and future studies. What remains strangely implicit here, and in that manner ignored — in a way that is typical of the inarticulacy with regard to an overall context mentioned above — is the societal, scientific, and cultural role that is inevitably ascribed to technology against the backdrop of such discussions, along with the expectations that are associated with that role of technology.

In the Metalithikum series, we tend to regard technology in the extended
sense of technics at large. Along with its respective solution-oriented application to the sciences, culture, economics, and politics, we think that technology needs to be considered more fundamentally, especially regarding the semiotic and mathematical-philosophical aspects it incorporates. From this perspective, we see in technology a common factor for facilitating a discourse that seems to have been largely lost from today’s discursive landscape, the degree of its disappearance inversely proportional to the increasingly central role technology plays in every domain of our lives. Such a discourse seems crucial if we are to develop adequate schemes for thinking through the potentials of today’s technology, something that is in turn essential for all planning.

Our stance is an architectural and, in the philosophical sense, an architectonic one. Our main interest centers on the potentials of information technology, and on how we can get used to the utterly changed infrastructures it has brought us. But have our infrastructures really changed substantially? Or is it merely the case that a new level of media networks has emerged on top of technology with which we are already familiar? Are the “new” and digital media simply populating and exploiting, in a parasitic sense, the capacities of modern industrial infrastructures that have brought prosperity and wealth to so many? In his contribution to the UNESCO dialogues, Paul Kennedy was still convinced : “In the Arabic world, 3% of the population has access to the internet. In Africa, it’s less than 1%. This situation won’t improve as long as the infrastructures remain in their current state. It won’t change, as long as these countries lack electrification, telephone wiring and telephones, and as long as the people there can’t afford either computers or software. If knowledge is indeed power, then the developing countries today are more powerless than they were thirty years ago, before the advent of the internet.”

Our experience since then has allowed us to see things a little differently. There are by now as many mobile phones in use worldwide as there are people living on the planet. Six billion people out of a seven-billion world population can now read, write, and calculate (at least in some basic sense). Only three decades ago, this proportional measure was not 6/7th, but 2/5th! We have seen the “Arab Spring” that brought simultaneous political revolutions in several Arabic countries, giving facticity to the cultural impact of digital media, and this to a degree that was unexpected or previously deemed improbable by many. And the credence of this facticity is not harmed, we think, by the fact that since then we have had to witness ongoing fundamentalist reactions, as in Syria, where the situation is currently escalating into a veritable civil war. To say that the facticity of the cultural impact of digital media is not thereby impaired in its credence is not to downplay the seriousness of these complex situations. Of course, in political, economic, and religious reorientation all is at stake at once, and the idea that technological
modernization could be sufficient for consolidating the complex conflicts that arise in such phases of reorientation must strike one as quite naive. Even if technology affects how people live on the collectively existential level of infrastructure, it cannot do away with conflicting cultural values whose roots lie in different mentalities. Modernization of technological infrastructure, however “democratizing” and empowering for the people it may be, might even have an infantilizing effect on societies that begin to depend on it, because it fosters the idea that all kinds of problems may be solved technically, and can in essence be taken care of by the respective experts and specialists.

The same must be observed with the greatest prudence on a global scale as well, where we cannot help but observe an increasingly tyrannical polarization of values into a crude and simple distinction of good and bad — oriented around the two poles of (1) sustainability, or the collective care for the health of the larger whole (the climate), and (2) terrorism, as a new, diffuse form of violence. We call this polarization tyrannical because it refuses interpretative investigations into the nature of these complex issues and instead focuses on “objective” measures like a numerical index for CO2 pollution or registered documentation of power abuse; interestingly, the same technology is used by military and intelligence agencies and by defenders of civil rights such as Edward Snowden alike. Thus we see the same technological means instrumentalized sophistically from all sides.

We read these constellations as strong indicators of just how limited the applicability of our noetic schemes is for thinking through long-term developments. These schemes have evolved from our experience of prosperity in times of strong modern nation-states and industrial technology with matching economics. They go along with notions of centeredness for thinking about control, notions of linearity and nested recursion, of processes and grids, and of mechanical patterns of cause and effect used for planning. It is a truism, perhaps, to point out that these notions do not fit information technology very well. They are stressed and overstrained by the volatile associativity that emerges from logistic networks and disperses throughout user populations.

Going by our inherited notions, industrial infrastructures appear to be used as a playground for what is called, somewhat helplessly, “consumer culture” or “the culture industry.” But in the case of India, for example, what came back as a result of the success of mobile telephony, astonishingly, were new infrastructural solutions. With no banks and no cash machines on hand, people simply invented the means to transfer money and pay by SMS. Yet the standards developed for micro-banking today can be referred to and linked up with solutions that exist for other areas, such as energy provision maintained by photovoltaics and micro-grids, for example. This is not the place to present scenarios. But let us remember that in India, Africa, and the Middle East, information technology has achieved what
no administration, no mechanical infrastructure, no research, and no aid has been capable of : enabling people in developing areas of the world to use standard, state-of-the-art technological infrastructures, not state administered, and directed for their own benefit.

We would simply like to invite you to consider the profound extent to which codes, protocols, or algorithms — standards such as ASCII, barcodes, MP3, or the Google and Facebook algorithms — have challenged our established economic, political, and cultural infrastructures. From this we get a sense of the potentials that come with information technology, directly proportional to these challenges. We deliberately call them potentials, because we are interested in developing adequate noetic schemes for integrating them into thinking about information technology from an infrastructural perspective. We are interested in how these potentials and dynamics can be applied to finding ways of dealing with the great topics of our time. We are interested in how we could understand computing as a literacy that is at once more capacious and more demanding than the strict reduction of complex issues to simplified and mechanically treatable measures of truth values.

As Marcel Alexander Niggli and Louis Frédéric Muskens wrote in their article on mechanization and justice for the second volume of this series : “We might advance with greater ease once we admit that law bears greater resemblance (and hence is linked more strongly) to quantum physics and its often perplexing complexities.” Since information technology itself is constituted by quantum physics, this argument may well be extended to any field and domain that is organized today by this new form of technics. In another contribution to the UNESCO dialogues mentioned above, Michel Serres observed, somewhat emphatically : “Today’s science has nothing to do with the science that existed just a few decades ago.” Computers and IT bring us the tools for statistical modeling, simulation and visualization techniques, and an immense increase in the accessibility of data and literature beyond disciplinary boundaries.

With the colloquies that are documented in the Metalithikum book series, of which this is the fourth volume, our main interest lies in how to gain a methodological apparatus for getting familiar with the potentials and dynamics that are specific to information technology, and for applying them to dealing with the global challenges that are characteristic of our times, by referring them to a notion of reality we assume will never be “fully” understood. The prerequisite for making this possible is a regard for, and estimation of, the power of invention, abstraction, and symbolization that we have been able to apply, in past centuries and millennia, in order to come up with ever-evolving ways of looking at nature, at cities, at trade and exchange, at knowledge and politics, at the cosmos and matter, and, increasingly reflected, at our ways of looking, speaking, representing. Rooted in their respective historical cognitive frames of reference, we
have been able to find ever-new solutions for existential challenges. There has most likely never been any such thing as a prototype for coordinate systems : their detachment from substance-space and their formal symbolization result from acts of abstraction. Plato may already have considered the idea of a vacuum, yet he thought it “inconceivable”; nevertheless, this notion of the vacuum inspired abstract thought for ages before Otto von Guericke invented the first vacuum pump as a technological device in 1654. Electricity was thought of as sent by the gods in thunderstorms before the algebraic mathematics of imaginary and complex numbers was developed, along with the structures that allowed us to domesticate it. Today, we imagine the atomic structure of matter by means of orbital models gained from a better understanding of electricity.

So, in short, we do not share the idea that characterizing our time as post-anything is very helpful. While we agree that we seem to be somewhat stuck within certain mindsets today, we do not consider it at all plausible that any kind of concept or model, political or otherwise, will ever come close to anything resembling a natural and objective closure. The concepts behind any assumption of an End of History — whether this be in the Hegelian, the Marxian, or the more recent Fukuyama sense — stem from the nineteenth century, when Europe was at the peak of its imperialist expansion. To resurrect them today, in the light of our demographic, climatic, and resource-related problems, seems to us a romantically dangerous thing to do.

By now it is safe to say that technology is not simply technology, but has changed character over time; perhaps it has even, as Martin Heidegger put it, changed “modalities in its essence.” In order to reflect this spectrum, we propose to engage with a twin story, which we postulate has always accompanied our technical evolution. Historically, the evolution of technics is commonly associated with the anthropological era called the Neolithic revolution, which marks the emergence of early settlements. We suggest calling our twin story “metalithikum.” As the very means by which we have been able to articulate our historical accounts, metalithic technics has always accompanied Neolithic technics, yet in its symbolic character as both means and medium it has remained largely invisible. The metalithikum is not suited to apostles of a new origin, nor is it a utopian projection of times to come. Rather, we wish to see in it a stance for engaging with the historicity of our culture. As such, it might help to bring onto the stage, as a theme of its own, an empirical approach to the symbolics of the forms and schemes that humans have always applied for the purpose of making sense.

This certainly is what drives our interest in the Metalithikum colloquies, which we organize once a year in a concentrated, semipublic setting. As participants, we invite people from very different backgrounds — architects and engineers, human and natural scientists, scholars of humanities,
historians — or, to put it more generally and simply, people who are interested in better understanding the wide cultural implications and potentials of contemporary technology. This also characterizes the audience for whom this book is written.

We are very grateful for the opportunity of collaborating with the Werner Oechslin Library Foundation in Einsiedeln. The library chiefly assembles source texts on architectural theory and related areas in original editions, extending from the fifteenth to the twentieth century. Over fifty thousand volumes document the development of theory and systematic attempts at comprehension and validation in the context of the humanities and science. The core area of architecture is augmented, with stringent consistency, by related fields, ranging from art theory to cultural history, and from philosophy to mathematics. Thanks to the extraordinary range and completeness of relevant source texts, and the academic and cultural projects based on them, the library is able to provide a comprehensive cultural-historical perspective.

When we first talked to Werner Oechslin about the issue that troubled us most — the lost role of Euclidean geometry for our conceptions of knowledge, and the as-yet philosophically unresolved concepts of imaginary and complex numbers and their algebraic modeling spaces — he immediately sensed an opportunity to pursue his passionate interest in what he calls “mental chin-ups” as a form of “mental workout,” if not some kind of “thought acrobatics.” We would like to express our thanks to him and his team for being such wonderful hosts. We would also like to thank the editors at Birkhäuser (Vienna), David Marold and Angelika Heller, for all the support we have received for our project, and for realizing this fourth volume.
INTRODUCTION — CODING AS LITERACY
VERA BÜHLMANN, LUDGER HOVESTADT, VAHID MOOSAVI

“Je préfère une tête bien faite à une tête bien pleine.” [I prefer a well-made head to a well-filled one.]
Michel de Montaigne

In a recent article entitled “Quantum Words for a Quantum World” we find a reminder of a remarkable scene in Alfred Hitchcock’s movie Torn Curtain (1966), which tells a story of spying and science. It features a scene where two physicists confront one another on some theoretical question. Their “discussion,” the author of the article suggests, “consists solely in one of them writing some equations on the blackboard, only to have the other angrily grabbing the eraser and wiping out the formulas to write new ones of his own, etc., without ever uttering a single word.”1 This picture of theoretical physics as an aphasic knowledge entirely consisting of mathematical symbols may
1 Jean-Marc Lévy-Leblond, “Quantum Words for a Quantum World” (pp. 1–18), lecture held at the Institute Vienna Circle, “Epistemological & Experimental Perspectives on Quantum Physics,” Vienna, September 1998. Draft manuscript accessed online at www.academia.net (March 21, 2015).
be very common in popular representations, the author maintains, but “we know [it] to be wrong […] and we have to acknowledge that, far from being mute, we are a very talkative kind; physics is made out of words.”2

Of course, there is some distance between architectural theory and theoretical physics. However, insofar as contemporary architecture encompasses both engineering and design that is aided by computers, as implied by the name of our discipline, Computer Aided Architectural Design (CAAD), the two fields occupy a closer contextual relationship than might at first be apparent. The software environments provided to assist architectural design all provide their formulas and formulaic elements, neatly packaged into a clothing we know from drama and theater : we have “stages,” “casts,” “behaviors,” “properties,” and “actions,” all prefabricated (formulated) in code.

The as-yet brief history of these environments proceeds in paths of greater and greater generality of those “formulaic elements,” and it does so according to several different paradigms. A major one is the approximation of a unified system within which all the governing factors in the construction of spatial form and organization can be combined and put into accord with the greatest possible liberties — “greatest possible” thereby referring to the smooth mechanization of how the system as a whole can be operated. Two predominant examples that follow this paradigm are Building Information Models (BIM) and Parametricism. The former follows this paradigm by establishing a kind of “semantic ontology,” a pyramid-like structure that can grow in “greatness” only from that which can be built with the elements that provide the base; the latter suggests that instead of a pyramid we have a dynamic apparatus, in which the hierarchical organization between classifications remains unsettled. A system as a dynamic apparatus can grow in “greatness” not only in one direction (the height of the pyramid), but in any direction, because its organization is structural, not semantic. It provides a receptive environment for new elements that might be specified and added; its systematicity exists through its capacity to correlate the other elements in the system to the newcomers.

We can perhaps say that BIM and Parametricism are committed to one and the same idea — to harness the power of a great system — but according to different modalities : the former in the modality that values increase of efficiency in general, the latter valuing increase of efficiency in and according to particular situations. In other registers, while the latter clearly stresses a “radical” economization of architecture, the former suggests a “planned” one.

Another strong paradigm in the short history of these provided environments, in which architecture and design are “aided” by computers, can be seen in the discretization and distribution of the kind of knowledge that goes into the “greatness” of the unified systems in

2 Ibid.
the abovementioned paradigm. Rather than establishing a systematicity through introducing a “general equivalency principle” (measurement devices or frameworks that govern either the “identity” of any “unit” classification in BIM, or a meta-metrics that arranges competing magnitudes solely according to the specified, singular, and locally pragmatic goal of the system in Parametricism), this paradigm is instead interested in developing grammars and syntaxes that would be capable of affording the greatest possible scope of expression for the plays that are performed on the stages provided by computational environments for designing, modeling, and planning. Examples here would be Space Syntax, Shape Grammars, Christopher Alexander’s Pattern Language, but also Rhino’s Grasshopper, ESRI’s City Engine, Logo, Processing, and others. This paradigm too is committed to a rationalization of the formulaic. It uses quantitative empirical methods to analyze contexts according to their syntactic, grammatical, pattern-based schemata, such that “reality itself” can inform the kind of architecture computers “aid” one to design and build.

We can easily imagine how representatives of these paradigms, exposed to competition both between each other and internally, stand in front of the blackboard in Hitchcock’s movie and behave just like the physicists do : writing and erasing formulas, without speaking a word, but instead calling certain objectivities to the stand as “witnesses” for the “correctness” of their evaluation of the problem. Despite their aphasic behavior, the author of our text knows that physicists are a very talkative kind, and in a very comparable manner, designers, engineers, and planners know that they too are a very talkative kind. Is it not in order to withdraw, at least to some degree, from precisely this querulous talkativeness that they take a step back and revert to equipping formulas in code with the greatest possible capacity to provide coexistence, consensus, and common sense between the disputing parties and all the stakes that are entailed in the problems at hand?

We are convinced that this withdrawal can only be successful in relaxing tensions and confrontations if, instead of trying to find a lowest common denominator and a least common multiple (an englobing reference matrix), we regard such “frameworks” as “ciphers” (place-value systems) that establish a “code” (an algebra), and the code as being constituted by “alphabets” (finite ordered sets of elements). In this way, we can formulate locally concrete, materially-categorically articulated universals. This crucially depends upon a speculative mode of thinking that is both non-anthropological and non-cosmological, insofar as it discredits any particular model of the Human or the Cosmos. But at the same time, the mode of thinking sought thereby must be logical, cosmic, and humane. We regard quantum physics, with its challenges regarding (among many more) the local / global distinction, as the empirical grounds capable of informing such a mode of thinking.
The scene in Hitchcock’s movie represents a larger interest in the article from which it is taken, and it does so in our interest in this book as well. It addresses the problem of how we can learn to develop a quantum-understanding of our quantum-world that is capable of integrating the effort-demanding backgrounds into the very language that organizes such an understanding — even if the “technical vocabulary” is apparently so very detached from “concrete” reality. We say “apparently” because we all use electronic devices on a daily basis — we are used to photographs, recordings of music, washing machines, elevators, street lights, online shops, and e-mail, while also relying on body scans, blood scans, and x-ray scans with regard to our health. We fly across the planet and we are frightened by the simulations that simultaneously place a certain responsibility for the health of the planet in our hands; in short, we use computers on a daily basis for all kinds of purposes.

Hence, the “reality” of quantum physics is not all that abstract; we actually live quite comfortably — but also quite “ignorantly” — in all the new manners of inhabiting places that it affords. It is only when we try to address these circumstances in words, when we try to reflect and be critical, when we try to take responsibility for how we act by questioning ways of how to proceed — in short, when we want to consider possibilities — that we find ourselves to be in trouble. We therefore revert to a vocabulary that feels trustworthy because it is already established, an englobed matrix of “plain” speech — a matrix that depicts vectors, yet one that has been in use by scientists for over a century. If it seems “plain” now, it is only because it has had time to settle down and sink into the modes of thinking that establish “common sense.”

Indeed, most of the classical terms he3 seems to take for granted as having a clear meaning, were introduced in physics during the nineteenth century and certainly did not belong to ordinary speech. Consider for instance ‘energy’ — a term foreign to the language of Newton : the very concept was not clarified before the middle of the last century, and the word was certainly not used in common parlance, as it has come to be in the past decades. A stronger argument yet could be made around ‘entropy’. Even the apparently elementary idea of ‘potential,’ although formally introduced by Laplace for gravitation and Poisson for electricity, was named only later on by Green, and at the end of the century was still considered as very abstract and introduced in academic courses with much caution.4

The argument of Jean-Marc Lévy-Leblond is quite straightforward : we need quantum words for a quantum world as much as we needed dynamic
3 Niels Bohr, with his demand that all statements referring to the quantum world must be couched in classical language in order to make sense with respect to our common experience. — V. B.
4 Lévy-Leblond, “Quantum Words for a Quantum World,” pp. 4–5.
and electric words for a dynamic, electromagnetic world. If we are inclined to object now with something along the lines of : but is a quantum world not a dynamic, electromagnetic world, full of potentials from which have been and can be extracted all the possibilities that modern science and engineering have realized during the past one hundred years and will realize in the future? — then we are certainly in good company with many who would share this point of view. But this is exactly the problem. It stands behind our aphasic and harsh shielding-off from querulous discussions by fashioning particular formulas in apparently “neutral” code as if they were just what we have always known — plain speech. Ultimately, Lévy-Leblond is convinced, and we share this view :

Quantum theory eventually is not more discrete or continuous than classical theory; it is only much more subtle as to the interplay of continuity and discreteness, for both these notions now relate to the same (quantum) entities instead of bearing upon different ones (classical waves or particles).5

So then, the question we posed in the conference (whose contributions are collected in this volume) regards a new kind of “literacy” — a literacy in coding, we suggested, a literacy that overcomes the machine-operator distinction just as it overcomes the distinction between intelligence “proper” (“human” or “natural”) and intelligence “artificial.” While recourse to a notion of “literacy” is often discarded today because it apparently holds on to the problematic (anthropocentric) notion of some genuinely “human” and “cultural” subject vis-à-vis some genuinely “natural” object, we think it would be a dangerous short circuit to attempt to rid ourselves of that legacy. Rather, we want to listen to and understand what that novel language that — despite all the abovementioned difficulties — has emerged in the past decades can tell us. We want to take seriously that it applies the jargon of theater and drama to the setups of computational modeling environments, and we want to take seriously that it refers to the particular codes applied as “alphabets.” We want to take seriously that probabilistic analysis analyzes ‘fictions,’ and that such fictions are not spelled out between two book covers but are depicted in snapshots of assumed spectra (of light, of flavor, or any kind of intensities among properties) that can be “measured” only in the circuitous terms of “frequencies” and “phases.” By tentatively trying to comprehend code as an abstraction and a generalization of the classical understanding of the phonetic alphabet as a geometry of voiced (articulated) sound,6 we want to explore the idea of seeing in code a
5 Ibid., 12.
6 We regard Eric Havelock’s study on how to account for (and even only how to take note of) the relatively sudden leap in abstraction that was thought and formulated not only into words but also into novel ways of reasoning in ancient Greece as an invaluable source of reference that the development of such an analogy could draw from. See Eric Havelock, Preface to Plato (1961).
geometry of spectrality. It is clear that this spells out a bill of exchange larger than can be answered with this book. It nevertheless reveals the horizon toward which our ambitions strive, and we hope it at least quickens a broader interest in this idea.

The book collects its contributions from information scientists, mathematicians, philosophers, design-culture theorists, and architects who attended the 5th Metalithikum Colloquy, held at the Werner Oechslin Library in Einsiedeln, Switzerland, from May 22 to 24, 2014. The overall theme of the conference was to consider computational procedures beyond a strictly case-based analytical paradigm, and instead as embedded in a more comprehensive “computation literacy.” The main item of reference for the discussions was one particular procedure, the Self-Organizing Map (SOM), in relation to data-driven modeling. Regarding SOM as receptive to a form of skill or mastership (ein Können, as we would say in German) that allows for many degrees of sophistication exposes within it significant inherent capacities that seem as yet largely unexplored. On a more speculative level, our reference point was the attempt to think of the “data” in data-driven modeling in quantum-physical terms.

Despite the diversity of the backgrounds and expertises brought together, it turns out that there is a common thread that runs through the contributions : namely, a quest for terms that afford the projective, fictitious, and yet measurable articulation of a “common ground” — for example, according to information as a multiplicative notion; transferable structures and geometrical kinds; pre-specific models that feed from, and grow specific in their distinctive character in, the environment of data streams; categorially measured and articulated concept maps; how modeling conceptual spaces can contribute to achieving clarity regarding the possibility and conditions of intersubjectivity between different concept spaces; architectonic ichnography, and the grammatical case of a cryptographical locative that renders the locus of fiction measurable according to the discernment of a plot that is being narrated in a story. The articles are introduced below according to their order of appearance in the book.

Teuvo Kohonen is an information scientist and the inventor of the SOM algorithm. Due to health reasons and his age, he was not able to join the colloquy in person. However, he generously contributed a short introduction to our book about what the SOM algorithm can do, and how it has evolved since its inception some thirty years ago.

Ludger Hovestadt, architect and information scientist, considers in his article “Elements of a Digital Architecture” geometry as “the rationalization of thought patterns amid known elements.”7 He develops the provocative suggestion that we should, on the one hand, distance ourselves from the idea that there is only one “true” geometry, while, on
7 Ludger Hovestadt, “Elements of a Digital Architecture,” page 34 in this volume.
the other hand, also distancing ourselves from claiming recognition for the plethora of “new geometries” as they are being delineated today, e.g. projective, affine, convergent, Euclidean, non-Euclidean … For Euclid as well as for Felix Klein, geometry originates in “encryptive” and “algebraic” thinking, and brings about a geometric manner of thinking only on that very basis. Hence it is to computational code that we must look to find a new geometry, one that can accommodate in a new manner all the classical concepts and distinctions that make up knowledge as knowledge, including architecture as architecture (rather than architecture as design or engineering or science or art). The text is an epic poem written in computational meter that works like a sudoku.

Sha Xin Wei, mathematician, artist, and philosopher, delineates in his article “A Nonanthropocentric Approach to Apperception” an a-perspectival mode of apperception that does not presume any particular model of a human perceiver. His point of reference thereby is a discussion of how models of computer vision are conceived in machine learning. Drafting a transversal lineage between Edmund Husserl’s phenomenology, Jean Petitot’s interest in regarding qualities as extension in discrete notions of time and space, and Gilbert Simondon’s conception of the individuation process, he develops his own approach to conceiving “information” as a multiplicative notion, according to which we can embrace abstraction by deploying mathematical concepts without aiming at the production of representations that are supposed to describe reality. Rather, a multiplicative notion of information allows one to see in abstraction modes of material articulation. For him, quantum mechanics “articulates the profound observational inextricability of the states of the observer, the observed, and the apparatus of observation.”8 With this he formulates a “fiber-bundle mode of articulation” in which we have a “non-ego-based, number-free, and metric-free account of experience that respects evidence of continuous lived experience but does not reduce to sense perception or ego-centered experience.”9

Vahid Moosavi, a systems engineer and information scientist, in his article entitled “Pre-Specific Modeling : Computational Modeling in a Coexistence of Concrete Universals and Data Streams,” likewise criticizes what he calls “idealization in modeling.”10 Moosavi distinguishes different modeling paradigms that he regards, each in its own right, as a pair of glasses that impacts the way in which we encode the real world. The paradigms he discusses are Computing Power, Computational and Communicational Networks, and Data Streams. With regard to the latter, he proposes a mode of modeling that he calls “pre-specific.” According to this mode, one doesn’t select a set of properties to represent the object

8 Sha Xin Wei, “A Nonanthropocentric Approach to Apperception,” page 130 in this volume.
9 Ibid.
10 Vahid Moosavi, “Pre-Specific Modeling,” page 132 in this volume.
in an exhibited manner, discrete and decoupled from the object’s environment. Rather, given a plurality of coexisting data streams, we can depict our object in the totality of the connectivity we see it to be entangled in; that is, in the conceivable relations it maintains to other objects. Moosavi thereby introduces a manner of modeling that conceives of an instance (the “object” of a model) as the implicit complement to the totality of all those properties one can negate. Thereby, the real-world instance depicted by the model is regarded by the model itself as infinitely richer than any representation it could yield.

André Skupin is a geographer and an information-visualization expert. He is one of the pioneers in the application of SOM in geographical and spatial analysis. Beyond spatial analysis, in many of his previous works he establishes a kind of interface between geographical maps and the final maps of SOM in the form of abstract landscapes. By this he can transfer the language of geography into many classically non-geographical domains, such as creating a semantic landscape of the Last.fm music folksonomy.11 In his contribution to this book, Skupin analyzed the scientific literature dealing with self-organizing maps, based on more than four thousand papers ranging from Teuvo Kohonen’s well-cited paper from 1990 up to papers that will appear in 2015. The titles and abstracts of these texts underwent a series of computational transformations, with the aim of uncovering latent themes in this field of literature. The SOM-based visualization he contributed to this book highlights one rendering of this process. The result can be understood as one version of a kind of “base map” of the SOM knowledge domain.

Elias Zafiris, a mathematician and theoretical physicist, considers in his article “The Nature of Local / Global Distinctions, Group Actions, and Phases : A Sheaf-Theoretic Approach to Quantum Geometric Spectra” the limits of observation as a theoretical paradigm. The local / global distinction in quantum mechanics is of a topological nature and does not involve any preexisting set-theoretic space-time background of embedding events. Observation is based on measurement, and Zafiris suggests that rather than holding on to the idea that a particular measurement can represent space in a homogeneous and englobing manner, we can obtain topological spaces via the practice of measurements. Such a topological space counts as the geometric spectrum of the particular algebra underlying the measurement. This is how he sees “observability” being constituted together with “operability” : the same geometrical form can be manifested in many different ways or assume multiple concrete realizations. Zafiris suggests speaking of “geometrical kinds” that incorporate the distinction between “the actual

11 Joseph Biberstine, Russell J. Duhon, Katy Börner, Elisha Hardy, and André Skupin, “A Semantic Landscape of the Last.fm Music Folksonomy Using a Self-Organizing Map” (2010), published online: http://info.slis.indiana.edu/~katy/research/10-Last.fm.pdf.
and what is potentially possible.”12 Yet crucially, such geometrical kinds can still be considered in terms of geometric equivalence; the space where they are equivalent, however, is one of projections, fictions — a quantum regime.

Barbara Hammer is an information scientist and mathematician. In her article “Self-Organizing Maps and Learning Vector Quantization for Complex Data” she discusses different paradigms in the field of machine learning and data analysis that distinguish themselves from each other by different approaches to how learning vectors can be quantized. Thereby, Hammer discusses the limits of the common practice of relying on the Euclidean distance measure for decomposing the data space into clusters or classes. Her point of interest is how to deal with situations where electronic data cannot easily, and without imposing theoretical bias, be converted to vectors of a fixed dimensionality : biological sequence data, biological networks, scientific-text analysis, functional data such as spectra, data incorporating temporal dependencies such as EEG and time series, among others. Here, data is not represented in terms of fixed-dimensional vectors; rather, dimensionalities are phrased in terms of data structures such as sequences, tree structures, or graphs. She considers how prototype techniques might be extended to these more general data structures.

Michael Epperson is a philosopher. Together with Elias Zafiris he has developed the philosophical program of a Relational Realism, whose central point of departure is that in quantum mechanics the conditionalization of probabilities (the “reasoning” of probabilities) requires a rule that depicts the evaluation of quantum observables as a fundamentally asymmetrical relational process. Outcome states yielded by quantum mechanical measurement are not merely revealed subsequent to measurement, he explains, but rather generated consequent of measurement.13 Epperson discusses in his article “The Common Sense of Quantum Theory : Exploring the Internal Relational Structure of Self-Organization in Nature” the limits of a globalized “Boolean” — bivalent — logic within a classical paradigm that assumes that all observables possess well-defined values at all times, regardless of whether they are measured or not. In quantum mechanics, however, probability not only presupposes actuality; actuality also presupposes probability. This yields a mutually implicative relation between form and fact, fact being “evaluated observables.” Quantum theory cannot solve the philosophical problem of predicating totalities, but what it can specify, he argues, is an always mutually implicative relation between global and local, such that neither can be abstracted from the other.

12 Elias Zafiris, “The Nature of Local / Global Distinctions, Group Actions, and Phases,” page 172 in this volume.
13 Michael Epperson, “The Common Sense of Quantum Theory,” page 214 in this volume.
Timo Honkela is an information scientist. In his coauthored article “GICA : Grounded Intersubjective Concept Analysis. A Method for Improved Research, Communication, and Participation” he introduces a method for quantifying subjectivity that recognizes that even if different people may use the same word for some phenomenon, this does not mean that the conceptualization underlying the word usage is the same : “In fact, the sameness at the level of names may hide significant differences at the level of concepts.”14 The GICA method uses SOM to model conceptual spaces that are built upon geometrical structures based on a number of quality dimensions. Such modeling can distinguish and articulate in explicit form (1) subjective conceptual spaces, and (2) intersubjectivity in conceptual spaces.

Vera Bühlmann, media philosopher and semiotician, presents in her article “Ichnography — The Nude and Its Model. The Alphabetic Absolute, and Storytelling in the Grammatical Case of the Cryptographical Locative” different modes of how the ominous “all” can be plotted as “comprehension” via narrative, calculation, and measurement. Her interest thereby regards how the apparent “Real Time” induced by the logistical infrastructures established by communicational media becomes articulable once we regard “Light Speed” as the tense proper to spectral modes of depicting the real in its material instantaneity. The “real” in such depiction features as essentially arcane, and its articulation as cryptographical. The articulation of the real thereby takes the form of contracts. Bühlmann suggests taking cryptography at face value, i.e. as a “graphism” and “script,” whose (cipher) texts she imagines to be signed according to a logics of public-key signatures : while the alphabets that constitute such a script are strictly public, a cipher text’s “graphism” cannot be read (deciphered and discerned) without “signing” it in the terms of a private key. This perspective opposes the common view that we are living in “post-alphabetical” times, and instead considers the idea of an “alphabetic absolute.” It bears the possibility of a novel humanism, based not on the “book” (Scriptures) but on the laws of things themselves. The article traces and puts into profile classical positions on the role of “script” in mathematics, the possibility of a general and / or universal mathesis, and the role of measurement in relation to conceptions of “nature,” e.g. by Descartes, Leibniz, Dedekind, Cantor, Noether, and Mach.

Furthermore, we don’t want to introduce this book without especially mentioning the work of Klaus Wassermann on Self-Organizing Maps. It is his way of thinking about them that raised our interest in this particular algorithm in the first place, and we have drawn much inspiration and insight from his blog The Putnam Program | Language & Brains, Machines and Minds (https://theputnamprogram.wordpress.com).

14 Timo Honkela, “GICA : Grounded Intersubjective Concept Analysis,” page 236 in this volume.
What Makes the Self-Organizing Map (SOM) So Particular among Learning Algorithms?
Teuvo Kohonen

Note: This overview is intended for those already somewhat familiar with the Self-Organizing Map (SOM) algorithms who would like to know more about their origins and applications. For a more detailed description of the various principles and applications of the SOM, please refer to Teuvo Kohonen, “Essentials of the Self-Organizing Map,” Neural Networks 37 (2013): 52–65, and the e-book by Teuvo Kohonen, MATLAB Implementations and Applications of the Self-Organizing Maps, http://docs.unigrafia.fi/publications/kohonen_teuvo/
Teuvo Kohonen is currently professor emeritus of the Academy of Finland. Prof. Kohonen has made many contributions to the field of artificial neural networks, including the Learning Vector Quantization algorithm, fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, and novel algorithms for symbol processing like redundant hash addressing. He has published several books and over 300 peer-reviewed papers. His most famous contribution is the Self-Organizing Map. For most of his career, Prof. Kohonen conducted research at Helsinki University of Technology (TKK). He was elected the first vice president of the International Association for Pattern Recognition from 1982 to 1984, and acted as the first president of the European Neural Network Society from 1991 to 1992. For his scientific achievements, Prof. Kohonen has received a number of prizes including the following: IEEE Neural Networks Council Pioneer Award, 1991; Technical Achievement Award of the IEEE Signal Processing Society, 1995; Frank Rosenblatt Technical Field Award, 2008.
The Self-Organizing Map (SOM) is a data-analysis method that visualizes similarity relations in a set of data items. Assume that we have a large set of input data items and each item is represented by several features. The features may consist of numerical attributes, such as statistical descriptors of an item, but many other types of features can also be used. The simplest measure of the similarity of two items is the similarity of their feature sets in some metric, but again, more complex definitions of similarity can also be used.

The SOM may be regarded as a special projection. In so-called nonlinear projective methods, every item is represented by a point on a plane, and the distance between two points (items) is roughly inversely proportional to their mutual similarity. Similar items are located close to each other, and dissimilar items further apart in the display, respectively. It is said that the items are then represented in an abstract geographic order.

However, in its genuine form the SOM differs from all the other projective methods because it represents a big data set by a much smaller number of models or “weight vectors” (this term has been used in the theory of artificial neural networks). Each model has the same number of parameters as the number of features in the input items. However, a SOM model may not be a replica of any input item but only a local average over a subset of items that are most similar to it. The parameters of the models are variable, and they are adjusted by learning such that the similarity relations of the models finally approximate or represent the similarity relations of the original items. In this sense, the SOM works like the well-known “k-means clustering” algorithm. However, during the learning process, the SOM also arranges the k means into a geographic order according to their similarity relations. It is obvious that an insightful view of the complete database can then be obtained in a single frame.

Unlike in other projective methods, representations of items in the SOM are not moved anywhere in their “geographic” map. Instead, all of the models (with still undefined parameters) are assigned to fixed places — namely, into the nodes of a regular two-dimensional array. A hexagonal array, like the pixels on a TV screen, provides the best visualization. Initially, before learning, the parameters of the models can even have random values. The correct final values of the models or “weight vectors” will develop gradually by learning. The representations — that is, the models — become more or less exact replicas of the input items as their sets of feature parameters are tuned toward the input items during learning. Instead of selecting the values of the parameters randomly in the beginning, simple and effective ways of rough and quick regular parameter initialization are now available, such that the final learning is sped up by many orders of magnitude, as compared with learning that starts from scratch.
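To make these structures concrete, the following is a minimal sketch in Python with NumPy; the grid size, the toy data, and the PCA-style linear initialization are illustrative assumptions rather than Kohonen’s reference implementation. It sets up a regular two-dimensional array of models and the lookup of the model most similar to a given input item.

```python
# A minimal sketch of the SOM's basic structures (Python + NumPy).
# The grid size, the toy data, and the PCA-style linear initialization
# are illustrative assumptions, not Kohonen's reference implementation.
import numpy as np

n_rows, n_cols, n_features = 20, 30, 16            # regular array of models
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, n_features))     # input items, one per row

# "Regular initialization": instead of random values, span the array
# along the two principal directions of the data, so that learning only
# has to fine-tune an already ordered map.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
rows = np.linspace(-1.0, 1.0, n_rows)[:, None, None] * vt[0]
cols = np.linspace(-1.0, 1.0, n_cols)[None, :, None] * vt[1]
models = mean + rows + cols                        # shape (n_rows, n_cols, n_features)

def best_matching_unit(x, models):
    """Grid coordinates of the model most similar to item x (Euclidean metric)."""
    d = np.linalg.norm(models - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

Because the models already span the data’s two principal directions, the subsequent learning only has to fine-tune an ordered map, which is one way of reading the speed-up that regular initialization affords.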
There are two main classes of SOM algorithms. In the original one, the corrections to the parameters are made sequentially and iteratively, one input item at a time. The parameters of the model that is most similar to the input item, and of a subset of those models that are geographic neighbors of the selected node in the array, are adjusted so that they become more similar to the attributes of the input item. This procedure is repeated for all of the input items iteratively, until the distribution of the models is a good approximation of the distribution of the input items. This method has now been abandoned, because the corrections to the parameters of all models can be determined more effectively, and simultaneously, by a concurrent batch computation process. An extra benefit of the batch learning process is that if it is carried out in a proper way, it converges exactly in a finite number of iterations.

Even though the number of models can be much smaller than the number of input items, one can also have as many models as input data items, in which case, after learning, the models become exact replicas of the input data items, and the resulting SOM then represents the similarity diagram of the original items. These kinds of maps usually represent taxonomies of the input items.

If the input items fall into a finite number of specific classes, the learned geographic map of the items becomes easier to interpret if the clusters of models are calibrated (annotated) according to their classification. The most common calibration is made by the k-nearest-neighbors method. Every input item now carries a class label. One can test the k input items that are most similar to the model of a particular node. This model is annotated according to the majority of the class labels of these k input items. In the case of a tie in the majority voting, one may increase the value of k until the tie is resolved. After this training step, a calibrated SOM can be used for the classification of new, unknown input items by looking for the best-matching model in the array and taking its class label.
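The two classes of learning, and the k-nearest-neighbors calibration, can be sketched in the same vein. The Gaussian neighborhood function and the fixed learning rate and radius below are common textbook choices assumed for illustration, not prescribed by the text; best_matching_unit is the helper defined in the sketch above.

```python
# Sketch of the two classes of SOM learning, plus k-NN calibration
# (continuing the sketch above; the Gaussian neighborhood and the fixed
# learning rate and radius are assumed choices, not taken from the text).
import numpy as np

def neighborhood(bmu, grid_shape, radius):
    """Weights on the grid, decaying with distance from the winning node."""
    r, c = np.indices(grid_shape)
    d2 = (r - bmu[0]) ** 2 + (c - bmu[1]) ** 2
    return np.exp(-d2 / (2.0 * radius ** 2))

def sequential_step(models, x, lr=0.1, radius=3.0):
    """Original rule: move the winner and its grid neighbors toward item x."""
    h = neighborhood(best_matching_unit(x, models), models.shape[:2], radius)
    return models + lr * h[..., None] * (x - models)

def batch_epoch(models, data, radius=3.0):
    """Batch rule: every model becomes a neighborhood-weighted mean of all
    items, computed concurrently for the whole array."""
    num = np.zeros_like(models)
    den = np.zeros(models.shape[:2])
    for x in data:
        h = neighborhood(best_matching_unit(x, models), models.shape[:2], radius)
        num += h[..., None] * x
        den += h
    return num / den[..., None]

def calibrate(models, data, labels, k=5):
    """Annotate each node with the majority label of its k most similar
    items (a tie could be resolved by increasing k, as described above)."""
    flat = models.reshape(-1, models.shape[-1])
    annotations = []
    for m in flat:
        nearest = np.argsort(np.linalg.norm(data - m, axis=1))[:k]
        values, counts = np.unique(np.asarray(labels)[nearest], return_counts=True)
        annotations.append(values[np.argmax(counts)])
    return np.array(annotations).reshape(models.shape[:2])
```

In practice the radius (and, for the sequential rule, the learning rate) would be decreased over the course of learning so that the map first organizes globally and then fine-tunes locally; this schedule is likewise an assumption of the sketch rather than part of the convergence claim above.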
structures of representations of a large database: in the largest SOMs constructed so far, which represent vast numbers of complete documents, the relevance between mapped items is reflected at various specific abstract levels, a property that has so far been attributed only to the brain.
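The batch procedure described above can be sketched compactly. The following is our minimal illustration in Python with NumPy, not Kohonen's own code; the grid size, the Gaussian neighborhood function, and the shrinking schedule are illustrative assumptions:

import numpy as np

def batch_som(X, grid=(10, 10), iters=20, radius0=5.0):
    """A minimal batch SOM. X: array of shape (n_items, n_features)."""
    rng = np.random.default_rng(0)
    h, w = grid
    # Fixed node coordinates of the 2-D array (rectangular here;
    # Kohonen recommends a hexagonal array for visualization).
    nodes = np.array([(i, j) for i in range(h) for j in range(w)], float)
    # Rough initialization: random input items as first models.
    # (A regular initialization would speed learning up further.)
    M = X[rng.choice(len(X), h * w)]
    for t in range(iters):
        radius = radius0 * (1 - t / iters) + 0.5   # shrinking neighborhood
        # Best-matching model (winner) for every input item, all at once.
        dist = ((X[:, None, :] - M[None, :, :]) ** 2).sum(-1)
        win = dist.argmin(axis=1)
        # Neighborhood weight of every node toward every item's winner.
        g = np.exp(-((nodes[:, None, :] - nodes[win][None, :, :]) ** 2)
                   .sum(-1) / (2 * radius ** 2))
        # Batch update: every model becomes a neighborhood-weighted
        # local average over the input items most similar to it.
        M = (g @ X) / g.sum(axis=1, keepdims=True)
    return M.reshape(h, w, -1)

The k-nearest-neighbors calibration described above can then be layered on top: attach to each node the majority class label of the k labeled input items most similar to its model.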
Fig. 1: The SOM in this picture was constructed from all articles of the complete Encyclopedia Britannica, used as input data items; each complete article is projected into one node of the SOM network and corresponds to a location in this picture. The similarity between the articles was defined by the similarity of their vocabularies. Only a very small local fragment (the orange area) of the SOM is shown. By clicking a node (point) in the map region labeled "shark," one obtains the listing of articles on sharks projected into this node. The "larva" node contains articles on insects and larvae, and a node in the middle of the area of birds contains articles on birds. Somewhat more remote areas (outside this fragmentary window of the SOM) contain articles on whales and dolphins, and areas beyond them describe whaling, harpoons, and Eskimos, which shows that the topics change smoothly and continuously over the map, in relation to their local similarity. By clicking the found titles, the complete articles can be read.
I Elements of a Digital Architecture
Ludger Hovestadt
I Timaeus — II Pythagoras — III Ptolemy — IV Alberti — V Lagrange — VI Markov
Ludger Hovestadt is Professor for Computer Aided Architectural Design (CAAD) at the Swiss Federal Institute of Technology (ETH) in Zurich. His approach, broadly speaking, is to look for a new relationship between architecture and information technology. He aims at developing a global perspective that relates to and integrates with developments in different fields such as politics and demographics, as well as technology, in a postindustrial era. He is the inventor of the digitalSTROM chip and founder of several related companies in the fields of smart building technology and digital design and fabrication. A showcase of his recent work can be found in Beyond the Grid—Architecture and Information Technology: Applications of a Digital Architectonic (Birkhäuser, 2009). www.caad.ethz.ch.
People are both fascinated by and afraid of computers. This text gives you an idea of what the coding of computers is about. Computers are not machines. And because we have left the era of machines, architecture is no longer about a geometry of lines, such as was introduced by Alberti or Palladio in the Renaissance. The elements of today's architecture are of an algebraic nature: they are "whatever can be the case." This text is written as an epos. Its long and adventurous journey is set up by the Timaeus and visits Pythagoras, Ptolemy, Alberti, Lagrange, and finally Markov. By following this journey you take part in the creation of a new geometry of something we might call Digital Man. This man is symmetrical
to Renaissance Man, who discovered the modern world and who became so natural to us over the last 500 years. Digital Man opens up a new plateau, which has been fascinating and frightening to all of us since the end of the nineteenth century. You will find an instrument to create your identity within a digital architecture. It incorporates the Euclidean geometry cultivating space, as well as the Cartesian space cultivating time, and by doing exactly that it enables you to move in between times to make your own architecture. This is what the elements of digital architecture are about. This is what all masterful architecture of the last 100 years is about.
What is information? What does "coding" mean? These are questions we, as architects, want to ask ourselves.1 For Norbert Wiener, information is neither matter nor energy, and therefore not in space or time. So what is it? We don't want to formulate this question as a problem that we can get to the bottom of and find a solution for. Rather, we see in it a challenge, and we meet this challenge with a hypothesis that at first glance may seem somewhat baffling: Coding is a new form of geometry.
And, as with any new geometry to date — the geometry of Euclid and the geometry of Descartes — this new geometry unlocks a new world. Once we look closely, we are surprised to realize that only analytical geometry is in fact drawn, whereas Euclidean geometry is described by text. The illustrations of Euclidean geometry that we are so familiar with today are in reality a nineteenth-century translation into the representative world of the then-current analytical geometry, made for didactic purposes.
1 This text gives very few references. If you are interested in more detail, you can easily take the given names, concepts, diagrams, or images to find references and further reading on the Internet.
Yet even analytical geometry does not only operate with lines, but primarily with numbers. Where, for example, is the point of intersection of the two straight lines through the coordinates:

L1((1, –1/2), (2, 0))
L2((1, –8), (2, –10))
This can be solved as a drawing:
Or it can be solved using this well-known proportional arithmetic:

a = dy / dx = (y2 – y1) / (x2 – x1)
a1 = (0 + 1/2) / (2 – 1) = 1/2
b = y – a*x
b1 = 0 – 0.5*2 = –1
y = a*x + b
L1(x) = 1/2*x – 1
a2 = (–10 + 8) / (2 – 1) = –2
b2 = –10 – (–2)*2 = –6
L2(x) = –2*x – 6
L1(x) = L2(x)
1/2*x – 1 = –2*x – 6
2.5*x – 1 = –6
2.5*x = –5
x = –2
y = –2*x – 6 = –2*(–2) – 6 = 4 – 6 = –2
Point of intersection: S(–2, –2).

This procedure is cumbersome, especially when used on complex geometric queries, and indeed, since the advent of computing twenty to thirty years ago, hardly anyone who has learned this at school still actually applies it. Today, this type of query is coded. And the code no longer uses arithmetics to measure geometric elements, but instead uses symbols to operate with algebraic elements. Such a code might look something like this:

In[1]:= R1 = InfiniteLine[{{1, -0.5}, {2, 0}}];
In[2]:= R2 = InfiniteLine[{{1, -8}, {2, -10}}];
In[3]:= Solve[{x, y} ∈ R1 && {x, y} ∈ R2, {x, y}]
Out[3]= {{x → -2., y → -2.}}

So we now only formulate the parameters of the query; the pathway toward a solution, which in the arithmetic procedure was still of some interest to us, has become generic. And in a similar vein, we now generate the familiar graphic representation:

In[4]:= Graphics[{{Blue, R1, R2}, {Red, Point[{x, y}] /. %}}, Frame -> True]
We therefore want to distance ourselves from the idea that there is only one fundamental geometry, and that geometry has anything to do with the drawing of lines and circles. Geometry is the rationalization of thought patterns amid known elements. Thus we also distance ourselves from the idea of an inflationary number of different geometries, as they are today being delineated: projective, affine, convergent, Euclidean, non-Euclidean … We would regard none of these as geometries, because they all have come about, just as the didactic illustrations of Euclidean geometry mentioned above, as a result of an "algebraification" of mathematics during the nineteenth century, and are not originally geometries. Rather they are — as we would say today — renderings of algebraic expressions into visual-spatial dimensions. And so this plethora of geometries has its origin in algebraic, not geometric, thinking. These are therefore not geometries. They only look like them. At first glance.
It is a different story with digital code. Here, as we have seen, algebraic expressions are being signed, as we would call it. These signatures, not the numbers, are the elements of the code. So if geometry is the rationalization of thought patterns amid known elements, then code is the rationalization of thought patterns amid signatures, the elements of symbolic algebra. Code is a new geometry. New in the sense that with these signatures we align ourselves with numerals, which may be regarded as the elements of analytical geometry, and with characters, which may be regarded as the elements of Euclidean geometry. To the elements of a new geometry correspond new notations: Euclidean geometry develops in tandem with the development of phonetic notation; analytical geometry with a mobile, mechanical notation, that is, the printing press. Coding develops in tandem with an operational notation, that is, computing.
And: a new geometry always unlocks a new world: during the Antique, the characterizing of things through phonetics unlocks space. During modernity, the numeration of space through movement unlocks time. And today, we suggest, the signing of time through operations unlocks values.

EUCLIDEAN Geometry | ANALYTICAL Geometry | Code as Geometry
characters | ciphers | signatures
phonetic writing | functional printing | operational coding
space | time | value
The Form and Method of this text are unusual. It is not analytically reflective. Rather, the text posits a symmetrical body of thinking, which, in keeping with group theory in mathematics, utilizes the concepts of associativity, neutrality, and inversion. It follows the hypothesis that, in the tradition of Galois, groups atomize time by means of algebra. Thus we build symmetries to the methodology of Descartes, who in a similar fashion atomized space by means of algebra and captured time by means of geometry, just as Democritus atomized things by means of algebra and captured space by means of geometry. The text then is a symmetrical constellation outside of any time, and thus in itself shows the form of a digital architecture. Certainly, these symmetries may appear far-fetched, and also perhaps somewhat arbitrary. But in the course of this text, akin to a game of sudoku, the symmetries will stabilize without making it necessary to specify the concepts employed. And in this, the ability to keep the concepts alive while still being able to operate with them, lies the particular strength of our new geometry. So with this text, we want to arrange symmetries in a thought construct and compose a fugue of operational thinking.
I Timaeus
There are very few texts of similar importance to Western thinking as Plato's Timaeus. This is the passage where the demiurge creates the world:

Plato, Timaeus, 35a, translated by Benjamin Jowett:

He took the three elements of the same, the other, and the essence, and mingled them into one form, compressing by force the reluctant and unsociable nature of the other into the same. When he had mingled them with the essence and out of three made one, he again divided this whole into as many portions as was fitting, each portion being a compound of the same, the other, and the essence. And he proceeded to divide after this manner: First of all, he took away one part of the whole [1], and then he separated a second part which was double the first [2], and then he took away a third part which was half as much again as the second and three times as much as the first [3], and then he took a fourth part which was twice as much as the second [4], and a fifth part which was three times the third [9], and a sixth part which was eight times the first [8], and a seventh part which was twenty-seven times the first [27]. After this he filled up the double intervals [i.e. between 1, 2, 4, 8] and the triple [i.e. between 1, 3, 9, 27] cutting off yet other portions from the mixture and placing them in the intervals, so that in each interval there were two kinds of means, the one exceeding and exceeded by equal parts of its
extremes [as for example 1, 4 / 3, 2, in which the mean 4 / 3 is one-third of 1 more than 1, and one-third of 2 less than 2], the other being that kind of mean which exceeds and is exceeded by an equal number. Where there were intervals of 3 / 2 and of 4 / 3 and of 9 / 8, made by the connecting terms in the former intervals, he filled up all the intervals of 4 / 3 with the interval of 9 / 8, leaving a fraction over; and the interval which this fraction expressed was in the ratio of 256 to 243. And thus the whole mixture out of which he cut these portions was all exhausted by him. This entire compound he divided lengthways into two parts, which he joined to one another at the centre like the letter X, and bent them into a circular form, connecting them with themselves and each other at the point opposite to their original meeting-point; and, comprehending them in a uniform revolution upon the same axis, he made the one the outer and the other the inner circle. Now the motion of the outer circle he called the motion of the same, and the motion of the inner circle the motion of the other or diverse. The motion of the same he carried round by the side to the right, and the motion of the diverse diagonally to the left. And he gave dominion to the motion of the same and like, for that he left single and undivided; but the inner motion he divided in six places and made seven unequal circles having their intervals in ratios of two and three, three of each, and bade the orbits proceed in a direction opposite to one another; and three [Sun, Mercury, Venus] he made to move with equal swiftness, and the remaining four [Moon, Saturn, Mars, Jupiter] to move with unequal swiftness to the three and to one another, but in due proportion. We are interested in the five initial concepts: same, other, essence, form and nature
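The two kinds of means can be followed with a little arithmetic. A small sketch of ours, not Plato's, using exact fractions to fill the double intervals exactly as the quote prescribes:

from fractions import Fraction as F

def harmonic(a, b):    # exceeds and is exceeded by equal parts of the extremes
    return 2 * a * b / (a + b)

def arithmetic(a, b):  # exceeds and is exceeded by an equal number
    return (a + b) / 2

for a, b in [(F(1), F(2)), (F(2), F(4)), (F(4), F(8))]:
    print(a, harmonic(a, b), arithmetic(a, b), b)
# 1 4/3 3/2 2
# 2 8/3 3 4
# 4 16/3 6 8
# The steps inside each interval are 4/3, 9/8, 4/3; filling a 4/3 with
# two steps of 9/8 leaves the leftover ratio of 256 to 243 named above.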
We also want to keep in mind that Timaeus’s creation of the world is narrated around numbers. And: these numbers are of a quite different kind to our understanding of numbers today. Greek numbers are not iterative
and they are not starting with a 0: 0, 1, 2, 3, 4 …
They start with a part of the whole and are working with magnitudes of 2 and 3: 2, 3, 4, 9, 8, 27 …
which is 2, 3, 2*2, 3*3, 2*2*2, 3*3*3 …
which the Greeks call the double and the triple intervals. We would say these multiplicities of the same are self-references of different orders. Therefore it is of some importance not to think about Greek numbers as an interplay of ciphers (0 … 9), but as an interplay of two principal characters: 2 and 3
These two characters are complemented by the 1
and as a triple 2 3 1
they can be characterized as same other essence
There are also three principal operations on these characters: multiplication division equivalence
which again are characterized as the same, the other, and the essence.
To help us further understand how to mingle the character-numbers, the Timaeus only gives a few hints. A more explicit description of the same stage play within the Greek body of thinking can be found in the Pythagorean harmonic order. This is the Pythagorean stage play, or this is how the other (3) looks at the same (2) in their multitudes
The magnitude between the first multitudes of 3 and 2 is written as: 3 / 2
The magnitude between the second multitude of 3 and 2: 9 / 4
The magnitude between the third multitude of 3 and 2: 27 / 8 …
That is not enough. There is another actor, the essence, the part of the whole, the 1
And this is the stage play of these three actors: how does the essence (1) look at the other (3) look at the same (2) in their multitudes
The magnitude between (the magnitude between the first multitudes of 3 and 2) and (the magnitude between the part of the whole and the part of the whole) (3 / 2) / (1 / 1) = 3 / 2
The magnitude between (the magnitude between the second multitudes of 3 and 2) and (the magnitude between the first multitudes of 2 and the part of the whole) (9 / 4) / (2 / 1) = 9 / 8
The magnitude between (the magnitude between the third multitudes of 3 and 2) and (the magnitude between the first multitudes of 2 and the part of the whole):

(27 / 8) / (2 / 1) = 27 / 16
(81 / 16) / (4 / 1) = 81 / 64
(243 / 32) / (4 / 1) = 243 / 128
(729 / 64) / (8 / 1) = 729 / 512
And of course also the same (2) is looking at the other (3) and perceives other magnitudes. how does the essence (1) look at the same (2) look at the other (3) in their multitudes
The magnitude between (the magnitude between the first multitudes of 2 and 3) and (the magnitude between the part of the whole and the first multitude of the 2) (2 / 3) / (1 / 2) = 4 / 3
The magnitude between (the magnitude between the second multitudes of 2 and 3) and (the magnitude between the part of the whole and the second multitude of the 2) (4 / 9) / (1 / 4) = 16 / 9
The magnitude between (the magnitude between the third multitudes of 2 and 3) and (the magnitude between the part of the whole and the second multitude of the 2):

(8 / 27) / (1 / 4) = 32 / 27
(16 / 81) / (1 / 8) = 128 / 81
(32 / 243) / (1 / 8) = 256 / 243
(64 / 729) / (1 / 16) = 1024 / 729
If one puts these ratios (multitudes) into a circle, one gets the well-known contemporary illustrations of the harmonic order, of these two series of magnitudes circling the interval between 1 and 2.
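These two series can be generated mechanically. A small sketch of ours: powers of 3/2 and of 2/3, folded back into the interval between 1 and 2 by multitudes of the 2:

from fractions import Fraction

def fold_into_octave(r):
    """Divide or multiply by 2 (the same) until r lies in [1, 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# The other (3) looking at the same (2) in their multitudes ...
fifths = [fold_into_octave(Fraction(3, 2) ** k) for k in range(7)]
# ... and the same (2) looking at the other (3), folded likewise.
fourths = [fold_into_octave(Fraction(2, 3) ** k) for k in range(7)]
print(fifths)   # 1, 3/2, 9/8, 27/16, 81/64, 243/128, 729/512
print(fourths)  # 1, 4/3, 16/9, 32/27, 128/81, 256/243, 1024/729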
Of course we do not claim that this is the only possible reading of the Timaeus. Rather, we challenge this masterpiece of Western thinking in a way that seems interesting to us. And we hope that staging this play in this way would be interesting for Plato as well. With this understanding we again read the beginning of the Timaeus to get an idea of the interplay of the five concepts same, other, essence, form, and nature.
He took the three elements of the same, the other, and the essence, and mingled them into one form, compressing by force the reluctant and unsociable nature of the other into the same. As an example we take this equation: (16 / 81) / (1 / 8) = 128 / 81
We have the five concepts: The same, the multitudes, can be seen as

16 = 2*2*2*2
81 = 3*3*3*3
or as the principal character 2
The other, the magnitude, can be seen as the ratio between the multitudes 16 / 81
or as the principal character 3
The essence, the principle ratio, can be seen toward the part of the whole: 1 / 8
or as the characteristic, or the module, 1
The form can be seen as the result:
128 / 81
And finally the nature, the incorporated arithmetics, can be seen as the way of articulating, of shaping the form: (16 / 81) / (1 / 8)
Also, we do have: 2 as the same, 3 as the other, 1 as the essence; * as the multitude (the same), / as the magnitude (the other), = as the essence. Therefore the formula

(16 / 81) / (1 / 8) = 128 / 81
can be read in this fugue:

(((the multitudes of the same) in magnitude to (the multitudes of the other)) in magnitude to (the essence in magnitude to (the multitudes of the same)))
and
(((the multitudes of the other) in magnitude to (the multitudes of the same)) in magnitude to ((the multitudes of the same) in magnitude to the essence))
And finally this might be an adaptation of our fugue to the harmonic circle: the essence might be the circle, the form the rotation to a certain key, and the nature the pattern of points that appears on the circle. Therefore the different characters, the same and the other, the 2 and 3, are of the same essence, but of different natures (displayed as gray and black dots). In music we know them as major and minor.
II Pythagoras
We now want to use the conceptual game above to learn from the rationalization of form in space that Pythagoras established with his famous theorem

aa + bb == cc
or
3*3 + 4*4 == 5*5
or
3*3 + 2*2*2*2 == 5*5
This is our first observation: a and b are of the same, they are multitudes, whereas c is of the other, a magnitude. Or, if we want to stress the concepts of the same and the other further: 2, 3, and all their multitudes are of a finitude, whereas 5, like all the other primes, is not part of the finitude; they are without parts, they are of an infinitude.

a, b | c
multitude | magnitude
same | other
of the same | without parts
finitude | infinitude
This is the configuration of more constitutional concepts of our fugue: In an atomistic setup actors are of identical elements. They are identities. The sensible aspect of identities, the words, the characters, or the shapes of the actors, take place on the geometrical stage. The intelligible aspect of identities, the nature, the essence, the form of their phonetic talk, take place on the logical stage. Whereas in the inverse axiomatic setup actors do not have parts, they are indivisible, they are individuals. The sensible aspect of individuals, the forms of the character’s play, are orchestrated arithmetically. The intelligible aspect of individuals, the shape, the essence, of playing, is orchestrated algebraically.
atomistic | axiomatic
which is of the same | which has no parts
identities | individuals
finite | infinite
characteristic forms | formal characters
natural shapes | shaped essence
sensible | intelligible
words | nature
characters | essence
geometry | logic
arithmetics | algebra
stage | orchestra
To complete our fugue: With Pythagoras, a master of an atomistic body of thinking, the finite elements of the same are understood as necessities, as multitudes, and from this thinking the infinity of the one without parts is looked at as a contingency, as magnitude. Therefore it is within an atomistic body of thinking that we say: a and b are of finite elements, respectively multitudes, and c is of an infinity, respectively a magnitude. Anticipating the arguments of the following text, we find an inverse stage play with Ptolemy, a master of an axiomatic body of thinking. The infinity of the one without parts is looked at as necessity, as multitude,
and from this thinking the finite elements of the same are looked at as contingencies, as magnitudes.
We now complete the composition of our fugue in detail.
a and b, the multitudes, act on the geometrical stage, the finitude, as identities, as names, in the shape of filled squares. a and b, the multitudes, play within the arithmetical orchestra, the infinitude, as an identity, as numbers, as a multiplicity of the principal characters 2 and 3.
sensible of the multitude
geometrical stage | arithmetical orchestration
finitude | infinitude
names | numbers
shape | characters
filled squares | multitudes of 2, 3
c, the magnitude, acts on the geometrical stage as an individual in the form of an outlined square between the shapes of the two identities / multitudes. Known elements to count on, identities, have shapes, whereas unknown elements to be measured, individuals, have forms. Geometry measures the endless space between identities within the infinite.
Geometry uses logic on identities to rationalize the forms of space on the atomistic stage. Within the arithmetical orchestration c is articulated by a formula or algorithm

2*2*2*2 + 3*3 == 5*5

which is between the characters of the two identities / multitudes. Known elements to count on, identities, have characters, whereas unknown elements to be measured, individuals, have formulas. Arithmetics measures the endless space between identities within the infinite. Arithmetics uses algebra on identities to rationalize the formulas of space on the axiomatic stage.

c sensible magnitude
geometrical stage | arithmetical orchestration
form | formula
outlined square | 2*2*2*2 + 3*3 == 5*5
Staging a and b as intelligible multitudes, which we call identities, we are looking for something like the shape of logic, or the shape of nature. We suggest masking it with a filled circle. Orchestrating a and b as an intelligible identity we are looking for something like the character of algebra or the character of the essence. This should be the essence of all multitudes, the 1, the module.

a, b intelligible multitude
logical stage | algebraic orchestration
shape of logic | character of algebra
shape of nature | character of essence
filled circle | 1
To stage c as the intelligible magnitude, as an identity, which would be something like the form of logic, or the form of nature, with Pythagoras we can find the ratio between the multitudes by rational cuts of a circle, or an outlined triangle. Orchestrating c as an intelligible multitude, as an individual, which would be something like the formula of algebra, the formula of the essence, we gain the equivalence relation.

c intelligible magnitude
geometrical stage | algebraic orchestration
form of logic | formula of algebra
form of nature | formula of essence
outlined triangle | ==
Pythagoras sensible

a, b: multitude | c: magnitude
geometry (stage) | arithmetics (orchestration) | geometry (stage) | arithmetics (orchestration)
shape | characters | form | formula
filled squares | 2, 3 | outlined square | 2*2*2*2 + 3*3 == 5*5

Pythagoras intelligible

a, b: multitude | c: magnitude
logic (stage) | algebra (orchestration) | logic (stage) | algebra (orchestration)
shape of nature | essential character, module | form of nature | essence of formula, equality
filled circle | 1 | outlined triangle | ==
Thus far these assignations, understood as the first voice of the composition of our fugue. The multitudes a and b can be seen
either as the geometrical shape of the same (filled square),
as the geometrical form of the other (outlined square)
or as the essence, a multitude of modules (a rationalized array of logical shapes, i.e. filled circles), which we would like to name ideal shape. These simultaneous levels of abstraction are of major importance for this text; they are the key to synchronizing the different voices of our fugue.
To close the circle with the harmonic order of Pythagoras.
If the circle is the logical form to sense nature — or we want to say: it is the cipher of nature — then the harmonic circle, in its different rotations, provides rational keys or the characters to realize the form of nature, or: to render the logical to geometrical form. Therefore: The 1 is the key
to characterize the universe, the 2 and the 3 are the elements to encrypt the world.
With the Greek temples we find an architectonic articulation where the sensible is primary: where characteristic, modulated geometrical shapes are staging the geometrical form of an arithmetical formula.
Whereas with the Roman Pantheon, several hundred years later, the intelligible becomes primary: a modular, characterized logical shape, the circle, is orchestrating logical forms around an algebraic equality, a centered void.
III Ptolemy

600 to 800 years later we find an inverse world. We will choose the theorem of Ptolemy (c. 90 CE – c. 168 CE) to discuss this inversion.
Like the theorem of Pythagoras, this theorem is working with triangles and circles, with the same, the other, and the essence. But: unlike Pythagoras Ptolemy does not rely on the characteristic or modularized shape of things (filled squares, filled circle) to rationalize the form in between (outlined squares, outlined triangle) to generate identities, which are of the same. Ptolemy relies on the rationalistic or equalized forms (the outlined triangles, outlined circle) to analyze the shape within (filled triangles, filled square) to specify individuals, which have no parts.
Pythagoras | Ptolemy
500 BCE | 100 CE
analytic shape | rationalistic form
rationalize form | analyze shape
in between | within
generate | specify
identities | individuals
The casts of multitude and magnitude have swapped their roles completely. In today's notation Ptolemy's equation is

ac + bd == ef

Pythagoras used the multiplicity of the two characteristic elements

2 and 3

as his necessities, the "tools" he relies on, to measure the in-between

5

whereas Ptolemy's anchor points are two by two of these equations, each of which had been the form, algorithm, and essence of Pythagoras:

aa + bb == ee
and cc + dd == ee
and aa + dd == ff
and cc + bb == ff

They are intermingled toward the one formula

ac + bd == ef
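Ptolemy's relation can be checked numerically. A small sketch of ours; the four points are arbitrary, but taken in cyclic order on a circle:

import numpy as np

# Four points on the unit circle, in cyclic order.
angles = np.array([0.3, 1.2, 2.9, 5.1])
P = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def d(i, j):
    return np.linalg.norm(P[i] - P[j])

# Products of the two pairs of opposite sides ...
sides = d(0, 1) * d(2, 3) + d(1, 2) * d(3, 0)
# ... equal the product of the diagonals, for any inscribed quadrilateral.
diagonals = d(0, 2) * d(1, 3)
print(np.isclose(sides, diagonals))  # True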
The equality of Pythagoras, the magnitude of the essence, and the 1 of Pythagoras, the multitude of the essence, are the points of inversion from Pythagoras toward Ptolemy. The same and being of the same, the identity of Pythagoras, is inverted into the one that has no parts: the individual of Ptolemy. Now the multitudes are no longer modularized characters

2, 3

they are modulated formulas,

ad + be == cf

and the magnitudes are no longer rationalized forms of distinction of the identical

2*2*2*2 + 3*3 == 5*5

they are analyzed shapes of equality of the individual, which we know as the prime numbers starting with 1:

1, 2, 3, 5, 7, 11 …
Pythagoras | Ptolemy
modularized characters | modulated formulas
2, 3 | ad + be == cf
rationalized forms of distinction | analyzed shapes of equality
2*2*2*2 + 3*3 == 5*5 | 1, 2, 3, 5, 7, 11 …
identical | individual
This is the composition of the second voice of our fugue in detail.
With this interchange of casts the Ptolemy scenario is the inverted Pythagoras scenario: logic and algebra now are on the side of the sensible: the finitude of the multitude of the sensible now is staged logically in a form of the known representation as outlined triangles. The infinitude of the multitude of the sensible now is orchestrated in an algebraic formula.

sensible multitude
finitude | infinitude
logic | algebra
stage | orchestra
form | formula
outlined triangles | ab + cd == ef
The finitude of the magnitude of the sensible now is staged logically in the shape of filled triangles. The infinitude of the magnitude of the sensible now is orchestrated arithmetically with modulations of the individuality, the prime numbers.

sensible magnitude
finitude | infinitude
logic | algebra
stage | orchestra
shape | modulation
filled triangles | 1, 2, 3, 5, 7, 11 …
Geometry and arithmetics are now on the side of the intelligible: The finitude of the multitude of the intelligible is staged geometrically in the form of an outlined circle. The infinitude of the multitude of the intelligible now is orchestrated within the arithmetical balance.
intelligible multitude
finitude | infinitude
geometry | arithmetics
stage | orchestra
form | formula
outlined circle | balance (==)
The finitude of the magnitude of the intelligible is staged geometrically in the shape of a filled rectangle. The infinitude of the magnitude of the intelligible now is orchestrated arithmetically within the infinitesimal, the generic.

intelligible magnitude
finitude | infinitude
geometry | arithmetics
stage | orchestra
shape | modulation
filled rectangle | generic (∞)
Therefore the multitudes

aa + cc == ee
and
bb + dd == ff

can be seen either as the logical forms of the same (outlined triangles), as the logical shape of the other within the same (filled triangles), or as the essence, a multitude of modules (a rationalized array of geometrical forms, i.e. outlined circles), which we would like to name ideal form.

 | same | other | essence
Pythagoras | geometrical shape | geometrical form | ideal shape
 | filled square | outlined square | array
Ptolemy | logical form | logical shape | ideal form
 | outlined triangles | filled triangles | graph
There are two different plays staged in Ptolemy’s body of thinking, depending on whether the sensible or the intelligible gets the primary role.
With the Romanesque basilica we find an architectonic articulation where the sensible is primary, where calculated, balanced logical forms are staging the logical shape of an algebraic mode.
Whereas with the Gothic cathedral, several hundred years later, the intelligible becomes primary: a balanced, calculated geometrical form, the circle, is orchestrating geometrical shapes around a generic arithmetics, the infinite void horizon.
sensible

 | multitude (stage) | multitude (orchestration) | magnitude (stage) | magnitude (orchestration)
Pythagoras | geometry | arithmetics | geometry | arithmetics
 | shape | characters | form | formula
 | filled squares | 2, 3 | outlined square | 2*2*2*2 + 3*3 == 5*5
Ptolemy | logic | algebra | logic | algebra
 | form | calculus | shape | modus
 | outlined triangles | ab + cd == ef | filled triangles | 1, 2, 3, 5, 7, 11 …

intelligible

 | multitude (stage) | multitude (orchestration) | magnitude (stage) | magnitude (orchestration)
Pythagoras | logic | algebra | logic | algebra
 | shape | module | form | equality
 | filled circle | 1 | outlined triangle | ==
Ptolemy | geometry | arithmetics | geometry | arithmetics
 | form | balance | shape | generic
 | outlined circle | == | filled lined square | ∞
Pythagoras encrypts the universe with-out the 1, Ptolemy decrypts the cosmos from-in the ∞. Pythagoras is writing with an alphabet of elementary characters (finitudes), Ptolemy is reading the text asking for axiomatic numbers (infinitudes). Pythagoras is working with the multitudes of 2 and 3, Ptolemy is asking for the magnitudes of the primes:
1, 2, 3, 5, 7, 11 … The ∞ is the text, the cosmic characteristic, the primes are the axioms to decrypt the cosmos. With Ptolemy the outlined circle, the void horizon, is the ideal form to sense nature, to read the text of nature. The different rotations of this circle are the rationalistic keys to analyze the geometrical shape of nature: filled lined squares. Whereas the Roman Pantheon brings the characterization of the logical shape to an infinite
and articulates a centered void within the filled circle as a new, a logical form that we presented as the outlined triangle, the Gothic cathedral brings the analysis of the geometrical form to an infinite and orchestrates a line around the void-circled horizon
(e.g. the Gothic rosette window) as a new geometrical shape which we presented as the filled lined square (the Gothic tracery and buttress).
IV Alberti

Centuries later. The Italian humanist Leon Battista Alberti (1404–1472). With him we see yet another inversion: it is an inversion of Ptolemy and a double inversion of Pythagoras. To accomplish our fugue with another voice we want to ask Alberti, and start with his measurement of the new Rome.
This is our voice of reference: Ptolemy used an apparatus, called dioptra, to measure his position (magnitude) within the stars (multitude). And he created his famous map as a list of pairs of two numbers specifying the measured positions of the important points of his known world.
Alberti is using exactly the same apparatus, but he is using it as an instrument: he simply turns the dioptra from the cosmic sphere, the stars, and the primes,
to himself, moving, or, to put it more simply, to the ground. In doing so he himself, whose position was the subject of measurement with Ptolemy (= magnitude), now becomes the point of stability or the reference (= multitude) to measure distances in between.

Ptolemy | Alberti
apparatus | instrument
the stars | he himself
the position within | the distance in between
We want to describe this inversion more precisely. Ptolemy uses an apparatus to dissect his position within the cosmic order to construct a map of all positions on a void plane. An apparatus is:

on the sensible plane: a logical form of an algebraic calculus
(an outlined triangle: the actual point of measurement; the calculus: to get the position within two triangles)

on the intelligible plane: a geometrical form of an arithmetic balance
(an outlined circle: the disk for any measurement; the equality: follow the same procedure for each measurement).
A map, an image, or a construction is:

on the sensible plane: a logical shape of an algebraic mode
(filled triangles: balanced figures of the measured distinctions; primes: on fictional layers, or species)

on the intelligible plane: a geometrical shape of an arithmetical generation
(filled square: a distinctive shape; infinite: on void ground, or: a prediction within the unknown, or: operating within modes / monas: modulation).
Alberti uses Ptolemy’s apparatus as an instrument.
An instrument contracts distances on worldly ground to constitute connections around centered voids.

apparatus | instrument
dissect | contract
cosmic order | worldly ground
construction | constitution / model
position | connection
void plane | centered void
An instrument is:

on the sensible plane: a geometrical shape of arithmetical characters
(a filled square: a distinct shape on void ground; however: an assumption instead of a prediction)

on the intelligible plane: a logical shape of an algebraic module
(a filled circle: a generic figuration for any contract; == fugue 1: follow the same procedure for each measurement == generic).
De Artificiali Perspectiva, Pélerin (1505)
A model, a fugue, or a constitution is:

on the sensible plane: a geometrical form of an arithmetical formula
(outlined square: with Alberti, the lines, in general, are a constellation; the formula: in proportion)

on the intelligible plane: a logical form of algebraic equality
(outlined triangle: with Alberti, the alignment, in general, the adjustment; the equality: in perspective. Or: acting with modules, modularization).
map / image | model / icon
figure | fugue
specific figuration | proportional constellation
predictive distinction | perspective adjustment
operate | act
modulation | modularization
Alberti articulates an inversion to Ptolemy and an abstraction to Pythagoras. The elements of Alberti’s geometry are coming out of Ptolemy’s balanced infinity, the void horizon. Alberti’s geometrical stage is the surface of the balanced filled volumes of Ptolemy. It is the outline of the geometrical shape of Ptolemy. Alberti’s stage is in between the old cosmic order. Alberti’s basilica Santa Maria Novella plays with new lines on the surface of the old volumes.
With these lines in between spaces, new Rome was built in between the ruins of antique Rome. For that, Sixtus V, the gardener, just planted a few obelisks and lined up fresh water at dedicated places. The ruins and the spatiality of antique Rome became a matter of spatial, archaeological interest; the new Rome, the sensible lines of time in between.
Alberti’s elements are spatial by nature and Alberti is positioning them on the geometrical stage of time and movement. This stage is symmetrical to the geometrical stage of Pythagoras, but: The stage is of time not of space. The elements are of a spatial nature, they are multitudes of the primes, the spatial algebraic modes. They are not of a mythical nature as they had been with Pythagoras and his multitudes of the 2 and the 3, the mythical algebraic modes.
The elements are ciphers around the 0, not characters around the 1.

Pythagoras | Alberti
stage of space | stage of time
mythical elements | spatial elements
2, 3 | a, b (primes)
around the 1 | around the 0
character | cipher
The details of the third voice of our fugue:
a and b, the multitudes, act on the geometrical stage, the finitude, as identities, as names, in the shape of filled lines on squares. a and b, the multitudes, play within the arithmetical orchestra,
the infinitude, as an identity, as numbers, as a multiplicity of the principal ciphers, the primes (we know this as infinite series).

sensible of the multitude
geometrical stage | arithmetical orchestration
finitude | infinitude
shape | cipher
filled lines on squares | a, b (multitudes of primes)
c, the magnitude, acts on the geometrical stage as an individual in the form of an outlined line on a square between the shapes of the two identities / multitudes. Within the arithmetical orchestration c is articulated by a formula or algorithm

aa + bb == cc
which is between the ciphers of the two identities / multitudes (we know this as the proportion of infinite series, e.g. Wallis 1656).
c sensible magnitude
geometrical stage | arithmetical orchestration
form | formula
outlined line on a square | aa + bb == cc
Staging a and b as intelligible multitudes, which we call identities, we are looking for something like the shape of logic, or the shape of nature. We suggest the filled line on a circle.
Orchestrating a and b as an intelligible identity we are looking for something like the character of algebra or the character of the essence. This should be the essence of all multitudes: the division by 1, the 0, the module.

a, b intelligible multitude
logical stage | algebraic orchestration
shape of logic | character of algebra
shape of nature | character of essence
filled line on a circle | 0
To stage c as the intelligible magnitude, as an identity, which would be something like the form of the logic, or the form of nature, with Alberti we can find the ratio between the multitudes
by rational cuts of the lines on a circle, or as points outlining a triangle. Orchestrating c as an intelligible multitude, as an individual, which would be something like the formula of algebra, the formula of the essence, we gain the equivalence relation.

c intelligible magnitude
geometrical stage | algebraic orchestration
form of logic | formula of algebra
form of nature | formula of essence
points outlining a triangle | ==
With Renaissance architecture we find an architectonic articulation on the stage of time where the sensible is primary, where characteristic, modulated geometrical shapes are staging the geometrical form of an arithmetical formula.
Whereas with Baroque architecture, two hundred years later, the intelligible becomes primary: a modular, characterized, logical shape, the circle, is orchestrating logical forms around an algebraic equality, a centered void in time.
V Lagrange

Again 300 years later: with Lagrange's (1736–1813) interpolation we position ourselves in the inversion of Alberti, a double inversion of Ptolemy, and a triple inversion of Pythagoras. Because of these symmetries we can constitute this next voice of our fugue with the help of the known equation

ac + bd == ef
Where

aa + bb == ee
and cc + dd == ee
and aa + dd == ff
and cc + bb == ff
are the multitudes, the same, which have no parts. This is our fugue for Lagrange in line with Ptolemy: Like the instrument of Alberti, Lagrange's apparatus is working with triangles and circles, with the same, the other, and the essence. But: unlike Alberti, Lagrange does not rely on the ciphered or modularized shape of things (filled lined squares, filled line circle) to rationalize the form in between (outlined line squares, outlined lined triangle) to generate identities, which are of the same. Lagrange relies on
the rationalistic or equalized forms to analyze the shape within to specify individuals, which have no parts.

Alberti | Lagrange
15th century | 18th century
analytic shape | rationalistic form
rationalize form | analyze shape
in between | within
identities | individuals
Even if it is a little cryptic, to get to know linear algebra we want to continue to play the voice of our fugue. The casts of multitude and magnitude have interchanged their roles completely. Lagrange's equation can be written as

ac + bd == ef
Alberti used the multiplicity of the two series of primes

a and b

as his necessity, the tools he relies on, to measure the in-between

aa + bb == 1

with the rational number, or the line in between, as his cipher, whereas Lagrange's anchor points are multiples of these equations, each of which had been the form, algorithm, and essence of Alberti:
aa + bb == ee
and cc + dd == ee
and aa + dd == ff
and cc + bb == ff
They are intermingled toward the one formula

ac + bd == ef
The equality of Alberti, the magnitude of the essence, and the 0 of Alberti, the multitude of the essence, are the points of inversion. The same and being of the same, the identity of Alberti, is inverted by Lagrange into the one that has no parts: the individual. Now the multitudes are no longer modularized ciphers

a / b

(proportions of infinite series of primes), they are modulated formulas,

ad + be == cf

and the magnitudes are no longer rationalized forms of distinction of the identical,

aa + bb == 1

they are analyzed shapes of equality of the individual, which we know as the roots of the polynomials

ax + bx² + cx³ …

starting with –1.
This is how Lagrange’s interpolation works in detail to create a line (the black one), which has multiple names, which passes multiple points, to orchestrate a set of points, under the assumption of linearity.
Lagrange fills the line toward linearity with lines in the sense of Alberti, as Ptolemy filled the plane with the ratios of Pythagoras. By this method, Lagrange does not get an identity of a perspective line, but an individual linearity, called a dimension. And he is able to do so by specifying a formula to transfer one linearity to another linearity to establish
a movement without movement in the sense of Alberti, a fictional movement, a movability in time. It is the story about development and education in time: to shape an individual by feeding them with more and more points of truth to decipher, to analyze the cosmic order. (A computational sketch of this interpolation follows the table below.)

Alberti | Lagrange
modularized ciphers | modulated formulas
a / b | ad + be == cf
rationalized forms of distinction | analyzed shapes of equality
aa + bb == 1 | ax + bx² + cx³ …
line | linearity
identical | individual
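Here is the sketch announced above: the standard Lagrange construction in Python, our illustration with arbitrary sample points, not a formula taken from the text:

def lagrange(xs, ys):
    """The unique polynomial of degree < len(xs) through the points
    (xs[i], ys[i]), returned as a callable, via Lagrange basis polynomials."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            basis = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    basis *= (x - xj) / (xi - xj)
            total += yi * basis
        return total
    return p

# One "individual linearity" carrying many names: the curve passes
# through every given point (the values here are arbitrary).
p = lagrange([0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 5.0])
print(p(1.0), p(2.0))  # 2.0 0.0

Feeding the individual more points of truth means extending the lists; the degree of the polynomial, the "linearity," grows with them.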
Logic and algebra now are on the side of the multitude again, and matter by necessity: The finitude of the multitude of the sensible now is staged logically in a form of the known representation as points outlining a triangle.
(Today we associate this logical form with a polyline in the sense of the polynomial interpolation of Newton.)

The infinitude of the multitude of the sensible now is orchestrated in an algebraic formula, the calculus.

sensible multitude
finitude | infinitude
logic | algebra
stage | orchestra
form | formula
points outlining a triangle | AB + CD == EF
The finitude of the magnitude of the sensible now is staged logically in the shape of filled lines on triangles. (Today we associate this logical shape with the infinitesimal polynomial interpolation in the sense of Leibniz, if we think in contrast to Newton.)

The infinitude of the magnitude of the sensible now is orchestrated algebraically with modulations of the individuality, the roots of the polynomial interpolation.

sensible magnitude
finitude | infinitude
logic | algebra
stage | orchestra
shape | modulation
filled lines on a triangle | the polynomial ax + bx² + cx³ …
polynomial interpolation | roots of the polynomial
The finitude of the multitude of the intelligible is staged geometrically in the form of an outlined circle. (Today we would associate this geometrical form with the non-Euclidean geometry of Carl Friedrich Gauss, with set theory,
with the encapsulation of energy and / or labor (self-movement) that we call a product,
or with a pixel of a technical image as an abstraction of Ptolemy’s map.
). The infinitude of the multitude of the intelligible now is orchestrated within the arithmetical balance. (Today we would associate this arithmetical balance with the operations on matrices, which are about orchestrating coefficients of polynomials in a dimensional order:
 | scaling | unequal scaling
matrix | [[k, 0], [0, k]] | [[k1, 0], [0, k2]]
characteristic polynomial | (λ – k)² | (λ – k1) (λ – k2)
eigenvalues λi | λ1 = λ2 = k | λ1 = k1, λ2 = k2
algebraic multiplicity µi = µ(λi) | µ1 = 2 | µ1 = 1, µ2 = 1
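The table can be reproduced numerically. A minimal sketch of ours, with illustrative values for k, k1, k2:

import numpy as np

k, k1, k2 = 2.0, 2.0, 3.0
uniform = np.diag([k, k])    # scaling: one eigenvalue k, multiplicity 2
unequal = np.diag([k1, k2])  # unequal scaling: two simple eigenvalues

print(np.linalg.eigvals(uniform))  # [2. 2.]
print(np.linalg.eigvals(unequal))  # [2. 3.]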
)

intelligible multitude
finitude | infinitude
geometry | arithmetics
stage | orchestra
form | formula
outlined circle | balance (==)
The finitude of the magnitude of the intelligible is staged geometrically in the shape of filled lines on a rectangle. (Today we would associate the algebraic equality with Riemannian geometry, with the group theory of Galois, with the brands and labeling of products,
and with the malls as the complementary part to factories.
Le Bon Marché
). The infinitude of the magnitude of the intelligible now is orchestrated arithmetically within the infinitesimal, the generic. (This can be associated with the concept of entropy, a fully equally distributed state.)
With Ledoux’s Rotonde de la Villette we find an architectonic articulation where the sensible is primary and where calculated, balanced logical forms
are staging the logical shape of an algebraic mode, Palais Garnier
whereas with Le Bon Marché, or the Palais Garnier, Paris, one hundred years later, the intelligible becomes primary: a balanced, calculated geometrical form, the circle, is orchestrating geometrical shapes around a generic arithmetics, the infinite void horizon.
Alberti encrypts the universe with-out the 0, Lagrange decrypts the cosmos from-inside the ∞. Alberti is writing with an alphabet of elementary ciphers (finitudes), Lagrange is reading the text asking for the axiomatic roots (infinitudes). Alberti is working with the multitudes of the primes, Lagrange is asking for the magnitudes of the polynomial roots. The ∞ is the text, the cosmic encryption; the polynomial roots are the axioms to decrypt the cosmos. With Lagrange the outlined circle, the void horizon, is the ideal form to sense nature, to read the text of nature. The different rotations of this circle are the rationalistic keys to analyze the geometrical shape of nature: filled pointed squares.
Whereas the Roman Pantheon brings the characterization of the logical shape to an infinite and articulates a centered void within the filled circle as a new, a logical form, which we presented as the outlined triangle; whereas the Gothic cathedral brings the analysis of the geometrical form
to an infinite and orchestrates a line around the void-circled horizon as a new geometrical shape, which we presented as the filled lined square; whereas Baroque architecture brings the characterization of the logical shape to an infinite and articulates a centered void within the filled circle as a new, a logical form, which we presented as points outlining a triangle; the opera house, or the factory hall, or the exhibition hall, brings the analysis of the geometrical form to an infinite and orchestrates a line around the void-circled horizon as a new geometrical shape, which we presented as the filled pointed square.
VI Markov

Finally, we turn toward information and the quantum. Why Andrej Andreyevich Markov? We could follow Wiener, Turing, or Shannon, even Chomsky. But we assume they think of computers as machines running positively on an entropic background. They adhere to the idea of a computer, of information, as being meaningful in time.

"Information is neither matter nor energy." — Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, 1948
With our fugue we expect an expulsion from entropy. As Alberti is expelled from Ptolemy's spatial cosmic order, we expect to be expelled from Lagrange's chronological analytical order. We do not expect to reflect on entropy; we are looking for entropic projections, as we find them in quantum physics, Richard Feynman's Strange Theory of Light and Matter, 1985, or cryptography. Or with Markov and the operational principles of social media.
Alexandr A. Markov’s stochastical analysis of the epos “Evgenij Onegin” of Alexander Sergeyevich Pushkin, 1913.
Markov simply cuts the famous epos by Alexander Pushkin into meaningless consonants and vowels, counts the characters, analyses the numbers, and gets values of probabilities, by which one can navigate the text in a stable and ordered way prior to any specificity, prior to any reading or understanding. This is the birth of a new geometry beyond time. And this is how Google’s PageRank and social media work today. Unlike Wiener, Neumann, Turing, Shannon, or Chomsky, Markov, like Dedekind or Riemann, is not embedded within entropy. Markov simply cuts entropy and keeps the parts, as they are: entropies. But he gains the cuts and he is able to work with them in a meaningful way.
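Markov's procedure can be sketched in a few lines. Our illustration below uses an English placeholder sentence and vowel set; Markov worked on the Cyrillic text itself:

from collections import Counter

# Reduce a text to the meaningless alternation of vowels (v) and
# consonants (c), then count transition frequencies.
text = "the reader navigates the text prior to any reading"
vowels = set("aeiouy")
stream = ["v" if ch in vowels else "c" for ch in text if ch.isalpha()]

pairs = Counter(zip(stream, stream[1:]))
starts = Counter(stream[:-1])
# The probabilities by which the text can be navigated prior to any
# reading or understanding.
for (a, b), n in sorted(pairs.items()):
    print(f"P({b}|{a}) = {n / starts[a]:.2f}")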
Alberti took the cosmic reflective series, the rationality of Ptolemy. On the sensible plane he anchored it as geometry and logics, as multiples, to the ground. On the intelligible plane he aligned it as arithmetics and algebra with the infinite horizon. Therefore, by modernity, the entity of a rational number, a 2 / 3, which consists of two natural numbers (characters, not numbers), literally cuts the Ptolemaic cosmos of series of primes into two and puts them into proportion. The world of character determination, the infinity of spatial order, is cut into two; the parts are ciphered by numbers and arranged in time. This constitutes a modernity in time. And this is how Markov, information, and the quantum sound: Markov took the entropic analytical functions, the rationality of Lagrange. On the sensible plane he anchored it as geometry and logics, as multiples, to the ground. On the intelligible plane he aligned it as arithmetics and algebra with the infinite horizon.
Therefore with the digital, the entity of a signature like 1, 0, 0, 1, which is a proportion of two numbered species, literally cuts the analytical cosmos of entropic functions into two and puts them into proportion. The world of numeric specification, the infinity of the chronological order, is cut into two; the parts are subscribed and arranged in, as we suggest, probability values. This constitutes modernity in value.

Alberti | Markov
cipher | signature
series of primes | entropy of functions
2 / 3 | 0, 1
modernity in time | modernity in value
The symmetries of the rational triangles in space of Pythagoras, the perspective triangles in time of Alberti, and the probabilistic triangles in value of Markov chains are striking. In the typical diagrams of the Markov chain we see the geometrical multitudes of analytical elements (peripheral circles) and the magnitude of a digital element in between (centered circle), and we have the arithmetics of probabilities, as a glue, as the magnitude in between the multitudes.
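This glue of probabilities can be made operational. A small sketch of ours: the stationary distribution of an illustrative three-node Markov chain by power iteration, the principle behind Google's PageRank mentioned above:

import numpy as np

# A row-stochastic transition matrix: every row sums to one, the
# equivalence that "keeps the one" (the values are illustrative).
P = np.array([[0.10, 0.60, 0.30],
              [0.40, 0.40, 0.20],
              [0.50, 0.25, 0.25]])

pi = np.full(3, 1 / 3)   # start anywhere
for _ in range(100):     # power iteration, as in PageRank
    pi = pi @ P
print(pi, pi.sum())      # stationary distribution; the sum stays 1.0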
This is how we can read the Internet, mobiles, social media: analytical, energized elements, connected by necessities (multitudes) on the electrical level, mediatized and operated by contingencies (magnitudes), and glued to the world of all the other nodes by probabilities. The any moves within the every. Anybody googles everybody. A new identity is created upon every individuality.
What is information then? With Pythagoras we had a geometry in between things, with Alberti a geometry in between spaces, with Markov a geometry in between times.
Information is a geometry in between times. But how to operate on information, if it is in between times, if information is neither matter nor energy, if computers are not machines? If we look at Markov as a protagonist of the multitude of the intelligible of information, we suggest Kohonen and his Self-Organizing Maps as a protagonist of the magnitude of the intelligible of information.

Teuvo Kohonen, Self-Organization and Associative Memory, Springer, Berlin, 1983
A and B, the multitudes, act on the geometrical stage, the finitude, as identities in the shape of filled points on squares. (We know this as a group, a dimension, a technical image, a technical infrastructure.) A and B, the multitudes, play within the arithmetical orchestra, the infinitude, as an identity, as a multiplicity of the principal signatures, the polynomial roots. (We know this as energized and optimized elements.)
[Diagram: the sensible of the multitude. Geometrical stage (finitude): shape, filled points on squares. Arithmetical orchestration (infinitude): signature. A, B: multitudes of optimized elements.]
C, the magnitude, acts on the geometrical stage as an individual in the form of outlined points on a square between the shapes of the two identities / multitudes. ( We know it, e.g., as wavelets. ) Within the arithmetical orchestration, C is articulated by a formula or algorithm, AA + BB == CC, which is between the signatures of the two identities / multitudes.
( We know it as categories. )

[Diagram: C, the sensible magnitude. Geometrical stage: form, outlined points on a square. Arithmetical orchestra: formula, AA + BB == CC.]
Staging A and B as intelligible multitudes, which we call identities, we are looking for something like the shape of logic, or the shape of nature. We suggest the filled points on a circle. ( A circle of probabilities as we know it from Google, providing the probabilities toward the whole world to any statement.
We know this as Markov chains. ) Orchestrating A and B as an intelligible identity, we are looking for something like the character of algebra or the character of the essence. This should be the essence of all multitudes, the division by zero, a 0 / 0, the digital module. ( We divide any statement by the index to any element of the world. )

[Diagram: A, B, the intelligible multitudes. Logical stage: shape of logic, shape of nature, filled points on a circle. Algebraic orchestration: character of algebra, character of essence, 0 / 0.]
To stage C as the intelligible magnitude, as an identity, something like the form of logic or the form of nature, with Markov we can find the ratio between the multitudes by rational cuts of the points on a circle, or as outlined points on a triangle. ( This is how we would discuss Kohonen's self-organizing maps. ) Orchestrating C as an intelligible multitude, as an individual, something like the formula of algebra, the formula of the essence, we gain the equivalence relation. ( The sum of the probabilities of the whole world to any statement keeps the one. )
[Diagram: C, the intelligible magnitude. Geometrical stage: form of logic, form of nature, outlined points on a triangle. Algebraic orchestration: formula of algebra, formula of essence, ==.]
If the Euclidean model articulates the logical form of mythical elements in space, and if the perspective model articulates the logical form of spatial elements in time, then the self-organizing map articulates the logical form of chronological elements in probability values. Therefore we suggest that we should not talk about a self-organizing map but about a self-organizing model.
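Operationally, "self-organizing" can be shown in a few lines. The following sketch is a toy illustration of our own, with invented parameters, not Kohonen's implementation: a one-dimensional chain of prototype vectors adapts to two-dimensional data; at each step the best-matching unit and its grid neighbors move toward the observation, so the constellation of elements is continually rearranged by the data rather than fixed in advance.

import numpy as np

rng = np.random.default_rng(0)
data = rng.random((2000, 2))   # a stream of unstructured 2-D observations
nodes = rng.random((10, 2))    # prototype vectors on a 1-D grid
grid = np.arange(10)           # grid positions of the prototypes

for t, x in enumerate(data):
    frac = t / len(data)
    lr = 0.5 * (1.0 - frac)               # decaying learning rate
    sigma = 0.5 + 3.0 * (1.0 - frac)      # shrinking neighborhood width
    best = np.argmin(((nodes - x) ** 2).sum(axis=1))  # best-matching unit
    h = np.exp(-((grid - best) ** 2) / (2.0 * sigma ** 2))
    nodes += lr * h[:, None] * (x - nodes)  # winner and neighbors adapt

print(nodes)  # the chain of prototypes now traces the data distribution

In this sense the map is a model: it is not given in advance but is re-formed by every observation that passes through it.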
An analytical map of Zurich, which is stable in the analytical / chronological order.
And in inversion to the map, a self-organized model of Zurich, which changes the constellation of elements according to the analytical / chronological position of the observer.
Exactly symmetrical to the Renaissance model in time, which changes the constellation of elements according to the spatial position of the observer.
De Artificiali Perspectiva, Pelerin (1505)
With the architecture of the twentieth century we find an architectonic articulation on the stage of probability values, where the sensible is primary: the optimized, modulated geometrical shapes are staging the geometrical form of an arithmetical formula around probabilities. This view of architecture is in sync with the architecture of the Renaissance and with the architecture of the ancient Greeks, but on different levels of abstraction. What we are expecting for twenty-first-century architecture is that the intelligible becomes primary: a modular, characterized, logical shape, the circle, orchestrating logical forms around an algebraic equality, a centered void in value. We are expecting a move toward falling in sync with the architecture of the Baroque and with the architecture of ancient Rome, but on different levels of abstraction. If we are right with our fugue, then the primacy of the sensible in the architecture of the twentieth century,
the geometrical shapes ( we know them as wavelets, and in applied form as parameters, referring to Alberti's geometry as the multiplicities of the code, or as grammars, including L-systems, GA, CA …, referring to Lagrange's arithmetics as the multiplicities of the code ), will shift toward a primacy of the intelligible in architecture, toward logical forms, toward a digital Baroque, in the twenty-first century. Self-Organizing Models, in implementations like Kohonen's maps, will be the active subjects, contracting natures, to explore the new world beyond time.
[Fold-out diagram: the stages and orchestrations of Pythagoras, Ptolemy, Alberti, Lagrange, and Markov, summarized in three registers: characters in space (Euclidean Geometry); spatial ciphers in time (Analytical Geometry); chronological signatures in value (Operational Geometry, Code).]
II A Nonanthropocentric Approach to Apperception Sha Xin Wei
Jean Petitot’s Fiber-Bundle Approach to Apperception 119 — The Case for Continua 127
Sha Xin Wei, PhD, is Professor and Director of the School of Arts, Media + Engineering in the Herberger Institute for Design and the Arts + Fulton Schools of Engineering. Dr. Sha is Director of the Synthesis Center for transversal art, philosophy, and technology at Arizona State University (http://synthesis.ame.asu.edu/), and is also a Fellow of the ASU-Santa Fe Institute Center for Biosocial Complex Systems. He is the author of Poiesis, Enchantment, and Topological Matter (Cambridge, MA: MIT Press, 2013).
Rather than ask what kinds of things there are, or what things are made of, one can ask how things come to be, and how things turn into the substrate from which they emerge. After giving a brief account of a topological approach to dynamics, I will interpret Jean Petitot’s construction of the apperception of a thing as the result of constituting processes. Along the way I will provide some topological concepts as handholds for Gilbert Simondon’s qualitative, processualist concepts of individuation. If one regards a cup from various perspectives, does one perceive planar shapes — now a disc, now a rectangle — depending on the orientation of the cup with respect to the eye? Does one perceive roundness? How does one
regard a relation one has to a friend, or an institution? And how does one “regard” a mathematical concept such as the space X of continuous mappings from a class of Lie groups into the family of matrix transformations? How does one regard X from an “algebraic” versus a “differential geometric” or even a “historical” point of view? Even if an individual can take various perspectival approaches to regarding some thing, there is an emergent phenomenon that seems a-perspectival. In this essay I discuss an approach to apperception that avoids problematically a priori models of a human perceiver.
Jean Petitot’s Fiber-Bundle Approach to Apperception
As a student of René Thom, Jean Petitot inherited the fearless Francophone use of mathematics as substantive rather than technical or merely illustrative material for philosophical investigation. On one hand, Petitot offers a version of ontogenesis that shares with Whitehead an appeal to the principle of least action,1 but also introduces a mathematically more sophisticated and arguably more open-ended formulation based on continua and continuous concepts. On the other hand, Petitot has inspired some contemporary French engineers in computer vision and machine learning to build robotic systems to experimentally test such approaches. Petitot uses continuity fundamentally to derive the possibility of quality as extension, a plausible way to understand qualities that ride above the unboundedly many particular predicates attached to particular objects. If we treat material experience as continuous, then it follows that the sorts of transformations we must consider acting upon experience are not combinatorial permutations (more generally, algebraic transformations) but continuous topological transformations. In the epoch of the digital a snap response would be to ask, why continuous? Isn’t the world of bits and quantum dots discrete? Henri Bergson already had a precise answer to that in Creative Evolution : “Apparent discontinuity of the psychical life is then due to our attention being fixed on it by a series of separate acts : actually there is only a gentle slope; but in following the broken line of our acts of attention, we think we perceive separate steps.”2 In other words, whereas lived experience in material dynamics is continuous, analytic thought is a discretizing act, an act that simultaneously carves out an isolate piece of the unfolding world and defines a particular isolate action. A naive study of movement and change would treat what electronic sensors measure in terms of instantaneous frames of numbers, “snapshots,” the machinic analogue of an analytic act. But instead one can calibrate durational processes against one another by considering not moments, snapshots, but rather processes, trajectories, or, as we shall see later, invariants of such durational processes. Setting aside such “instanting” acts, our basic, embodied experience of the world is continuous. As I walk toward you across the room, my feet do not pop discontinuously from position to position, and my experience of you does not flicker in and out discontinuously. As I lift my hand to pick up a cup or to wave hello, it does not jerk discontinuously from point to point and moment to moment, at least not in ordinary experience. This
1 See Sha Xin Wei, Poiesis, Enchantment, and Topological Matter (Cambridge, MA: MIT Press, 2013), 145–47.
2 Henri Bergson, Creative Evolution, trans. Arthur Mitchell (New York: Dover, 1910), 5.
continuity is a starting point for our continuous, dynamic approach to the formation of experience. In order to make sense of what follows, the reader will need to refer to some auxiliary texts. Short of recapitulating full courses of topology and Riemannian manifolds, I refer to a compact introduction that efficiently covers some of the basic notions that scaffold the current essay.3 A topological space is a set with a bare minimum of structure : a family of subsets, called “open,” that satisfy some basic relations. Colloquially the most important axioms are : (1) the intersection of any two “open” sets is “open”; and (2) the union of any number of “open” sets is “open.” I quote the adjective to emphasize that the selection of this family of “open” sets is usually far from unique. Indeed, to topologize a set by constructing a topology of open sets is an art, an act of mathematical imagination. A metric space is a topological space M with a metric d (_,_), a symmetric function on M × M that associates to any two elements x and y in M a nonnegative number d (x, y) that is zero only when x equals y, and satisfies the triangle inequality d (x, z) ≤ d (x, y) + d (y, z) for any three elements x, y, and z in M. Furthermore, if every point has a neighborhood and a diffeomorphism between that neighborhood and a disc about the origin in some vector space of a particular dimension n, then we call this M a manifold.4 If the set of diffeomorphisms satisfies some basic compatibility conditions that articulate how to change coordinates consistently as we pass through overlapping neighborhoods, then we say that the set M is a Riemannian manifold of dimension n. Given a manifold M and a vector space V (which we can think of as some n-dimensional space like R^n), a vector bundle is a topological manifold E with a projection map π : E → M such that for each x in M, the inverse image π⁻¹[x] has the structure of V. This inverse image is called a fiber. A motivating example to keep in mind is the set of lines (R^1) that thread through each x on a loop M, like a set of spikes all strung together into a bracelet. The extra compatibility condition is that locally — i.e., for every neighborhood U in M — the inverse image π⁻¹[U] is homeomorphic to U × V. If we replace V by more general algebraic structures, then we call ⟨E, M, π⟩ a fiber bundle.5 Petitot appeals to the concept of a fiber bundle to accommodate the continuous summing of perspectival “adumbrations” to a single quality. Regarding qualities, he perspicaciously observes : “A ‘simple’ (i.e. constant, uniform) color such as ‘red’ is a common quality shared by all red
3 Sha Xin Wei, “Topology and Morphogenesis,” Theory, Culture & Society 29 (July–September 2012): 220–46.
4 A diffeomorphism between two spaces, such as two manifolds, is a one-to-one (injective) and onto (surjective) mapping that is differentiable.
5 For a considerably more sophisticated use of fiber bundles, see the companion essays by Elias Zafiris and Michael Epperson.
things. But even if traditional, this naive extensional point of view is not convincing. First it takes for granted the ‘atomist’ nominalist axiom of the primacy of individuated things. Now, things result from highly complex noetico-noematic processes of constitution and are by no means primitive data. Second, it does not take into account the fact that, in a covering, quality can vary continuously.”6 Now, this way of thinking about qualities leads us to think about how they are perceived as a continuous function of varying perspective, taken in a general, not only oculocentric way. To my mind, one of the most profound contributions of Petitot’s essay is a clear model for the Husserlian process of summing evidence (aletheia) in adumbration to a quality that is necessarily a-perspectival. Turning this piece of fruit in my hand, I say that it is a banana, not that it is a banana from this or that point of view. I can even say that it is shaped like a banana, without silently saying fine print like “from this point of view.” Following this line of thought, when I take an orange in hand and say that it is round, I do not say that it is round from this or that perspective. Therefore my assessment of an orange’s roundness is not a geometric calculation from some optical point of view, any more than my assessment of a banana’s banana-ness. Of course, I’m choosing simple examples to be clear, but choosing more complex phenomena like societies or ensembles of technical objects and technical individuals (people) only accentuates the a-perspectivalist aspect of phenomena. The central ambition of this essay is to find a more edged and nuanced account of the emergence of a-perspectival phenomena. In Petitot’s framework of fiber bundles, observing is selecting a particular value — the observed quality or value — in the fiber over a particular neighborhood around a particular base point. In the topology of fiber bundles, neighborhood is a technical concept precisely articulating what we colloquially call a point of view. In this framework, a particular continuously coherent set of such observations can be well characterized as a continuous section of the bundle. Continuity and the axiom of compatibility of transition functions are essential to this model to arrive at the existence of locally constant sections. Resorting to such a formulation provides a way to conceive qualities in ways that do not privilege a particular sense modality, and yet retain the phenomenologically relevant qualities of embodied experience. We need a way to conceive not only of the simply sensed qualities of a piece of fruit but also of qualities
6 Jean Petitot, “Morphological Eidetics for Phenomenology of Perception,” in Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science, ed. Jean Petitot, Francisco J. Varela, Bernard Pachoud, and Jean-Michel Roy (Stanford, CA: Stanford University Press, 1999), 344; emphasis added.
like those of a person’s “character.” At a macroscopic scale, I could say that this person is quite imaginative from the professional point of view of a computer graphics engineer, or I could say that this same person is quite imaginative from the point of view of a radiologist, or a sibling, and so forth. As I change my point of view, I am essentially moving around from neighborhood to neighborhood in the base manifold of this fiber bundle. The key thing to notice here is that I am including myself, my own disposition in the account of observation, and therefore I cannot presume the putative existence of an a priori object whose predicates are determined prior to my encounter with that object. In fact there is a more primordial co-constitution of observer, observation, and observed to which I will turn at the end. Returning to the visual as an example, one of computer vision’s basic problems is to infer the objects in a scene based only on the colored pixels presented in a video image from one or more cameras trained on that scene. Right away we see an old dualism reinscribed in computer vision algorithms, a dualism that assumes that there is an a priori object “out there” waiting to be inferred from the camera data. However, the notorious correspondence problem in computer vision demonstrates very precisely a fundamental limit of this assumption : without extra information, a computer cannot tell whether a particular spot of red in one image corresponds to the same point of a putative object in a scene as does a spot of red in a second image of the same scene. Intricate (if sloppy) projective geometry yields some constraints (to projective lines) that reduce the correspondence problem to requiring that only a small number of pixels in the two bitmaps be identified as coming from a common ancestor “out there.” But there is no way to altogether eliminate the indeterminacy except by the intervention of an observer, or to adopt C. S. Peirce’s more precise notion, an interpretant. So what if we indeed assume no a priori object? What if we do not accept the naive form of Platonism, and consider the emergence of objects in continuous fields of camera data? Aside from the intrinsic interest of discovering mathematical alternatives to Platonist computer vision, one of the strongest philosophical motivations for studying models of machine vision is to be able to construct alternative working models of ontogenesis that do not reduce to predominant conceits about biology or ego — a speculative experimental phenomenology, decentered from historical anthropocentrism. In order to make that leap, we take a few steps back to work by some computer-vision and robotics researchers in France who were concerned with computational discovery — i.e., what goes under the label “machine learning,” the science of devising algorithms that can infer patterns of properties and relations in the world via mechanical sensor data. When I visited the Sony Computer Science Lab in Paris in 2004, I was most intrigued to discover a group of engineers who were applying
notions from differential geometry such as Riemannian manifolds and Lie groups to help a Sony (AIBO) robot dog learn what obstacles are in its field of view. (The robot dog was equipped with a pair of video cameras for eyes and a set of motors that allowed it to turn its head and to move around the floor.) The computational scientists, David Philipona (Sony Computer Science Laboratory, Paris), Kevin O’Regan (Laboratoire de Psychologie de la Perception, Université René Descartes, Paris V), Jean-Pierre Nadal (Laboratoire de Physique Statistique, École Normale Supérieure, Paris), and Olivier J.-M. D. Coenen (Sony Computer Science Laboratory, Paris), were trying to implement in software and robotic hardware a crude version of a most subtle idea : that objects of consciousness are invariants of embodied action and sensing in the world. Recall that an invariant of a rigid motion is something that stays the same under the action of that motion. For example, if I spin a ball around on its axis, only two points on its surface stay fixed under the set of all rotations about the fixed axis : the north and south poles defined by that axis. An invariant does not have to be an isolated point, however; if we consider arbitrary subsets of the sphere as putative objects, we could say that each latitude circle about a fixed axis is an invariant of the set of rotations about that axis. Much more generally, the zeros of a vector field can be considered the invariants of the flow induced by that vector field. It is crucial to understand that such vector fields and their substrate manifolds need not be at all visual or “geometric” in the crude sense that can be drawn on a computer screen. They may not even be finite-dimensional. A basic problem in machine perception is to interpret the data transmitted by the cameras to a software program inside the robot with minimal, ideally zero, a priori encoded assumptions about the robot’s body and the physical / spatial nature of its ambient. The program can emit signals to the robot’s motors that cause changes in its field of view, which are transmitted back from its cameras. However, the program should not have a “model” that pre-encodes a god’s-eye view that already distinguishes between internal and external structure; initially it does not even have information about which of its input or output signals relate to elements of its own body and which to elements in its ambient (such as lamps or objects illuminated by those lamps). How can such a robot program infer basic information about the objects in its ambient environment based only on its camera data? Philipona, O’Regan, Nadal, and Coenen start with a few observations : (1) determinism : repetition of motor signal P yields repetition of effect on the image K; (2) composability : if P causes a change in image from Image1 to Image2, and Q causes a change in image from Image2 to Image3, then the composition of P followed by Q yields a change in the image from Image1 to Image3 ; (3) path continuity : small changes in image can be induced by small changes in the motor signals; (4) invertibility : for every continuous
motor movement P that transforms the input camera image from Image1 to Image2, there is another continuous motor movement Q that returns the image back to Image1. Q is called the inverse of P. These seem like very minimal assumptions, and do not encode basic information such as which signals control a putative “body” and which a putative “ambient.” Certainly there is no encoding about even such spatially fundamental notions as the dimensionality of the putative ambient environment. But even at this level of generality, we already have a rich mathematical theory available because conditions (1) through (4) define what is called a continuous group, synonymous with a Lie group. Lifting our attention from manifolds to dynamics on manifolds, from things and stuff to the transformations of things, Lie groups provide a most powerfully expressive articulation of continuous action. One of the most powerful facts about a Lie group is that it is both an algebraic group — a technical concept in the mathematical theory of abstract algebra — and a manifold — a technical concept in Riemannian geometry (the morphological language of Einstein’s general relativity). It may seem quite hard to discover any structure in a continuous group at this level of generality, but Lie theory provides some deep theorems about the existence of substructures in general Lie groups, and the conditions under which a Lie group admits at least local representation by matrix groups, where “local” has a precise interpretation in terms of differential geometry. This in turn raises the possibility of a machine representation in software. Of course there is a significant amount of applied mathematics and engineering to be done, which is the content of these engineers’ adaptation of the Lie group and differentiable manifold formulation inspired by Petitot’s approach to perception and phenomenology. At one key step, to infer information about the ambient environment, Philipona et al.’s algorithm uses a dimension-reducing technique called principal component analysis from engineering, which invokes a version of the principle of least action. It is a moment at which the principle of least action, the fundamental ontogenetic principle for modern physics as well as for Whitehead and Leibniz, is encoded into an operational metaprocedure. Is this all? Is this enough? It seems that Deleuze and Guattari would respond in the negative. But relying on Whiteheadian concrescence also relies inevitably on the principle of least action. So we have arrived at a conundrum, but a fertile one, I believe. To develop this important point would take us too far from the theme of this essay, so I refer to the discussion of the principle of least action and alternative process dynamics in the ontogenesis chapter of my book Poiesis, Enchantment, and Topological Matter. No matter what the dynamic, is there a rigorous yet anexact way to understand the emergence of objects and subjects? What are the philosophical implications of proposing to regard objects as invariants of continuous action on a continuous manifold? Let me underline that in
no way do I suggest that we reduce experience to a mathematical model. Nor do I agree with Petitot’s project to naturalize Husserl by conflating mathematics with a computational model. Petitot outlines the essential features of the program to find a naturalized phenomenology : “The key point is the mathematical schematization of phenomenological descriptive eidetics, that is, the elaboration of a mathematical descriptive eidetics (which Husserl thought impossible in principle). For us, to naturalize an eidetics consists in implementing its mathematical schematization in natural substrates (e.g., neural nets).”7 To parenthetically slip in neural nets plays loosely with a very large reduction in kind from natural substrates of ordinary matter to computational abstraction. My hesitation dwells more on the reduction than the abstraction. This already shakes the ground on which a lot of computational modeling rests, be it good old-fashioned neural networks, deep belief networks, or self-optimizing maps. Such numerical, computational techniques necessarily exclude and elide all but a very small sliver of material dynamic, experience, and history. Therefore all these techniques necessarily reduce the lifeworld into a few parameters. As a parenthesis, consider how many dimensions are needed to model and modulate simply the physical matter activating a given situation.8 One could defend such a practice by claiming that these techniques extract certain essential features of the phenomena in question, and so give some insight into the collective lived experience of those phenomena. But usually, an argument for why we should grant some sort of registration between the phenomenon and the trace is not made. Moreover, there is a large performative, experiential, and existential gulf between recognizing and implementing a mathematical schematization in a natural substrate. To recognize is to perceive and understand
7 Petitot, “Morphological Eidetics for Phenomenology of Perception,” 330–31.
8 Consider a set of particles (say bits of cigarette smoke) in a room. Each particle p_i (where i is its index) comes with its mass m_i (a scalar), position x_i, and velocity v_i. If the particles are in R^3, then the position and velocity are vectors in R^3, so this particle’s kinematic data are given by a point in R^7. Let’s take the relatively simple Newtonian case where the mass is constant, i.e., mass does not change as a function of time, position, or velocity. Then the dynamics of the entire ensemble of N particles is defined by 6N parameters: at any moment of time, a possible dynamic configuration (the potentially variable positions and velocities) is a set of N positions, with N associated velocities, which can be represented by a vector of dimension 6N — in other words, a point in R^6N. This 6N-dimensional space of possible positions and velocities is called the configuration space of the system. If we were to consider only the oxygen in a 60 kg human body, there would be about 23,486,352,981,000,000,000,000,000,000 atoms of oxygen. Just to model the possible set of configurations of where the atoms are and how they are moving in a 3-D volume of space would require a configuration space whose dimension is six times the number of oxygen atoms — in other words, a vector space V = R^140,918,117,886,000,000,000,000,000,000. Now a single point in this space represents the kinetic state of the entire ensemble of 23,486,352,981,000,000,000,000,000,000 oxygen atoms, and as the body evolves in time, the state of the entire ensemble of oxygen atoms traces a path γ of configurations in V, where each point on that path is a vector of dimension 140,918,117,886,000,000,000,000,000,000.
a pattern without implying that this pattern exhausts or completely fills out the entity or occasion in which it is perceived. The pattern may not be physically present, even if it arises from physical morphology — think of shadows. However, Petitot calls for creating an operational (what Guattari might call a machinic) articulation in matter of the mathematical schematization. Perhaps the closest classical analogue to this would be the realization of constructive geometry by carrying out geometric procedures in physical material, such as cutting out an elliptical boundary by swinging a loop of string around two fixed pegs. This cuts across several ontological strata : consciousness of pattern, mathematical pattern, physical material, and corporeal movement and gesture. More fundamentally, Petitot’s project, stated so baldly, seems implausibly reductionist unless accompanied by an argument for the materiality of mathematical entities, a radical but arguable proposition that would take us very far afield.9 In short, my quibble with Petitot’s formulation of his project lies not with the deployment of mathematical concepts, but with the parenthetical “e.g., neural nets.” Neural nets are software models, implementing the optimization of a linked set of differential equations derived by taking the gradient of a system of sigmoidal (nonlinear) functions. This is a reductive process, in the sense that it does not register history, affect, or more classically culture and eros, so much as elide them. Abstraction, squinting our eyes to reduce the glare from the “blooming, buzzing” welter of experience, is our principal technique for trying to think with mortal limits about phenomena that are too complex to be contained by our thought. There is no dishonor or danger in abstraction, until we try to use it as a mode of description under the delusion that it serves as a complete and accurate representation of what is going on. Of course we know this, and yet representations have this power pulling the wielder of those representations, whether neural networks and Self-Organizing Maps (SOM), or tables and networks, toward metaphysical hubris. Rather than claim that our representations describe reality, we can deploy instead these mathematical concepts like others, as modes of material articulation. This proposition is both easier and harder to entertain when mathematical concepts become implemented in software code that manipulates physical matter via computer-controlled machines. Examples of such algorithmically animated machines abound, ranging from LED screens and loudspeakers to rapid prototyping machines and pacemakers. But there is a pernicious reduction that a literary or image analysis persistently and typically makes when it restricts attention to traces to the exclusion of the material, technical processes by which the traces are made. This 9
9 See, for example, Alain Badiou’s essay on the ontological status of mathematics in Infinite Thought: Truth and the Return to Philosophy, trans. Oliver Feltham and Justin Clemens (New York: Continuum, 2003).
puts the trace in place of, or conflates the trace with, the machine and the making of the trace. For short, let’s call this the fallacy of treating techne solely as technology of representation, the fallacy of representationalism. Now, shifting from ink to plastic forms does not escape this fallacy of representationalism or semiologism. Printing some plastic shape that corresponds to whatever data is affiliated with a hurricane makes that prop no less an abstract and disembodied representation of the hurricane than a matrix of numbers on a screen. Just because you can pick up the representing prop does not make it necessarily any less abstracting than a set of equations. Indeed, a set of equations can be far more embodying and conducive to enactive experience than any physical object when the set animates a media environment that changes behavior richly in concert with the activity of its inhabitants. Looking at a data visualization is a different order of experience from pacing at walking speed through a room while feeling the acoustics of a gale rising as if you are stirring up the atmosphere along a mile-wide swath at hundreds of miles an hour. In this case, we do not look at a model so much as wear the algorithms and integrate them into our bodies’ modulation of matter. Furnished with these examples, we return to Petitot’s fiber-bundle approach to the apperception of qualities.
The Case for Continua
Petitot’s use of fiber bundles to articulate qualities as extensions, following Husserl, seems fertile but needs a sustained and well-informed effort to animate it by a more dynamic, process-oriented story. The key here is to start with Petitot’s and Husserl’s speculative proposition that a predicate’s extension depends crucially on continuous variation, not on sequences of discrete points. But to be clear, I would set aside Petitot’s concern with epistemology, and the checking of the theory of experience against a computational model. Exploring the implications of a topological approach to a plenist, unbifurcated ontology, I am dealing with the problem of how things emerge and dissolve with respect to their background. I use “thing” mindful of multiple notions. The first is Bruno Latour’s and science studies’ notion of things, such as controversies that have left the lab and entered public discourse. This is not unrelated to Heidegger’s “thing” performing, gathering the fourfold : earth and sky, divinities and mortals. A third sense is computer science / machine perception’s notion of an object that can be “inferred” from sensor data. The most evocative sense of “thing” for my present purpose is Alain Connes’s proposal that objects of consciousness are invariants in a mathematically interpretable sense. Connes’s proposal returns us to Petitot’s phenomenological project, which I interpret as a concretization of Connes’s speculation and René Thom’s large project on the topological dynamic study of ontogenesis. And to more fully depart
from anthropocentric metaphysics and phenomenology, there is a cosmological step we can take with Whitehead’s sense of thing, which he calls “entity” or “actual occasion” in Process and Reality. We start that journey by a more careful rereading of Heidegger’s example of the silver sacrificial chalice in the essay “The Question Concerning Technology.” For Heidegger the essence of the chalice is not merely its shape or the precious metal of which it’s made but its being a prop in a performance that gathers the divine and mundane worlds together.10 A cosmopolitically oriented reader might carp that this characterization of the essence of a chalice by its use in a social ritual like a sacrifice to the gods deeply embeds metaphysics as a purely anthropocentric phenomenon. But another reading would be to recognize that this does the useful job of dislodging us from presuming that the meaning of an entity — a word, a thing — can be read off solely from predicates local to that object. Let’s call that the atomicity assumption. An adaptive method like neural networks that makes such an atomicity assumption does not escape this problem merely by virtue of being an evolutionary process. Even more radically, one could read this example as a step toward exploding the phenomenon of the chalice into a cosmological setting via Simondon’s thicker account of material processes together with technical process. This thicker account of being as becoming that foregrounds the physics of matter equally with social ritual is not new. Indeed, conventionally metaphysical readings of Heidegger’s account of the essence of the man-made chalice may overlook that Heidegger gave equal weight to physis as poiesis. Silver is that out of which the silver chalice is made. As this matter (hyle), it is co-responsible for the chalice. The chalice is indebted to, i.e., owes thanks to, the silver for that out of which it consists. But the sacrificial vessel is indebted not only to the silver. As a chalice, that which is indebted to the silver appears in the aspect of a chalice and not in that of a brooch or a ring. Thus the sacrificial vessel is at the same time indebted to the aspect (eidos) of chaliceness. Both the silver into which the aspect is admitted as chalice and the aspect in which the silver appears are in their respective ways co-responsible for the sacrificial vessel. […]11 Not only handcraft manufacture, not only artistic and poetical bringing into appearance and concrete imagery, is a bringing-forth, poiesis. Physis also, the arising of something from out of itself, is a bringing-forth, poiesis. Physis is indeed poiesis in the highest sense. For what presences by means of physis has the bursting open belonging to bringing-forth, e.g., the bursting of a blossom into bloom, in itself (en heautoi). In contrast, what is
10 Martin Heidegger, The Question Concerning Technology, and Other Essays, trans. William Lovitt (New York: Garland Publishing, 1977), 8.
11 Ibid., 7.
brought forth by the artisan or the artist, e.g. the silver chalice, has the bursting open belonging to bringing-forth not in itself, but in another (en alloi), in the craftsman or artist.12 Simondon gives a characteristically more fleshed-out account of technical mediation, one that in turn makes clearer the intricate processes by which technical knowledge coevolves with technical individuals and machines in geological / social / historical / symbolic milieu. However, in the technical operation which gives rise to an object having form and matter, like a clay brick, the real dynamism of the operation is extremely far from being able to be represented by the matter-form couple. The form and the matter of the hylemorphic model are an abstract form and an abstract matter. The definite being that one can show, this brick drying on this board, does not result from the union of an unspecified matter and an arbitrary form. If one takes fine sand, that is wet and then one puts it in a brick mould : with the release from the mould, one will obtain a sand heap and not a brick. If one takes clay and then passes it to the rolling mill or the spinneret : one will obtain neither plate nor wire, but an accumulation of broken layers and course cylindrical segments. The clay, conceived as supporting an indefinite plasticity, is the abstract matter. The right-angled parallelepiped, conceived as a brick form, is an abstract form. The concrete brick does not result from the union of the clay’s plasticity and the parallelepiped. So that there can be a parallelepipedic brick, a really existing individual, it is necessary that an effective technical operation institutes a mediation between a given clay mass and this notion of the parallelepiped. However, the technical operation of molding does not itself suffice. Moreover, it does not institute a direct mediation between a given mass of clay and the abstract form of the parallelepiped; the mediation is prepared by two chains of preliminary operations which make matter and form converge toward a common operation. To give a form to clay is not to impose the parallelepiped form on rough clay : it is to pack prepared clay in a manufactured mold. If one divides the two ends of the technological chains, the parallelepiped and the clay in the quarry, one tests the impression of realizing, in the technical operation, an encounter between two realities of heterogeneous domains, and institutes a mediation, by communication, between an inter-elementary order, macro-physical, larger than the individual, and an intra-elementary order, micro-physical, smaller than the individual.13
12 Ibid., 11.
13 Gilbert Simondon, L’individu et sa genèse physico-biologique (Paris: Presses Universitaires de France, 1964); trans. Taylor Adkins, October 3, 2007, https://fractalontology.wordpress.com/2007/10/03/translation-simondon-and-the-physico-biological-genesis-of-the-individual/.
In short, the way that an object, whether a sacrificial silver chalice or a humble clay brick, comes to be is deeply in-formed, to adopt another Simondonian concept, by a complex of social, symbolic, and geological dynamics. Information, in his formulation, is that which conditions, registers, and calibrates the material-symbolic processes in which matter and thought co-articulate technical individuals and technical or aesthetic objects. So it is a multiplicative notion, conceptually quite the opposite to Claude Shannon’s formal definition in terms of bit entropy, whose innovation lies precisely in its decoupling of meaning from syntax. Simondon’s generous notion reverses the reduction of life to bits. So what, in sum, have we encountered from this triple-stranded passage through computer vision, Petitot’s recasting of Husserlian phenomenology, and Simondonian individuation? We have the possibility of a radically decentered, deanthropomorphized phenomenology that is a study of experience from more than the human perspective. This departure from Heidegger’s phenomenology is mindful of Deleuze’s enduring reservations concerning anthropocentric metaphysics but retains the larger concern with the condensation of subjectivity in the exfoliation of the experiential world. With this “fiber-bundle” articulation we have a non-ego-based, number-free, and metric-free account of experience that respects evidence of continuous lived experience but does not reduce to sense perception or ego-centered experience. And we have the essential feature of continuity both as a quality of lived experience and as a mode of description of such experience. Given that, it is hard to exaggerate the radical significance of fiber bundles as a continuous mode of articulation, reducible neither to alphabetic language nor to number, and free of the a priori conceit of a Cartesian perceiving agent. We have here the seed of an approach to poiesis and expressive experience that may avoid recourse to stochastic methods, statistics, and informatic sweepings of ignorance under the rug, and is “nonclassical” in the senses of both quantum mechanics and measure theory. Quantum mechanics precisely articulates the profound observational inextricability of the states of the observer, the observed, and the apparatus of observation. Any “god’s-eye view” of a phenomenon, treated as a representation that precedes or recapitulates a situation, subscribes to a classical notion of event. I suggest instead that we regard all semiotic technologies, whether a cartographer’s map or a game designer’s storyboard, as instruments for the enactive co-construction of events. With this understanding, I propose a second step, which is to retell this account without a classical commitment to the possibility of description. That is, to say that an object, situation, or process has a property X, or is in a relation Φ with respect to another object, situation, or process, is to make an assumption of measurement. This is not necessary for the emergence of value in concerted activity, but to develop that thought fully will be work for another day.
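For reference, the Shannon notion contrasted above fits in one line. With p_i the probability of the i-th symbol of a source, its entropy is

H = − Σ_i p_i log₂ p_i (bits),

a function of the probabilities alone: the meanings of the symbols appear nowhere on the right-hand side, which is exactly the decoupling of meaning from syntax that Simondon's notion reverses.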
III Pre-Specific Modeling: Computational Machines in a Coexistence with Concrete Universals and Data Streams Vahid Moosavi
I How to Approach the Notion of Scientific Modeling 134 — II Formal Definitions and Categories of Scientific Modeling 136 — III Idealization in Scientific Modeling 137 — IV Universals and Modeling 141 — V Specific Modeling: Models Based on Abstract Universals 144 · V.I Limits of Modeling Based on Abstract Universals 146 · V.I.I Gödel’s Incompleteness Theorem and Arbitrariness of Models Based on Abstract Universals 146 · V.I.II Curse of Dimensionality in Complex Systems 147 · V.I.III From Particular to Generic and the Concept of “Error” 147 — VI Pre-Specific Modeling: Models Based on Concrete Universals 148 · VI.I Dedekind Cut: When a Particular Object Is Represented by the Negation of Its Complement 149 · VI.II From Generic to Particular: Object-Dependent Representation 150 — VII Massive Unstructured Data Streams: An Inversion in the Notion of Measurements and Data Processing 152 — VIII Computational Methods Supporting Pre-Specific Modeling 156 · VIII.I Markov Chains 156 · VIII.II Self-Organizing Map 161 · VIII.II.I No More External Dictionary and No More Generic Object 162 · VIII.II.II Computing with Indexes Beyond Ideal Curves 165
Vahid Moosavi is a PhD candidate at the Chair for Computer Aided Architectural Design (CAAD), Swiss Federal Institute of Technology (ETH Zurich), and has been working at the Future Cities Laboratory at the Singapore-ETH Centre (SEC) in Singapore since 2012. In 2009 he received his master of science degree in Industrial and Systems Engineering from Amirkabir University of Technology (Tehran Polytechnic). He has worked as a consultant in the field of enterprise planning and transformation for several private and public organizations, such as the Ministry of Health and the Ministry of Energy in Iran, the Industrial Management Institute of Iran (IMI), and several manufacturing companies. His scientific research interest has always centered on abstract and mathematical modeling and design approaches. In his PhD research, he develops a generic computational modeling framework for urban and architectural design problems, based on Self-Organizing Maps (SOM) and Markov Models as core technologies in coexistence with Urban Data Streams. Some of the research applications he is now focused on are as follows: modeling urban traffic dynamics using GPS data streams, data-driven urban air pollution modeling, modeling the dynamics of real estate markets, spatial function approximation, and multivariate thematic mapping. www.vahidmoosavi.com.
This article criticizes a majority of traditional approaches in scientific modeling as following an idealist point of view that results from the set-theoretical understanding of models based on the assumption of “abstract universals.” We discuss the success of set-theoretical models as well as some limits in principle to applying them in dealing with complex systems. As an alternative to the assumption of abstract universals, we propose a conceptual modeling framework based on “concrete universals” that can be interpreted as a category-theoretical approach to modeling. We call this modeling framework prespecific modeling. We show how a certain group of mathematical and computational methods that work with data streams are able to
operationalize the concept of prespecific modeling with great benefit and promise.
I How to Approach the Notion of Scientific Modeling
Modeling paradigms, as a necessary element of any scientific investigation, act like pairs of glasses that shape the way in which we encode (conceive of) the real world. Therefore, any kind of intervention in real-world phenomena is affected both by the chosen modeling paradigm and by the real phenomena under investigation. In the domain of urbanism and urban design, cities, as complex and open environments with dynamic and multidimensional aspects, are challenging cases for modeling scholars, as there are many distinct urban phenomena. Figure 1 shows a list of different functional aspects of urban phenomena in an indexical manner.
Fig. 1. Different functional aspects of urban phenomena.
In addition to the diversity of urban problems, there is a huge variety of competing paradigms for analyzing cities: the city as an ecological phenomenon that is optimally adjusted to an environment (economic, political, cultural) assumed to be “natural” for it; the city as a thermodynamic system that needs to be balanced and that can be controlled; the city as a grammatical text with its own “syntactic laws”; the city as a biological organism following fractal growth patterns. Further, historical perspectives provide additional city models such as the City of Faith, the City as Machine, or the Organic City,1 and, especially since the advent of computers in the second half of the twentieth century, the city as information.2
1 See Kevin Lynch, Good City Form (Cambridge, MA: MIT Press, 1984).
2 See Manuel Castells, The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process (Oxford: Blackwell, 1989), 15.
Compared to classical science and its engineering disciplines such as physics, chemistry, and mechanics, urban design, planning, and modeling is a rather young discipline; yet when one does a quick search of the keywords central to this field, one is quickly confused by the number of approaches and the variety of practical problems within the reaches of the discipline. For example, A. G. Wilson’s five-volume text on urban modeling is over 2,600 pages long.3 A broad range of case-based canonization has thus emerged, and applied techniques have been developed for specific urban functions such as urban land use, urban transportation, urban economy, urban social patterns, and so on. As a result, the lack of a more abstract categorization of applied techniques makes comparison between them very hard. Beginning in the mid-twentieth century, general systems theory emerged as one of the main theories working toward a unification of different disciplinary modeling practices.4 In principle, the underlying idea of systems theory is the promotion of a unified view of modernist-reductionist science, which had diversified around a variety of application and functional domains. Although interdisciplinary collaborations such as making analogies across disciplines (e.g., hydraulic theories to describe biological systems) were not new, systems theory’s formalization, as an orthogonal view to classically diversified scientific and practical problems, reached a point at which, according to George Klir, systemic tasks such as modeling, optimization, and simulation emerged as distinct scientific disciplines.5 However, taking systems theory as a body of knowledge (rather than a specific and singular theory), one could expect a gradual divergence of its methods, starting from its unified principles. The advent of computational methods with Alan Turing in the 1940s and later the democratization of computational methods in the 1980s created a new, diversified landscape of system modeling approaches. As a result, after fifty years we encounter a competitive ecosystem of different modeling species with different capacities and trade-offs. Figure 2 shows a list of different modeling methodologies in an indexical manner.
Fig. 2. Competitive ecosystem of different modeling methodologies.
3 A. G. Wilson, Urban Modelling: Critical Concepts in Urban Studies, vols. 1–5 (London: Routledge, 2012).
4 See Ludwig von Bertalanffy, General Systems Theory (New York: George Braziller, 1993).
5 See George J. Klir, Architecture of Systems Complexity (New York: Saunders, 1985).
Therefore, the first motivation of the present essay is to find a unifying (abstract) perspective for an assessment of different (interdisciplinary) modeling approaches, while keeping the diversities. Toward this aim, we need to investigate the mathematical and philosophical grounds of scientific modeling.
II Formal Definitions and Categories of Scientific Modeling
Because there is such a wide variety of modeling approaches in different scientific domains, formalizing and theorizing the practice of scientific modeling is an active research area in the philosophy of science. For example, according to Roman Frigg and Stephan Hartmann, there exist the following types of models: “Probing models, phenomenological models, computational models, developmental models, explanatory models, impoverished models, testing models, idealized models, theoretical models, scale models, heuristic models, caricature models, didactic models, fantasy models, toy models, imaginary models, mathematical models, substitute models, iconic models, formal models, analogue models and instrumental models are but some of the notions that are used to categorize models.”6 Nevertheless, these categories are still not abstract enough; rather, they are labels for different (not necessarily exclusive) modeling approaches. To better understand models, one can look at the interpretation of their roles and functions, and distinguish the presets on which the different points of view are based. One of the main issues by which models have been extensively discussed is the relation between a model and the way it represents the real phenomena under study (the target system). According to Frigg and Hartmann, from a representational point of view there are “models of phenomena” and “models of data,”7 and within these categories there are subcategories such as “scale models,”8 “idealized models,”9 and “analogical models,”10 like the hydraulic model of an economic system. Analogical models are further divided into material analogy, where there is a direct similarity between the properties (or relations between properties) of two phenomena, and formal analogy, where two systems are based similarly on a formalization such as having the same
6 Roman Frigg and Stephan Hartmann, “Models in Science,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Fall 2012), http://plato.stanford.edu/archives/fall2012/entries/models-science/.
7 Ibid.
8 Max Black, Models and Metaphors: Studies in Language and Philosophy (Ithaca, NY: Cornell University Press, 1962).
9 Ernan McMullin, “Galilean Idealization,” Studies in History and Philosophy of Science Part A 16, no. 3 (1985): 247–73.
10 Mary B. Hesse, Models and Analogies in Science (London: Sheed and Ward, 1963).
Further, one can refer to phenomenological models, which focus on the behavior of the particular phenomena under investigation rather than on underlying causes and mechanisms.12 Further still, there are models of theories, like the dynamic model of the pendulum, which is based on the laws of motion. Models can also be divided into ontological classes, such as physical objects, fictional objects, set-theoretic structures, descriptions, and equations. However, these categories of models and modeling approaches overlap, and they are descriptive, neutral classifications rather than critical ones. They do not give us a measure or gauge with which to compare different modeling approaches in terms of their capacities and their limits in dealing with different levels of complexity in real-world problems. In this essay I am looking for a way to situate modeling approaches at different levels of complexity in order to examine their theoretical capacities. Among the abovementioned categories, the crucial, and commonly accepted, property shared by the majority of traditional scientific modeling approaches is that they are all based on some sort of idealization. In what follows, I explain different aspects of idealization in scientific modeling. The issue of idealization directs us to the problem of universals, an old philosophical issue.13
III Idealization in Scientific Modeling
In the context of the philosophy of science, idealization in modeling has been discussed extensively.14 In principle, idealization is considered to be equivalent to an intended (over-)simplification in the representation of the target system. Although there are different ways of explaining or defining the notion of idealization, Michael Weisberg distinguishes three kinds of idealization to which I refer in this work: minimalist idealization, Galilean idealization, and multiple-model idealization.15 Minimalist idealization is the practice of building models of real-world phenomena by focusing only on the main causal factors. Therefore, as the name suggests, minimalist models usually result in very simple elements that are informative enough for further decision making. For example, the aim in the domain of network analytics is to explain complex behaviors of real phenomena by means of network properties such as centrality measures, integration, closeness, betweenness, etc.16
11 Ibid.
12 Ernan McMullin, "What Do Physical Models Tell Us?," Studies in Logic and the Foundations of Mathematics 52 (1968): 385–96.
13 See Gonzalo Rodriguez-Pereyra, "What Is the Problem of Universals?," Mind 109, no. 434 (2000): 255–73.
14 See McMullin, "Galilean Idealization"; Leszek Nowak, "Laws of Science, Theories, Measurement," Philosophy of Science 34 (1972): 533–48; Michael Weisberg, "Three Kinds of Idealization," Journal of Philosophy (2007): 639–59.
15 Weisberg, "Three Kinds of Idealization."
As an example, in urban theory, the final goal in the city science approach17 or in urban scaling laws18 is to find a few highly informative factors of cities, such as city size or population, in order to explain other aspects of cities, such as energy consumption, through a linear equation. Even though cities are obviously complex phenomena with many observable aspects and many exceptions, minimalist models attract attention precisely because they identify and state very general rules.
Fig. 3. Network analytics: Structure-oriented modeling (minimalist idealization), Central Place Theory (left) and Space Syntax (right).
City theories that seek to create archetypal city models use, in a way, minimalist idealized models. For example, Kevin Lynch's City of Faith, City as Machine, and City as Organism, or Cedric Price's egg analogies of the city (the city as boiled egg, fried egg, or scrambled egg), are characterized by a few urban elements that are informative enough to describe each model and to discriminate it from the others. David Grahame Shane shows how the three abovementioned models can be identified by linear combinations of three recombinant elements, called Enclave, Armature, and Heterotopia.19
The second category, Galilean idealization, is the most pragmatic type of idealization: it occurs when the modeler intentionally simplifies the conditions of a complicated situation for the sake of computational tractability and simplicity. For example, it is common in economic models to assume that agents are rational maximizers, in transportation models to assume that commuters take the shortest path, or in motion models to assume that particles move without friction. The basic idea of Galilean idealization is that, as the modeling environment gradually becomes better understood, it is possible to de-idealize and build more comprehensive models on top of previous ones.
16 See Mark Newman, Networks: An Introduction (Oxford: Oxford University Press, 2010).
17 See Luís M. A. Bettencourt et al., "Growth, Innovation, Scaling, and the Pace of Life in Cities," Proceedings of the National Academy of Sciences 104, no. 17 (2007): 7301–06.
18 See Michael Batty, "The Size, Scale, and Shape of Cities," Science 319, no. 5864 (February 8, 2008): 769–71.
19 David Grahame Shane, Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design, and City Theory (Chichester: Wiley, 2005).
Therefore, the majority of engineering approximation methods, such as systems of differential equations, computational fluid dynamics, or biological reaction networks, belong to this category of idealized models. Further, figure 4 shows how the idealization process applied to a complex phenomenon (here, agent-based modeling of the land-use and transportation dynamics of a city) leads to a parametric, feature-based representation of the real phenomena. This layering and parameterization gives the modeler the option of adjusting the resolution (the level of detail) of the model to the needs and purposes of the modeling process and to its constraints and limitations, including the availability of data, prior knowledge, and time and scale resolutions.
Fig. 4. Parametricism: Idealization of the interactions between different agencies through layering and parameterization of the real phenomena.
The third category of idealization, multiple-model idealization, results in models that consist of several (not necessarily compatible) models, or of several models with different assumptions and different properties. This type of idealization is in fact a combination of the other two, and it can be very useful when understanding the final output (the behavior) of the model is more important than knowing the underlying mechanisms of the target phenomena. For example, in weather forecasting, ensemble models, which include several predictors with different parameters or even different structures, are used to predict weather conditions.20
Further, from a systemic and functional point of view, there are many models in which idealization takes place in one or more main aspects of the real phenomena. To name just a few: static or dynamic models, structure-oriented idealization (as in network models), process-oriented idealization (such as system dynamics21 or systems of differential equations), rule-based idealization (such as cellular automata22 or fractals23), and decentralized interactions (such as agent-based models) all fall into the abovementioned categories of idealization.
20 See Tilmann Gneiting and Adrian E. Raftery, "Weather Forecasting with Ensemble Methods," Science 310, no. 5746 (2005): 248–49.
Fig. 5. System dynamics: process-oriented idealization.
However, considering the size and variety of parameters and aspects of the target phenomena, idealized models create a dichotomy: at one extreme the models are general, simple, and tractable; at the other, they become complicated, specific, and high-resolution. In fact, multiple-model idealization becomes necessary whenever the parameters and aspects of the target system selected in an individual model (out of Galilean idealization, for example) are not sufficient; yet adding more aspects to an individual model either makes it more complicated or results in model inconsistency. This seems to be a never-ending debate in many scientific fields, including biology, ecology, economics, and cognitive and social science, where one group believes in the explanatory power of models and the other in model accuracy and the level of detail relative to the real phenomena.24
21 John Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World (New York: McGraw-Hill, 2000).
22 Waldo Tobler, "Cellular Geography," in Philosophy in Geography, ed. Stephen Gale and Gunnar Olsson (Boston: D. Reidel, 1979), 379–86.
23 Benoit Mandelbrot, The Fractal Geometry of Nature (New York: W. H. Freeman & Co., 1982).
24 See Matthew R. Evans et al., "Do Simple Models Lead to Generality in Ecology?," Trends in Ecology & Evolution 28, no. 10 (2013): 578–83; Matthew R. Evans et al., "Data Availability and Model Complexity, Generality, and Utility: A Reply to Lonergan," Trends in Ecology & Evolution 29, no. 6 (2014); Mike Lonergan, "Data Availability Constrains Model Complexity, Generality, and Utility: A Response to Evans et al.," Trends in Ecology & Evolution 29, no. 6 (2014).
Although idealized models have been applied successfully to many classical modeling problems, this type of debate cannot be fruitful for complex systems as long as there is no abstraction from the current paradigm of scientific modeling (i.e., idealization). An analogy with the onionlike structure of number systems explains what I mean by an abstraction in the concept of modeling. With natural numbers (or, more generally, integers) one can never grasp the richness of proportions and fractions in the rational numbers (e.g., 2.6, which is neither 2 nor 3 from a natural-number perspective), whereas the introduction of the concept of rational numbers as ratios of two integers (e.g., 26/10) solved this problem. By choosing 1 as the denominator, one can show that all the integers are rational numbers, while the rational numbers give us new capacities beyond the integers. Similarly, if we take an idealized model to be one arbitrary representation of real phenomena, then even by adding several such models together (which is the case in multiple-model idealization) we still cannot grasp the whole complexity. Therefore, my hypothesis is that an abstraction of the concept of modeling is needed in order to conceptually encapsulate all the potential arbitrary views in an implicit way. However, I do not claim to introduce such a new concept here; rather, in this work I am trying to identify and uncover aspects of a potential body of thinking in scientific modeling. In order to highlight this conceptual abstraction from the current idealization paradigm, we first need to explain the notion of universals, including abstract and concrete universals, followed by my interpretation of these concepts in relation to scientific modeling. In the next section, after presenting the connections between the notions of idealization and abstract universals, I will formally describe the concepts of abstract and concrete universals, which can be interpreted as the set-theoretical and category-theoretical definitions of these two notions.25 Further, I will show how the concept of concrete universals from category theory can open up a new level of the modeling paradigm.
IV Universals and Modeling
In the majority of texts written about idealization in the domain of scientific modeling, the notion of idealization is equated with simplification: the elimination of empirical details and deviations from the general theory on which the final model is based. At the same time, the word "ideal" literally connotes "those perfections that cannot be fully realized." For example, circle-ness as a property is an ideal that cannot be fully realized, and any empirical circular shape has, to a degree, the circle-ness property.
25 David P. Ellerman, "Category Theory and Concrete Universals," Erkenntnis 28, no. 3 (1988): 409–29.
Fig. 6. Enso (circle of Zen): Toward the ideal circle.
Therefore, the idealization process in scientific modeling can be explained as a form of purification of empirical observations toward a set of given (assumed) ideal properties. In classical statistical data analysis, for example, it is commonly assumed that the collected empirical data follow a normal distribution. One can then treat the empirical data through the normal distribution function and work with it in a "normalized" manner, profiting from the machinery of this ideal mathematical representation. The applications of idealization in mathematical approaches such as linear algebra are enormous. For example, a Fourier transformation (figure 7) can be seen as a form of idealization by which any observed time-varying signal can be reconstructed (approximately) from a set of ideal basis vectors: pure sinusoidal waves with different frequencies and phases. From this perspective, any waveform phenomenon is a linear combination of a set of ideal prototypes.
Fig. 7. Fourier decomposition: Any observed form is a linear combination of ideal cyclic forms.
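To make this reading of the Fourier transform concrete, the following minimal sketch (in Python, with an invented square-wave signal) reconstructs an observed form from a handful of ideal sinusoidal prototypes; the residual is precisely what the chosen ideals cannot express.

```python
import numpy as np

# A square wave sampled at 256 points: a concrete "observed form"
# that coincides with none of the ideal sinusoids exactly.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sign(np.sin(2 * np.pi * 3 * t))

# Project the observation onto the ideal prototypes (pure frequencies).
coeffs = np.fft.rfft(signal)

# Keep only the 5 strongest ideal components and reconstruct.
k = 5
weaker = np.argsort(np.abs(coeffs))[:-k]   # indices of all but the k largest
truncated = coeffs.copy()
truncated[weaker] = 0.0
approximation = np.fft.irfft(truncated, n=len(t))

# The residual is the part of the concrete form the ideals cannot express.
print("max reconstruction error:", float(np.max(np.abs(signal - approximation))))
```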
However, these ideal forms (a wave with a certain frequency, in the case of Fourier analysis), taken as the set of aspects (properties) of real phenomena, are abstract: there is no concrete (empirical) instance that fully matches one or several of these a priori, ideal properties. From this point of view, idealized models are models based on the notion of abstract universals.
The notions of "universals" and "property" are old topics in philosophy that can be approached in different ways, namely through realism, idealism, or nominalism.26 In this work, however, I focus on the distinction between concrete and abstract universals in relation to the paradigms of scientific modeling. According to David Ellerman, "In Plato's Theory of Ideas or Forms (ειδη), a property F has an entity associated with it, the universal uF, which uniquely represents the property. Therefore, an object X has the property F, i.e., F(X), if and only if it participates in the universal uF to a degree (μ)."27 For example, "whiteness" is a universal, and the set of white objects that participate in the whiteness property (i.e., with different degrees of whiteness) is represented by this property. Further, "Given a relation μ, an entity uF is said to be a universal for the property F (with respect to μ) if it satisfies the following universality condition: For any x, x μ uF if and only if F(x)."28 This condition is called universality, and it means that the universal is the essence of that property. In addition to universality, a universal should be unique: "Hence there should be an equivalence relation (≈) so that universals satisfy a uniqueness condition: If uF and uF' are universals for the same F, then uF ≈ uF'."29 Therefore, any entity that satisfies the conditions of universality and uniqueness for a certain property is a universal for that property. Now, if a universal is self-participating, it is called a concrete universal; if it is not self-participating, it is an abstract universal. For example, whiteness is an abstract universal, as there is no empirical (concrete) instance that is "whiteness" itself. In language models, being a "verb" is a property that can be assigned to many words, but "verb" itself is an external definition; it does not participate in the set of concrete verbs. The same argument holds for the above example of Fourier analysis and its ideal forms. On the other hand, the property of being part of set A and of set B does have a concrete universal, namely the intersection of the two sets, A ∩ B: any object of A and B (including all the potential subsets) that has this property participates in the intersection A ∩ B, and since A ∩ B participates in itself, it is a concrete universal. Further, Ellerman shows that modern set theory is the language of abstract universals, and that category theory can be developed as the mathematical machinery of concrete universals. Finally, he summarizes:
Category theory as the theory of concrete universals has a different flavor from set theory, the theory of abstract universals. Given the collection of all the elements with a property, set theory can postulate a more abstract entity, the set of those elements, to be the universal. But category theory cannot postulate its universals because those universals are concrete. Category theory must find its universals, if at all, among the entities with the property.30
26 See Rodriguez-Pereyra, "What Is the Problem of Universals?"
27 Ellerman, "Category Theory and Concrete Universals," 410.
28 Ibid.
29 Ibid., 411.
In the past few decades there have been many theoretical works furthering the field of category theory along the lines of this fundamental difference between set theory and category theory. For example, the main current categorical approaches in mathematics are topos theory and sheaf theory, which generalize topology and geometry to an algebraic level.31 It seems that the application of these general frameworks in different domains should be one of the main future research areas in the field of modeling. On the other hand, Ellerman concludes:
Topos theory is important in its own right as a generalization of set theory, but it does not exclusively capture category theory's foundational relevance. Concrete universals do not "generalize" abstract universals, so as the theory of concrete universals, category theory does not try to generalize set theory, the theory of abstract universals. Category theory presents the theory of the other type of universals, the self-participating or concrete universals.32
Now that we have defined the concepts of abstract and concrete universals, we need to formalize the two approaches to modeling that are based on these two notions of the universal. As stated earlier, idealized models are based on the notion of abstract universals, and consequently they can be interpreted as set-theoretical models. In the next section, by focusing on the idea of representation in idealized models, I show their theoretical consequences and their limits in dealing with complex systems, for which the definition of the abstract universal is crucial. I then present another conceptual framework of representation, one matched with the concept of concrete universals, and introduce an alternative line of modeling to idealized modeling.
V Specific Modeling: Models Based on Abstract Universals
The fundamental difference between abstract and concrete universals is the issue of self-participation. In terms of modeling and representation, in models based on abstract universals the definition of the common property of the target system is given a priori, on a metalevel. This means that in an empirical setup, we have an externally given idea about the set of properties (aspects) of the real phenomena under study at the beginning of the modeling process.
30 David Ellerman, "On Concrete Universals: A Modern Treatment Using Category Theory," available online at SSRN, http://ssrn.com/abstract=2435439, here p. 6.
31 See Saunders Mac Lane and Ieke Moerdijk, Sheaves in Geometry and Logic: A First Introduction to Topos Theory (New York: Springer, 1992).
32 Ellerman, "Category Theory and Concrete Universals," 16–17.
As an example, if we compare many concrete objects (e.g., several apples), we first need to define a set of specific properties (such as size, color, taste, etc.) to construct a representation of apple-ness. Apple-ness is thereby reduced to this external setup. We call this approach specific modeling, as it is based on a set of specific properties of the target system. In relation to the idealization process, the level of detail, in terms of the number and variety of properties, is the modeler's choice: if the modeler considers few aspects of the target system, the model becomes simple; if he or she selects many aspects or properties, the model becomes complicated. Figure 8 shows the concept of idealized representation in specific modeling schematically. Each circle in the figure stands for a concrete object. These objects are symbolic, which means that they can stand for anything, be it people, cars, companies, buildings, streets, neighborhoods, cities, websites, protein networks, networks of words in a corpus of texts, or people and their activities in a social network. In the first step, we need to define our abstract universals, which leads to a set of selected features of the real objects; these features are shown as rectangles. As a result of these universal features, the concrete instances of the object are assumed to be independent of each other, as they will all be compared indirectly through an abstract class definition, which acts as an external reference.
Fig. 8. Specific modeling based on abstract universals and parametric idealization of the target object.
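As a minimal illustration of this schema (the feature names and numbers below are invented, not taken from the figure), consider how concrete apples become comparable only through a feature space that is fixed in advance:

```python
import numpy as np

# Specific modeling: every concrete apple is forced into an a-priori
# feature space, which plays the role of the abstract universal.
features = ["size_cm", "redness", "sweetness"]   # chosen by the modeler, up front

apples = np.array([
    [7.0, 0.9, 0.6],   # apple A
    [6.5, 0.4, 0.8],   # apple B
    [8.1, 0.7, 0.5],   # apple C
])

# The apples never meet directly; they are compared only through
# the external reference frame, e.g. by Euclidean distance.
dist_ab = np.linalg.norm(apples[0] - apples[1])
print(f"distance(A, B) in the idealized feature space: {dist_ab:.2f}")
```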
Such a parametric setup embodies the notion of rationality that began in the seventeenth century with René Descartes, and it should be mentioned that it offers a fantastic mechanism and an abstract language for the axiomatization of different phenomena. Nevertheless, this approach to modeling meets fundamental limits when dealing with complex phenomena that have many different properties, where specific models are forced to fix an arbitrary set of properties.
V.I Limits of Modeling Based on Abstract Universals
Within the literature on scientific modeling, the majority of discussions are confined to models based on abstract universals and the differences between idealization processes. Among the few investigations to go further, Richard Shillcock discusses the fundamental problems of modeling in the domain of cognitive science from the perspective of universals. He notes: "Cognitive science depends on abstractions made from the complex reality of human behaviour. Cognitive scientists typically wish the abstractions in their theories to be universals, but seldom attend to the ontology of universals."33 He then explains several fundamental problems in the domain of cognitive science by reviewing the different aspects of abstract and concrete universals. In what follows, I present some of the fundamental issues of models based on abstract universals.
V.I.I Gödel’s Incompleteness Theorem and Arbitrariness of Models Based on Abstract Universals In models based on abstract universals, the universal properties are not self-participating. Intuitively, one can argue that in any level of abstraction, members of a set are concrete and the set itself is abstract with regard to its members. Therefore, the first modeling step is the decision about the set of properties that define (represent) the object of inquiry. To have a set of concrete instances (e.g., set of red apples), one needs a super-set that defines the ideal properties of that class (the apple-ness and the red-ness). This requirement (brought forward by Plato) initiates a never-ending hierarchical process of defining abstract universals for the higher order classes (e.g., a set for colors). As a result, one can argue that in practical modeling domains, from a meta-level above, models are based on assumed or commonly agreed properties of the target system. This problem can be explained by Gödel’s incompleteness theorem, that is to say we only can make a consistent system if it is based on an unproved truth (the incomplete model); if the model is complete (everything based on proofs), it cannot be consistent.34 This beautiful theorem simply says that any model that is based on abstract universals is in a way arbitrarily consistent, but not simultaneously complete. The same argument holds for the case of Russell’s paradoxes and naive set theory.35 33 Richard Shillcock, “The Concrete Universal and Cognitive Science,” Axiomathes 24, no. 1 (2014): 63–80, here p. 63. 34 Panu Raatikainen, “Gödel’s Incompleteness Theorems,” Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Winter 2014), http://plato.stanford.edu/archives/ win2014/entries/goedel-incompleteness/. 35 See Ellerman, “Category Theory and Concrete Universals.”
V.I.II The Curse of Dimensionality in Complex Systems
Models based on abstract universals have been successfully applied in many practical domains, such as classical physics, medicine, and engineering. Nevertheless, they reach a computational limit in dealing with complex systems. This limit is directly related to their quest for an explicit representation of the target system through a set of specific properties. Assume that we measure the complexity of a system (i.e., a real phenomenon) as a function of the number of its potential properties and the relations between those properties.36 In this scenario, a wooden chair is less complex than a building, and the same relation holds between a building and a city. By increasing the number of potential properties and their interrelationships, and consequently the exponential growth in the number of their combinations, the space of modeling (i.e., of potential specific models) expands exponentially. This phenomenon is known as the curse of dimensionality, a term introduced by Richard Bellman in 1961.37 Consequently, in a complex system, any endeavor toward an explicit representation (which is the case in specific modeling) leads either to a complicated model (a model with a great deal of redundancy and a lack of explanation) or to very simple and minimalistic idealizations. Figure 9 shows this issue diagrammatically.
Fig. 9. The curse of dimensionality and idealized models based on abstract universals.
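A back-of-the-envelope calculation conveys the scale of the problem; purely for illustration, assume every property is discretized into just ten levels:

```python
# If each of d properties takes one of 10 levels, the generic object
# has 10**d distinct parametric states: exponential growth in d.
levels = 10
for d in (2, 5, 10, 20):   # number of modeled properties
    print(f"{d:>2} properties -> {levels**d:.0e} possible states")
```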
V.I.III From Particular to Generic and the Concept of "Error"
In the idealization process, particular objects no longer have a unique identity dedicated to them as particular (concrete) instances; rather, the identity of each particular case is realized as a combination of globally defined properties (see figure 8). In other words, in models based on abstract universals the particular object is considered an instance of a (fictitious) generic object.
36 See Klaus Wassermann, "That Centre-Point Thing — The Theory Model in Model Theory," in Printed Physics — Metalithikum I, ed. Vera Bühlmann and Ludger Hovestadt (Vienna: Ambra, 2013), 156–88.
37 Richard Bellman, Adaptive Control Processes: A Guided Tour (Princeton, NJ: Princeton University Press, 1961).
Along the same lines, Shillcock says: "The concrete universal is a universal, but it has all the richness of the particular. Whereas an abstract universal can be defined as something abstract (typically seen as a property) that inheres in many other different things, a concrete universal is an entity in which many other different things inhere."38 Consequently, by constructing the notion of a generic object through the lens of abstract universalism, we impose a limit on empirical deviations and treat them as errors. For example, assuming linearity as an ideal property of a system, the generic object has purely linear behavior, while the other objects are erroneous (they deviate from the line) to a degree. Figure 10 shows this issue in the case of linear regression. In a two-dimensional linear system, it is assumed that for any observation (i.e., a concrete object with its x and y dimensions) there is a linear relation of the form y = ax + c. Therefore, those points that do not fall on a common line have a degree of error in comparison with the ideal line.
Fig. 10. Introduction of the concept of error: The deviation of particular objects from the ideal line.
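The following minimal sketch (with synthetic, invented data) makes the point operational: the least-squares fit defines the ideal line, and each particular observation is then assigned a residual, its "error" relative to that ideal.

```python
import numpy as np

# Noisy observations around a "true" line (synthetic data).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# Fit the ideal line y = a*x + c by least squares.
A = np.column_stack([x, np.ones_like(x)])
(a, c), *_ = np.linalg.lstsq(A, y, rcond=None)

# Each particular's deviation from the ideal is declared an "error".
residuals = y - (a * x + c)
print(f"fitted ideal line: y = {a:.2f}x + {c:.2f}")
print(f"mean absolute 'error' assigned to the particulars: {np.abs(residuals).mean():.2f}")
```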
VI Pre-Specific Modeling: Models Based on Concrete Universals
In this section, we investigate the potential for a new level of abstraction in the paradigms of scientific modeling. This is not in opposition to specific modeling (i.e., models based on abstract universals), which is a common approach in social science and the humanities; rather, it is grounded in the notion of concrete universals from category theory. As discussed in the previous section, the models that we call specific models are based on a priori defined or selected abstract universals, and they have certain theoretical limits and issues in dealing with complex systems. Therefore, my hypothesis is that if any specific model is like an arbitrary view of the real phenomena, there should be a category of models that encapsulates all the potential specific views in an implicit way.
38 Shillcock, "The Concrete Universal and Cognitive Science," 71.
We call this approach pre-specific modeling, a term originally introduced by Vera Bühlmann.39 If specific modeling can be theorized through set theory and abstract universals, pre-specific modeling should be supported by the concepts of category theory and concrete universals. In order to establish the building blocks of pre-specific modeling, we need to focus on the fundamental assumptions of specific modeling.
VI.I Dedekind Cut: When a Particular Object Is Represented by the Negation of Its Complement
In specific modeling, when one defines the abstract universal in terms of a set of specific properties, a parametric generic object is directly conceptualized. Individual objects can then be reconstructed (analyzed) or generated (synthesized) by changing the values of those specific parameters of the generic object. As a fundamental example, consider number theory and the definition of rational numbers as the ratio of two integers m and n, where n is not equal to 0. In this case, any specific rational number q can be directly represented by infinitely many pairs (m, n) of integer values, where m / n = q. In other words, q is graspable directly and independently of the other rational numbers. However, as we know, the rational numbers are countable, and they are only a small fraction of the whole space of the real numbers. As a result, this approach reaches a point where it cannot define certain real numbers at all.
Fig. 11. Rational numbers cannot fill the space of real numbers. Each line corresponds to one rational number.
For example, √2 cannot be reached by the abovementioned procedure; this is the case for irrational numbers in general. A different method for defining irrational numbers is required. In the late nineteenth century, Richard Dedekind came up with a different conceptual definition of the irrational numbers, known since as the Dedekind cut.40
39 Vera Bühlmann and Martin Wiedmer, eds., Pre-Specifics: Some Comparatistic Investigations on Research in Art and Design (Zurich: JRP|Ringier, 2008).
Intuitively, a Dedekind cut is a unique way of representing an irrational number by its complementary set. Dedekind defined a cut for a specific number b as the gap between two ordered sets of rational numbers A and B, where all the elements of A are less than all the elements of B, all the elements of A are smaller than b, and all the elements of B are equal to or greater than b. By definition, if b is a rational number, the union of the sets A and B is the whole number space U; and if b is an irrational number, b equals U minus the union of the sets A and B (A ∪ B). For example, in order to define √2, A is the collection of all negative rational numbers together with every nonnegative rational number whose square is less than 2, and B is the collection of all positive rational numbers whose square is larger than 2.
Fig. 12. Dedekind cut: Representation of an irrational number as the negation of its complement.
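The construction can be made operational with exact rational arithmetic. In this minimal sketch, √2 is never produced directly; it is only located by the membership test of the lower set A, i.e., by the negation of its complement:

```python
from fractions import Fraction

def in_lower_set(q: Fraction) -> bool:
    """Membership test for the lower set A of the cut for sqrt(2)."""
    return q < 0 or q * q < 2

# Bisect using rationals only: the endpoints always straddle the cut.
lo, hi = Fraction(1), Fraction(2)
for _ in range(30):
    mid = (lo + hi) / 2
    if in_lower_set(mid):
        lo = mid
    else:
        hi = mid

print(f"sqrt(2) is pinned between {float(lo):.9f} and {float(hi):.9f}")
```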
Further, two irrational numbers can be compared through their corresponding cuts: if their cuts are equal, the numbers are identical. By this definition, each specific irrational number is represented uniquely as the negation of its complement, while it is never directly touchable; a rational number, by contrast, can be pointed out directly. Regarding the issue of object representation, the important implication of the Dedekind cut is that an alternative to the kind of object representation common in specific modeling (shown in figure 8) is possible.41
40 Richard Dedekind, "The Nature and Meaning of Numbers," in Essays on the Theory of Numbers, trans. W. W. Beman (Chicago: Open Court Publishing, 1901 [1898]).
41 For more detailed discussions of the conceptual and philosophical ideas underlying the Dedekind approach, refer to what Ludger Hovestadt calls centered voids in Vera Bühlmann and Ludger Hovestadt, eds., EigenArchitecture (Vienna: Ambra, 2014).
VI.II From Generic to Particular: Object-Dependent Representation
The core aspect of any modeling paradigm is how real phenomena are represented. As figure 8 shows, by selecting the set of representational properties of the real phenomena, in specific modeling each individual object is represented directly.
In other words, the identity of a particular object is defined independently of the other (concrete) objects, as long as we have a global axiomatic setup (i.e., the selected properties) that defines the generic object. Here, the generic object is the abstract universal, from which, with different parametric values, one can instantiate or approximate a particular object. In the example from number theory, this is the case of the rational numbers, where a specific number can be generated as the ratio of two integers. Now, imagine instead an empirical, network-based representation of concrete objects, in which objects are represented by their nodes of connectivity, specified in multidimensional ways: for example, the number of cars that pass from one street to another, the relation established by two individuals who select the same restaurant, the relation between two cities that host offices of the same company, or the number of times a specific word appears after another specific word. In distinction to the parametric representation of objects, here the identity of an object is defined directly in terms of the relations it maintains with the other objects. The main difference between the two approaches is that in the feature- (property-) based approach, the specific identity of objects is assumed independently, while in the network-based representation, the identity of objects is regarded as pre-specific and is specified purely relationally, out of the connectivity that is observable. Two objects are considered identical if they share the same sets of relations with the other objects. Figure 13 shows the representation of concrete objects in a network-based approach.
Fig. 13. Object-dependent representation of concrete objects in pre-specific modeling.
In specific modeling, each property has an abstract universal; but in object-object relations, each concrete object is itself a property (for example, A-ness for the concrete object A), and thus we have a concrete universal for each such property, since each object has an identity relation with itself.
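A minimal sketch of this setup (the objects and relation counts below are invented): each object's identity is nothing but its row of observed relations, and two objects coincide exactly insofar as their relation profiles coincide.

```python
import numpy as np

# Observed relations among four concrete objects; the diagonal encodes
# each object's identity relation with itself.
objects = ["A", "B", "C", "D"]
relations = np.array([
    [5, 2, 0, 1],
    [2, 4, 1, 0],
    [0, 1, 6, 3],
    [1, 0, 3, 5],
], dtype=float)

# Normalize each row into a relation profile: the object's identity.
profiles = relations / relations.sum(axis=1, keepdims=True)

def similarity(i: int, j: int) -> float:
    """Cosine similarity of two relation profiles."""
    a, b = profiles[i], profiles[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("similarity(A, B):", round(similarity(0, 1), 3))
```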
Taking each concrete object as a feature, we can represent each object via its relations to the other objects. As with the definition of irrational numbers by the Dedekind cut, here too the identity of a particular concrete object is defined as the negation of the identity of the other particulars. Note that here the objects are not defined in advance, unlike in the case of parametric object representation. This setup, as shown in figure 13, is an object-dependent representation that is conceptually scalable with the number of empirical objects. While in specific modeling the number of parameters is independent of the number of concrete objects (i.e., observations), in object-dependent representation, adding one concrete object directly adds one new aspect to the representation of all the other objects. This makes pre-specific modeling suitable for working with large amounts of data. Further, in many areas today the conditions for this type of representation hold, as we have emergent networks of connected instances that can be used for the representation of the object of inquiry. In section 8 I present two main technical frameworks that support the concept of pre-specific modeling, as two main applications of object-dependent representation. However, as I have implied, object-dependent representation, and pre-specific modeling in general, are data driven, since the setup shown in figure 13 is built from concrete objects. The role of data in pre-specific modeling differs from that in classical empirical research, where one assumes an a priori generic object. As pre-specific modeling proposes a new modeling framework, it demands another notion of data, one that differs from traditionally designed observations and measurements.
VII Massive Unstructured Data Streams: An Inversion in the Notion of Measurements and Data Processing
In classical scientific modeling, theories and a priori representations define what should be measured and observed. According to Bas C. van Fraassen, a measurement outcome is always achieved relative to a particular experimental setup designed by the user and characterized by his or her theory.42 Similarly, as we saw in the case of specific modeling, by selecting abstract universals we limit the set of potentially observable aspects of the real phenomena. For example, when dealing with a pendulum model and using the Newtonian laws of gravity as the theoretical model describing the motion of particles, data and measurements can only empirically validate the model or propose minor modifications. Therefore, data has classically played a marginal role in the process of modeling.
42 Bas C. van Fraassen, "Scientific Representation: Paradoxes of Perspective," Analysis 70, no. 3 (2010): 511–14.
In addition to this conceptual setup, measurement and observation have historically been very expensive, and this pushed modelers toward more structured, designed, and optimized experiments and observations. Taking the role of data in specific modeling into consideration, figure 14 shows the classical process of modeling. As the diagram shows, abstract universals (i.e., the definition of the generic object) are always the first and primary element of the modeling process; the data, including its structure (the selected properties of the real environment) and its size (statistical sufficiency), has a supporting role in model tuning and model validation. Since the data is the secondary element, after a certain amount of observation the model quality (in terms of accuracy, for example) becomes stable, as we have enough data to tune the system.
Fig. 14. The classical modeling process (specific modeling).
Nevertheless, with computational technologies as the dominant factor shaping and directing the field of scientific modeling over the last century, the landscape of measurement and data processing has been changing dramatically. In a recent article, I discuss three levels of computational capacities: computing power, computational and communication networks, and data streams.43 The first level concerns computing power in terms of numerical simulation, as compared with analytical approaches. Historically, there have been successive technologies of computation, starting with mainframes and moving to the democratization of computing through personal computers and microcomputers, which are still getting faster and more powerful at an exponential rate. The primary function of computing power has been numerical simulation, even when computers were isolated or had limited communication abilities. Although computers and their simulation power opened up new possibilities for a better understanding of real-world phenomena in the 1960s and 1970s in many fields, by the late 1970s these computational models had become data hungry: their demand for data was higher than what was available for model tuning and validation. This produced some skepticism about the applicability of computational models to real-world problems.44
43 Vahid Moosavi, "Computational Urban Modeling: From Mainframes to Data Streams," presented at the AI for Cities Workshop, AAAI Conference, Austin, Texas, 2015.
44 See Douglas Lee, "Requiem for Large-Scale Models," Journal of the American Institute of Planners 39, no. 3 (1973): 163–78.
However, alongside the developments within computing technologies, advancements in communication technologies gradually opened up another capacity for modelers, which can be considered the second level of computational capacities. At this level, where computing power was no longer scarce, it was the communication between computing systems that became important. Therefore, new phenomena such as networks of sensors, of mobile phones, and of computers, and the Internet itself, started to emerge. Gradually, given the number of embedded systems in real-world applications, computers as computing machines became the ground on which new functions emerged on top of computational networks. As a byproduct of these networks of computing and communicating machines, the amount of digital data started to increase as well. Starting in the mid-1990s, technical terms such as "data mining" and "database management" emerged, in parallel with a methodological focus among modelers on exploring digital data (mainly structured data). Data thus began to accumulate around this period, but it was still a byproduct of designed measurements and sensory systems. It is important to note that by this time the notion of data had not changed from its old sense: collected data was still structured and still followed the modeler's choices. The data was still the secondary element, rationally determined by the given properties of the target system. What had changed dramatically, however, was the amount of digitally collected data, which grew quantitatively on top of the communicating and computational networks across disciplines.
Finally, the third level, which provides a suitable notion of data for pre-specific modeling, emerged only recently. With rapid advancements both in computing power and in the networks of computing systems, together with the rapid growth of social media, we have encountered a new stage in which, on top of ubiquitous computing and communicating systems, a new level of abstract phenomena has started to emerge. We have begun to experience exponential growth in the amount of available information, together with the mobile computing devices most people use on a daily basis. This is often called a data deluge. Next to the challenges these changes bring, we can also see new areas for research and practice emerging.45 It seems clear today that the classic paradigm of observation and data gathering has changed radically.
45 To mention just a few recent publications: Adam Greenfield, Everyware: The Dawning Age of Ubiquitous Computing (Berkeley, CA: New Riders, 2006); Uwe Hansmann et al., Pervasive Computing: The Mobile World (Berlin: Springer, 2003); Nathan Eagle and Alex Pentland, "Reality Mining: Sensing Complex Social Systems," Personal and Ubiquitous Computing 10, no. 4 (2006): 255–68; Eric Paulos, R. J. Honicky, and Ben Hooker, "Citizen Science: Enabling Participatory Urbanism," in Handbook of Research on Urban Informatics: The Practice and Promise of the Real-Time City, ed. Marcus Foth (Hershey, PA: Information Science Reference, IGI Global, 2008).
Data is produced on an everyday basis, from nearly any activity we engage in, and accumulates from innumerable sources and formats, such as text, images, GPS tracks, mobile phone traces, and many other social activities, into huge streams of information in digital code. These unstructured and continuous flows, which can be called urban data streams, can be considered a new infrastructure within human societies. This notion of data is opposed to the classical notion, where data was produced mainly as the result of experiments designed to support specific hypothetical models, or was transmitted via defined semantic protocols between several interoperating programs. These new data streams are the raw material for further investigation, and, like computing power, they hold new capacities for modeling. As a result of this new plateau, we are challenged to learn new ways to grasp this new richness. Massive, unstructured urban data streams induce an inversion in the paradigm of modeling away from specific modeling, and they match the concepts of pre-specific modeling and of models based on concrete universals. Therefore, as an alternative to the previous modeling process, pre-specific modeling is mainly based on the coexistence of unstructured data streams representing particular objects and the self-referential representation of concrete universals (figure 15). As shown in the next section, and opposite to the specific modeling paradigm, where data has a limited use in the modeling process and model performance becomes stable after a certain data size, in this regime of data-driven modeling and object-dependent representation, adding more data will improve the quality of the final model. This is the power of concrete universals, which are scalable with data size, unlike parametric models, in which after a certain size of data the parametric state space of the generic object becomes full and the discriminating power of the model in dealing with a variety of circumstances in complex systems is reduced. With concrete universals, each new instance introduces a new aspect in relation to the other concrete instances; the representation is therefore scalable with data size.
Fig. 15. Pre-specific modeling in coexistence with unstructured data and concrete universals.
In the next section, I present a certain category of mathematical and computational methods that support the notion of pre-specific modeling.
VIII Computational Methods Supporting Pre-Specific Modeling
Even though I have presented several examples of pre-specific modeling, so far the explanation has remained on a conceptual level. Here I present a more technical discussion of two computational methods that fit very well with the concept of pre-specific modeling.
VIII.I Markov Chains
Andrei Markov is among the great mathematicians of the twentieth century; he made numerous contributions to the formation of probability theory, but his major work is the concept of Markov chains, which he introduced in 1906. In engineering and applied scientific domains today, many people know Markov chains as a kind of memoryless dynamic model: given a sequence of random variables (x1, x2, x3, …, xt−1, xt), the state of the system at the next step (xt+1) depends only on the currently observed state (xt). These processes create a chain of random events, with a probabilistic link between adjacent nodes. Here we assume discrete-time processes with a finite number of states, but in principle one can assume continuous time and a continuous state space. Further, the chain is called homogeneous if the conditional distribution of xt+1 given xt is independent of the time step. Assuming more sequential dependency, one can construct higher-order chains, in which the state at step t depends on its n previous steps, where n is the order of the chain. In the case of first-order chains, assuming mutual dependencies between any two potential states, a Markov chain can be represented as a directed graph, where each node corresponds to a state and the edges between two nodes correspond to the two conditional probabilities. Figure 16 shows an example of a Markov chain with three states.
Fig. 16. Traditional representation of a Markov chain in the modeling of a dynamic system.
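A minimal sketch of such a three-state, first-order, homogeneous chain (the transition probabilities are invented for illustration):

```python
import numpy as np

states = ["S1", "S2", "S3"]
P = np.array([
    [0.7, 0.2, 0.1],   # P(next state | current = S1)
    [0.3, 0.4, 0.3],   # P(next state | current = S2)
    [0.2, 0.3, 0.5],   # P(next state | current = S3)
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a conditional distribution

# Simulate: the next state depends only on the current one (memorylessness).
rng = np.random.default_rng(1)
x = 0   # start in S1
trajectory = [states[x]]
for _ in range(10):
    x = rng.choice(3, p=P[x])
    trajectory.append(states[x])
print(" -> ".join(trajectory))
```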
In the domain of dynamic systems, Markov chains in their many different versions have been studied and applied in diverse applications, from the simulation of dynamic systems to the study of steady-state conditions.
They have also been applied successfully to sequence and time-series prediction and classification. Markov's brilliant idea of representing a complex phenomenon such as natural language in a purely computational manner is particularly relevant here. Before going into his approach, let us unpack the traditional concept of language models. One of the major, and to some degree dominant, concepts of linguistic modeling is based on the notion of abstract universals. In this approach, which is in accord with Noam Chomsky's,46 a spoken language can be modeled by means of a set of semantic and syntactic laws of that specific language. On this account, writing and speaking correctly means that there is a system of production in the individual's mind that produces the instances of the language following its ideal model. This is one of the best examples of specific modeling based on the concept of abstract universals. However, Shillcock notes that because natural languages are complex evolving systems, trying to identify the ideal model of a living language is always a process of catch-up.47 Considering this evolution, the exceptions, and the number of different languages all over the world, the approach has never been successfully applied in a computational model. Now let us turn to the experiments Markov conducted in 1913, which count among the first linguistic models to follow the concept of concrete universals. In what has become the famous first application of Markov chains, Markov studied the sequence of 20,000 letters of Alexander Pushkin's Eugene Onegin to discover, in an empirical way, the conditional probabilities of sequences of letters. What follows is a less-discussed way of interpreting Markov chains: not from the traditional viewpoint of dynamic systems, but via the empirical representation of concrete objects. Figure 17 shows the underlying concept of object-dependent representation in Markov chains. Suppose that we have a defined number of symbols in a specific language (e.g., all the observed words of the English language). Now imagine that for each specific word in our collection, we consider all the words that appear N steps before and N steps after that word in all of our collected texts, and we count the total numbers of occurrences. Next, by normalizing the total number of occurrences for each position before and after each specific word, we can find the empirical ratio of any word appearing after or before that specific word. Assuming these relations between all of the words, we obtain an object-dependent representation of each word, based on its relations with all the other words.
46 Noam Chomsky, "Linguistics and Brain Science," in Image, Language, Brain, ed. Alec P. Marantz, Yasushi Miyashita, and Wayne O'Neil (Cambridge, MA: MIT Press, 2001), 13–23.
47 Shillcock, "The Concrete Universal and Cognitive Science."
This is a huge, pre-specific representation of concrete objects. It is pre-specific because, compared to the abovementioned language models, in this mode of representation no semantic or syntactic property (e.g., synonym structures or grammatical rules) is given to the words in advance; the whole network is constructed out of nothing but summation and division operations. As Markov himself remarked, "many mathematicians apparently believe that going beyond the field of abstract reasoning into the sphere of effective calculations would be humiliating."48
Fig. 17. Language representation based on concrete universals: Markov's approach to constructing an empirical representation of a language's words based on a set of observed sentences.
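Markov's procedure can be sketched in a few lines. The toy corpus below is an invented stand-in for the 20,000 letters of Pushkin's text, with a window of N = 1; nothing but counting and normalization is involved.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which: pure counting, no grammar, no semantics.
counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1

# Normalize the counts into empirical ratios: each word's identity is
# its profile of relations to the other words.
profiles = {
    w: {nxt: n / sum(c.values()) for nxt, n in c.items()}
    for w, c in counts.items()
}
print(profiles["the"])   # e.g. {'cat': 0.667..., 'mat': 0.333...}
```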
Here again we have a self-referential setup, in which concrete instances are implicitly represented by their relations to the other instances. As a result, if two particular words have the same function in the language, they will have similar relations with the other words. This was a big claim in 1906, when there was not even enough computing power to construct such relational networks. As Claude Shannon later noted, even after almost forty years Markov's proposed modeling framework was not practically feasible, since it demands a large number of observations and relatively large computational power.49 Nevertheless, as mentioned in section 7, the recent rapid growth in computing power has changed the situation dramatically, and a similar approach, distributed representation, has attracted many researchers and practitioners.50 Further, new applications of neural probabilistic language models are becoming popular, while classical approaches to natural language processing are still struggling toward real-world, scalable applications.51 The PageRank algorithm, used in Google searches, is another important application of Markov chains that follows the concept of representing objects on a concrete level.52
48 Markov, quoted in Gely P. Basharin, Amy N. Langville, and Valeriy A. Naumov, "The Life and Work of A. A. Markov," Linear Algebra and Its Applications 386, no. 14 (2003): 3–26.
49 Claude E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal 27, no. 3 (1948): 379–423.
50 Yoshua Bengio, Aaron Courville, and Pascal Vincent, "Representation Learning: A Review and New Perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 8 (2013): 1798–828.
51 Yoshua Bengio et al., "A Neural Probabilistic Language Model," Journal of Machine Learning Research 3 (2003): 1137–55.
52 See Sergey Brin and Larry Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Computer Networks and ISDN Systems 30 (1998): 107–17.
Fig. 18. Distributed representation of linguistic models, from Markov to the age of the data deluge.
Around the year 2000, owing to the exponential growth in the number and diversity of websites, the ranking of the search results yielded by Internet search engines became a critical issue. Prior to PageRank, most solutions sought to define a set of features for each website and then apply a scoring logic based on those features. In other words, the starting point of the ranking system was the act of defining a generic website, represented by a set of abstract universals; every particular page would then be a point in the parametric space of this generic website. Statistically, if we increase the number of observations (i.e., the number of websites), No, the ratio Np / No of the number of parameters (dimensions), Np, to the number of observations quickly approaches zero. This means that the parametric space becomes full, and discerning different websites from one another becomes impossible. Therefore, considering the size and diversity of the websites across the Internet, the process of defining features and assigning values to each page was a bottleneck, and theoretically limited. With the Markov-based procedure developed in PageRank, Google did not improve the classic approach; it changed the paradigm. Google dropped the basic assumption of centralized ranking and simply assumed that individuals know best which pages are related and important for them, better than any axiomatic or semantic order could know. They looked on a micro scale at how individuals link important websites to their own websites, and based on these live streams of data they constructed, continuously and adaptively, a Markov chain mapping how people are likely to surf the Internet, thereby obtaining a probabilistic network of connections between the pages. They defined the importance of a website as the result of the importance of the websites connected to it: a self-referential equation with no externally imposed feature set. In terms of statistical representation, this object-dependent representation is scalable with the number of observations (websites), since each concrete object brings its representation with itself and acts as a new parameter for the representation of all the other pages (for simplicity's sake, assume a binary relation of a website to the other pages). Conceptually, this practical application is aligned with the definition of the Dedekind cut, where instead of representing an object directly, each object is represented as the negation of its complement.
Fig. 19. A part of the Google matrix: I = HI. Everything is represented on the basis of every other thing.
Fortunately, theories from linear algebra are available to solve this self-referential equation. The values of the first eigenvector of the constructed Markov matrix are used as the ranking of the pages. With the same methodology, it is possible to model similar problems in other fields. For example, a Markov chain based on available GPS trackings of cars can be used for modeling traffic dynamics in an urban street network.53 Fig. 20 A Markov chainbased representation of traffic networks from a sequence of GPS trackings of cars in Beijing.
53 See Vahid Moosavi and Ludger Hovestadt, “Modeling Urban Traffic Dynamics in Coexistence with Urban Data Streams” (paper presented at 2nd ACM SIGKDD International Workshop on Urban Computing, 2013).
160
coding AS literacy — Metalithikum IV
VIII.II Self-Organizing Map Following the same line of argumentation for the issue of representation in complex systems that we had for Markov chains, there is another powerful data-driven, pre-specific modeling method called the Self-Organizing Map (SOM).54 As a well-known method in machine learning, the SOM has a very rich literature with a diverse set of applications.55 According to the literature, the SOM is a generic methodology that has been applied in many classical modeling tasks such as the visualization of a high-dimensional space,56 clustering and classification,57 and prediction and function approximation.58 During the past three decades there have been different extensions and modifications to the original algorithm that was introduced by Teuvo Kohonen in 1982. For example, one can compare the SOM with other clustering methods or with space transformation and feature extraction methods such as the Principal Component Analysis (PCA).59 It is possible to explain and compare the SOM with vector quantization methods.60 Further, it is possible to explain the SOM as a nonlinear function approximation method and to see it as a type of neural network methods and radial basis function.61 However, in this work I present two main aspects of the SOM in relation to the idea of pre-specific representation and in comparison with other modernist mathematical approaches, which are based on the notions of ideals and abstract universals.
54 See Teuvo Kohonen, “Self-Organized Formation of Topologically Correct Feature Maps,” Biological Cybernetics 43, no. 1 (1982): 59–69. 55 See Teuvo Kohonen, “Essentials of the Self-Organizing Map,” Neural Networks 37 (2013): 52–65. 56 Juha Vesanto, “SOM-Based Data Visualization Methods,” Intelligent Data Analysis 3, no. 2 (1999): 111–26. 57 Alfred Ultsch, “Self-Organizing Neural Networks for Visualization and Classification,” in Information and Classification: Concepts, Methods and Applications, ed. Otto Opitz, Berthold Lausen, and Rüdiger Klar (Berlin: Springer, 1993), 307–13; Juha Vesanto and Esa Alhoniemi, “Clustering of the Self-Organizing Map,” IEEE Transactions on Neural Networks 11, no. 3 (2000): 586–600. 58 Guilherme de A. Barreto and Aluizio F.R. Araujo, “Identification and Control of Dynamical Systems Using the Self-Organizing Map,” IEEE Transactions on Neural Networks 15, no. 5 (2004): 1244–59; Guilherme de A. Barreto and Gustavo M. Souza, “Adaptive Filtering with the Self-Organizing Map: A Performance Comparison,” Neural Networks 19, no. 6 (2006): 785–98. 59 Hujun Yin, “Learning Nonlinear Principal Manifolds by Self-Organizing Maps,” in Principal Manifolds for Data Visualization and Dimension Reduction, ed. Alexander N. Gorban et al. (Berlin: Springer, 2008), 68–95. 60 Christopher M. Bishop, Markus Svensén, and Christopher K. I. Williams, “GTM: The Generative Topographic Mapping,” Neural Computation 10, no 1 (1998): 215–34; Teuvo Kohonen, “Improved Versions of Learning Vector Quantization” (paper presented at the IJCNN International Joint Conference on Neural Networks, 1999). 61 See Barreto and Araujo, “Identification and Control of Dynamical Systems Using the Self-Organizing Map”; Barreto and Souza, “Adaptive Filtering with the SelfOrganizing Map.”
III Pre-specific Modeling
161
Fig. 21 Reconstruction of an observed signal (top row) based on a parametric dictionary of ideal waves in Fourier decomposition.
VIII.II.I No More External Dictionary and No MoreGeneric Object As discussed in section 5, in specific modeling the observations of any real phenomena are encoded in a generic object being represented by a set of given parameters. The underlying idea of pre-specific modeling is how to relax the modeling process from any specific and idealistic representation of the real phenomena — or, how not to depend on the generic object. For example, as mentioned before, in a Fourier transformation we assume that any dynamic behavior can be reconstructed and re-presented by a set of ideal cyclic forms. As figure 21 shows, an observed signal can be decomposed or can be approximated as a linear summation of some ideal waves. In other words, here we assume that there is a generic sinusoidal wave (as an ideal behavior) with a parametric setup, and by changing the parameters of this generic function there are different instances of waves. Finally, an observed signal can be represented as a summation of these ideal waves as follows: m s (t) = ao / 2 +
Σ [a cos(nwt) + b sin(nwt)] π
π
n=1
Therefore, the Fourier transformation is among the specific models that are based on abstract universals. As discussed, although powerful and useful in many classical engineering and scientific applications, this approach of idealized modeling has fundamental limits in dealing with complex (multifaceted) phenomena. The opposite idea or the complementary idea in pre-specific modeling is that based on the concept of concrete universals, it might be possible to establish a self-referential setup using concrete objects (i.e., the observations) to model real phenomena without any external representation or any external control. In the domain of machine
162
coding AS literacy — Metalithikum IV
learning, this is the underlying idea of unsupervised learning. Interestingly, the SOM corresponds with the idea of representation based on concrete universals. Compared to Fourier decomposition, shown in figure 21, if we train a SOM with enough observations, we get a dictionary of potential dynamic forms collected via real observations (figure 22). In assuming each of the prototypical forms in the trained SOM as a word or a letter in a language, a trained SOM can be used as pre-specific dictionary for the target phenomena. In other words, in terms of signal processing, assuming a fixed segmentation size, the observed signal can be translated into a set of numerical indexes (i.e., the index of the matching prototypes in the SOM network with each segment of the observation vector) — further, these indexes will be used for further steps of modeling. The main point is that unlike the case of Fourier decomposition, here there is no external axiomatic setup for the transformation of observations into codes; the whole encoding system provided by the SOM is based on concrete observations. Fig. 22 Self-organizing maps: Constructing a pre-specific dictionary of dynamic forms from a large collection of observed signals.
Further, in a certain topology of SOM networks, the final indexes can be used to transform a multidimensional dynamic system into a onedimensional symbolic dynamic system. In this case, the indexes of the SOM can be considered as “contextual numbers.”62 In the field of computer vision and speech processing there has been a growing trend of methods that are based on the idea of representation that outperform many classical pattern recognition methods only in coexistence with a large amount of observations based on feature engineering. Classically, feature engineering means that in order to 62 Vahid Moosavi, “Computing with Contextual Numbers,” arXiv preprint (2014).
III Pre-specific Modeling
163
develop a pattern recognition model (i.e., in an image classification problem), one first needs to design a feature space to transform the images by that and then to develop a classification model on top of the engineered features. In the example of the Fourier analysis, the frequency and phase difference are the features. On the other hand, in this new category of modeling, sometimes called representation learning, there is no specific and separate feature-engineering task before adjusting the classification or prediction method. 63 Among these algorithms is the sparse coding algorithm, which has conceptual similarities with the SOM.64 The principle idea of sparse coding is that if the original observations are n dimensional vectors, one can find an over-complete set of vectors (i.e., K vectors, where K>>n) to reconstruct the original observations with a linear and sparse combination of these K vectors. While it looks similar to methods such as PCA65 or Independent Component Analysis (ICA),66 sparse coding (similar to the SOM algorithm) does not produce a global transformation matrix. In PCA for example, all the n orthogonal basis vectors proportionally (according to their corresponding eigenvalues) contribute to the representation of all of the original observations, but in the SOM and sparse coding we have a kind of “distributed representation,” in which each original observation is directly represented by a few specific prototypes (basis vectors). In other words, in the SOM each prototype is an object, which is not true for each principal component in PCA. They are from different worlds. Further, this encoding approach can be applied in a hierarchical process. For example, in the case of image processing it can be applied to small patches of an image, where each patch will be indexed to a few codes and the next level (for example the whole image) will then be represented by new codes constructed on top of the previous codes. In fact, the output of one step is used as input for the next layer. Therefore, the whole image is analyzed by multilevel sparse codes. This simple idea of coding in an unsupervised approach has been applied in many practical applications and it has been claimed that it works better than the wavelet decomposition method.67 I should note that the wavelets act similarly to the Fourier series, but they are more advanced, since there is no longer the assumption that the underlying ideal waves are 63 See Bengio, Courville, and Vincent, “Representation Learning.” 64 See Bruno A. Olshausen, “Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images,” Nature 381, no. 6583 (1996): 607–9. 65 Karl Pearson, “On Lines and Planes of Closest Fit to Systems of Points in Space,” London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2, no. 11 (1901): 559–72. 66 Aapo Hyvärinen, Juha Karhunen, and Erkki Oja, Independent Component Analysis (Malden, MA: John Wiley & Sons, 2001). 67 See Olshausen, “Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images.”
164
coding AS literacy — Metalithikum IV
stationary. Figure 23 shows an example of a sparse coding algorithm applied to image patches. Fig. 23 Sparse coding: Learning data driven image dictionaries.67
VIII.II.II Computing with Indexes Beyond Ideal Curves Another property of the SOM is its unique disposition for structural learning. Figure 24 shows the main difference between the SOM and a classical way of relation (function) modeling. In simple terms, the primary goal of relation modeling is to find the relation between two dimensions based on a set of observations. In a classical way of modeling, one needs to fit a curve (a fixed structure) to a data set, while minimizing the deviations (errors) from the selected curve. In other words, the selected curve represents the logic that idealizes the observed data into a continuous relation. The SOM assumes that the logics (the argument that integrates cases) can be extracted from within the observed data—and it conserves all the logics (arguments) according to which it clusters the cases. What is optimized in such modeling is not how the data fits to logic, but how the logic, which is being engendered, can accommodate as much as possible from the data. In this sense, in an analogy to a governance and decision-making system, we might say that the classical approach in curve fitting is a democratic setup, in which there is a global structure, tuned locally by the effect of individual votes. On the other hand, the SOM provides a social environment, in which each individual instance is not reduced but is kept active in its own individuality, while individuals can be unified into local clusters, if necessary. 68 This image is taken from James Hughes, Daniel J. Graham, and Daniel N. Rockmore, “Quantification of Artistic Style through Sparse Coding Analysis in the Drawings of Pieter Bruegel the Elder,” Proceedings of the National Academy of Sciences 107, no. 4, (2010) 1279–83.
III Pre-specific Modeling
165
Fig. 24 Computing with indexes beyond ideal curves.
Here again, the final model of the real phenomena is an abstraction of any potential specific model and it does not import any axiomatic or semantic specificity. Further, if the real environment is dynamic and evolving, and if we can assume the availability of dynamic data streams, then the SOM evolves along with the environment. In other words, prespecific models are models in coexistence with data streams.
166
coding AS literacy — Metalithikum IV
IV SOM. self. organized. André Skupin
André Skupin is Professor of Geography and Founder / Codirector of the Center for Information Convergence and Strategy (CICS) at San Diego State University. He combines a classic cartographic education with interests in geovisualization, visual data mining, and spatiotemporal modeling. In the domain of knowledge visualization, much of his research has addressed how knowledge artifacts can be analyzed by combining disparate approaches from natural language processing, artificial neural networks, and cartography. As Associate Director of the Center for Entrepreneurship and Innovation (CEI) at the University of Dubai and Cofounder of a knowledge management start-up, Dr. Skupin has a strong interest in accelerated transition of technological innovation into diverse application areas, from biomedical knowledge management to financial analytics, demography, criminology, and environmental monitoring.
168
coding codingAS ASliteracy — Metalithikum literacy — Metalithikum IV
The images produced by André Skupin for this book represent a knowledge visualization in which the SOM technique itself is applied to a collection of more than four thousand scientific articles dealing with SOM published during the last twenty-five years. The result is a self-organized base map of the SOM knowledge domain, with topographic features corresponding to major latent themes in the literature.
IV SOM. self. organized.
169
In terms of geography: A Self-Organizing Map of more than 4000 articles about Self-Organizing Maps, showing the main concepts next to each other.
170
coding AS literacy — Metalithikum IV
V The Nature of Local / Global Distinctions, Group Actions, and Phases: A Sheaf-Theoretic Approach to Quantum Geometric Spectra Elias Zafiris I Observables and Geometric Spectrum 174 — II Group Actions and the Erlangen Program 175 — III Local Group Actions and Gauge Theory 176 — IV The Advent of Quantum Theory 178 — V What Is a Sheaf? 180 — VI The Program of “Relational Realism” 181— VII Quantum Mechanics as a Non-Spatiotemporal Gauge Theory 183 — VIII Quantum Geometric Spectra 185
Elias Zafiris is a senior research fellow at the Institute of Mathematics of the University of Athens (Greece) and the Center for Philosophy and the Natural Sciences of the California State University (Sacramento). He holds an MSc (Distinction) in Quantum Fields and Fundamental Forces from Imperial College (University of London), and a PhD in Theoretical Physics, also from Imperial College. He has published research papers on the general theory of relativity and quantum gravity, the mathematical and conceptual foundations of quantum mechanics and quantum logic, complex systems theories, and applied category-theoretic structures.
172
codingAS ASliteracy — Metalithikum literacy — Metalithikum IV coding
The basic idea of a physical geometric spectrum implied by the principles of gauge theory is based on the notion of invariance under the action of a local symmetry group. In particular, the sheaf-theoretic interpretation of gauge geometry introduces a subtle distinction between local and global physical information carriers. A natural question arising in this context is how these aspects are reflected in the case of quantum mechanics. We explain that local information carriers are encoded as observables, whereas global ones are encoded either as “global memories” of states during processes of cyclic evolution, or as relative phase differences between distinct histories of events. We conclude that quantum geometric spectra V The Nature of Local / Global Distinctions
173
admit a sheaf-theoretic interpretation in combination with the non-spatiotemporal gauge structure of quantum mechanics. I Observables and Geometric Spectrum The state information of physical systems is adequately described by the collection of all observed data determined by the functioning of measurement devices in suitably specified experimental environments. Observables are precisely associated with physical quantities that, in principle, can be measured. The mathematical formalization of this procedure relies on the idea of expressing the observables, at least locally, by functions corresponding to measuring devices. Moreover, the usual underlying assumption on the basis of physical theories postulates that our form of observation is represented by coefficients in a number field, which is usually taken to be the field of real numbers. Thus, observables are typically modeled, at least locally, by continuous real-valued functions corresponding to measuring devices. In this setting, the consideration of the structure of all observables necessitates the imposition of a further requirement pertaining to their algebraic nature. According to this requirement, the set of all observables bears the structure of a commutative, linear associative algebra with the unit over the real numbers, at least locally. The basic fact underlying this requirement is that with any commutative algebra of observables there is a naturally obtained topological space via measurement, which is thought of as the geometric spectrum of this algebra — namely, the space of points that is accessible by means of evaluating this algebra in the real number field. Most important, the observables of the algebra are represented as continuous functions on this geometric spectrum. The crux of this requirement is that any observed geometric spectrum should not been considered ad hoc, but should be associated with evaluating a corresponding algebra of observables via measurement. From a mathematical perspective, this principle has been well demonstrated in a variety of different contexts, and is known as StoneGelfand duality in a functional analytic setting or Grothendieck duality in an algebraic geometric setting.1 In a nutshell, to any commutative algebra 1
174
David Eisenbud, Commutative Algebra: With a View Toward Algebraic Geometry, Graduate Texts in Mathematics, vol. 150 (New York: Springer, 1995); Sergei Gelfand and Yuri Manin, Methods of Homological Algebra, 2nd ed. (New York: Springer, 2003); Peter T. Johnstone, Stone Spaces (Cambridge: Cambridge University Press, 1986).
coding AS literacy — Metalithikum IV
of observables with a unit, there is a naturally associated topological space, namely its geometric spectrum, such that each observable of the algebra becomes a continuous function on the spectrum (at least locally). The basic didactic of this principle when applied, for instance, to the case of smooth manifolds of states utilized in physics, is the following :2 smooth algebras of observables allow us to observe smooth geometric spaces, namely smooth manifolds, which are identified with the real-valued spectra of the geometric realization of these smooth algebras’ observables. Inversely, the observables are identified with real-valued differentiable functions of these smooth manifolds. We note that these identifications hold up only to isomorphism or equivalence of kind or form. In this manner, a measurement process of the observables of a physical theory can specify the geometric domain of its applicability up to an isomorphic mapping. Thus, the notion of isomorphism demarcates the geometric boundary of observability. II Group Actions and the Erlangen Program The central concept of Klein’s Erlangen program is expressed by the thesis that the objective content of a geometric theory is captured by the group of transformations of a space.3 Again, it is instructive here to think of the notion of a space, from a physical perspective and at least locally, as the geometric spectrum of a commutative algebra of observables. The crucial point of the Erlangen program is that transformation groups constitute an algebraic encoding of a criterion of equivalence for geometric objects. Moreover, a transformation group determines the notion of what is to be a meaningful property of a concrete geometric figure. Therefore, from the Erlangen perspective, a geometric figure may be conceived from an abstract algebraic viewpoint as a manifold acted upon transitively by a group of transformations. The decisive aspect of the criterion of equivalence that a transformation group furnishes is its use in characterizing kinds of geometric figures and not particular instances of these figures. This leads to the idea that geometry, in an abstract sense, refers to kinds of figures that are specified by the transformation group of the space. Each kind can have infinite instantiations; thus, the same geometric form may be manifested in many different ways or else assume multiple concrete realizations. This reveals an important ontological dimension of Klein’s program, since a transformation group of a space provides an efficient criterion to abstract a geometric kind from particular geometric instantiations, whereas the specific details of these instantiations, irrespective of their features as instances of a geometric kind, is irrelevant. In light 2 3
Elias Zafiris, “A Sheaf-Theoretic Topos Model of the Physical Continuum and Its Cohomological Observable Dynamics,” International Journal of General Systems 38, no. 1 (2009). Richard W. Sharpe, Differential Geometry: Cartan’s Generalization of Klein’s Erlangen Program (New York: Springer, 1997).
V The Nature of Local / Global Distinctions
175
of this, a geometry is specified by a group and its transitive action on a space, which remarkably can be presented in a purely algebraic way as a group homomorphism from the transformation group to the group of automorphisms of the underlying space. Conceptually speaking, the form of a geometric theory is encoded in the transitive action of a respective transformation group. Different particular geometric configurations are the same in form if, and only if, they share the same transformation group. In other words, the transitive group action provides a precise characterization regarding matters of geometric equivalence. Mathematically, the above thesis is expressed as the principle of transfer ence, or principle of isomorphism induced by a transitive group action on a space. A transfer of structure takes place by means of an isomorphism, providing different equivalent models of the same geometric theory. Philosophically, underneath lies an Aristotelian conception of space, according to which space is conceived as being matter without form. The form is being enacted by the action of a concrete transformation group. Still, more important, the space itself may be considered as the quotient of the transformation group over a closed subgroup of the former. A change in algebraic form, or else a change of the transformation group, signifies a change in geometry, in the sense that the equivalence criterion encoded in the group action is altered. Thus, moving from a group to a larger one amounts to a change in the resolution unit of figures, expressed as a relaxation of the geometric equivalence criterion involved in the procedure. In effect, the criterion of equivalence serves as a powerful classification principle for geometries in relation to group hierarchies. A crucial aspect of the Erlangen program is that it does not specify which underlying manifolds exist as spectra of corresponding observable algebras, it rather deals with the possible existence of geometric structures on these manifolds in relation to form-inducing transformation groups’ action upon them. This naturally leads to a bidirectional relation of dependent-variation between transformation groups and geometric structures on manifolds. This bidirectional relation conveys the information that two spaces cannot have different transformation groups without differing as geometric structures, whereas the converse is clearly false. III
Local Group Actions and Gauge Theory
From a physical point of view, geometry is synonymous with measurement — hence, closely related to observation, being in fact the result of it. Group actions can be actually thought of as particular acts of measurement. The general stance toward physical geometry implicated by the Erlangen program assumes that the geometric configuration of states of a physical system and the symmetry group of transformations of those states are regarded as being semantically equivalent via the transitive group action on the space of states.
176
coding AS literacy — Metalithikum IV
Modern gauge-invariant physical theories are being mathematically viewed as fiber bundle or Cartan geometries.4 Cartan managed to combine Klein’s group theoretical conception of geometry with Riemann’s infinitesimal metrical viewpoint.5 This has been achieved by the introduction of the fundamental notion of a variable connection, which can be made metric-compatible. A connection serves as a covariant derivative of the states establishing the concept of parallelism under transportation from the local to the global level, which is considered to be induced by a physical field. Locally, a connection plays the role of the field’s potential, whose observable effects are expressed by a homological tensorial magnitude called the curvature of the connection. The curvature is an observable playing the role of the field’s strength. In the setting of gauge theories the transformation group is being modeled locally on the fibers of a principal fiber bundle. Klein geometries are precisely the Cartan geometries whose connections have zero curvature. In physics, a geometric kind is represented by an equivalence class of state spaces, where a state space includes all possible potential states of a system. The state space is semantically equivalent with the group action space of a symmetry group, at least locally, in the sense that the symmetry group circumscribes the range of possible potential properties that a geometric kind can assume, like in the case of gauge theories. Being a member of a geometric kind, a physical entity can be potentially in any of the possible states locally, although after measurement it is actually in a particular one. This is called “local gauge freedom” and constitutes a concrete physical manifestation of the criterion of equivalence that a local gauge group furnishes in the case of gauge theories. The conceptual paradigm of gauge theories is very instructive because it convincingly demonstrates that the concept of geometric kinds incorporates the distinction between the potentially possible and the actual. The idea of a geometric kind is precisely articulated by the action of a local symmetry group, which pertains not to spatiotemporal but to qualitative features. The local symmetry group of a gauge theory circumscribes a set of potentially possible states locally and depicts a natural geometric kind via its action. The fiber-bundle formulation of gauge theories captures precisely the formation of geometric kinds under equivalence criteria constituted by actions of local symmetry groups. The base manifold of a fiber bundle equipped with a connectivity structure representing a gauge field plays the role of space-time. Note that space-time is not given a priori, but is an integral part of the existence of matter. It is the carrier of the geometry by which matter is transformed, thus perceived as a structural quality of the dynamic Anastasios Mallios, Modern Differential Geometry in Gauge Theories, vol. 1, Maxwell Fields (Boston: Birkhäuser, 2006); Anastasios Mallios, Modern Differential Geometry in Gauge Theories, vol. 2, Yang-Mills Fields (Boston: Birkhäuser, 2009). 5 Sharpe, Differential Geometry. 4
V The Nature of Local / Global Distinctions
177
field or dynamic connection between the fibers of the bundle, modeling in turn the local group action. Finally the laws of physics are revealed by the variation of matter, expressed by the dynamic field connection on the fiber space and observed through the curvature of the connection. IV The Advent of Quantum Theory The crucial distinguishing feature of quantum mechanics in relation to all classical theories is that the totality of all physical observables constitutes a global noncommutative algebra, and thus quantum observables are not theoretically compatible.6 This simply means that not all observables are simultaneously measurable with respect to a single universal global logical Boolean frame as is the case in all classical theories of physics.7 Thus a multiplicity of potential local Boolean frames exists, each one standing for a context of comeasurable observables. Technically speaking, each Boolean frame is a Boolean algebra of projection operators obtained by the simultaneous spectral resolution of a family of compatible observables, represented as self-adjoint operators. Such a family of compatible observables forms a commutative observable algebra whose idempotent elements (projections) constitute a logical Boolean frame. In this way, each local or partial Boolean frame signifies the local logical precondition predication space for the probabilistic evaluation of all the observables belonging to the corresponding commutative observable algebra. Thus, the manifestation of every single observed event in the quantum regime requires taking explicitly into account the specific local Boolean frame with respect to which it is contextualized. Since a single, unique, global Boolean frame does not exist, due to the noncommutativity of the totality of quantum observables, the necessity arises to consider all possible local Boolean frames and their interrelations. It is important to stress that the local / global distinction in quantum mechanics is of a topological nature and does not involve any preexisting set-theoretic space-time background of embedding events.8 Actually, by utilizing the Stone-Gelfand representation theorems for Boolean algebras and commutative observable algebras correspondingly, in the setting described above, only Garrett Birkhoff and John von Neumann, “The Logic of Quantum Mechanics,” Annals of Mathematics 37 (1936): 823–43; Gudrun Kalmbach, Orthomodular Lattices (London: Academic Press, 1983); Veeravalli S. Varadarajan, Geometry of Quantum Theory (New York: Springer, 1985); John von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton, NJ: Princeton University Press, 1955). 7 Elias Zafiris, “Boolean Coverings of Quantum Observable Structure: A Setting for an Abstract Differential Geometric Mechanism,” Journal of Geometry and Physics 50 (2004): 99–114; Elias Zafiris and Vassilios Karakostas, “A Categorial Semantic Representation of Quantum Event Structures,” Foundations of Physics 43 (2013). 8 Elias Zafiris, “Generalized Topological Covering Systems on Quantum Events Structures,” Journal of Physics A: Mathematical and General 39 (2006): 1485–505; Elias Zafiris, “Sheaf-Theoretic Representation of Quantum Measure Algebras,” Journal of Mathematical Physics 47 (2006): 92103. 6
178
coding AS literacy — Metalithikum IV
the notion of a local geometric spectrum becomes applicable as co-emergent with the specification and functional role of a local Boolean or commutative observable frame, respectively. The explicit consideration of all potentially possible local Boolean frames entirely covering the factual layer of quantum observable behavior provides a local logical / topological relativization or contextualization of global quantum event structures in local Boolean or commutative algebraic terms. This type of relativization should be best thought of as a process of sieving, or filtering, the factual content of the global quantum geometric spectrum with respect to covering families of partially compatible nested Boolean frames at various logical resolution scales. In categorytheoretic terminology, these covering families of local Boolean frames define covering sieves, which are used for the enunciation of an appropriate notion of topology (Grothendieck topology) with respect to which the local / global distinction is formally depicted.9 The independence of the local / global distinction implicated in quantum mechanics from any metrical spatiotemporal connotation, as has been the case in classical theories, cannot be overestimated. In the preceding section, where we discussed the conceptualization of physical geometry by means of gauge theory, the geometric fiber spaces are thought of as being soldered over the points of a metrical space-time manifold. If quantum phenomena are actually compatible with the principles of gauge geometry, requiring invariance under the local action of a gauge symmetry group, then the local / global distinction should be disassociated from its restricted metrical spatiotemporal semantic identification and reclaim its original logical / topological semantic role. For this purpose, what is actually required is an efficient method of localization of the observed geometric spectrum that does not depend on the existence of points. In this respect, the notion of a sheaf proves to be indispensable for point-free localization processes, and paves the way for a deeper understanding of quantum theory under a substantially broader gauge-theoretic perspective.10 A topological approach to quantum mechanics based on a conceptual and technical sheaf-theoretic framework has been presented recently in book form and should be consulted as a standard reference in what follows in this text.11 9
Elias Zafiris, “Quantum Observables Algebras and Abstract Differential Geometry: The Topos-Theoretic Dynamics of Diagrams of Commutative Algebraic Localizations,” International Journal of Theoretical Physics 46, no. 2 (2007); Elias Zafiris, “Boolean Information Sieves: A Local-to-Global Approach to Quantum Information,” International Journal of General Systems 39, no. 8 (2010). 10 Saunders MacLane and Ieke Moerdijk, Sheaves in Geometry and Logic: A First Introduction to Topos Theory (New York: Springer, 1992); Anastasios Mallios, Geometry of Vector Sheaves: An Axiomatic Approach to Differential Geometry (Dordrecht: Kluwer Academic Publishers, 1998); John L. Bell, “From Absolute to Local Mathematics,” Synthese 69 (1986); Anastasios Mallios, “On Localizing Topological Algebras,” Mathematics 341, no. 79 (2004). 11 Michael Epperson and Elias Zafiris, Foundations of Relational Realism: A Topological Approach to Quantum Mechanics and the Philosophy of Nature (Lanham, MD: Lexington Books, 2013).
V The Nature of Local / Global Distinctions
179
V
What Is a Sheaf?
From a physical viewpoint, a sheaf of observables or states constitutes the natural outcome of a complete bidirectional localization / globalization process. The notion of a sheaf is based solely on the topological form of local / global distinctions and is independent of any smooth, metrical space-time point-manifold substratum. The sheaf concept essentially expresses gluing conditions — namely, the way by which local structural algebraic information referring to observables or states can be amalgamated compatibly into global ones over a multiplicity of local covering domains of a global space. These local domains may be simply thought of in terms of open loci completely covering a topological space at different levels of spectral resolution. In the case of quantum theory, the local covering domains of a global quantum event space bear a logical semantics, since they can be actually identified as the spectra of local Boolean measurement frames corresponding to complete Boolean algebras of co-measurable quantum observables. In this manner, a sheaf may be thought of as a continuously variable algebraic information structure of observables or states, whose continuous variation is carried over all these local covering domains, such that certain pieces of local information may be glued together compatibly under extension from the local to the global. It is instructive to notice that in comparison to the notion of an algebraic structure of observables or states defined set-theoretically, a sheaf bears an additional intrinsic granulation or crystallization structure of the elements, called sections of the sheaf, specified by the grain of resolution of the covers. For example, in the case of open covers of a topological space the granulation structure is a partial order. In the general case, the granulation structure is thought of as a sieve whose nested holes or variable extent spectral horizons are comprised by the local covering domains. Heuristically, we think of the covers as measures of definability, and the elements of a sheaf restricted to a cover are exactly the members defined to the extent provided by this cover. One of the most interesting aspects of the general sheaf concept, which is certainly crucial for the understanding of quantum theory, is that it naturally leads to spectral models of space, which do not have a local structure defined by points. In contradistinction, the local structure is generated by the families of covers and reference to points is allowed only contextually — that is, only in relation to all potential covers containing a point. Therefore, the sheaf information conveyed at a pointevent of the spectrum is not of a punctual character as in classical physics, but of a germinal character — that is, it bears the semantics of an information seed. This is the case because it requires reference to equivalence classes or germs of all compatible local observable or state information with respect to all covers containing potentially this
180
coding AS literacy — Metalithikum IV
point-event if a measurement takes place. Technically, the information germ at a point-event of the spectrum is expressed by an inductive limit synthetic logical procedure carried over these covers.12 It needs to be emphasized that the sheaf concept leads to a physically different understanding of the notion of a spectrum space in comparison to the classical set-theoretic one. More precisely, the actualization of each point-event by a measurement is internally related to an observable or state information germ, which contextualizes it with respect to all compatible local covers. At the same time, it constitutes an objective refinement of the spectrum, and therefore alters it globally. In this way, the global spectrum is not fixed and predetermined as in the set- theoretic conceptualization, but it is continuously unfolding and refined by the actualization of new events, correlated historically with their logical antecedents via compatible information germs. The technical characterization of a sheaf is defined in two steps. The first step involves the functorial organization of the local covering domains’ infiltrated observable or state information, meaning that the requirement of compatibility under restriction or reduction from the global to the local level should be satisfied. This process produces a variable algebraic information structure with the prescribed global-to-local compatibility, called a presheaf. The second step involves two processes — namely, the functional localization of the organized information presented in terms of a presheaf, and then its eventual completion by means of gluing locally compatible information from the local to the global. In this manner, the notion of a sheaf incorporates all the necessary and sufficient conditions for the bidirectional compatibility of observable or state information under restriction or reduction from the global to the local, and inversely under extension or induction from the local to the global. VI The Program of “Relational Realism” Based on the fundamental concepts of sheaf theory, and in confluence with basic philosophical notions of Whitehead’s “process theory,”13 which proved to be of crucial significance in relation to quantum mechanics,14 the active research program of “relational realism” has emerged.15 This program has been developed systematically for a novel interpretation of quantum theory, and has been currently extended toward an approach to quantum gravity and the deep understanding of 12 Zafiris, “Generalized Topological Covering Systems on Quantum Events Structures”; Zafiris, “Boolean Information Sieves”; Zafiris and Karakostas, “A Categorial Semantic Representation of Quantum Event Structures.” 13 Alfred North Whitehead, Process and Reality: An Essay in Cosmology, ed. D. Griffin and D. Sherburne (New York: Free Press, 1978). 14 Michael Epperson, Quantum Mechanics and the Philosophy of Alfred North Whitehead (New York: Fordham University Press, 2004). 15 Epperson and Zafiris, Foundations of Relational Realism.
V The Nature of Local / Global Distinctions
181
topological order and topological states of matter from first principles.16 The major novelty of relational realism is the utilization of concepts, methods, and formal techniques from the mathematical fields of category theory, topos theory, and homological algebra for the consistent and nonparadoxical explanation of all peculiar characteristics of the quantum theoretic universe of discourse, together with the logical, physical, and philosophical grounding of all related notions.17 In particular, the objectives of the research program of relational realism concern the investigation of applicability of categorical and sheaf-theoretic ideas, together with their philosophical implications, in relation to the following perennial issues in quantum theory : I. The problem of a revised viable relational realist interpretation of quantum theory superseding the norms of classical realism; II. The reevaluation of the globally non-Boolean logical structures of events associated with quantum systems from a categorical and sheaf-theoretic standpoint together with their corresponding truth-value assignments; III. The explication of the structure of the part-whole relation in quantum systems by sheaf-theoretic means and its functional role in explaining entanglement correlations being in focus in the domain of quantum information science; IV. The understanding of the emergence of classicality from the fundamental quantum description of systems via processes of decoherence modeled categorically; V. The elucidation of the notions of global relative topological and geometric phases and their conceptual significance for the explanation of topological order and topological states of matter with applications in solid state, condensed matter physics, and quantum computation; VI. The application of the algebraic-topological scheme of “sheaftheoretic localization” and the gauge field-theoretic method of “extensive connection” for the formulation of background- independent quantum dynamic processes and the study of the issue of singularities from this conceptual standpoint. In a nutshell, the adjective “relational” in the characterization of the program of relational realism is to be thought of technically not in its 16 Anastasios Mallios and Elias Zafiris, “The Homological Kahler-De Rham Differential Mechanism: I. Application in General Theory of Relativity,” Advances in Mathematical Physics (2011), doi:10.1155/2011/191083; A. Mallios and E. Zafiris, “The Homological Kahler-De Rham Differential Mechanism: II. Sheaf-Theoretic Localization of Quantum Dynamics,” Advances in Mathematical Physics (2011), doi:10.1155/2011/18980. 17 Steve Awodey, Category Theory, 2nd ed. (Oxford: Oxford University Press, 2010); Bell, “From Absolute to Local Mathematics”; Robert Goldblatt, Topoi: The Categorial Analysis of Logic, rev. 2nd ed. (1984; repr., New York: Dover, 2006); MacLane and Moerdijk, Sheaves in Geometry and Logic; Eisenbud, Commutative Algebra; Gelfand and Manin, Methods of Homological Algebra.
182
coding AS literacy — Metalithikum IV
usual set-theoretic connotation of a scheme of relations, but in terms of a theoretical and algebraic topological prism of analysis. According to this, the emphasis is on the formation of bidirectional functorial bridges, called adjunctions,18 between different categories as well as on the establishment of partial or local structural congruence relations between different levels of categorical structure in a natural manner without the intervention of ad hoc choices and artificial conventions. Thus, the target of this analysis is the formation of natural bridges between structural relations and the efficient transfer of difficult problems pertaining to some categorical level into another level where they can be resolved. VII Quantum Mechanics as a Non-Spatiotemporal Gauge Theory A natural question emerging in the sheaf-theoretic setting of understanding quantum event spectra is if quantum phenomena are compatible with the principles of gauge geometry, where the base space of the involved fiber-bundles should not be required to be a space-time point manifold anymore. We remind here that the basic idea of gauge geometry, represented by a fiber bundle geometric structure, requires invariance under the local action of a gauge symmetry group. It is an astonishing realization that the sheaf and fiber-bundle perspectives can actually be made functorially equivalent under the satisfaction of the mild conditions of the Serre-Swan theorem.19 According to this theorem, finitely generated projective modules, and thus locally free sheaves of modules called vector sheaves of states, defined over commutative observable algebra sheaves, are equivalent to vector bundles over a paracompact and Hausdorff topological base space. Notwithstanding this functorial equivalence, sheaves in general can be displayed as fibrations more rich and flexible than fiber bundles, since instead of the local product structure involved in the definition of a fiber bundle, the much weaker condition of a local homeomorphism is required (for the case of topological spaces). From the other side, the set of sections of any vector bundle encoding the physical information of states always forms a vector sheaf of germs. If we disassociate the semantics of gauge geometry from the usual metrical space-time point manifold base, then general paracompact and Hausdorff topological spaces may be utilized as base spaces of the associated bundles’ geometry. It is instructive to think of these base spaces as topological spaces of control variables. We emphasize that a base topological space of 18 Elias Zafiris, “Rosen’s Modeling Relations via Categorical Adjunctions,” International Journal of General Systems 41, no. 5 (2012). 19 Mallios, Geometry of Vector Sheaves; Mallios, Modern Differential Geometry in Gauge Theories, vol. 1.
V The Nature of Local / Global Distinctions
183
control variables serves only as the carrier of a bundle geometric spectrum, and in particular it incorporates the local / global distinction required for the sheaf-theoretic interpretation of this spectrum. In view of a gauge-theoretic conception of quantum geometric spectra, the following aspects acquire particular significance. First, in the case of fiberbundle gauge geometry, the fiber over each point of a base space represents the local gauge freedom in the local definition of a physical information attribute. Thus, the vector space over each point of a base topological space of a vector bundle represents the local gauge freedom in the local definition of a state. Second, due to equivalence of vector bundles with vector sheaves of germs a sheaf-theoretic interpretation of the bundle geometric spectrum should be adopted, and in particular state or observable information at a point should be evaluated in terms of germs and not in a punctual way. Third, the sheaf-theoretic interpretation of gauge geometry introduces a subtle distinction between local and global physical information carriers. How are these aspects reflected in the case of quantum mechanics? In the foundations of quantum mechanics, the widespread opinion is that phases are not important because a state is not actually described by a vector but by a ray or a projection operator so that it can always be removed by a suitable transformation. Moreover, due to the standard probability interpretation of a state vector at a single moment in time, physical significance has been assigned only to the modulus or magnitude of a state vector, whereas its phase has been ignored. Although it is true that the notion of phase can always be gauged away locally, this is not the case globally. Actually all typical global quantum information carriers are relative phases obtained by interference phenomena. These phenomena involve various splitting and recombination processes of beams whose global coherence is measured precisely by some relative phase difference. Generally, a relative phase can be thought of as a global physical attribute measuring the coherence between two distinct histories of events sharing a common initial and final point in the base space of control variables parameterizing the dynamic evolution of a quantum system. We note that due to the functional dependence of the dynamic evolution on the control variables, the state of a quantum system is parameterized implicitly by a temporal parameter through the control variables. The short discussion above points to the essential non-spatiotemporal gauge nature of quantum geometry. In the simplest case, we may consider the sheaf-theoretic localization of a complex Hilbert space of states over its complex projective Hilbert space in order to obtain a line bundle of states. The line-bundle structure expresses the fact that a quantum state is defined locally up to an arbitrary complex phase, and thus the unitary group of complex phases plays the role of a local symmetry group of a gauge geometric spectrum constituted sheaf-theoretically. The sheaf-theoretic interpretation of the spectrum implicates the existence of both local and global information carriers. A general
184
coding AS literacy — Metalithikum IV
mechanism of a generation of global information carriers in the form of global phase factors of a geometric or topological origin has been initially formulated by Berry and modeled in line-bundle theoretic terms by Simon.20 It has been demonstrated that a quantum system undergoing a slowly evolving (adiabatic) cyclic evolution retains a “global memory” of its motion after coming back to its original physical state. This “memory” is expressed by means of a complex phase factor in the state of the system, called Berry’s phase or the geometric phase. The cyclic evolution, which can be thought of as a periodicity property of the state of a quantum system, is driven by a Hamiltonian bearing an implicit time dependence through a base topological space of control variables. Due to the implicit temporal dependence imposed by the time parameterization of a closed path in the environmental parameters of the control space, this global geometric phase factor is thought of as “memory” of the motion since it encodes the global geometric or topological features of the control space. It should be stressed that a “global memory” is topological or geometric because it depends solely on the topology or geometry of the control space pathway along which the state is transported. It does not depend on the temporal metric duration of the historical evolution nor on the particular form of the dynamics that is applied to the system. VIII
Quantum Geometric Spectra
The fixed space-time point manifold independent approach to physical geometry proposed in this paper implies that quantum geometric spectra can be adequately understood only from the prism of a sheaf-theoretic interpretation, which fully utilizes the non-spatiotemporal gauge structure of quantum mechanics. According to this interpretation, global information carriers of quantum systems are encoded either as “global memories” of states during processes of cyclic evolution, or as relative phase differences between distinct histories of events parameterized by the same initial and final points with respect to a base topological space of control variables, through which the temporal evolution is implicitly defined. The particular significance of the concept of global topological and geometric phases, from the viewpoint of the sheaftheoretic interpretation scheme, is that they mark a distinctive point in the history of quantum theory, where for the first time the significance of global information carriers as distinct entities from local ones is realized and made explicit through precise physical models, which have found 20 M. V. Berry, “Quantal Phase Factors Accompanying Adiabatic Changes,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 392, no. 1802 (March 8, 1984): 45; Barry Simon, “Holonomy, the Quantum Adiabatic Theorem, and Berry’s Phase,” Physical Review Letters 51, no. 24 (December 12, 1983): 2167.
V The Nature of Local / Global Distinctions
185
concrete experimental applications, like the quantum Hall effect and topological states of matter—for example, topological insulators.21 In particular, global relative phase factors in the gauge-theoretic setting of sheaves are obtained via an integration procedure of local gauge potentials over a contour, represented by a closed path or loop on a base space of control variables, which is implicitly parameterized continuously by an external temporal parameter. This nontrivial geometric or topological information of global significance is measured in terms of global holonomy phase factors via the procedure of lifting closed paths from the base space to the states or observables defined over it according to some parallel transport constraint (like the adiabatic one), technically called a connection. Due to the implicit time dependence parameterizing this procedure, if we continuously trace a loop on the base space, then this loop can be lifted to the implicitly evolving states or observables, which are represented as sections of a vector sheaf or an observable algebra sheaf respectively, over the base space. The particular global transformation undergone, for instance, by a state when it is parallel-transported along a closed curve on the base space is called the holonomy of the connection and is represented by a unitary group element. Thus, in this case the holonomy describes the global state transformation induced by cyclic changes in the controlling variables. We conclude that although quantum geometric spectra may be locally probed in terms of observables, represented as self-adjoint operators, and their corresponding probabilities of events with respect to an orthonormal basis of eigenstates comprising a Boolean logical frame, so that local phases do not have any measurable significance, globally it is precisely the measurable relative phase differences that maintain the quantum coherence information. A global phase factor is not represented by any self-adjoint operator, but it is represented by means of a holonomy unitary group element, to be thought of as the accumulated “memory” due to periodicity with respect to an environment of control variables. The explicitly different nature of physical information carriers as we make the transition from the local to the global level of describing quantum geometric spectra, and its inverse, requires an adequate interpretation scheme where this distinction is appropriately modeled. The main thesis of this work is that a natural approach to the physical geometry of the quantum regime should be carried out by utilizing the theory of vector sheaves of states equipped with a connection. Thus in this manner quantum geometric spectra admit a sheaf-theoretic interpretation in combination with the non-spatiotemporal gauge structure of quantum mechanics. 21 M. Z. Hasan and C. L. Kane, “Colloquium: Topological Insulators,” Reviews of Modern Physics 82 (2010): 3045 ; M. Z. Hasan and J. E. Moore, “Three-Dimensional Topological Insulators,” Annual Review of Condensed Matter Physics 2 (2011); Y. Zhang, Y.-W. Tan, H. L. Stormer, and P. Kim, “Experimental Observation of the Quantum Hall Effect and Berry’s Phase in Grapheme,” Nature 438 (2005): 201.
VI Self-Organizing Maps and Learning Vector Quantization for Complex Data Barbara Hammer I Introduction 191 — II Fundamental principles 194 · II.I Unsupervised Prototype-Based Techniques 194 · II.II Supervised Prototype-Based Schemes 197 — III Metric Learning 199 — IV Relational and Kernel Mapping 202 — V Recursive models 208 — VI Conclusions 211 — Acknowledgments 211
Barbara Hammer received her PhD in Computer Science in 1995 and her venia legendi in Computer Science in 2003, both from Osnabrück University, Germany. From 2000 to 2004, she was chair of the junior research group “Learning with Neural Methods on Structured Data” at Osnabrück University before accepting an offer as professor for Theoretical Computer Science at Clausthal University of Technology, Germany, in 2004. Since 2010, she has held a professorship for Theoretical Computer Science for Cognitive Systems at the CITEC cluster of excellence at Bielefeld University, Germany. Several research stays have taken her to Italy, the United Kingdom, India, France, the Netherlands, and the United States. Her areas of expertise include hybrid systems, self-organizing maps, clustering, and recurrent networks, as well as applications in bioinformatics, industrial process monitoring, and cognitive science. She chaired the IEEE CIS Technical Committee on Data Mining in 2013/14, and she is chair of the Fachgruppe Neural Networks of the GI and vice chair of the GNNs. She has published more than two hundred contributions to international conferences / journals, and she is coauthor / editor of four books.
By modeling data in terms of prototypical representatives, Self-Organizing Maps (SOM) and Learning Vector Quantization (LVQ) offer very intuitive paradigms in the field of data analysis and machine learning. Both classical LVQ and SOM rely on the Euclidean distance measure as a crucial ingredient to decompose the data space into clusters or classes. Hence the technology is not applicable if data are not consistent with the standard Euclidean metric. In this contribution, we will discuss three important paradigms of how to extend prototype-based techniques to more general data structures: 1. Relevance and metric learning to enhance the technology by an adaptive metric that takes into
account scaling and correlation of features. 2. Relational approaches that extend the technology to data characterized in terms of pairwise dissimilarities only. 3. Recursive approaches that extend the technology to time-series representations. Thereby, we will take a principled approach that treats LVQ and SOM as models that essentially minimize a cost function that depends on pairwise distances of prototypes and data points. This enables us to transfer the approaches in a very canonical way to more general data. We will not discuss any application in this contribution, but rather focus on the mathematical principles behind the proposed paradigms,
and refer to the corresponding publications for further reading.
I Introduction
The amount of electronic information is increasing dramatically, turning the issue of big-data analysis into a major challenge of modern society. In this context, Kohonen’s ingenious Self-Organizing Maps (SOM) or their supervised counterpart, learning vector quantization (LVQ), have lost none of their attractiveness as intuitive data-inspection tools with a very light-weight computational burden: they allow humans to rapidly access large volumes of high-dimensional data.1 The technology is characterized by very simple and intuitive training techniques that are typically linear-time and constant-memory approaches, and that can be easily turned into online techniques — central requirements for methods that are suitable for big-data analysis. Application scenarios range from robotics and telecommunication up to web and financial data mining.2 Classical SOM and LVQ are based on heuristic grounds, such that their mathematical treatment and extensions to more general metrics or data structures are difficult.3 Modern variants are typically based on principled mathematical methods: many approaches are accompanied by an explicit cost function, and its numeric optimization yields learning rules that closely resemble the ones proposed by Kohonen.4 We will solely rely on this formalism in the following and demonstrate the power of such an abstract, reduced modeling: extensions to more general metrics or data formats are very clear in this abstract formalization. One major drawback of the original SOM and LVQ algorithms consists in their limitation to the standard Euclidean metric. Prototypes represent isotropic regions, and cluster boundaries are formed by hyperplanes that are perpendicular to the lines connecting the prototypes. This setting is often not appropriate for practical applications: it relies on the assumption that every input dimension carries the same relevance;
1 T. Kohonen, ed., Self-Organizing Maps, 3rd ed. (New York: Springer Verlag, 2001).
2 Ibid.
3 J. Fort, “SOM’s Mathematics,” Neural Networks 19, nos. 6–7 (2006): 812–16; M. Cottrell, J. Fort, and G. Pagès, “Two or Three Things That We Know about the Kohonen Algorithm,” in ESANN’1994 Proceedings: 2nd European Symposium on Artificial Neural Networks (Brussels: D-Facto, 1994), 235–44.
4 T. Heskes, “Self-Organizing Maps, Vector Quantization, and Mixture Modeling,” IEEE Transactions on Neural Networks 12, no. 6 (2001): 1299–305; M. Biehl, B. Hammer, P. Schneider, and T. Villmann, “Metric Learning for Prototype-Based Classification,” in Innovations in Neural Information Paradigms and Applications, ed. M. Bianchini, M. Maggini, F. Scarselli, and L. C. Jain (Berlin: Springer, 2009), 183–99.
further, correlations of the input dimensions are not taken into account. This fact is particularly problematic if high-dimensional data are dealt with: usually data dimensions are noisy, and this noise can accumulate such that the overall representation becomes meaningless. Because of these aspects, generalizations of LVQ and SOM to more complex metric schemes have been proposed, one particularly relevant approach being the generalization of LVQ to adaptive quadratic forms.5 We will introduce this intuitive and elegant scheme for general matrix learning as a first approach of extending prototype-based techniques to data-driven settings beyond the standard Euclidean metric. SOM and LVQ and their counterparts have been proposed to process vectors in a fixed feature-vector space; also, a substitution of the standard Euclidean metric by a general quadratic form still relies on the notion of a finite-dimensional real vector space for data representation. Often, electronic data have a dedicated format that cannot easily be converted to vectors of fixed dimensionality: biological sequence data, biological networks, scientific texts, functional data such as spectra, data incorporating temporal dependencies such as EEG, time series, etc., are usually not represented in terms of fixed-dimensional vectors; rather, they are phrased in terms of data structures such as sequences, tree structures, or graphs. In many cases, experts access such data by means of dedicated comparison measures: for example, BLAST or FASTA for biological sequences, alignment techniques for biological networks, dynamic time warping for time series, etc. From an abstract point of view, dissimilarity measures that are suited for the pairwise comparison of abstract data types are used, whereby these dissimilarity measures are far richer than a simple quadratic form. As a second major paradigm, we will therefore discuss how to extend SOM and LVQ to more general data that are described by pairwise similarities or dissimilarities only. Already more than ten years ago, Kohonen presented the so-called median SOM:6 instead of mean prototype positions in a real vector space, prototype locations are restricted to data positions. The generalized median serves as a computational vehicle to adapt such restricted prototypes according to given dissimilarity data. This principle can be substantiated by a mathematical derivation from a cost function such that convergence of the technique can be proved.7 The positional restrictions of prototypes to data points, however, can lead to suboptimal representations as compared to the capabilities of continuous updates that are possible in Euclidean space. As an alternative, two principled
5 P. Schneider, M. Biehl, and B. Hammer, “Adaptive Relevance Matrices in Learning Vector Quantization,” Neural Computation 21 (2009): 3532–61.
6 T. Kohonen and P. Somervuo, “How to Make Large Self-Organizing Maps for Nonvectorial Data,” Neural Networks 15 (2002): 945–52.
7 M. Cottrell, B. Hammer, A. Hasenfuss, and T. Villmann, “Batch and Median Neural Gas,” Neural Networks 19 (2006): 762–71.
approaches have been proposed: provided dissimilarity data can be linked to a proper kernel, kernel variants of SOM and LVQ can be used.8 Provided dissimilarities are strictly non-Euclidean, so-called relational approaches offer an alternative by means of an implicit pseudo-Euclidean embedding of the data.9 In both cases, prototypes are represented implicitly by means of a weighting scheme, and adaptation takes place based on pairwise dissimilarities of the data only. This principle has already been used in the context of fuzzy clustering; in the past years, it has been successfully integrated into topographic maps such as SOM, neural gas, the generative topographic mapping, and LVQ schemes.10 We will have a look at so-called median and relational approaches of prototype-based techniques as a second major principle to address general data structures. Relational approaches extend prototype-based methods to any type of data that can be represented by pairwise relations. Many complex data structures are of a specific nature such that a third principle can be used: recursive data structures allow a decomposition into basic constituents and their relations. SOM techniques can follow this decomposition: they process each constituent separately, thereby utilizing the context imposed by the structure. This method is particularly convenient if the considered data structures, such as sequences or trees, possess a recursive nature. In this case, a natural order in which the single constituents should be visited is given by the natural order within the structure. For supervised learning scenarios the paradigm of recursive processing of structured data is well established, covered by the field of recursive or graph networks, with applications ranging from logic and natural-language parsing up to bioinformatics and chemistry.11 Early unsupervised recursive models, such as the temporal Kohonen map or the recurrent SOM, include the biologically plausible dynamics of leaky integrators. Combinations of leaky integrators with additional features can increase the capacity of the models, as demonstrated in further proposals. Later, more general recurrences with richer dynamics have
8 See e.g. H. Yin, “On the Equivalence between Kernel Self-Organising Maps and Self-Organising Mixture Density Networks,” Neural Networks 19, nos. 6–7 (2006): 780–84; and B. Hammer, D. Hofmann, F.-M. Schleif, and X. Zhu, “Learning Vector Quantization for (Dis-)Similarities,” Neurocomputing 131 (2014): 43–51.
9 B. Hammer and A. Hasenfuss, “Topographic Mapping of Large Dissimilarity Datasets,” Neural Computation 22, no. 9 (2010): 2229–84; Hammer et al., “Learning Vector Quantization for (Dis-)Similarities.”
10 Hammer and Hasenfuss, “Topographic Mapping of Large Dissimilarity Datasets”; A. Gisbrecht, B. Mokbel, and B. Hammer, “Relational Generative Topographic Map,” in ESANN’10, ed. M. Verleysen (Evere: D-Side, 2010), 277–82; Hammer et al., “Learning Vector Quantization for (Dis-)Similarities.”
11 B. Hammer, A. Micheli, and A. Sperduti, “Adaptive Contextual Processing of Structured Data by Recursive Neural Networks: A Survey of Computational Properties,” in Perspectives of Neural-Symbolic Integration, ed. B. Hammer and P. Hitzler, vol. 77, Studies in Computational Intelligence (Berlin: Springer, 2007), 67–94; B. Hammer and B. Jain, “Neural Methods for Non-standard Data,” in ESANN’2004, ed. M. Verleysen (Evere: D-Side, 2004), 281–92.
been proposed.12 These models transcend the simple local recurrence of leaky integrators and can represent much richer dynamic behavior. In this contribution, we will have a look at general recursive extensions of SOMs to deal with time-series data as a third approach. First, we introduce the fundamentals of SOM and LVQ and, in particular, define corresponding cost functions. Afterward, we give an overview of three major paradigms to extend SOM and LVQ to more general data structures using (i) metric learning, (ii) relational approaches, and (iii) recursive approaches. We conclude with a discussion of open questions and challenges.
II Fundamental principles
Prototype-based approaches represent data vectors x ∈ ℝ^n by means of prototypes w_1, …, w_k ∈ ℝ^n based on the standard squared Euclidean distance

d(x, w_i) = ‖x − w_i‖^2   [1]

One fundamental principle underlying all methods, supervised and unsupervised, is given by the decomposition of the data space into receptive fields as induced by the prototypes: the receptive field of prototype w_i is determined by the characteristic function

χ_i(x) = 1 if d(x, w_i) ≤ d(x, w_j) for all j, and χ_i(x) = 0 otherwise   [2]
This decomposition constitutes the essential part of every mapping prescription induced by a prototype-based technique. Note that this prescription is independent of the choice of the metric d, which could be any real-valued measure in this setting. The question that is addressed by the separate learning techniques is how to determine suitable prototype positions given a set of training data.
II.I Unsupervised Prototype-Based Techniques
For unsupervised learning, a finite set of data points x_1, …, x_N is given, which should be represented as accurately as possible. The quantization error

E_qe = 1/2 Σ_{i,j} χ_i(x_j) d(x_j, w_i)   [3]

offers one quality measure for the representational capability of the prototypes for the data x_j, possibly the most popular one: it simply averages
12 B. Hammer, A. Micheli, A. Sperduti, and M. Strickert, “Recursive Self-Organizing Network Models,” Neural Networks 17, nos. 8–9 (2004): 1061–86; B. Hammer, A. Micheli, A. Sperduti, and M. Strickert, “A General Framework for Unsupervised Processing of Structured Data,” Neurocomputing 57 (2004): 3–35; G. d. A. Barreto, A. F. Araújo, and S. C. Kremer, “A Taxonomy for Spatiotemporal Connectionist Networks Revisited: The Unsupervised Case,” Neural Computation 15, no. 6 (2003): 1255–320.
over the squared distance of prototypes and data in their receptive field. Popular learning schemes such as k-means clustering or vector quantization directly aim at its optimization. Statistical counterparts are given by Gaussian mixture models of the data and corresponding training.13 Although the quantization error constitutes one of the most popular measures to evaluate unsupervised clustering, it is often not satisfactory to directly derive learning schemes based thereon: it suffers from numerical problems due to multiple local optima. In addition, further functionalities such as a neighborhood structure of the prototypes that enables browsing and data visualization are not offered. Both problems are addressed by topographic mapping. SOM: The popular Self-Organizing Map imposes a fixed predefined neighborhood on the prototypes defined by a regular lattice topology, typically a two-dimensional lattice in Euclidean or hyperbolic space.14 The goal of learning is not only to represent data in terms of prototypes in the sense of a minimum least-squares match as specified in [3], but to do so in such a way that neighboring prototypes in the lattice represent neighboring receptive fields in the data space. The original SOM does not possess a cost function in the continuous case, and its mathematical investigation is quite demanding;15 a slight variation, however, does, corresponding to the costs

E_SOM = 1/2 Σ_{i,j} χ*_i(x_j) Σ_k exp(−nd(i,k)/σ^2) d(x_j, w_k)   [4]
Here, nd(i,j) refers to the a priori fixed neighborhood structure of the prototypes, e.g. their graph distance in the predefined low-dimensional lattice. The characteristic function of the receptive fields χ*_i(x_j), unlike [2], is measured via the averaged distances Σ_k exp(−nd(i,k)/σ^2) d(x_j, w_k); a slight variation as compared to χ_i, which does not change the learning much in practice, and for this reason is often simply approximated by χ_i. This cost function constitutes a very intuitive extension of the quantization error that takes the neighborhood structure into account: not only should the distance of a data point to its closest prototype be minimized, but the neighboring prototypes are also taken into consideration to allow for a topology-preserving mapping of the lattice structure to the data space. Learning rules can directly be derived from these costs based on general principles: online adaptation iteratively adapts the winning prototype and its neighborhood toward a given data point; updates result from this
13 C. Bishop, Pattern Recognition and Machine Learning (Berlin: Springer, 2007).
14 Kohonen, Self-Organizing Maps; J. Ontrup and H. Ritter, “Large-Scale Data Exploration with the Hierarchically Growing Hyperbolic SOM,” Neural Networks 19, nos. 6–7 (2006): 751–61.
15 See e.g. Heskes, “Self-Organizing Maps”; Kohonen, Self-Organizing Maps; Cottrell et al., “Two or Three Things That We Know about the Kohonen Algorithm.”
cost function by means of a stochastic gradient. The following steps are iterated until convergence:

compute I := argmax_i χ*_i(x_j) for a given x_j   [5]
adapt w_k := w_k + η · exp(−nd(I,k)) · (x_j − w_k) for all w_k   [6]
where η > 0 is the step size. Online adaptation has the benefit that it can be used in scenarios with streaming data access only, and it also allows for lifelong learning. On the contrary, batch adaptation takes into account all data at once. It iterates the following two computations

compute χ*_i(x_j) for all i   [7]
adapt w_k := ( Σ_{i,j} χ*_i(x_j) · exp(−nd(i,k)) · x_j ) / ( Σ_{i,j} χ*_i(x_j) · exp(−nd(i,k)) )   [8]
that can directly be derived from the costs by minimizing with respect to prototype locations or data assignments to receptive fields, respectively. In statistics, a very similar approach is well known under the umbrella of expectation maximization (EM) for an optimization of costs in the context of hidden variables. The deterministic counterpart yields the update rules as indicated above. Both techniques, batch and online adaptation, converge to a (possibly local) minimum of SOM’s costs in a mathematically well-defined sense. The convergence of batch adaptation is of quadratic order, while online adaptation amounts to linear convergence. This, however, has the consequence that a good initialization is necessary for batch optimization to avoid topological mismatches. Typically, principal component analysis is used. Online learning, on the other hand, is usually more robust, but it pays the price of a slower convergence and indeterminism of the output; if SOM is used as an interactive display of data for a practitioner, indeterminism is usually an unwanted effect, such that batch learning is usually preferred in this context. There exists a large variety of alternative prototype-based data representation schemes, such as generative topographic mapping (GTM), which can be seen as a statistical counterpart of SOM,16 or neural gas, which, unlike SOM, does not rely on a predefined lattice structure.17 Rather, it infers a possibly irregular optimum lattice while training. From an abstract point of view, all these techniques can be formalized as methods that optimize a cost function that depends on pairwise distances of data points and prototypes. The methods differ in the form of the cost function and the exact way in which it is optimized, the latter ranging from online stochastic gradient and batch adaptation up to a statistical counterpart, an EM approach for GTM. Due to this fact, the following extensions to complex data structures can be transferred to all these methods.
16 C. M. Bishop and C. K. I. Williams, “GTM: The Generative Topographic Mapping,” Neural Computation 10 (1998): 215–34.
17 T. Martinetz, S. Berkovich, and K. Schulten, “‘Neural-Gas’ Network for Vector Quantization and Its Application to Time-Series Prediction,” IEEE Transactions on Neural Networks 4, no. 4 (1993): 558–69.
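To make the batch scheme [7]–[8] concrete, here is a minimal NumPy sketch of batch SOM on a one-dimensional lattice. It is our own illustration, not code from the cited publications; the function name, the lattice choice, and the hyperparameters are assumptions.

```python
import numpy as np

def batch_som(X, k=10, sigma=1.0, iters=50, seed=0):
    """Minimal batch SOM on a 1-D lattice, following eqs. [7]-[8]."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].astype(float)  # init prototypes
    grid = np.arange(k)                                  # lattice positions
    nd = np.abs(grid[:, None] - grid[None, :])           # lattice distances nd(i,k)
    for _ in range(iters):
        # winner assignment [7]: chi*_i(x_j) approximated by the closest prototype
        d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
        winners = d.argmin(axis=1)
        # neighborhood weights exp(-nd(i,k)/sigma^2) per data point
        h = np.exp(-nd[winners] / sigma ** 2)            # shape (N, k)
        # batch update [8]: neighborhood-weighted mean of the data
        W = (h.T @ X) / h.sum(axis=0)[:, None]
    return W

X = np.random.default_rng(1).normal(size=(200, 2))
print(batch_som(X).shape)  # (10, 2)
```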
II.II Supervised Prototype-Based Schemes
Learning vector quantization (LVQ) as introduced by Kohonen aims at learning a classification of given example data.18 Assume labeled data points (x_i, y_i) ∈ ℝ^n × {1, …, C} are given as the training set. Then LVQ is characterized by prototypes w_i ∈ ℝ^n and their labels c(w_i) ∈ {1, …, C}. These induce a classification given by the label of the prototype in whose receptive field a point falls:

c(x) := c(w_i) such that w_i = argmin_{w_j} d(x, w_j)   [9]
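In code, the crisp classification [9] is a nearest-prototype lookup; the following sketch (names are ours) assumes a prototype matrix W with one label per row:

```python
import numpy as np

def classify(x, W, c):
    """Nearest-prototype classification, eq. [9]."""
    d = ((W - x) ** 2).sum(axis=1)   # squared Euclidean distances d(x, w_i)
    return c[d.argmin()]             # label of the closest prototype

W = np.array([[0.0, 0.0], [3.0, 3.0]])
c = np.array([0, 1])
print(classify(np.array([2.5, 2.9]), W, c))  # 1
```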
The goal of learning is to minimize the classification error:

| { x_i | c(x_i) ≠ y_i } |   [10]
Its direct optimization, however, is NP-hard, such that suitable approximations have to be found. Basic LVQ schemes rely on heuristics such as Hebbian learning. LVQ1, for example, consists in an iteration of the following updates, given a data point x_i:

w_j := w_j + η · (x_i − w_j) if c(w_j) = y_i
w_j := w_j − η · (x_i − w_j) if c(w_j) ≠ y_i   [11]
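A minimal training loop implementing [11] for the closest prototype might look as follows; this is our own sketch, with illustrative hyperparameters:

```python
import numpy as np

def lvq1(X, y, W, c, eta=0.05, epochs=10, seed=0):
    """LVQ1, eq. [11]: attract the closest prototype if its label matches
    the data point, repel it otherwise."""
    rng = np.random.default_rng(seed)
    W = W.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = ((W - X[i]) ** 2).sum(axis=1).argmin()   # winning prototype
            sign = 1.0 if c[j] == y[i] else -1.0
            W[j] += sign * eta * (X[i] - W[j])
    return W
```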
Interestingly, this simple learning rule leads to a quite efficient algorithm that displays remarkable results. Kohonen proposed a few alternatives that account for a faster convergence or better adaptation of the decision boundaries. One popular alternative, for example, is given by LVQ2.1, which adapts the two closest prototypes in every step, or optimized LVQ, which adjusts the learning rates to obtain faster convergence. These heuristics, however, similar to SOM, are not directly derived from a cost function. One can show that this is impossible for LVQ1, while LVQ2.1 can be accompanied by a cost function; the latter, however, is unbounded and hence prone to numerical problems.19
GLVQ: Several researchers have proposed alternative LVQ schemes that are directly derived from an appropriate cost function, such that convergence and learning dynamics are explicitly stated in terms of the objective. We will focus on the approach of “Adaptive Relevance Matrices in Learning Vector Quantization” and generalizations thereof, since it provides
18 Kohonen, Self-Organizing Maps. 19 Biehl et al. “Metric Learning for Prototype-Based Classification.”
a very convenient extension of LVQ2.1 with typical characteristics.20 The cost function underlying generalized LVQ (GLVQ) has the form

E_GLVQ = 1/2 Σ_{i,j,k} χ^{y_i}_j(x_i) · χ^{¬y_i}_k(x_i) · Φ( (d(x_i, w_j) − d(x_i, w_k)) / (d(x_i, w_j) + d(x_i, w_k)) )   [12]
where Φ : ℝ → ℝ is a strictly monotonically increasing function such as the identity or the logistic function. χ^y_j refers to the receptive field masked by prototypes that are labeled y only. Similarly, χ^{¬y}_k takes into account only those prototypes that do not carry the label y. This cost function involves the same term as LVQ2.1, the difference d(x_i, w_j) − d(x_i, w_k), which is negative iff the classification of x_i is correct, and which is the smaller the larger the difference of the distance to the closest correct prototype versus the closest wrong prototype. Unlike LVQ2.1, this cost term is scaled by means of the denominator such that summands in (−1, 1) arise. These are possibly subject to a further nonlinear transformation Φ. In effect, the sum approximates the number of misclassifications, since exactly those summands that correspond to a misclassification contribute nonnegative costs. It has been shown in “Adaptive Relevance Matrices in Learning Vector Quantization” that the definition [12] constitutes a valid cost function also in the case of a continuous data distribution. The update rules become

w_j := w_j + η · Φ′ · µ^+(x_i) · ∂d(x_i, w_j)/∂w_j   [13]
w_k := w_k − η · Φ′ · µ^−(x_i) · ∂d(x_i, w_k)/∂w_k   [14]
where w_j is the closest prototype with the same label as x_i and w_k the closest prototype with a different label than x_i. Φ′ is evaluated at (d(x_i, w_j) − d(x_i, w_k))/(d(x_i, w_j) + d(x_i, w_k)), the factor µ^+(x_i) equals 2d(x_i, w_k)/(d(x_i, w_j) + d(x_i, w_k))^2, the factor µ^−(x_i) equals 2d(x_i, w_j)/(d(x_i, w_j) + d(x_i, w_k))^2, and the derivative of the squared Euclidean metric is

∂d(x, w)/∂w = −2(x − w).   [15]
These updates are actually very similar to the standard LVQ2.1 updates, but incorporate a suitable scaling of the Hebbian and anti-Hebbian update terms, which accounts for the numeric stability of the learning rules. Typically, batch updates are not possible in closed form due to the complexity of these costs (incorporating fractions), such that numeric optimization techniques typically resort to some form of gradient method.
20 Schneider et al., “Adaptive Relevance Matrices in Learning Vector Quantization.”
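A single GLVQ gradient step over [12]–[15], with Φ chosen as the identity (so Φ′ = 1), can be sketched as follows. This is our own illustration; the descent direction moves the closest correct prototype toward x and the closest wrong one away.

```python
import numpy as np

def glvq_step(x, y, W, c, eta=0.01):
    """One GLVQ update for the squared Euclidean metric, cf. eqs. [13]-[15]."""
    d = ((W - x) ** 2).sum(axis=1)
    j = np.where(c == y)[0][d[c == y].argmin()]   # closest correct prototype
    k = np.where(c != y)[0][d[c != y].argmin()]   # closest wrong prototype
    denom = (d[j] + d[k]) ** 2
    mu_plus, mu_minus = 2 * d[k] / denom, 2 * d[j] / denom
    W[j] += eta * mu_plus * 2 * (x - W[j])        # attract the correct prototype
    W[k] -= eta * mu_minus * 2 * (x - W[k])       # repel the wrong prototype
    return W
```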
Interestingly, the LVQ costs are very similar to those of SOM on a very abstract level: both techniques are characterized by a cost function that depends on the pairwise distances of data points and prototypes,

f( d(x_i, w_j)_{i,j} )   [16]
Their major difference is that LVQ takes label assignments into account and tries to approximate the number of misclassifications, while SOM does not take labels into account and sums over distances only. As for SOM, there exist probabilistic alternatives for LVQ, such as robust soft LVQ (RSLVQ) as introduced in Seo and Obermayer’s “Soft Learning Vector Quantization,” which offers a discriminative probabilistic modeling of data.21 Note that SOM itself, albeit being unsupervised in nature, can easily be used as a classifier by means of posterior labeling.
Generalization Ability: Interestingly, it is possible to accompany prototype-based classification schemes by strong learning-theoretical generalization bounds similar to the bounds obtained for support vector machines. Under the assumption that the training data are representative for the underlying data distribution, and the training error achieved during training is small, one can explicitly limit the expected classification error for novel data points by a term that does not depend on the data dimensionality, but only on the so-called hypothesis margin. The latter is the quantity

M : x ↦ min_{w_i^−} { d(x, w_i^−) } − min_{w_i^+} { d(x, w_i^+) }   [17]

with w_i^+ / w_i^− referring to prototypes with the same / a different label compared to x. The larger this margin on the given training set, the better is the guaranteed generalization ability of LVQ classifiers. The underlying proofs rely on a central combinatorial quantity that characterizes the richness of the function class induced by LVQ models, the so-called Rademacher complexity; this can be limited based on the margin.22
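The hypothesis margin [17] is simple to evaluate in code; a small sketch of our own:

```python
import numpy as np

def hypothesis_margin(x, label, W, c):
    """Hypothesis margin, eq. [17]: distance to the closest wrong prototype
    minus distance to the closest correct one (positive iff x is classified
    correctly; the larger, the better the guaranteed generalization)."""
    d = ((W - x) ** 2).sum(axis=1)
    return d[c != label].min() - d[c == label].min()
```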
III Metric Learning
Typically, LVQ and SOM are used for vectorial data, and their classification schemes [9] rely on the standard squared Euclidean distance [1]. This restricts receptive fields of SOM or class boundaries of LVQ to convex regions that rely on the assumption of isotropic classes, i.e. it severely restricts their representational capability. Further, numerical problems easily occur if huge dimensionality has to be dealt with and noise
21 S. Seo and K. Obermayer, “Soft Learning Vector Quantization,” Neural Computation 15, no. 7 (2003): 1589–604.
22 Schneider et al., “Adaptive Relevance Matrices in Learning Vector Quantization.”
accumulates. It has been discussed that Euclidean distances become more and more meaningless as the dimensionality of the data increases.23 Besides SOM and LVQ models, a large variety of popular machine-learning techniques such as k-means clustering or k-nearest-neighbor classifiers rely on distances, such that this problem is quite widespread in the field. It has caused a line of research that is typically summarized under the umbrella of metric learning, and that aims for an adaptation of the Euclidean metric such that the resulting form is more suitable for the given task at hand.24 Due to its formalization in terms of a cost function and its representation of the model by prototypes and their receptive fields, metric learning can naturally be integrated into LVQ and SOM.25 We will have a short glimpse of this powerful framework in this section. Generally, the idea is to substitute the Euclidean distance [1] by a general quadratic form

d_Λ(x, w_j) = (x − w_j)^t · Λ · (x − w_j)   [18]
where Λ is an n × n symmetric and positive semidefinite matrix with Σ_i [Λ]_ii = 1. Note that positive semidefiniteness can be enforced by a different parametrization of the matrix, Λ = ΩΩ^t. In practice, various, partially more specific parametrizations of the metric have been considered, such as the following:
1. A diagonal matrix Λ that corresponds to a scaling of the axes only;
2. A low-rank matrix Λ realized e.g. via picking Ω ∈ ℝ^{n × n′} with n′ ≪ n; this corresponds to a low-dimensional linear transformation, and hence allows for an easy visualization of the data.
3. Local metrics attached to all prototypes, i.e. Λ = Λ_j depends on the considered prototype; this allows for much richer shapes of the receptive fields.
The following formal derivatives of the metric updates can be used for all these choices in an analogous form.
GMLVQ: The question arises of how the metric parameters can be determined to give an optimum result. For prototype-based schemes that are based on a cost function, an immediate solution exists: we can substitute the standard Euclidean metric [1] by the more general form [18] in
the cost function; optimization is done simultaneously with respect to the prototype locations and the metric parameters. As an example, we have a look at the GLVQ costs. The general metric [18] yields the derivative

∂d_{Λ_j}(x, w_j)/∂w_j = −2Λ_j · (x − w_j)   [19]
and the corresponding factor Λ is included in the prototype update. These updates for the prototypes are accompanied by updates for the metric parameters Ω_j and Ω_k assigned to the closest correct and wrong prototype, respectively, as follows:

ΔΩ_j ∼ − Φ′ · µ^+(x_i) · ∂d_{Λ_j}(x_i, w_j)/∂Ω_j   [20]
ΔΩ_k ∼ Φ′ · µ^−(x_i) · ∂d_{Λ_k}(x_i, w_k)/∂Ω_k   [21]
where the quantities Φ′, µ^+, and µ^− are as before, using the more general metric, and the derivative of the distance with respect to the matrix yields

∂d_{Λ_j}(x, w_j)/∂(Ω_j)_{lm} = 2 · [ Ω_j^t (x − w_j) ]_m ( [x]_l − [w_j]_l )   [22]
See e.g. Schneider et al.’s article “Adaptive Relevance Matrices in Learning Vector Quantization” for the exact derivation of this update rule. Interestingly, this metric adaptation scheme does not only increase the representational capability of prototype-based techniques, it also improves their interpretability: every input dimension is accompanied by a weighting scheme that indicates its relevance for the given task at hand. This added insight has turned out to be crucial in particular in interactive data-analysis schemes, such as in the context of biomedical data analysis.26
Metric Learning in SOM: Interestingly, a similar scheme can be used in the context of SOM. We have essentially two choices: provided auxiliary label information is available, the metric parameters can be adapted according to a discriminative objective, this way interleaving discriminative metric updates and unsupervised data representation. See Gisbrecht and Hammer’s “Relevance Learning in Generative Topographic Mapping” for one example realization of this principle in the context of GTM.
26 W. Arlt, M. Biehl, A. E. Taylor, S. Hahner, R. Libe, B. A. Hughes, P. Schneider, D. J. Smith, H. Stiekema, N. Krone, E. Porfiri, G. Opocher, J. Bertherat, F. Mantero, B. Allolio, M. Terzolo, P. Nightingale, C. H. L. Shackleton, X. Bertagna, M. Fassnacht, and P. M. Stewart, “Urine Steroid Metabolomics as a Biomarker Tool for Detecting Malignancy in Adrenal Tumors,” J Clinical Endocrinology and Metabolism 96 (2011): 3775–84.
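As an illustration of the supervised case, one combined prototype-and-matrix step under the quadratic form [18], cf. [19]–[22], could be sketched as follows. This is our own simplification using a single global matrix Λ = ΩΩ^t and the identity as Φ; the learning rates and the trace normalization are illustrative assumptions.

```python
import numpy as np

def gmlvq_step(x, y, W, c, Omega, eta_w=0.01, eta_m=0.001):
    """One GMLVQ step: adapt prototypes and a global matrix
    Lambda = Omega Omega^T under the quadratic form [18]."""
    Lam = Omega @ Omega.T
    diff = W - x
    d = np.einsum('pi,ij,pj->p', diff, Lam, diff)   # d_Lambda(x, w_p), eq. [18]
    j = np.where(c == y)[0][d[c == y].argmin()]     # closest correct prototype
    k = np.where(c != y)[0][d[c != y].argmin()]     # closest wrong prototype
    denom = (d[j] + d[k]) ** 2
    mu_p, mu_m = 2 * d[k] / denom, 2 * d[j] / denom
    ej, ek = x - W[j], x - W[k]
    gj = 2 * np.outer(ej, Omega.T @ ej)             # derivative w.r.t. Omega, [22]
    gk = 2 * np.outer(ek, Omega.T @ ek)
    W[j] += eta_w * mu_p * 2 * (Lam @ ej)           # prototype update via [19]
    W[k] -= eta_w * mu_m * 2 * (Lam @ ek)
    Omega -= eta_m * (mu_p * gj - mu_m * gk)        # matrix update, cf. [20]-[21]
    Omega /= np.sqrt(np.trace(Omega @ Omega.T))     # keep sum_i [Lambda]_ii = 1
    return W, Omega
```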
Alternatively, metric adaptation can be done in a purely unsupervised form, substituting the metric in the quantization error or SOM’s cost function by a general quadratic form. One example of this principle is presented in the approach in “Local Matrix Adaptation in Topographic Neural Maps,” mainly relying on neural gas for its efficiency, SOM being similar. Interestingly, online as well as batch updates can be realized. Imposing the constraint det Λ_i = 1, the batch update of the matrix becomes

Λ_i := S_i^{−1} (det S_i)^{1/n}   [23]

where S_i refers to a generalized data covariance matrix centered around the receptive fields

S_i = Σ_{l,j} χ*_l(x_j) · exp(−nd(l,i)) (x_j − w_i)(x_j − w_i)^t   [24]
This explicit representation also indicates the form of the receptive fields that are uncovered in the purely unsupervised setting: local scaling of the dimensions takes place such that the data variance is taken into account, better covering parts with a large variance as compared to more focused dimensions. Since the matrix S_i has the form of a data covariance matrix centered in w_i with the according data weighting, the result can be interpreted as a standard Gaussian that takes into account the corresponding correlation and scaling of the data dimensions.
IV Relational and Kernel Mapping
Classical SOM and LVQ as well as the metric-adaptive versions introduced above rely on a vectorial representation of data. In many application scenarios, data are not given as vectors, but have a more complex form, such as sequences, graphs, or general structures.27 One principled interface to such data is offered by a characterization in terms of pairwise similarities or dissimilarities:

d_ij = d(x_i, x_j).   [25]
These values can easily be defined for data points x_i and x_j that are nonvectorial, resorting e.g. to alignments, distances for structures, or general information-theoretic principles, such as realized within the compression distance. In general, the dissimilarity need not correspond to a Euclidean metric, and a Euclidean embedding of the data need not exist. Rather, we only assume that the resulting matrix D has zero diagonal and is symmetric.
Median SOM: Since no vector-space embedding of data is available, it is not possible to smoothly adapt prototypes in the standard online or batch form
27 B. Hammer, C. Saunders, and A. Sperduti, “Special Issue on Neural Networks and Kernel Methods for Structured Domains,” Neural Networks 18, no. 8 (2005): 1015–18.
for SOM. One solution has been proposed in Kohonen and Somervuo’s “How to Make Large Self-Organizing Maps for Nonvectorial Data”: prototype locations are restricted to the positions offered by data points, i.e. we enforce w_i ∈ {x_1, …, x_N}. In “How to Make Large Self-Organizing Maps for Nonvectorial Data,” a very intuitive heuristic of how to determine prototype positions in this setting has been proposed. As shown in Cottrell et al.’s “Batch and Median Neural Gas,” it is possible to derive a very similar learning rule from the cost function of SOM [4]: note that the SOM cost function [4] is well defined as soon as the term d(x_j, w_k) is meaningful. By restricting prototype positions to data, it is assured that this quantity is an element of the well-defined set of pairwise dissimilarities d_ij. The question arises of how to optimize these costs, which are now discrete with respect to the prototype positions. Like in batch SOM, an iterative optimization with respect to the assignments of data to prototypes and with respect to the prototype positions is possible. Thereby, the latter step does not allow an explicit solution; rather, the best position can be determined by exhaustive search. The resulting update, in complete analogy to batch SOM (including the convergence proof), yields

compute χ*_i(x_j) for all i   [26]
optimize w_k := argmin_{x_l} Σ_{i,j} χ*_i(x_j) exp(−nd(i,k)/σ^2) d(x_j, x_l)   [27]
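Given only the dissimilarity matrix D, the generalized-median step [27] is an exhaustive search over data positions; a compact sketch of our own:

```python
import numpy as np

def median_som_update(D, winners, nd, sigma=1.0):
    """Median SOM prototype update, eqs. [26]-[27]. D: (N, N) symmetric
    dissimilarities; winners[j]: index of the winning prototype for x_j;
    nd: (k, k) lattice distances. Returns, per prototype, the index of the
    data point minimizing the neighborhood-weighted sum of dissimilarities."""
    h = np.exp(-nd[winners] / sigma ** 2)   # (N, k) neighborhood weights
    costs = D @ h                           # costs[l, k] = sum_j d(x_j, x_l) h[j, k]
    return costs.argmin(axis=0)             # generalized median per prototype
```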
Unlike batch SOM, which is linear in time with respect to the number of data points N, this computation is quadratic in N. Therefore, in the original proposal by Kohonen,28 the summation as well as the possible candidates are restricted to a neighborhood of the winner, leading to a speed-up but no longer guaranteeing convergence in the worst case. In complete analogy, alternatives such as batch neural gas can be extended to dissimilarity data by means of the generalized median and the respective cost function. In theory, exactly the same principle could easily be used for LVQ in the context of dissimilarity data, albeit it has not yet been tested in the literature to our knowledge.
Relational and Kernel Approach: The discrete nature of median clustering causes a severe risk of being trapped in local optima of the cost function. Indeed, if median SOM is used for distance data that are given by the standard Euclidean metric, its representational capability is strictly weaker than that of classical SOM due to the restriction of prototype locations. Hence the question arises of how to realize a continuous adaptation of prototypes in the context of discrete dissimilarity data. One key observation underlying the following argumentation consists in the fact that every symmetric dissimilarity matrix D gives rise to a vectorial
28 Kohonen and Somervuo, “How to Make Large Self-Organizing Maps for Nonvectorial Data.”
embedding of the observed data, the so-called pseudo-Euclidean embedding of data:29 for every finite set of pairwise distances d_ij we can find vectors x_i and a quadratic form realized by a diagonal matrix I_pq such that

d_ij = (x_i − x_j)^t I_pq (x_i − x_j).   [28]

This is, in fact, a simple algebraic operation: we can turn dissimilarities into similarities by means of double centering:

S = −1/2 · J D J^t where J := I − 1/n 11^t   [29]

The latter matrix can be diagonalized as S = Q Λ Q^t, which gives rise to the representation

S = X I_pq X^t = Q |Λ|^{1/2} ( I_pq 0 ; 0 0 ) |Λ|^{1/2} Q^t   [30]
where I_pq constitutes a diagonal matrix with p entries 1 and q entries −1 on the diagonal, p + q ≤ n. The triple (p, q, n − p − q) is often referred to as the signature of this embedding space. Essentially, this representation directly gives coordinates for the embedding via Q |Λ|^{1/2}, and it uses a specific quadratic form that can be decomposed into a Euclidean part (the first p dimensions), where the simple Euclidean distance is taken, and a correction part (the next q entries) that allows one to correct strict Euclideanity by means of negative contributions to the dissimilarity. These data stem from a Euclidean vector space if and only if q = 0, i.e. the negative contributions vanish. In this case, S constitutes a kernel matrix. Otherwise, strict non-Euclideanity is present. These considerations give rise to the central choice to model prototypes by means of linear combinations of this embedding:

w_i = Σ_j α_ij x_j with Σ_j α_ij = 1.   [31]
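The double centering [29] and the coordinates of [30] are a few lines of linear algebra; a sketch of our own, with an arbitrary numerical tolerance for determining the signature:

```python
import numpy as np

def pseudo_euclidean_embedding(D, tol=1e-10):
    """Pseudo-Euclidean embedding, eqs. [28]-[30]: double centering plus an
    eigendecomposition; returns coordinates X and the signature (p, q)."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    S = -0.5 * J @ D @ J.T                 # double centering, eq. [29]
    lam, Q = np.linalg.eigh(S)             # S = Q diag(lam) Q^T
    X = Q * np.sqrt(np.abs(lam))           # coordinates Q |Lambda|^{1/2}, eq. [30]
    p, q = int((lam > tol).sum()), int((lam < -tol).sum())
    return X, (p, q)
```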
Hence such a vector-space embedding allows a smooth adaptation of the prototypes by means of an adaptation of the coefficients α_ij. However, in practice, this embedding is not known a priori, and its explicit computation is cubic in N. Therefore, the embedding is typically not performed explicitly; rather, the following identity is used: for prototypes as represented by [31], it holds that

d(w_i, x_j) = [Dα_i]_j − 1/2 · α_i^t D α_i.   [32]
This equation has a few immediate interesting consequences: 1. Distance-based computations are invariant to the vectorial embedding of the data. Hence we can build canonical distance-based models for general dissimilarity data D this way.
29 See e.g. E. Pekalska and R. P. Duin, The Dissimilarity Representation for Pattern Recognition: Foundations and Applications (Singapore: World Scientific, 2005).
2. The distance computation does not require an explicit embedding to be known; rather, distances of data and prototypes can be computed based on the matrix D and the coefficients α_ij only. Hence we can represent prototypes implicitly by referring to the coefficient vectors with coefficients α_ij. 3. Any optimization algorithm needs to address the coefficients α_ij, which can be adapted in a smooth way. Any learning technique that optimizes the corresponding cost function based on these distances with respect to the parameters α_ij is suited as a learning algorithm. These observations allow an immediate transfer of the winner-takes-all framework of SOM and LVQ to the setting of relational data D, and they also allow a transfer of the underlying cost functions. Cost-function optimization can be done in two different ways:
- We can minimize the cost functions by means of a stochastic gradient descent with respect to the model parameters α_ij under the constraint Σ_j α_ij = 1.
- Alternatively, we can try to reformulate a learning approach for the vectorial setting, which is done implicitly in pseudo-Euclidean space, in terms of the parameters α_ij only.
While the first approach is always applicable and possesses convergence guarantees as a gradient technique, the second is not always possible. It requires that the updates in the embedding space can be expressed as updates of the coefficients only. It has been discussed in Hammer et al.’s “Learning Vector Quantization for (Dis-)Similarities” that the latter is not possible for a gradient technique with respect to the prototypes w_i unless the dissimilarity matrix is related to a kernel, for example. Here, we give two examples that illustrate each of these possibilities.
Relational SOM: Prototype updates for batch SOM decompose into contributions of the coefficients α. The adaptation rule for relational SOM becomes as follows:

compute d(w_i, x_j) based on equation [32]   [33]
compute χ*_i(x_j) based on these values   [34]
adapt α_kj := ( Σ_i χ*_i(x_j) · exp(−nd(i,k)) ) / ( Σ_{i,j} χ*_i(x_j) · exp(−nd(i,k)) )   [35]
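One batch pass of relational SOM [33]–[35] operates on D and the coefficient matrix alone; a sketch of our own:

```python
import numpy as np

def relational_som_step(D, A, nd, sigma=1.0):
    """One relational batch SOM step, eqs. [33]-[35]. D: (N, N) dissimilarities;
    A: (k, N) coefficient matrix, rows summing to 1; nd: (k, k) lattice distances."""
    # distances d(w_i, x_j) = [D a_i]_j - 1/2 a_i^T D a_i, eqs. [32]/[33]
    d = A @ D - 0.5 * np.einsum('in,nm,im->i', A, D, A)[:, None]
    winners = d.argmin(axis=0)                # winner per data point, [34]
    h = np.exp(-nd[:, winners] / sigma ** 2)  # (k, N) neighborhood weights
    return h / h.sum(axis=1, keepdims=True)   # new coefficients, eq. [35]
```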
This procedure is equivalent to an implicit application of SOM in the pseudo-Euclidean embedding space. It is independent of the concrete embedding and gives the same results if an alternative embedding is used. Further, it is equivalent to standard SOM if a Euclidean embedding of data exists. For general dissimilarities, it constitutes a reasonable extension of SOM to the general case with continuous updates of prototypes.
This procedure, however, has one drawback: although it constitutes an exact implementation of SOM in pseudo-Euclidean space, it is no longer clear that the procedure offers an optimization of the corresponding SOM cost function in the embedding space. This is due to the fact that batch SOM itself does not necessarily optimize the cost function in non-Euclidean space, since the mean of a receptive field is no longer the least-squares minimizer of the incorporated data for a quadratic form that is not positive definite. Since the latter is related to NP-complete optimization problems, however, it constitutes a reasonable approximation with often excellent results.30
Relational GLVQ: Relational LVQ variants can be derived based on the same principles, but resorting to a gradient descent with respect to the parameters α_ij. From an abstract point of view, the LVQ costs have the form

f( d(x_k, w_m)_{k,m} )   [36]
where we can plug in equation [32] for the distances. The derivative with respect to a coefficient α_jl becomes

∂f/∂α_jl = Σ_i ∂f(d(x_k, w_m)_{k,m})/∂d(x_i, w_j) · ( d_il − Σ_{l′} α_jl′ d_ll′ )   [37]
which yields the update rule for the coefficients, followed by a subsequent normalization of the coefficients.
Efficient Approximations: All relational learning methods suffer from a quadratic time complexity as compared to the linear complexity of their vectorial counterparts. In addition, relational clustering requires linear space complexity, since it stores prototypes in terms of coefficient vectors representing the relevance of every data point for the respective prototype. This fact makes the interpretability of the resulting map difficult, since it is no longer easily possible to inspect prototypes in the same way as data points. Further, the quadratic time complexity makes the methods infeasible already for medium-sized data sets. Different heuristics have recently been proposed to speed up median and relational clustering and to improve their interpretability. As an example, for better interpretability, different techniques to approximate the found prototypes by only a few exemplars have recently been proposed, with a simple approximation of a prototype by its K closest neighbors, dubbed K-approximation, offering a simple and astonishingly accurate technology.31 To improve the computational complexity, two techniques have become popular: patch processing, which can be used in online mode, and the
Nyström approximation, which is based on an efficient approximation of the matrix of dissimilarities. More precisely, the main idea of patch processing is to process data consecutively in patches of fixed size.32 The prototypes counted with multiplicities according to their receptive fields represent already seen data, and they are included as regular points counted with multiplicities in the next patch. This way, all information is taken into account either directly or in compressed form in the succeeding clustering steps. If transferred to dissimilarity data, this approach refers to a linear subset of the full dissimilarity matrix only: only those dissimilarities are necessary that correspond to a pair of data in the same patch; further, distances of prototypes representing the previous points and data points in a patch are used. In consequence, only a linear subpart of the full dissimilarity matrix is used this way. Since it is not known a priori which prototypes are used for the topographic mapping, however, the method requires that dissimilarities can be computed instantaneously during the processing. For real-life applications this assumption is quite reasonable; e.g. biological sequences can be directly stored and accessed in a database; their pairwise comparisons can be done on demand using sequence alignment. Median clustering can directly be extended in a similar way. For relational clustering, since prototypes are represented indirectly by referring to the data points, a K-approximation of the prototypes is required as an additional step in every patch run. As an alternative, the Nyström approximation has been introduced as a standard method to approximate a kernel matrix.33 It can be transferred to dissimilarities.34 The basic principle is to pick M representative landmarks in the data set that give rise to the rectangular sub-matrix D_{M,N} of dissimilarities of data points and landmarks. This matrix is of linear size, assuming M is fixed. It can be shown (see e.g. Gisbrecht et al.’s “The Nyström Approximation for Relational Generative Topographic Mappings”) that the full matrix can be approximated in an optimum way in the form

D ≈ D^t_{M,N} D^{−1}_{M,M} D_{M,N}   [38]
where D_{M,M} is induced by an M × M eigenproblem depending on the rectangular sub-matrix of D. Its computation is O(M^3) instead of O(N^2) for the full matrix D. The approximation [38] is exact if M corresponds to the rank of D. It is possible to integrate the approximation [38] into the distance computation [32] in such a way that the overall effort is only linear with respect to N. This way, a linear approximation technique for relational clustering results.35
32 N. Alex, A. Hasenfuss, and B. Hammer, “Patch Clustering for Massive Data Sets,” Neurocomputing 72, nos. 7–9 (2009): 1455–69.
33 C. Williams and M. Seeger, “Using the Nyström Method to Speed Up Kernel Machines,” in Advances in Neural Information Processing Systems 13, ed. T. K. Leen, T. G. Dietterich, and V. Tresp (Cambridge, MA: MIT Press, 2001), 682–88.
34 A. Gisbrecht, B. Mokbel, and B. Hammer, “The Nyström Approximation for Relational Generative Topographic Mappings,” in NIPS Workshop on Challenges of Data Visualization (2010).
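The Nyström formula [38] needs only the landmark rows of D; a sketch of our own, using a pseudo-inverse for the M × M block (in practice the factors would be kept separate rather than forming the full N × N product):

```python
import numpy as np

def nystroem(D, landmarks):
    """Nystroem approximation, eq. [38]: D ~ D_MN^T D_MM^+ D_MN,
    computed from the M landmark rows of the symmetric matrix D."""
    D_MN = D[landmarks, :]                   # (M, N) rectangular sub-matrix
    D_MM = D[np.ix_(landmarks, landmarks)]   # (M, M) landmark block
    return D_MN.T @ np.linalg.pinv(D_MM) @ D_MN
```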
V Recursive models
Relational SOM or LVQ variants can be used to cluster discrete data structures, provided a distance measure for their comparison is available. This also opens a possibility to process time-series data, for which dynamic time warping, as an example, constitutes a very popular measure of comparison. In contrast, recursive models take the point of view that every entry of a time series should be clustered within its temporal context. Hence the time-series entries themselves are compared using a standard Euclidean distance, but the notion of the winner depends on the temporal context. There exists a variety of very different approaches that incorporate a temporal dynamics into SOM, early approaches being based on cognitively plausible leaky integrators, while later approaches rely on a much richer dynamics.36 Interestingly, these techniques can be put under a common umbrella, since they essentially share the underlying dynamics and training mode, but they differ as regards the representation of the temporal context. For simplicity, we will focus on the Merge SOM (MSOM) as an exemplary method, and shortly explain how this method differs from the other approaches.
MSOM: The MSOM has been proposed as an efficient and flexible model for the unsupervised processing of time series.37 We are interested in stimuli with a temporal characteristic, i.e. one or more time series of the form x = (σ^1, …, σ^t, …) with elements σ^t ∈ ℝ^n. Time series are processed recursively, i.e. the entries σ^t are fed consecutively to the map, starting from σ^1. As before, MSOM is represented by k prototypes equipped with a lattice structure. Like in SOM, every prototype is equipped with a weight vector w_i ∈ ℝ^n that represents the current entry. In addition, every neuron is equipped with a context c_i ∈ ℝ^n that represents the temporal context in which prototype i should become the winner. The dynamics of MSOM incorporates a recursive processing of a given time series in its temporal context. A merge parameter γ ∈ (0, 1) is fixed, which specifies the relevance of the context for its representation, and a context weight α ∈ (0, 1) is chosen that specifies the relevance of the context for the winning entry. Then, the recurrent dynamic of the
35 For detailed formulas, see ibid. 36 See e.g. the overviews Hammer et al., “Recursive Self-Organizing Network Models”; Hammer et al., “General Framework for Unsupervised Processing of Structured Data”; Barreto et al., “A Taxonomy for Spatiotemporal Connectionist Networks Revisited.” 37 M. Strickert and B. Hammer, “Merge SOM for Temporal Data,” Neurocomputing 64 (2005): 39–72.
computation at time step t for a sequence (σ^1, …, σ^t, …) is determined by the following equations: the winner for time step t is

I(t) = argmin_i { d_i(t) }   [39]

where

d_i(t) = α · ‖w_i − σ^t‖^2 + (1 − α) · ‖c_i − C^t‖^2   [40]

denotes the activation (distance) of neuron i, and C^t is the expected (merged) weight / context vector, i.e. the content of the winner of the previous time step

C^t = γ · c_{I(t−1)} + (1 − γ) · w_{I(t−1)}.   [41]
Thereby, the initial context C^0 is set to zero. Training takes place in Hebb style after each time step t:

Δw_i ∼ exp(−nd(i, I(t))/σ^2) · (σ^t − w_i)   [42]
Δc_i ∼ exp(−nd(i, I(t))/σ^2) · (C^t − c_i)   [43]
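One pass of the recurrent dynamics [39]–[43] over a single sequence can be sketched as follows (our own illustration; W and Cxt are float arrays updated in place, and all hyperparameters are assumptions):

```python
import numpy as np

def msom_pass(seq, W, Cxt, nd, alpha=0.5, gamma=0.5, eta=0.05, sigma=1.0):
    """One Merge SOM pass over a sequence, eqs. [39]-[43].
    seq: (T, n) entries; W, Cxt: (k, n) weights and contexts; nd: (k, k)."""
    C = np.zeros(W.shape[1])                       # initial context C^0 = 0
    for s in seq:
        d = (alpha * ((W - s) ** 2).sum(1)
             + (1 - alpha) * ((Cxt - C) ** 2).sum(1))   # activations, [40]
        I = d.argmin()                             # winner, [39]
        h = eta * np.exp(-nd[I] / sigma ** 2)[:, None]  # neighborhood factors
        W += h * (s - W)                           # Hebbian weight update, [42]
        Cxt += h * (C - Cxt)                       # context update, [43]
        C = gamma * Cxt[I] + (1 - gamma) * W[I]    # merged context, [41]
    return W, Cxt
```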
This adaptation rule can be interpreted as an approximation of a stochastic gradient for SOM’s cost function, where the recursively defined distance d is used instead of a standard vectorial one, and contributions that are caused by earlier time-series entries are neglected, i.e. the gradient is truncated one time step in the past. MSOM accounts for the temporal context by an explicit vector attached to each prototype that stores the preferred context of this prototype. The way in which the context is represented is crucial for the result, since the representation determines the induced similarity measure of sequences. Thus, two questions arise: 1. Which (explicit) similarity measure on sequences is induced by this choice? 2. What is the capacity of this model? Interestingly, both questions can be answered for MSOM: 1. If neighborhood cooperation is neglected and provided enough neurons, Hebbian learning converges to the following stable fixed point of the dynamics:
w_opt(t) = σ^t ,  c_opt(t) = γ Σ_{i=1}^{t−1} (1 − γ)^{i−1} · σ^{t−i}   [44]
and opt(t) is the winner for time step t. This result states that the representation of context that arises in the weights c_i consists of a leaky integration over the sequence entries, which is also known as a fractal encoding of the entries. 2. If sequence entries are taken from a finite input alphabet, the capacity of MSOM is equivalent to that of finite state automata. This
latter fact, however, does not characterize the learning behavior; although every finite automaton can be represented, it is not clear whether it can be learned with Hebbian updates from suitable data.
General Recurrent SOMs: It has been pointed out that several popular recurrent SOM models share the dynamics of MSOM, whereby they differ in their internal representation of the context.38 In all cases, the context is extracted as the relevant part of the activation of the map in the previous step. Thereby, the notion of “relevance” differs between the models. Assume a fixed extraction function

rep : ℝ^k → ℝ^r   [45]

where k is the number of neurons and r is the dimensionality of the context representation. Then the general dynamics is given by

d_i(t) = α · d(w_i, σ^t) + (1 − α) · d_r(c_i, C^t)   [46]
where

C^t = rep( d_1(t − 1), …, d_N(t − 1) )   [47]
extracts the relevant information from the activation of the previous time step, c_i ∈ ℝ^r, d is a similarity measure on ℝ^n, and d_r is a similarity measure on ℝ^r. This formulation emphasizes the importance of an appropriate internal representation of complex signals by means of a context c_i. The representation function rep extracts this information from the computation. MSOM is obtained for r = n, d_r = d, and rep as the merged content of context and weight of the winner in the previous step. Alternative choices have been proposed as follows:39 1. Only the Neuron Itself: The temporal Kohonen map (TKM) performs a leaky integration of the distances of each neuron. The dynamics can be obtained by setting r = N, rep = id, d_r as the standard dot product, and c_i as the i-th unit vector, which realizes the “focus” of neuron i on its own activation. The recurrent SOM is similar in spirit, but it integrates vectors instead of distances and requires a vectorial quantity d_i(t). 2. Full Information: The recursive SOM (RecSOM) chooses r = N. rep(x_1, …, x_N) = (exp(−x_1), …, exp(−x_N)) is one-to-one, i.e. all information is kept. The feedback SOM is similar to RecSOM with respect to the context; however, the context integrates an additional leaky loop onto itself.
38 Hammer et al., “Recursive Self-Organizing Network Models”; Hammer et al., “General Framework for Unsupervised Processing of Structured Data.” 39 Hammer et al., “Recursive Self-Organizing Network Models”; Hammer et al., “General Framework for Unsupervised Processing of Structured Data.”
3. Winner Location: The SOM for structured data (SOMSD) is restricted to regular lattice structures. Denote by L(i) the location of neuron i in a d-dimensional lattice. Then r = d and rep(x1, …, xN) = L(i0), where i0 is the index of the winner argmini{xi}. This context representation is only applicable to priorly fixed, though not necessarily Euclidean, lattices. Obviously, these unsupervised recurrent methods separate into two categories : representation of the context in the data space, as for TKM and MSOM, and representation of the context in a space that is related to the neurons, as for SOMSD and RecSOM. In the latter case, the representation space can be enlarged if more neurons are considered. In the first case, the representation capability is restricted by the data dimensionality.
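As a rough illustration of the shared dynamics [45]–[47], here is a minimal Python sketch using the RecSOM choice of rep and squared Euclidean distances. The prototypes are random rather than trained, and all names and parameter values are assumptions for the example, not the models’ reference implementations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10, 3        # number of neurons, data dimensionality
alpha = 0.5
w = rng.normal(size=(N, n))   # weight vectors (untrained, for illustration)
c = rng.normal(size=(N, N))   # context vectors; r = N for RecSOM

def rep(d_prev):
    # RecSOM context extraction: keep all information, componentwise exp(-x)
    return np.exp(-d_prev)

def winners(sequence):
    d_prev = np.zeros(N)
    for s_t in sequence:
        C_t = rep(d_prev)                                  # eq. [47]
        d_prev = (alpha * np.sum((w - s_t) ** 2, axis=1)   # eq. [46]
                  + (1 - alpha) * np.sum((c - C_t) ** 2, axis=1))
        yield int(np.argmin(d_prev))                       # winner per step

print(list(winners(rng.normal(size=(5, n)))))
```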
VI Conclusions
We have presented an overview of prototype-based models for general data, including metric learning for vectorial data that is subject to different scaling and correlations, relational approaches for abstract data represented by pairwise dissimilarities, and recursive approaches for discrete data structures such as time series or, as a generalization, tree structures. Thereby, a principled treatment of the fundamental techniques via cost functions that depend on pairwise dissimilarities enabled a quite general derivation of the methods for both cases, the supervised and the unsupervised setting. Besides this general approach, quite a few of the techniques have so far been used for specific settings only, such as recursive techniques, which are not yet in use for LVQ schemes due to the expected long-term-dependencies problem, or batch approaches for LVQ schemes, which are missing because a corresponding treatment of a related probabilistic setting such as EM optimization is not yet available. It remains to test the strengths and weaknesses of these techniques in benchmarks, and to provide a generic toolbox that allows gentle access to all methods. Matrix learning for GMLVQ and relational variants of SOM and NG are already included in the novel SOM toolbox.40
Acknowledgments Financial support from the Cluster of Excellence 277 Cognitive Interaction Technology, funded in the framework of the German Excellence Initiative, is gratefully acknowledged.
40 See “SOM Toolbox,” Aalto University website, http://research.ics.aalto.fi/software/somtoolbox/.
VII The Common Sense of Quantum Theory: Exploring the Internal Relational Structure of Self-Organization in Nature — Michael Epperson
Michael Epperson (PhD, University of Chicago, 2003) is the founding director of the Center for Philosophy and the Natural Sciences in the College of Natural Sciences and Mathematics at California State University, Sacramento, where he is a research professor in history and philosophy of science. He is the author of Quantum Mechanics and the Philosophy of Alfred North Whitehead (Fordham University Press, 2004) and Foundations of Relational Realism: A Topological Approach to Quantum Mechanics and the Philosophy of Nature, coauthored with Elias Zafiris (Lexington Books, 2013).
Among the many exotic interpretations of quantum theory — those entailing “multiverse” cosmologies, “time reversal,” “retro-causality,” and physical superpositions of alternative actual system states — lies a single core principle: That quantum theory’s most emblematic feature is its invalidation of classical logic — the very foundation of intuitive, critical reasoning — at the level of fundamental physics. As a result, quantum mechanics has become widely popularized, and in many cases, marketed, as mystifying and essentially incomprehensible to nonspecialists. Yet at the heart of this popularization lies a paradox: the rules of classical logic purportedly invalidated by quantum mechanics are, at the same time, necessarily presupposed by
quantum mechanics; indeed, they are the very rules used to formalize quantum mechanics in the first place. The requirements of (1) a local Boolean measurement context, represented in the standard formalism as an orthonormal measurement basis, and (2) observables represented as self-adjoint projection operators associated with this basis, together guarantee that any quantum mechanical observable is always definable in terms of mutually exclusive and exhaustive outcome states — that is, outcome states relatable as an exclusive disjunction of contradictories, in satisfaction of both the principle of noncontradiction and the principle of the excluded middle. As a result, local Boolean measurement contexts are themselves
always relatable coherently and consistently in quantum mechanics. The coherence and consistency of these relations derive from the fact that these relations are always internal relations, as evinced by the noncommutativity of quantum observables.

There is indeed a crisis, for unlike the flourishing situation in the history of knowledge, the philosophical reflection about science has lost its way — or stagnates. The fashionable authors see only uncertainties, paradigms without enduring principles, an absence of method, and a presence of erratic revolutions, precisely when we should be trumpeting the success of a science whose extent and consistency are unprecedented. […] Beyond the shadow of a doubt, the origin of this crisis is to be found in an event that no one has fully recognized in all its significance: the irresistible irruption of the formal approach in some fundamental sciences such as logic, mathematics, and physics. As a consequence, these disciplines have become practically impenetrable, which explains the capitulation or the adventurousness of so many commentators, not to mention the disarray of the honest man or woman who wonders what those who should understand these subjects are talking about.
— Roland Omnès, Quantum Philosophy: Understanding and Interpreting Contemporary Science
For a great many physicists and philosophers, including eminent quantum theorist Roland Omnès, author of the above epigraph, the current fashionable tendency toward exotic interpretation of quantum theory has become a serious impediment to progress in physics. Perhaps just as significant, it has become a serious impediment to the promise of a scientifically literate society — one that has grown increasingly comfortable with the idea of forfeiting its hope of ever understanding physics, content
instead to be merely entertained by its alluring mysteries. As a result, recent years have brought an unparalleled explosion of media marketing of science for popular audiences, complete with competing troupes of celebrity physicists whose philosophy of education-via-entertainment aims to fortify us with science costumed as spectacle. However well-intentioned this aim might have been initially, the omnipresent forces of modern marketing have today bent it into full inversion. Science costumed as spectacle — intended as a vehicle for education — is now, in most cases, merely spectacle costumed as science. Indeed, it is the great irony of modern physics that a foundation so stable and reliable as quantum mechanics has proven so vulnerable to this trend. It is beyond dispute, for example, that the “popular understanding” of quantum theory is wholly derived from its most exotic and entertaining interpretations — those entailing parallel universes, retrocausality, “teleportation,” etc. One reason is that after over a century of development, the key conceptual and interpretive problems of quantum theory remain unsettled, even in the wake of evolutionary improvements in technology and experimental methodology. There are three central principles of quantum theory that have contributed especially heavily to the “impenetrability” and “adventurousness” Omnès forewarns against in the above epigraph, and which taken together have made it increasingly fashionable to dismiss as vacuous the entire enterprise of constructing a persuasive ontological interpretation : 1. Objective Indeterminacy: Quantum indeterminacy displaces the classical physical first principle of “objective determinism as a necessary implication of mathematical objectivity” — that mathematical necessity at the conceptual level implies deterministic contingency at the physical level. Instead, in quantum theory, mathematical probability implies indeterminacy at the physical level. 2. Objective Local Contextuality: The local context dependence of quantum measurement is evinced by the fact that (a) probable outcome states of measured systems are definable only according to the Boolean-logical contextual measurement basis of a particular chosen detector. In the conventional Hilbert space formalism, this is exemplified by the requirement that the measurement basis be orthonormal, such that potential outcome states are always represented as self-adjoint operators whose projections upon this orthonormal basis define these potential states as Boolean-logical, mutually exclusive “true-false” propositions;1 and (b) actual outcome states of measured systems always conform to this contextual measurement basis. Intuitively, both (a)
1 The local measurement context can likewise be represented by a Boolean subalgebra. The advantages of this representation will be discussed later in this volume.
and (b) are natural implications of the fact that in any ontological interpretation of quantum mechanics, the detector is most fundamentally understood as a quantum system in the same sense that the measured system is; thus, a measurement interaction is most fundamentally understood as an entanglement between (at least) two quantum systems — i.e., a system and detector as a composite quantum system. This is typically interpreted as a problem because in the pure state, superpositions of potential outcome states can be defined without reference to this Boolean contextual measurement basis (per principle 3, below); but this reasoning is undermined by the fact that the evolution of the pure state to the mixed state always entails such contextualization, such that the probability-valuated outcome states constitutive of the latter always conform to the Boolean contextual measurement basis of the particular quantum measurement, as does the unique actual outcome state terminal of the measurement. 3. Objective Superposition of States: The nature of the pure state in quantum measurement — a coherent superposition of potential measurement outcome states — violates the first principles of classical logic, not only the Principle of Non-contradiction (PNC: at most one outcome state is true) but also the Principle of the Excluded Middle (PEM: at least one outcome state is true). PNC [¬(P ∧ ¬P)] is violated because the potential outcome states integrated in the pure state are not necessarily mutually exclusive. And because they are related only by conjunction without complement in the pure state (“Schrödinger’s cat is alive and dead”), it is impossible to schematize these potential outcome states in terms of Boolean logic, which also requires relation by disjunction (“Schrödinger’s cat is alive or dead”). Likewise, PEM [P ∨ ¬P] is violated because there is no mechanism in quantum theory by which one of the potential outcome states must be actualized. Together, these three features have inspired a variety of ontological interpretations of quantum theory, many of which advocate some signature radical conceptual innovation grounded in one or more of these features. The problem of objective local contextuality, for example, combined with the problem of objective superposition of states — specifically, reducing the pure state to a unique actual outcome state — inspired the following exotic idea : because (a) the object of direct experimental observation in a quantum measurement interaction is always a system in a unique actual state, and (b) unique outcome states are always consequent of experimental observation in quantum mechanics, it is thus reasonable to suppose that the human mind, as the final observational mechanism in any chain of instrumental observation, somehow participates in the creation of the objective facts yielded by quantum measurement. Early explorations of this idea by Wigner and von Neumann, for
example, have been refined into formalized ontological interpretations by theorists such as Henry Stapp.2 Likewise, a similarly radical and increasingly popularized interpretation of quantum mechanics is the Many Worlds Interpretation (MWI) mentioned earlier. MWI begins with the conceptual innovation of treating the pure state as a superposition of alternative actual outcome states rather than a superposition of alternative potential outcome states, such that the concept of potentiality is wholly assimilated into the concept of actuality. In other words, “everything that can happen does happen” — a presupposed first principle of MWI that, despite its frequent characterization as a “scientific” deduction, has been consistently defied by experimental demonstration (namely, as evidenced in every recorded experimental application of the scientific method to date). MWI advocates counter this criticism by appealing to an additional presupposed principle — that of the “multiverse” — by which it is stipulated that each alternative outcome state is actualized in its own universe, with each universe spatiotemporally isolated from the others, and thus experimentally inaccessible to the scientific method. But this, of course, only reinforces the original criticism : that the popular portrayal of MWI and its categorical principles as scientifically deduced and scientifically meaningful is wholly inappropriate. Nevertheless, advocates of the MWI interpretation continue to argue that because the pure state entails violations of both PNC and PEM, quantum theory therefore entails the scientific invalidation of PNC and PEM — that is, it invalidates the presupposition of exclusive disjunction for contradictory actual outcome states (“It is necessarily true that Schrödinger’s cat is either alive or dead”) and therefore entails a comprehensive, categorical invalidation of Boolean logic in fundamental physics. By this argument, physics requires the replacement of Boolean logic by some particular formalization of a comparatively exotic modal logic that, for an ontologically significant interpretation of quantum theory, implies at least some commitment to modal realism. Again, MWI entails that every potential quantum measurement outcome is actualized in its own relative world, and that a “multiverse” of countless alternative worlds exists, each spatiotemporally isolated from the others. While the conceptual framework of MWI was originally conceived by Hugh Everett (his “relative state interpretation”) as a novel conceptual solution to the measurement problem rather than an ontological implication of modal realism, recent emphasis on the non-Boolean, exotic modal logical nature of quantum mechanics naturally implies an MWI-type ontological scheme. In his On the Plurality of Worlds, for example, where modal realism is formalized, philosopher David Lewis writes : “I advocate a thesis of plurality of worlds, or modal realism, which holds
2 See for example Henry P. Stapp, Mindful Universe: Quantum Mechanics and the Participating Observer (Berlin: Springer, 2007).
that our world is one world among others. There are countless other worlds. […] They are isolated : there are no spatiotemporal relations at all between different things that belong to different worlds. Nor does anything that happens at one world cause anything to happen at another. […] The other worlds are of a kind with this world of ours. […] Nor does this world differ from others in its manner of existing.”3 It should be emphasized here that despite the obvious conceptual compatibilities, for technical reasons that exceed the purpose of the current discussion, Lewis did not believe MWI to be a precise ontological exemplification of modal realism.4 The point of the comparison here is simply that MWI, as an ontological interpretation of quantum mechanics, requires that “meaningful statements” generated by the interpretation must resort to modal logical relations because they exceed the restrictions of Boolean logical relations. This, in turn, allows for interpretive modal logical statements such as “Schrödinger’s cat exists as a superposition of actual live and dead states,” rather than the Boolean logical interpretive statement, “Schrödinger’s cat exists as a superposition of potential live and dead states, such that at most and at least one of these states will be actual upon measurement.” Of course, the more intuitive Boolean logical interpretive statement is far less radical than the modal logical statement. But more important, it is only the Boolean logical interpretation that has seen universally consistent empirical validation, given that observation of alternative outcome states across multiple universes is not possible even in principle, according to the theory’s own parameters. MWI, in other words, is not falsifiable by the scientific method. Nevertheless, recent experiments in quantum mechanics have been reported with a commitment, either explicit or implicit, to the more exotic modal logical interpretation.5 It is not surprising, then, that the idea of an ontologically significant interpretation of fundamental physical theories is once again trending from approbation-as-profound to dismissal-as-vacuous. It is the Boolean logical interpretive statement, “Schrödinger’s cat exists as a superposition of potential live and dead states, such that at most and at least one of these states will become actual upon measurement,” that has been consistently exemplified via a history of clear empirical demonstration, where the pure state superposition of potential outcome states always evolves to a reduced matrix of probable outcome states. This is crucial because as probabilities, the alternatives must be (a) mutually
3 David Lewis, On the Plurality of Worlds (Oxford: Basil Blackwell, 1986), 2–3.
4 See for example David Lewis, “How Many Lives Has Schrödinger’s Cat?,” in Lewisian Themes: The Philosophy of David K. Lewis, ed. Frank Jackson and Graham Priest (Oxford: Oxford University Press, 2004), 4–23.
5 See for example A. D. O’Connell et al., “Quantum Ground State and Single-Phonon Control of a Mechanical Resonator,” Nature 464, no. 7289 (2010): 697–703; and its companion news article, Geoff Brumfiel, “Scientists Supersize Quantum Mechanics,” Nature, March 17, 2010, doi:10.1038/news.2010.130.
exclusive, in satisfaction of PNC (i.e., “at most one actual outcome state”), and (b) exhaustive, in satisfaction of PEM, in that when added together the probabilities sum to unity (i.e., “at least one actual outcome state”). This categorical presupposition of PEM, like PNC, is thus arguably a first principle of any ontological interpretation of quantum mechanics, and it is represented in quantum theory by Born’s rule, which likewise is accepted as an underivable presupposition of quantum theory. The probabilistic character of quantum mechanics formalized via Born’s rule is of first importance in this regard because it makes explicit the theory’s necessary presupposition of not just PNC and PEM, but also, more broadly, the first principles of Boolean logic (conjunction, disjunction, and complement), and the derivative principle of material implication (the latter of which will be introduced later in this essay). For it is only via a presupposition of Boolean logic that probability theory is possible,6 in particular the sum rule for probabilities and the quantum theoretical exemplification of the latter in Born’s rule. It is clear, then, that the categorical first principles of PNC and PEM, and their function within a Boolean logical framework of additional presuppositions, are central to each of the three problematic issues discussed above : objective indeterminacy, objective local contextuality, and objective superposition of states. Indeed, there are only two aspects of quantum mechanics that, when considered in isolation (i.e., in abstraction from the theory as a whole), violate these Boolean first principles. The first, as has already been discussed, is the pure state superposition of alternative potential outcome states. The second is closely related to the first, and can be understood as its connection to the problem of objective local contextuality : although individual local measurement contexts are Boolean, the attempt to relate multiple local measurement contexts together in a single measurement — a normative function of quantum mechanics — yields non-Boolean potential relations across these contexts, because the variously contextualized observables do not commute. In other words, in a composite quantum system, when considering the joint probability of outcome a1 generated by measurement of observable a via its local Boolean contextual measurement basis A and the probability of outcome b1 by measurement of observable b via its local Boolean contextual measurement basis B, one finds that (a1 ∙ b1) ≠ (b1 ∙ a1). There is thus a fundamental asymmetry of relation in quantum mechanics that is not present in classical mechanics where observables do commute.7
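The order dependence described here can be checked numerically. The following Python sketch is a toy illustration, not part of the original argument: the Pauli-type projectors and the particular state are assumptions chosen for the example. It shows that two projectors fail to commute and that sequential-outcome probabilities depend on the order of measurement.

```python
import numpy as np

# Projectors onto +1 eigenstates of two noncommuting observables
# (the z- and x-spin directions of a qubit, a standard minimal example).
Pz = np.array([[1.0, 0.0],
               [0.0, 0.0]])                 # |0><0|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
Px = np.outer(plus, plus)                   # |+><+|

print(np.allclose(Pz @ Px, Px @ Pz))        # False: the projectors do not commute

# For a generic state, the probability of the outcome sequence
# "z then x" differs from "x then z".
psi = np.array([0.6, 0.8])                  # arbitrary unit vector
p_zx = np.linalg.norm(Px @ Pz @ psi) ** 2   # 0.18
p_xz = np.linalg.norm(Pz @ Px @ psi) ** 2   # 0.49
print(p_zx, p_xz)
```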
6 See for example R. T. Cox, “Probability, Frequency and Reasonable Expectation,” American Journal of Physics 14 (1946): 1–13.
7 In Whitehead’s Treatise on Universal Algebra, he refers to the inherent “truism” and “paradox” of such noncommutativity (discussed further in Michael Epperson and Elias Zafiris, Foundations of Relational Realism: A Topological Approach to Quantum Mechanics and the Philosophy of Nature [Lanham, MD: Lexington Books, 2013], 103–38).
Of course, relational asymmetries of this kind
also exist in classical physics; conditional probabilities, for example, are perfectly expressible via classical probability theory — e.g., in cases where P(A | B), the probability of “A given B,” is not equivalent to P(A) alone. But whereas classical probability theory allows for cases where P(A | B) is equivalent to P(A), such that A and B can be qualified as mutually independent, this is not possible in quantum mechanics because of the noncommutativity of quantum observables (the phenomenon of “quantum entanglement” is a related physical indication of this). Thus, quantum mechanics requires a different kind of probability conditionalization rule than its classical analogue — namely, one that depicts the evaluation of quantum observables as a fundamentally asymmetrical relational process. To further clarify this point, consider that the conventional ontological implication of commutativity in classical mechanics is that the order of observation (i.e., the order of predication via measurement) is irrelevant because all observables are thought to possess well-defined values at all times, regardless of whether or not they are measured. But in quantum mechanics, this is not the case; asymmetrical probability conditionalization evinces that the act of observation is generative of novel facts, not merely revelatory of preexisting facts.8 That is, outcome states yielded by quantum mechanical measurement are not merely revealed subsequent to measurement, but rather generated consequent of measurement, as evinced by the Heisenberg uncertainty relations as well as other equally fundamental principles of quantum theory. The outcome states generated by quantum mechanics, then, are properly understood as ontological events, not merely epistemic qualifications, and as such their asymmetrical relational dependencies are likewise ontologically significant. For example, if a = “opening a screen door” and b = “walking through a screen door,” it is quite obvious why (a ∙ b) ≠ (b ∙ a). Each conjunction represents a possible physical state, but these states and their ingredient conditional probabilities are quite different (as anyone fortunate enough to witness this particular demonstration will verify). Thus the order of generation of predicative facts in quantum mechanics yields asymmetrical potential relational dependencies among these facts and among their measurement contexts. These potential dependencies are represented in the conventional Hilbert space formalism by using the tensor product to relate potential outcome states subsuming multiple observables and their respective measurement contexts. If, for
8 As evidenced by the experimental disconfirmation of local hidden variables theories. See J. S. Bell, “On the Einstein Podolsky Rosen Paradox,” Physics 1, no. 3 (1964): 195–200. See also the Kochen-Specker theorem: Simon Kochen and Ernst Specker, “The Problem of Hidden Variables in Quantum Mechanics,” Journal of Mathematics and Mechanics 17 (1967): 59–87. See also the famous experiments of Aspect et al.: A. Aspect et al., “Experimental Tests of Realistic Local Theories via Bell’s Theorem,” Physical Review Letters 47 (1981): 460.
example, observable a has two potential values, a1 and a2, in Boolean measurement context A, and observable b has two potential values, b1 and b2, in Boolean measurement context B, then a1 ∙ b1 (the statement “observable a is in state a1 and observable b in state b1”) is expressed as the tensor product a1 ⊗ b1. This means that the specification of a is impossible without reference to the specification of b, and vice versa. This is the essence of quantum entanglement — the impossibility of defining the combined states of a and b as a ∙ b, as though a and b were mutually independent. Thus the pure state Ψ of this composite system must represent all potential propositional relations between a and b — but more than that, it must represent all potential relations among all observables constitutive of the system, in abstraction from any particular contextualization. Further complicating matters, it is an essential and infamously challenging feature of quantum mechanics that these manifold potential relations constitutive of the pure state are formally equivalent to a reduced, contextualized selection of these potential relations. In the present example, this is expressed as :

$\Psi = \tfrac{1}{\sqrt{2}}\,(a_1 \otimes b_1 + a_2 \otimes b_2)$.

The coefficient $1/\sqrt{2}$, as will be explained presently, represents the probability 1/2 for either the composite state a1 ⊗ b1 or a2 ⊗ b2, and probability 0 for either the composite state a1 ⊗ b2 or a2 ⊗ b1. The reason that the possible dependencies do not include a1 ⊗ b2 and a2 ⊗ b1 is that the individual local contexts A and B are Boolean, such that a1 and a2 are mutually exclusive (i.e., a1 ∨ a2) in context A, and b1 and b2 are mutually exclusive (b1 ∨ b2) in context B, and these Boolean relations are correlated from A to B via a “compatibility condition.”9 In other words, the Boolean structure of context A “overlaps” with the Boolean structure of context B, such that if contexts A and B are each represented as Boolean subalgebras, or equivalence classes of Boolean subalgebras,10 the Boolean structure of each context can be extended to that of the other where these subalgebras overlap.
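For readers who want to see the arithmetic, the following Python sketch builds the tensor-product state above and recovers the stated probabilities: 1/2 for the correlated pairs and 0 for the anticorrelated ones. The basis vectors and labels are assumptions for the example.

```python
import numpy as np

# Local bases for observables a and b (each two-valued).
a = {"a1": np.array([1.0, 0.0]), "a2": np.array([0.0, 1.0])}
b = {"b1": np.array([1.0, 0.0]), "b2": np.array([0.0, 1.0])}

# The entangled pure state: Psi = (a1 (x) b1 + a2 (x) b2) / sqrt(2)
psi = (np.kron(a["a1"], b["b1"]) + np.kron(a["a2"], b["b2"])) / np.sqrt(2)

# Joint outcome probabilities via squared projections (Born's rule):
for ka, va in a.items():
    for kb, vb in b.items():
        p = np.dot(np.kron(va, vb), psi) ** 2
        print(ka, kb, round(p, 3))  # a1 b1: 0.5, a1 b2: 0.0, a2 b1: 0.0, a2 b2: 0.5
```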
9 Epperson and Zafiris, Foundations of Relational Realism, 58–62, 148–56.
10 Jeffrey Bub, “Quantum Logic, Conditional Probability, and Interference,” Philosophy of Science 49 (1982): 402–21. See also Jeffrey Bub, “The Problem of Properties in Quantum Mechanics,” Topoi 10, no. 1 (1991): 27–34.
11 This is a conditioning process analogous to von Neumann’s “Process 1.” See for example John von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton, NJ: Princeton University Press, 1955), 352.
It must be emphasized here that Boolean measurement contextuality in this sense is not merely an epistemic coordination, but rather an ontologically significant, logical conditioning of potential outcome states such that they can evolve, through coherent integration, into probable outcome states11 — a conditioning that always occurs in quantum mechanics. Indeed, this is arguably its most essential feature, if not its signature feature — that quantum measurement always yields
probability outcomes, and therefore always presupposes a Boolean logical relational structure. The fact of this presupposed structure, then, confutes the often-repeated claim that quantum mechanics is somehow essentially barren of the principles of classical logic, just because it contravenes certain principles of classical, deterministic physics. As discussed earlier, the presupposition of a Boolean-logical measurement context is represented in the conventional Hilbert space formalism of quantum mechanics via (1) the requirement of an orthonormal measurement basis (representing the Boolean measurement context), and (2) the requirement that observables in quantum mechanics (in this case, a and b) are representable only by self-adjoint operators projected onto this measurement basis. As a heuristic model evincing these requirements, consider, for example, a two-dimensional Hilbert space — an abstract vector space named after the mathematician David Hilbert. In quantum mechanics, a Hilbert space can be thought of as a “potentiality and probability space” where logically conditioned propositions about a measured system state can be mathematically represented in terms of vectors and their relations. In a Hilbert space of two dimensions, representable as a simple Cartesian x-y coordinate system, there can be sets of, at most, two mutually orthogonal vectors. In a space of three dimensions, x-y-z, there can be sets of, at most, three mutually orthogonal vectors, and so on. This correspondence serves as the foundation for the concept of an orthonormal measurement basis, where vectors representing actual system states are of unit length and mutually orthogonal. It is easy to see how this measurement basis (i.e., measurement context) might be used to represent a Boolean-contextualized quantum observable a that has only two possible outcome values, a1 and a2 : one first recognizes that as a Boolean-contextualized observable, these possible outcome values must be relatable as mutually exclusive and exhaustive when actualized as a measurement outcome. To this end, a1 and a2 are represented as mutually orthogonal vectors in our two-dimensional Hilbert space. (As noted above, however, if there were three possible outcome values, a1, a2, and a3, we would require the addition of a third dimension — i.e., a z-axis — to accommodate sets of three mutually orthogonal vectors, and so on.) Thus, for observable a, the orthonormal measurement basis a1 ⊥ a2 represents the Boolean-contextualized possible outcome values a1 ∨ a2 (that is, a1 or a2). In quantum mechanics, however, this simple disjunction always evolves to become an exclusive disjunction (“either a1 or a2”) with the actualization of a unique outcome state; that is, the simple disjunction “a1 or a2” is additionally conditioned by PNC (“if a1 then not a2,” and vice versa), thus becoming an exclusive disjunction of contradictories, “either a1 or a2.” Given this Boolean framework of quantum mechanical contextualization of observable a, the quantum mechanical evaluation of the observable as
either a1 or a2 upon measurement can be formalized by simply adding a third vector, ψ, representing the objectively indeterminate state of the system in abstraction from the contextualized measurement, and projecting it, via a projection operator, onto the vectors a1 and a2. Since a vector’s unit length represents “probability = 1,” one can simply invoke the Pythagorean theorem to show that the length of each projection, ψ onto a1, and ψ onto a2, when squared, represents the probability valuation of outcome states a1 and a2, respectively.
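A short Python sketch of this heuristic picture (real-valued vectors and an arbitrary angle, both assumptions made for the example) shows the Pythagorean relation at work: the squared projections sum to unity.

```python
import numpy as np

# Orthonormal measurement basis of the two-dimensional "Hilbert space".
a1 = np.array([1.0, 0.0])
a2 = np.array([0.0, 1.0])

# psi: a unit-length state vector at an arbitrary angle to the basis.
theta = 0.3
psi = np.cos(theta) * a1 + np.sin(theta) * a2

# Squared projection lengths give the probability valuations of a1 and a2;
# by the Pythagorean theorem they sum to 1 (unit-length state vector).
p1 = np.dot(psi, a1) ** 2    # ~0.913
p2 = np.dot(psi, a2) ** 2    # ~0.087
print(p1, p2, p1 + p2)       # the last value is 1.0
```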
The real power of this formalism as a means of representing Boolean-contextualized quantum observables becomes apparent when one considers less idealized systems — i.e., those comprising observables with more than two possible outcome values. As noted above, for example, if a quantum observable has three possible outcome values — a1, a2, and a3 — adding a third dimension to our Cartesian coordinate system (x-y-z) allows for sets of, at most, three mutually orthogonal vectors. Thus a1 ⊥ a2 ⊥ a3 represents a1 ∨ a2 ∨ a3. In this case, a1⊥ is a subspace that includes a2 and a3. But in less idealized systems, where observables can have countless values, the expression an ∨ an⊥ is far from trivial; indeed, its equivalence in quantum theory to an ∨ ¬an clearly evinces the centrality of quantum theory’s presupposition of Boolean-contextualized observables, and more generally, its presupposition of PNC and PEM. Thus, for any Boolean measurement context A, the following truth-functional, material implication holds :

Context A
an    ¬an⊥    an → ¬an⊥
T     T       T
T     F       F
F     T       T
F     F       T
It is important to emphasize here that this truth table represents the Boolean contextualization of a quantum state in terms of a particular observable, not the evaluation of this observable. Recall that the evaluation of a quantum observable entails the actualization of one of its potential values — i.e., it is the unique, actual outcome state consequent of measurement. In the above example, we can signify this actualized outcome state as α1 or α2, which is the actualization of either potential outcome state a1 or a2, respectively, according to the probability valuation of each. Again, the presupposition of this actualization, in satisfaction of PNC and PEM, is reflected in the formalism by the fact that together, the probability valuations of a1 and a2 sum to unity.
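A one-screen Python check of this contextual implication, added purely for illustration, enumerates the table above and confirms the equivalence with ¬an ∨ ¬an⊥ invoked just below.

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# Enumerate the truth table for a_n -> not(a_n_perp) and confirm that it
# is truth-functionally equivalent to (not a_n) or (not a_n_perp).
for a_n, a_n_perp in product([True, False], repeat=2):
    lhs = implies(a_n, not a_n_perp)
    rhs = (not a_n) or (not a_n_perp)
    assert lhs == rhs
    print(a_n, a_n_perp, lhs)
```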
More broadly, then, the concept of “evaluation of observables via quantum measurement” entails the totality of the process by which an initially
indeterminate, uncontextualized actual state ψ, defined as an abstract integration of countless potential outcome states (where neither PNC nor PEM hold), evolves, by way of its local Boolean contextualization in terms of some particular observable a, to become a reduced integration of probable outcome states a1 or a2, thus satisfying PNC and PEM, and terminating in the actualization α1 or α2 according to the probability valuations of a1 and a2, respectively. Thus in quantum mechanics probability presupposes actuality (as it does in classical mechanics) and actuality presupposes probability (which it does not in classical mechanics). Likewise, evaluation presupposes Boolean contextualization and Boolean contextualization presupposes evaluation. Evaluation and contextualization, in other words, as ontologically relational features of actuality and potentiality, are thus mutually implicative in quantum mechanics. Indeed, it is in light of this mutual implication that quantum indeterminacy can be more comprehensively understood — that is, beyond just the probabilistic character of quantum measurement; for while it is true that the evaluation of observables presupposes Boolean contextualization, it is never the case that the latter determines the former. Rather, evaluation is conditioned by Boolean contextualization, and this conditioning is indeterminate. Row 3 of the above truth table, for example, describes a state evaluable as neither an nor an⊥, where the contextual implication an → ¬an⊥ is nevertheless true. Likewise, this contextual implication itself, as a truth function, is logically equivalent to the expression ¬an ∨ ¬an⊥, indicating again that the state is evaluable as neither an nor an⊥. The state can, in other words, be defined objectively in a way that exceeds the particular Boolean context given by an → ¬an⊥. In quantum mechanics, as noted above, this objectively indeterminate state of the system in abstraction from any particular local Boolean contextualization is represented by the pure state vector ψ. Formally, the evaluation of an observable a via the Boolean contextualization of ψ is given in the expression :

$|\psi\rangle = \alpha\,|a_n\rangle + \beta\,|a_n^{\perp}\rangle$, with $|\alpha|^2 + |\beta|^2 = 1$.
(For present purposes, the bracketing of terms with the symbols | ⟩ is simply understood as signifying, in the conventional Dirac notation, that these terms are quantum state vectors.) One can easily interpret this formal expression by revisiting the earlier discussion : |ψ⟩ is a vector of unit length, thus representing an objective, though indeterminate, uncontextualized state. The right side of the equation represents the contextualization of |ψ⟩ by context A, as described above. The evaluation of the observable a defined by that contextualization entails the projection of the state vector |ψ⟩ upon the contextualized, mutually exclusive potential outcome states (eigenstates) |an⟩ and |an⊥⟩.
|α|² + |β|² = 1 is reflective of the presupposition of PEM, such that the coefficients α and β,12 when squared, give the probabilities of |an⟩ and |an⊥⟩, respectively, as actual outcome states terminal of the measurement interaction. That is, since these probabilities sum to unity (i.e., they are exhaustive), it is presupposed that one of them will become actualized as a measurement outcome state. In summary, the requirements of (1) a local Boolean measurement context, represented in the standard formalism as an orthonormal measurement basis, and (2) observables represented as self-adjoint projection operators associated with this basis, together guarantee that any quantum mechanical observable is always definable in terms of mutually exclusive and exhaustive outcome states — that is, outcome states relatable as an exclusive disjunction of contradictories, in satisfaction of both PNC and PEM. As a result, local Boolean measurement contexts are themselves always relatable coherently and consistently in quantum mechanics; but as discussed earlier, and as will be explored further presently, relations among contexts are always internal relations, as evinced by the noncommutativity of quantum observables. An internal relation is one in which the objective properties of a relatum are modified by the relation. They are relations, in other words, that are constitutive of a given relatum, rather than external to it.13 For example, a child’s genetic history is understood to be internally related to that of his grandfather, since the grandfather’s history is internally constitutive of the child’s. Conversely, the grandfather’s genetic history is externally related to that of the child, since it is objectively independent of any relationship with the latter. Likewise, with respect to the concept of a history in general, including the concept of quantum mechanical histories,14 time itself is asymmetrical in this way, such that facts of “the present” are understood as internally related to the facts of the past, while the facts of the past are externally related to those of the present. In precisely the same way, in quantum mechanics, the outcome state of a measured system is internally related to its initial state, with its particular contextualization. Further, since relata are qualified as probabilities in quantum mechanics, probability conditionalization in composite quantum systems is another example of asymmetrical internal relation.
12 The italicized Greek characters representing these coefficients should not be confused with the nonitalicized Greek characters representing actualized outcome states (i.e., where a potential outcome state an is actualized as αn upon measurement).
13 See for example G. E. Moore, “External and Internal Relations,” Proceedings of the Aristotelian Society 20 (1919–20): 40–62.
14 See for example Roland Omnès, The Interpretation of Quantum Mechanics (Princeton, NJ: Princeton University Press, 1994), 122–42. See also Robert Griffiths, “Consistent Histories and the Interpretation of Quantum Mechanics,” Journal of Statistical Physics 36, no. 1 (1984): 219–72; and J. J. Halliwell, J. Pérez-Mercader, and W. H. Zurek, Physical Origins of Time Asymmetry (Cambridge: Cambridge University Press, 1996), 210.
While classical conditional probabilities are not the same as quantum conditional
228
coding AS literacy — Metalithikum IV
probabilities (it can be argued, however, that the latter are properly understood as a generalization of the former), they are sufficiently analogous to employ the simpler classical notation here : P(A | B) — the probability of A given B — entails that A is internally related to B. When it is also true that B is externally related to A (i.e., A and B are not mutually internally related), this asymmetry is reflective of the noncommutativity of quantum observables, such that P(A | B) ≠ P(B | A). At the most fundamental level, there are only three possible modes of relation among local Boolean measurement contexts in quantum mechanics :

disjoint : A ∩ B = Ø [a]
overlapping : A ∩ B ≠ Ø [b]
implicative : A ⇒ B or B ⇒ A [c]
Only [b] and [c] involve internal relations, so we will begin with them. With respect to [c], the mode of implicative internal relation, note that in distinction from the logical connective → signifying material implication (a purely syntactic, truth-functional implication) employed earlier in the relation of contextualized observables (where a → b is read “if a then b”), asymmetrical internal relation between measurement contexts A and B, as well as between an actualized outcome state αn and its particular measurement context A, is both syntactic and semantic, and thus requires a different connective. To this end, the statement “context A is internally related to context B, and context B is externally related to context A” is signified herein via the expression A ⇒ B, alternatively read “A only if B.” Likewise, the statement “actualized outcome state αn is internally related to context A” is signified via the expression αn ⇒ A. While this is analogous to the concept of “logical entailment” (“A entails B,” or “B is deducible from A”), as well as to C. I. Lewis’s notion of “strict implication” (both notions being semantic rather than syntactic), internal relation cannot be simply assimilated to these. G. E. Moore defines internal relation thus : For any internal relational property F belonging to x, the following must be true :15

$(\forall x)(\forall y)\big(Fx \rightarrow (\neg Fy \vDash y \neq x)\big)$ [1]

In other words, “If x has the relational property F, then from y’s lack of this property it can be deduced that y is not identical with x.” If this proposition is true for a particular value of F, then F is an internal relational property. If false, F is an external relational property. It is interesting to note that in comparison with [1], there is a similar proposition that, as Moore argued,16 can be said to hold true for all relational properties F, whether internal or external :

$(\forall x)(\forall y)\big(Fx \vDash (\neg Fy \rightarrow y \neq x)\big)$ [2]
15 Moore, “External and Internal Relations.” 16 Ibid.
This expression is identical to [1], except that the connectives → (material implication) and ⊨ (entailment) are transposed. Expression [2] thus reads, “Given that x has the relational property F, one can deduce, as a matter of fact — i.e., semantic content, not just syntactic form — that if y lacks this property, then it cannot be identical with x.” In quantum mechanics, this distinction between form and fact is reflected in the distinction between contextualized potential outcome states and actual outcome states, respectively. “Facts” are evaluated observables — i.e., actualized potentia; they are actualizations αn of potential measurement outcomes an whose Boolean contextualization A allows these potential outcomes to be valuated as probabilities. Since, for internal relations, both [1] and [2] are true (again, such that the connectives → and ⊨ can be transposed), it would seem that the statement “αn is internally related to A” exhibits a mutually implicative relationship between the concept of material implication and the concept of logical entailment in quantum mechanics — i.e., the mutual implication of syntax and semantics, of form and fact. In this way, internal relation can be thought of as connoting a kind of “ontological implication” that subsumes but exceeds logical implication at the level of fundamental physics. In quantum mechanics, there are two basic categories of internal relation, both of which are asymmetrical, and both of which are operative in every quantum measurement interaction : (1) the global outcome state internally related to locally contextualized measurement outcomes (i.e., extension of the local to the global); and (2) locally contextualized measurement outcomes internally related to the global initial state (i.e., restriction of the local by the global). By category [1], locally contextualized measurement outcomes — “quantum facts” — are understood as constitutive of a novel global totality consequent of measurement — i.e., a novel augmentation of the initial global state. By category [2], locally contextualized measurement outcomes are internally related to the global totality of facts subsuming the measured system’s initial actual (though indeterminate) state. More specifically, both the potential and actual outcome states of a measured system are internally related to the system’s local Boolean contextualization of the global totality of facts constitutive of the measured system’s initial state (see von Neumann’s projection postulate and the Lüders rule). In quantum mechanics, in other words, a measurement’s own particular local contextualization of the initial global state is always understood to be internally constitutive of the measurement outcome. For example, in the expression introduced earlier, $|\psi\rangle = \alpha\,|a_n\rangle + \beta\,|a_n^{\perp}\rangle$, it is always the case that $\alpha\,|a_n\rangle + \beta\,|a_n^{\perp}\rangle$ is internally related to the objective but indeterminate totality of facts constitutive of |ψ⟩.
Nonlocal probability conditionalization is a well-known exemplification of categories (1) and (2) together,17 where locally contextualized potential measurement outcomes of one component of a composite quantum system are internally related to (and thus logically conditioned by) locally contextualized actual outcomes within a different component (see von Neumann’s projection postulate and the Lüders rule applied to nonlocal EPR correlations).18 As the conceptual framework presented in this volume unfolds, we will see that [1] and [2] above are, in fact, dipolar aspects of a single, unified relational process inherent in every quantum measurement event. A key principle of this framework, as is evident above, and as Elias Zafiris and I discussed in detail in Foundations of Relational Realism : A Topological Approach to Quantum Mechanics and the Philosophy of Nature, is that locally Boolean-contextualized quantum mechanical internal relations always require reference to a totality of facts whose global relations are non-Boolean. This key principle is most easily understood as an analogue to the concept of the measured system’s state vector |ψ⟩ introduced earlier : Recall that |ψ⟩ represents an objective though indeterminate state of a measured system in abstraction from any particular local contextualization, and is thus non-Boolean. Likewise, the global state vector |Ψ⟩, introduced earlier, represents an objective though indeterminate state of a composite quantum system, where “global” is defined as the totality of this composite system. As we saw earlier, |Ψ⟩, as an uncontextualized representation of the global state, is, like |ψ⟩, non-Boolean. The individual local systems constitutive of the global composite system, one will recall, are not mutually independent and therefore relations between them must be expressed as a tensor product. Writ large, of course, |Ψ⟩ could just as easily represent the totality of the universe itself considered quantum mechanically — i.e., as an uncontextualized objective though indeterminate state of the universe. In that light, it becomes intuitively clear why there is no sense in which a “global Boolean context” for a global state vector |Ψ⟩ can be specified quantum mechanically in the same way that local Boolean contexts can be specified for |ψ⟩ : it would be akin to measuring the universe itself as either “here” or “there” in the same way that any system within the universe can be so measured. (Elsewhere we have argued that this is a quantum mechanical exemplification of the logical and philosophical problem of predicating totalities.)19
17 Epperson and Zafiris, Foundations of Relational Realism, 58–62, 148–56.
18 Ibid., 58–62, 71–78, 307–12.
19 Epperson and Zafiris, Foundations of Relational Realism, 110–19.
Further still, as seen in the previous discussion of composite quantum systems, quantum mechanics does not allow for the analytic depiction of a “global Boolean contextualized state” as simply
an exhaustive concatenation of all local Boolean contextualized states, as though these local states were mutually independent (namely, per the Kochen-Specker theorem).20 What can be specified quantum mechanically is always a mutually implicative relationship between local and global, such that neither can be abstracted from the other in any quantum measurement event. This relationship has only two possible modes, and these correspond, respectively, with the two categories of asymmetrical internal relation discussed above; and like the latter, these two modes are mutually implicative and always jointly operative in every quantum measurement event : [1] Extension of the local to the global, wherein locally contextualized facts (i.e., measurement outcomes) condition global potentia nonlocally. This is evinced, for example, by nonlocal probability conditionalization in quantum mechanics. And [2] restriction of the local by the global, wherein global facts condition local potentia and their local contextualization. This is evinced, for example, via the phenomenon of environmental quantum decoherence. The central conceptual challenge, then, for any ontological interpretation of quantum mechanics, is not only the problem of measurement, nor is it the problem of actualization of potentia (i.e., the problem of the existence of facts); the central challenge underlying both of these problems, rather, is properly understanding, via a coherent and empirically adequate conceptual scheme, the mutually implicative relationship between local and global in quantum mechanics. This necessarily entails the construction of a formal philosophical and mathematical framework that adequately depicts how the logical features of this relationship can be shown to condition the causal features. In summary, the fundamental dipolar process in which any such coherent framework must be anchored — the essential process of quantum mechanics — is [1] the asymmetrical internal relation of a global outcome state to locally contextualized measurement outcomes (i.e., extension of the local to the global); and [2] the asymmetrical internal relation of locally contextualized measurement outcomes to the global initial state (i.e., restriction of the local by the global). Again, it is a fundamental principle of quantum mechanics per the Kochen-Specker theorem that this dipolar process excludes the possibility of global Boolean contextualization, either synthetically via extension, or analytically via restriction. The constituent facts of this totality, in other words, are not determinate; they cannot all be assigned definite bivalent truth values (i.e., either true or false) such that PNC and PEM are satisfied among all possible relations of these facts.
20 Simon Kochen and Ernst Specker, “The Problem of Hidden Variables in Quantum Mechanics,” Journal of Mathematics and Mechanics 17 (1967): 59–87.
This totality, then, is not only epistemically indeterminate globally; it is ontologically indeterminate
locally — that is, as locally constitutive of the particular quantum measurement event internally related to it. As we argue in Foundations of Relational Realism, it is via this dipolar process of asymmetrical internal relation that manifold local Boolean contextualized facts can be coherently and objectively integrated nonlocally — even if not comprehensively as a global Boolean totality — such that PNC and PEM condition causal relations not only within local Boolean contexts, but also across these contexts, even when measured systems are spatially well separated. It is via asymmetrical internal relation, in other words, that a global totality of facts can be coherently internally constitutive of a local quantum process.
VIII GICA: Grounded Intersubjective Concept Analysis. A Method for Improved Research, Communication, and Participation — Timo Honkela et al.1
I Introduction 239 · I.I Contextuality and subjectivity 240 · I.II Shedding light on subjectivity: crowdsourcing 242 · I.III Becoming conscious of individual differences as a way of increasing understanding 243 · I.IV False agreements and false disagreements 244 · I.V Making differences in understanding visible 244 — II Theoretical background 245 · II.I Cognitive theory of concepts and understanding 245 · II.II Subjective conceptual spaces 249 · II.III Intersubjectivity in conceptual spaces 250 · II.IV Conceptual differences in collaborative problem solving 250 — III The GICA method 252 · III.I Introduction to subjectivity and context analysis 254 · III.II Preparation and specifying the topic 257 · III.II.I Determining relevant stakeholder groups 257 · III.II.II Collecting focus items from relevant stakeholders and others 257 · III.II.III Collecting context items 258 · III.III Focus session 259 · III.III.I Filling in the Tensor 259 · III.III.II Data analysis and visualization 260 · III.IV Knowledge to action 264 — IV Discussion 265 · IV.I GICA as a participatory method 266 · IV.I.I Focusing on subjective differences 268 · IV.I.II Barriers for successful communication in participatory processes 269 · IV.II Summarizing our contribution and future directions 270 — V Acknowledgments 272 — VI Further References 273
1 Authors: Timo Honkela, Nina Honkela, Krista Lagus, Juha Raitio, Henri Sintonen, Tiina Lindh-Knuutila, and Mika Pantzar
In this text, we describe a method that can be deployed to illuminate the differences among and between people with regard to how they conceptualize the world. The Grounded Intersubjective Concept Analysis (GICA) method first employs a conceptual survey designed to elicit particular ways in which concepts are used among participants, aiming to exclude the level of opinions and values. Subsequent analysis and visualization reveals potential underlying groupings of people, concepts, and contexts. An awareness and reflection process is then initiated based on these results. This method, combining qualitative and quantitative components, leads into the externalization and sharing of tacit knowledge.
The GICA method thus builds up a common ground for mutual understanding, and is particularly well suited for strengthening participatory processes. Participatory methods have been designed to include stakeholders in decision-making processes. They do this by eliciting different opinions and values of the stakeholders. The underlying assumption, however, is that there are no significant conceptual differences among the participants. Nevertheless, often the failures of the participatory process can be traced back to such hidden conceptual differences. As an unfortunate outcome, crucial experiential knowledge may go unrecognized or differences in the meanings of words used may
be misconstrued as differences in opinions. The GICA method aims to both alleviate and mitigate this problem.
I Introduction
The ability to understand each other is one that is frequently taken for granted; only the occasional failure reveals how problematic understanding language can be. Establishing the connection between a word and its typical and appropriate use demands a reliance upon a long learning process. This process, made possible and guided by our genetic makeup, further requires an extensive immersion into a cultural and contextual mode of using words and expressions in order for it to succeed. The extent to which these contexts are shared among individual language speakers governs how we are then able to understand each other. However, when our learning contexts differ, misalignments in understanding the concepts themselves arise and subsequent communication failures begin to occur. It is obvious that if the context of learning had been completely different, i.e., if two persons have learned different languages, the basis for mutual understanding through exchanging linguistic expressions is very limited or even nonexistent. Self-evidently, without access to gestures or any external context it is not possible to know what “Ble mae’r swyddfa bost agosaf?” or “Non hurbilen dagoen postetxean da?” means unless, that is, one has learned the Welsh or Basque language. This trivial example can naturally be extended to encompass more serious situations as well. Considering the readers of this article, it is fair to assume that every one of them speaks English. Nevertheless, it is difficult for most to understand expressions like “a metaphyseal loading implant employs a modified mechanoregulatory algorithm” or “bosonic fields commute and fermionic fields anticommute” unless one is an expert in the particular area of medicine or physics concerned. Even expressions in everyday informal language such as “imma imba, lol” can seem obscure if one is not familiar with contemporary youth language on the Internet. In addition to these kinds of clear-cut cases there are more subtle situations in which two or more persons might think that they understand each other when actually they do not. It seems reasonable to think that one assumes to be understood if one says : “This is
However, it is far from guaranteed that others would actually interpret the words “fair,” “like,” “small,” or “democratic” in the same way as the speaker. In this text, building on previous works,2 we present our methodological innovation, which aims to improve (a) mutual understanding in a communicative context, and (b) the inclusion of stakeholder concerns in complex decision-making contexts. The proposed method builds on (1) an understanding of the grounded nature of all concepts, and of the dynamic and subjective nature of concept formation and use; and (2) the recognition that the best approach to elicit and represent such concepts is one that combines elements from both qualitative case research and quantitative learning methods. We call this method Grounded Intersubjective Concept Analysis (GICA). The word “grounded” here refers both to the qualitative method of Grounded Theory3 and to the idea of the embodied grounding of concepts in human experience.4 The method employs three main steps: (A) Preparation, (B) Focus session(s), and (C) Knowledge to action activities. These steps can be repeated iteratively. The focus sessions are supported with computational tools that enable the analysis and visualization of similarities and differences in the underlying conceptual systems. We begin our explanation by showing examples of contextuality and subjectivity in interpretation, and continue by considering a modern Internet-based activity (in this instance, crowdsourcing) that highlights subjective differences in knowledge-intensive activities.

I.I Contextuality and subjectivity

It is commonplace in linguistics to define semantics as dealing with prototypical meanings, whereas pragmatics is associated with contextual meanings. However, for our purposes this distinction is not relevant, since the interpretation of linguistic expressions is always already contextualized in some way, often extending to multiple levels of context, both linguistic and extralinguistic in nature. The opposite is true when, for example, an ambiguous word such as “break” appears in isolation.
2 See Timo Honkela and Ari M. Vepsäläinen, “Interpreting Imprecise Expressions: Experiments with Kohonen’s Self-Organizing Maps and Associative Memory,” Proceedings of ICANN-91, International Conference on Artificial Neural Networks 1 (1991); Timo Honkela, Ville Könönen, Tiina Lindh-Knuutila, and Mari-Sanna Paukkeri, “Simulating Processes of Concept Formation and Communication,” Journal of Economic Methodology 15, no. 3 (2008): 245–59; Nina Janasik, Timo Honkela, and Henrik Bruun, “Text Mining in Qualitative Research: Application of an Unsupervised Learning Method,” Organizational Research Methods 12, no. 3 (2009): 436–60.
3 Barney G. Glaser and Anselm L. Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research (Chicago: Aldine Publishing Company, 1967).
4 Stevan Harnad, “The Symbol Grounding Problem,” Physica D: Nonlinear Phenomena 42, no. 1 (1990): 335–46.
Devoid of any specific context, one can only try to guess which of its multiple meanings could be in question. If there is even a short contextual cue — “break the law,” or “have a break,” or “how to break in billiards” — it is usually possible to arrive at a more accurate interpretation. Additionally, the extralinguistic context of an expression usually helps in disambiguation. In some cases, the interpretation of an expression can be numerically quantified and thus more easily compared. For instance, the expression “a tall person” can be interpreted as a kind of measure of the height of the person. The interpretation of “tallness” can be studied experimentally in two ways: either a person can be asked to give the prototypical height of someone who is tall, or to judge whether persons within a particular height range are tall or not (perhaps qualified with quantifiers such as “quite” or “very”). Sometimes this kind of quantification is conducted using the framework of fuzzy set theory.5 However, the tallness of a person is only the tip of the iceberg where the complexity of interpretation is concerned. A small giraffe or building is usually taller than a tall person. A person whose height is five feet (one meter and fifty-two centimeters) is not prototypically considered tall unless that person is a young child. Many other contextual factors, such as gender, historical time (people used to be shorter hundreds of years ago), and even profession (e.g., basketball players versus fighter pilots), also influence interpretations of the concept. The tallness example also provides a valuable window onto subjectivity. If we ask a thousand people the question “How tall is a tall person?” we receive many different answers, and if we ask “If a person is x cm tall, would you call him / her a tall person?” the answer again varies among respondents. The distribution of answers to such questions reflects individual variations in the interpretation of “tall.” If the pattern in question is more complex and a number of contextual features are taken into account, the issue of subjective models becomes even more apparent, unless it is assumed that such information for interpretation (linking language with perceptions) is genetically determined. Genetic determination of most concepts is, however, highly unlikely in the light of empirical evidence. Another simple example of subjectivity is found in the naming of colors. Differences and similarities in color naming and color concepts in different languages have been studied carefully.6 In addition, unless prototypical colors such as pure black, white, red, or green are chosen, individual people tend to name a sample color in different ways. For example, what is dark blue for one person may be black to someone else.
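To make the fuzzy-set reading of “tallness” concrete, the following minimal sketch expresses the degree to which a height counts as “tall” as a logistic membership function. The choice of a logistic curve and the midpoint and steepness values are illustrative assumptions for this sketch, not values taken from the studies cited here.

```python
import math

def tall_membership(height_cm, midpoint=180.0, steepness=0.3):
    """Degree (0-1) to which a height counts as 'tall'.

    A logistic curve is one common choice of fuzzy membership function;
    the midpoint and steepness used here are illustrative assumptions.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (height_cm - midpoint)))

# Context dependence can be modeled by shifting the midpoint, e.g.,
# "tall for a child" versus "tall for a basketball player".
for height in (150, 170, 180, 190, 200):
    print(height, round(tall_membership(height), 2))
```

In this reading, the subjectivity discussed above corresponds to each person carrying slightly different membership parameters for the same word.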
5 Lotfi A. Zadeh, “Fuzzy Sets,” Information and Control 8, no. 3 (1965): 338–53.
6 Paul Kay and Chad K. McDaniel, “The Linguistic Significance of the Meanings of Basic Color Terms,” Language (1978): 610–46; Richard S. Cook, Paul Kay, and Terry Regier, “The World Color Survey Database: History and Use,” in Handbook of Categorisation in the Cognitive Sciences (Amsterdam: Elsevier, 2005).
A similar straightforward illustration of the subjectivity of interpretation is to be found in the naming of patterns. For instance, people name the shapes shown in figure 1 in different ways, except for the clear cases at the ends of the continuum.7

Fig. 1 A continuum of shapes.
It is important to note that the kind of subjectivity discussed above is usually not dealt with in computational or formal theories of language and understanding. On the other hand, this phenomenon is self-evident for practitioners in many areas of activity, as well as in relation to practice-oriented fields in the humanities. However, subjectivity has been difficult to quantify. In this text, we introduce a method that is designed to make the subjectivity of interpretation and understanding explicit and visible even in nontrivial cases.

I.II Shedding light on subjectivity: crowdsourcing

In the Web 2.0 world, crowdsourcing is a specific activity that relies upon input from large numbers of people.8 Tasks traditionally performed by an employee or contractor are outsourced to a group of people or community in the form of an open call. Social bookmarking is a specific example of crowdsourcing, in which the expert task of metadata provision in the form of library categories or keywords is replaced by a virtual group activity involving multitudinous, dispersed users. In contrast with the expert activity, in social bookmarking no formalized category system or keyword list is employed. This means, in practice, that individual variations in labeling become clearly visible. For example, in the social bookmarking website delicious.com the Wikipedia page on the Self-Organizing Map (SOM) method had been given one to twelve labels by 128 different users (as of October 20, 2009). The labels for SOM given by at least ten users are: AI, neural networks, visualization, som, wikipedia, clustering, programming, neural, kohonen, research, network, algorithms, and statistics. Additionally, a large number of other labels were suggested by one or two users only. Examples of these rarer labels include: neurotic, mind map, and research result. This example illustrates the existence of a shared prototypical core surrounded by an expansive cloud of variation.
7 Timo Honkela and Matti Pöllä, “Concept Mining with Self-Organizing Maps for the Semantic Web,” in Advances in Self-Organizing Maps (Berlin: Springer, 2009), 98–106.
8 Cf. e.g. Daren C. Brabham, “Crowdsourcing as a Model for Problem Solving: An Introduction and Cases,” Convergence: The International Journal of Research into New Media Technologies 14, no. 1 (2008): 75–90.
An important observation is that the most common labels coincide with the ones that an expert would give. This is partly explained by the fact that many of the crowdsourcing participants are either experts, students as emerging experts, or professional amateurs. Moreover, the subjectivity in labeling is not limited to social bookmarking. Furnas et al. have found that in spontaneous word choice for objects in five domains, two people favored the same term with a probability of less than 0.2.9 Similarly, for indexing documents with words, Bates has shown that different indexers, well trained in an indexing scheme, might assign index terms for a given document differently.10 Further to this, it has also been observed that an indexer might use different terms for the same document at different times. These kinds of differences can partly be explained by randomness in word choice, but an essential component also seems to be the differences in how people conceptualize various phenomena. In the following sections, we describe in increasing detail how these differences might be made visible.

I.III Becoming conscious of individual differences as a way of increasing understanding

For the most part, people do not seem to be aware of the subjectivity of their perceptions, concepts, or worldviews. Furthermore, one could claim that we are more typically conscious of differences in opinions, whereas differences at the perceptual or conceptual level are less well understood. It is even possible that, in order to function efficiently, it is best to assume one’s tools of communication are shared by others with whom one interacts. However, there are situations where this assumption fractures to a sufficient degree that it merits further attention, an example being the case when speakers of the same language from diverse disciplines, interest groups, or otherwise closely knit cultural contexts come together to deliberate upon shared issues. The principal assumption of the GICA method is the recognition that, although different people may use the same word for some phenomenon, this does not necessarily mean that the conceptualization underlying this word usage is the same; in fact, apparent similarities at the name level may hide significant differences at the level of concepts. Furthermore, there may be differences at many levels: experiences, values, understandings of causal relationships, opinions, and the meanings of words. Differences in the meanings of words are the most deceptive, because discussing any of the other differences requires a shared vocabulary that is understood in roughly the same way.
9 George W. Furnas et al., “The Vocabulary Problem in Human-System Communication,” Communications of the ACM 30, no. 11 (1987): 964–71.
10 Marcia J. Bates, “Subject Access in Online Catalogs: A Design Model,” Journal of the American Society for Information Science 37, no. 6 (1986): 357–76.
Often a difference in the meanings of the words used remains unrecognized for a long time; it may, for instance, be misconstrued as a difference in opinion. Alternatively, a difference in opinion regarding a decision that the group makes may be masked and remain unrecognized, because the same words are used seemingly in accord but are in fact afforded different meanings by different people. When these differences are not recognized during communication, the result can be discord and unhappiness about the end result. As a consequence, the joint process may be considered to have failed in one or even all of its objectives. Mustajoki presents a model of miscommunication whose underlying insights and motivations resemble, to a significant degree, those of this article as well as of the model presented in Honkela et al.11 He concludes that in the scientific literature on failures in communication, different terms are occasionally used to describe similar matters, and researchers also tend to deploy identical terms with different meanings. In this article, we do not aim to review the research on miscommunication but refer the reader to Mustajoki for a good overview. In the following, however, we present as our contribution a division into two main types of problems.12
I.IV False agreements and false disagreements

Undiscovered differences in meaning can cause two types of problems. The first type is false agreement, where on the surface it looks as if we agree, but a conceptual difference hides an underlying difference in opinions or worldviews. For example, we might all agree that “our university should be innovative” or that “our university should aim at excellence in research and education” but totally disagree about what “innovative” or “excellence” means. As another example, we might agree that “we need a taxing system that is fair and encourages people to work” but be in considerable disagreement regarding practical interpretations of “fair” and “encourages.” The second type of problem caused by undiscovered differences in meaning is false disagreement. If we are raised (linguistically speaking) in different subcultures, we might come to share ideas and views, but might have learned to use different expressions to describe them. This may lead to a considerable amount of unnecessary argument and tension, with surface disagreement in effect masking an underlying agreement. Since so much human endeavor conducted with others turns on uncovering conceptual differences in one way or another, it would be beneficial to have tools that can aid us in the discovery process — tools that might make visible the deeper conceptual level behind our surface level of words and expressions.

11 Arto Mustajoki, “Modelling of (Mis)communication,” Prikladna lingvistika ta ligvistitshni tehnologii: Megaling-2007 35 (2008); Timo Honkela, Ville Könönen, Tiina Lindh-Knuutila, and Mari-Sanna Paukkeri, “Simulating Processes of Concept Formation and Communication,” Journal of Economic Methodology 15, no. 3 (2008): 245–59.
12 Mustajoki, “Modelling of (Mis)communication.”
I.V Making differences in understanding visible

Our aim with the Grounded Intersubjective Concept Analysis (GICA) method is to devise a way in which differences in conceptualization, such as those described above, can be made visible and integrated into complex communication and decision-making processes. An attempt to describe the meaning of one word simply by relying on other words often fails. This failure occurs because the descriptive words themselves are understood differently across varying domains, each potentially having a large number of words that have specific or specialized meanings only in the context of that domain. The more specific aims of this work are to define the problem domain, to explain processes of concept formation from a cognitive perspective based on our modeling standpoint, and to propose a methodology that can be used for making differences in conceptual models visible in a way that forms a basis for mutual understanding when heterogeneous groups interact. Contexts of application include, for example, public planning processes, environmental problem solving, interdisciplinary research projects, product development processes, and mergers of organizations. Our hope is that the GICA method, by allowing the elicitation, representation, and integration of concepts grounded in the experience of stakeholders, takes participatory methods one step further. By allowing the integration of the conceptual and experiential worlds of lay stakeholders usually deemed marginal from the point of view of existing modes of producing expertise (such as science and engineering), it can be seen as providing a tool for reducing marginalization. In the following, we will discuss the theoretical background of the GICA method, and then present the method in detail. If the reader is mainly interested in the practical value and application of the method, the following section may be skipped.

II Theoretical background

The following section on the cognitive theory of concepts focuses on how subjectivity in understanding can be explicitly modeled in a way that provides a basis for the quantitative analysis presented in part 3.

II.I Cognitive theory of concepts and understanding

The volume of philosophical and scientific literature on concepts is huge, and due to limitations of space it is not possible to review any significant proportion of it here. According to a common view, concepts are seen as independent of any historical, contextual, or subjective factors. Works in the tradition of analytical philosophy, e.g., by Alfred Tarski, Rudolf Carnap, and Donald Davidson, typically represent this point of view. One attempt to create a connection between linguistic expressions and formal concepts can be found in the work of Richard Montague.
His central thesis was that there is no essential difference between the semantics of natural languages (like English) and formal languages (like predicate logic), i.e., that there is a rigorous way to translate English sentences into an artificial logical language.13 Montague grammar is an attempt to directly link the syntactic and semantic levels of language. In order to do so, Montague defined the syntax of declarative sentences as tree structures and created an interpretation of those structures using an intensional logic. The end result was a focus on those aspects of language that fit nicely within the theoretical framework Montague created. Examples of the language considered include sentences like “Bill walks,” “Every man walks,” “The man walks,” and “John finds a unicorn.”14 It may be fair to say that most of the linguistic and closely related cognitive and social phenomena, such as ambiguity, contextuality, variation, change, culturally dependent aspects, etc., are set aside. In order to make his formal apparatus work, Montague assumes that the original sentences can be considered unambiguous, even though ambiguity is a central phenomenon in language at many levels of abstraction. While the idea of being rigorous may be considered a proper stance, it often leads to neglect of the original complexity of the phenomenon being considered.15 In the current work, we wish to take a step back from earlier formal work like Montague’s and provide means to tackle the complexities of natural language understanding. Many philosophers outside the analytical tradition have for some time criticized the approach of logical formalization within the philosophy of language. For instance, representatives of phenomenology (e.g., Edmund Husserl and Martin Heidegger), hermeneutics (e.g., Heidegger and Hans-Georg Gadamer), and critical theory (e.g., Max Horkheimer and Jürgen Habermas) have each presented alternative views. Rorty attacks the correspondence theory of truth, according to which truth is established by directly comparing what a sentence asserts with the “facts.”16 Rorty even denies that there are any ultimate foundations for knowledge at all, and calls for a socially based theory of understanding. He also strongly criticizes the notion of truth itself: truth is not a common property of true statements, and “good” is what proves itself to be so in practice. Rorty combines pragmatism (e.g., John Dewey and Charles S. Peirce) with the later philosophy of language of Wittgenstein (1953), which declares that meaning is a social-linguistic product.17

13 Richard Montague, “The Proper Treatment of Quantification in Ordinary English,” in Approaches to Natural Language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics, ed. Jaakko Hintikka, Patrick Suppes, and J. M. E. Moravcsik (Dordrecht: Springer Netherlands, 1973), 221–42.
14 Ibid.
15 Heinz von Foerster, Understanding Understanding: Essays on Cybernetics and Cognition (Vienna: Springer, 2003).
16 Richard Rorty, Philosophy and the Mirror of Nature (Princeton, NJ: Princeton University Press, 1979).
17 Ludwig Wittgenstein, Philosophical Investigations (London: John Wiley & Sons, 2010).
It is far from obvious that communication between speakers of one and the same language is based on commonly shared meanings, as often suggested, explicitly or implicitly, by the proponents of formal semantics. This leads to a rejection of the idea of an idealized language user, and to a rejection of the possibility of considering central epistemological questions and natural language semantics without consideration of subjectivity and variability. In other words, personal language is idiosyncratic and based on the subjective experiences of the individual.18 At the sociocultural level, humans create and share conceptual artifacts such as symbols, words, and texts. These are used as mediators between different minds. When communicating and sharing knowledge, individuals have to transform their internal representations into explicit representations to be communicated — and vice versa. These internalization and externalization processes take place as a continuous activity. In externalization, the internal view is externalized as explicit and shared representations. In the internalization process, individuals take an expression and make it their own, perhaps using it in a unique way.19 The internalization of linguistic signs typically takes place as an iterative process. An individual is exposed to the use of an expression in multiple contexts, this distribution of contexts providing a view on the meaning of the expression as it is commonly understood by others. However, due to differences in individual lives and learning paths, different subjects gain different conceptual constructions, resulting in subcultural as well as individual idiosyncrasies. The idea of different points of view is illustrated in figure 2.

Fig. 2 Illustration of the effect of the point of view. The cone can be seen as a triangle or as a circle depending on the dimensions of reality that are observed or the factors that are valued.
One classical, but perhaps in this context less useful, approach to defining concepts is based on the idea that a concept can be characterized by a set of defining attributes.

18 Timo Honkela, “Philosophical Aspects of Neural, Probabilistic and Fuzzy Modeling of Language Use and Translation,” Proceedings of IJCNN’07, International Joint Conference on Neural Networks (2007): 2881–88.
19 See J. Santrock, “Cognitive Development Approaches,” in A Topical Approach to Life-Span Development (New York: McGraw-Hill, 2004), 200–25.
As an alternative, the prototype theory of concepts proposes that concepts have a prototype structure and that there is no delimiting set of necessary and sufficient conditions for determining category membership — instead, membership can be graded. In prototype theory, instances of a concept can be ranked in terms of their typicality. Membership in a category is determined by the similarity of an object’s attributes to the category prototype. The development of prototype theory is based on the works of, among others, Rosch and Lakoff.20 Gärdenfors distinguishes between three cognitive levels of representation.21 The most abstract level is the symbolic level, at which information is represented in terms of symbols that can be manipulated without taking their meaning into account. The least abstract level is that of subconceptual representation; concepts are explicitly modeled at the mediating level of conceptual representation. A conceptual space is built upon geometrical structures based on a number of quality dimensions.22 Concepts are not independent of each other but can be structured into domains, e.g., concepts for colors in one domain, spatial concepts in another. Figure 3 shows an example of a conceptual space consisting of two quality dimensions, and two different ways (A and B) of dividing the space into concepts. In general, the theory of conceptual spaces proposes a medium for getting from the continuous space of sensory information to a higher conceptual level, where concepts can be associated with discrete symbols. This mapping is a dynamic process. Gärdenfors has proposed that, for example, multidimensional scaling (MDS) and self-organizing maps (SOM) can be used in modeling this mapping process.23 The simplest connection between the SOM and conceptual spaces is to consider each prototype or model vector in a SOM as an emergent conceptual category. Related research on conceptual modeling using the SOM includes Ritter and Kohonen,24 Honkela et al.,25 Lagus et al.,26 and Raitio et al.27

20 Eleanor H. Rosch, “Natural Categories,” Cognitive Psychology 4, no. 3 (1973): 328–50; George Lakoff, Women, Fire, and Dangerous Things: What Categories Reveal about the Mind (Chicago: University of Chicago Press, 1987).
21 Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought (Cambridge, MA: MIT Press, 2000).
22 Ibid.
23 Ibid.; Teuvo Kohonen, Self-Organizing Maps (Vienna: Springer Science & Business Media, 2001).
24 Helge Ritter and Teuvo Kohonen, “Self-Organizing Semantic Maps,” Biological Cybernetics 61, no. 4 (1989): 241–54.
25 Timo Honkela, Ville Pulkki, and Teuvo Kohonen, “Contextual Relations of Words in Grimm Tales Analyzed by Self-Organizing Maps,” Proceedings of ICANN-95, International Conference on Artificial Neural Networks 2 (1995): 3–7.
26 Krista Lagus, Anu Airola, and Mathias Creutz, “Data Analysis of Conceptual Similarities of Finnish Verbs,” Proceedings of the 24th Annual Meeting of the Cognitive Science Society (2002): 566–71.
27 J. Raitio, R. Vigário, J. Särelä, and T. Honkela, “Assessing Similarity of Emergent Representations Based on Unsupervised Learning,” Proceedings of IJCNN 2004, International Joint Conference on Neural Networks 1 (2004): 597–602.
II.II Subjective conceptual spaces

Two persons may have very different conceptual densities with respect to a particular topic. For instance, in figure 3, person A has a rather evenly distributed conceptual division of the space, whereas person B has a more fine-grained conceptual division on the left side of the conceptual space but lower precision on the right side.
When language games were included in the simulation model, a simple language emerged in a population of communicating autonomous agents.28 In the population, each agent first learned a conceptual model of the world in solitary interaction with perceptual data from that world. As a result, each agent obtained a somewhat different conceptual representation (figure 3 gives a schematic illustration of the kinds of differences that can arise). Later, common names for the previously learned concepts were learned in communication with another agent. Honkela et al. noted that in a two-agent communication case each agent estimates the conceptual space of the other agent.29 As a cognitive process, this estimation can have varying degrees of allocentric and egocentric characteristics based on current constraints on cognitive processing, context, and the goals of the agent (e.g., lying).30 This is due to the tendency of people to cooperate in a conversation, for example by trying to find common ground,31 something which the estimation process of the agents closely resembles.
28 Tiina Lindh-Knuutila, Timo Honkela, and Krista Lagus, “Simulating Meaning Negotiation Using Observational Language Games,” in Symbol Grounding and Beyond (Vienna: Springer, 2006), 168–79.
29 Honkela et al., “Simulating Processes of Concept Formation and Communication.”
30 Henri Sintonen, Juha Raitio, and Timo Honkela, “Quantifying the Effect of Meaning Variation in Survey Analysis,” in Artificial Neural Networks and Machine Learning — ICANN 2014 (Vienna: Springer International Publishing, 2014), 757–64.
31 H. Paul Grice, “Logic and Conversation,” in Syntax and Semantics, vol. 3, Speech Acts, ed. P. Cole and J. Morgan (New York: Academic Press, 1975), 183–98; Herbert H. Clark and Susan E. Brennan, “Grounding in Communication,” Perspectives on Socially Shared Cognition 13 (1991): 127–49.
Fig. 3 Illustration of the differing conceptual densities of two agents having a two-dimensional quality domain. Points mark the locations of the prototypes of concepts. Lines divide the concepts according to a Voronoi tessellation. Both agents can discriminate an equal number of concepts, but the abilities of agent B are more focused on the left half of quality dimension 1, whereas agent A represents the whole space in a more evenly distributed manner.
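To make the prototype picture concrete: assigning each point of the quality space to its nearest prototype yields exactly the Voronoi tessellation described in the caption above. The following minimal Python sketch illustrates this with invented prototype coordinates for two agents; it is an illustration of the idea, not code from the studies cited here.

```python
import numpy as np

# Prototype locations of two agents in a two-dimensional quality space;
# the coordinates are invented for this illustration.
prototypes_a = np.array([[0.2, 0.2], [0.2, 0.8], [0.5, 0.5], [0.8, 0.2], [0.8, 0.8]])
prototypes_b = np.array([[0.1, 0.3], [0.2, 0.6], [0.3, 0.2], [0.35, 0.8], [0.8, 0.5]])

def concept_of(point, prototypes):
    """Index of the nearest prototype, i.e., the Voronoi cell the point falls into."""
    return int(np.argmin(np.linalg.norm(prototypes - point, axis=1)))

# The same percept can fall into different concepts for different agents.
percept = np.array([0.3, 0.55])
print(concept_of(percept, prototypes_a), concept_of(percept, prototypes_b))
```

The differing densities of figure 3 correspond to nothing more than differing placements of these prototypes: where an agent has more prototypes, its discriminations are finer.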
However, maintaining common ground is cognitively demanding,32 so agents may need to fall back on a more egocentric strategy and use their own conceptual space during interaction. This could be modeled with a weighting function that determines the level of allocentrism available.33

II.III Intersubjectivity in conceptual spaces

If some agents speak the “same language,” many of the symbols and the associated concepts in their vocabularies are also the same, a subjective conceptual space emerging through a process of individual self-organization. The input for the agents consists both of environmental perceptions and of expressions communicated by other agents. The subjectivity of the conceptual space of an individual is a matter of degree: the conceptual spaces of two individual agents may differ to a greater or lesser extent. The convergence of conceptual spaces stems from two sources: similarities between individual experiences (as direct perceptions of the environment) and communication situations (mutual communication or exposure to the same linguistic / cultural influences such as upbringing and education, also including, but not limited to, artifacts such as newspapers, books, etc.). In a similar manner, the divergence among the conceptual spaces of agents is caused by differences in personal experiences / perceptions and differences in exposure to linguistic / cultural influences and artifacts. The basic approach regarding how autonomous agents could learn to communicate and form an internal model of the environment applying the SOM algorithm was introduced, in a simple form, in Honkela 1993.34 The model has later been substantially refined.35

II.IV Conceptual differences in collaborative problem solving

Collaborative problem solving among experts can in principle be achieved in two ways: (1) by bringing forth a combination of the opinions of the experts concerned through voting, or (2) by a more involved sharing or integration of expertise and experience at the conceptual level. A particular form of sharing expertise is sharing prototypes.

32 Boaz Keysar, Dale J. Barr, Jennifer A. Balin, and Jason S. Brauner, “Taking Perspective in Conversation: The Role of Mutual Knowledge in Comprehension,” Psychological Science 11, no. 1 (2000): 32–38.
33 Sintonen et al., “Quantifying the Effect of Meaning Variation in Survey Analysis.”
34 Timo Honkela, “Neural Nets That Discuss: A General Model of Communication Based on Self-Organizing Maps,” ICANN’93 (London: Springer, 1993), 408–11.
35 See Timo Honkela and Juha Winter, Simulating Language Learning in Community of Agents Using Self-Organizing Maps, Computer and Information Science Report A71, Helsinki University of Technology, Helsinki (2003); Lindh-Knuutila et al., “Simulating Meaning Negotiation Using Observational Language Games”; Honkela et al., “Simulating Processes of Concept Formation and Communication”; Sintonen et al., “Quantifying the Effect of Meaning Variation in Survey Analysis.”
This form refers to a process in which one expert communicates prototypical cases to another expert. So-called boundary objects,36 i.e., objects or facts that are well known across various backgrounds and scientific disciplines, are often used as suitable prototypical cases. In the methodological context of the SOM and other prototype-based conceptual models, this means transmitting a collection of model vectors. Let us consider three elements: the features — or, as Gärdenfors defines them, the quality dimensions37 — that span the conceptual space; the data set (“experience”) used by an individual expert in learning the structure of its conceptual space; and finally the naming of concepts. These elements give rise to a typology of conceptual differences among experts. In the following, we present the different categories as well as the basic approaches for dealing with problems related to each category.

a. In the simplest case, the quality dimension space and the data set are (nearly) equivalent for both agents, with only concept naming differing between agents. An agent has an individual mapping function that maps each symbol to the conceptual space of the agent. In a classical simulation of this kind, a number of robots with cameras learned to name visual objects in a similar manner. Active research in language games and language evolution has since emerged.38 Chen has presented a specific solution to the vocabulary problem among humans based on clustering.39 Irwin’s view that contextual knowledge may ultimately be constructed in scientific terms might be rooted in the view that differences in perspective are mainly a matter of concept naming. This view might also figure in the background of much traditional or “standard” thinking in the domains of medicine and innovation.

b. As a step toward increased differences among the agents, one may consider the situation in which the feature space is equivalent but the data set per expert varies. One expert has denser data for one part of the concept space, the other for another part (see figure 3). An obvious approach for efficient decision making is to use the expertise of those agents whose conceptual mapping is densest with regard to the problem at hand. However, in many cases, problem solving requires the combination of many elements, e.g., as solutions of subproblems.

36 Susan Leigh Star and James R. Griesemer, “Institutional Ecology, ‘Translations,’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39,” Social Studies of Science 19, no. 3 (1989): 387–420.
37 Gärdenfors, Conceptual Spaces.
38 Paul Vogt, “The Emergence of Compositional Structures in Perceptually Grounded Language Games,” Artificial Intelligence 167, no. 1 (2005): 206–42; Lindh-Knuutila et al., “Simulating Meaning Negotiation Using Observational Language Games”; Honkela et al., “Simulating Processes of Concept Formation and Communication.”
39 Hsinchun Chen, “Collaborative Systems: Solving the Vocabulary Problem,” IEEE COMPUTER 27, no. 5 (1994): 58–66.
In those cases, each element can be dealt with by the expert with the densest conceptual mapping regarding the particular subproblem. Collins and Evans’s advocacy of the extension of “technical” expertise to also include “uncertified,” experience-based expertise might be rooted in the view that there exists a multitude of dense data sets, some of which carry official accreditation while others do not.40

c. Finally, consider the most challenging case, where neither the quality dimension space nor the data set is the same for both agents. Figure 2 depicts a simple case in which the quality dimension spaces are different, therefore offering different viewpoints on the same “data sample” to the agents. In this case, a process of data augmenting can take place: if a subset of data samples known to both can be found (for example, boundary objects known across disciplines, or, in terms of medicine, a particular patient’s case), each agent can bring forth their particular knowledge (i.e., values of quality dimensions known only to them) regarding that case. Furthermore, in addition to collaborating in solving the present problem, both agents potentially have the opportunity to learn from each other: to augment their own representation with the new data offered by the other expert. Obtaining augmented information regarding several data samples will lead to the emergence of new, albeit rudimentary, quality dimensions, and allow easier communication in future encounters. As an example, mutual data augmentation can take place between doctors of different specializations, doctors and patients, or doctors and nurses who simultaneously consider the same patient case. In optimal circumstances, this may eventually lead to better expertise for both. However, this requires that the doctor also trusts the patient, and is willing to learn and store experiential data communicated by the patient. Essentially the same preconditions and constraints that shape the process of data augmenting apply in the contexts of environmental policy and innovation.

III The GICA method

We present here a method called Grounded Intersubjective Concept Analysis (GICA) for improving the visibility of the different underlying conceptual systems among stakeholder groups. The method comprises three main stages:
A Preparation.
B Focus session(s).
C Knowledge to action activities.

40 Harry M. Collins and Robert Evans, “The Third Wave of Science Studies: Studies of Expertise and Experience,” Social Studies of Science 32, no. 2 (2002): 235–96.
These steps can be repeated iteratively, with focus sessions supported by computational tools that enable the analysis and visualization of similarities and differences in the underlying conceptual systems. In this presentation, we use the SOM algorithm41 — however, other methods for dimensionality reduction and visualization could be used, including multidimensional scaling (MDS),42 Curvilinear Component Analysis (CCA),43 Isomap,44 or the Neighbor Retrieval Visualizer (NeRV).45 Hierarchical clustering and decision tree learning methods are not recommended for the current application, as they may create artifact-laden categorical distinctions that do not actually exist. In fact, one of the underlying motivations for the proposed method is to help people realize that real-world phenomena have a lot of underlying complexity that is not visible if conceptual categorizations are applied too straightforwardly.

Fig. 4 Aalto University students participating in the EIT ICT Labs activity “Well-Being Innovation Camp.”

41 See Teuvo Kohonen, “Self-Organized Formation of Topologically Correct Feature Maps,” Biological Cybernetics 43, no. 1 (1982): 59–69; and Kohonen, Self-Organizing Maps.
42 J. B. Kruskal and M. Wish, Multidimensional Scaling (Beverly Hills, CA: Sage, 1978); Jarkko Venna and Samuel Kaski, “Local Multidimensional Scaling,” Neural Networks 19, no. 6 (2006): 889–99.
43 Pierre Demartines and Jeanny Hérault, “Curvilinear Component Analysis: A Self-Organizing Neural Network for Nonlinear Mapping of Data Sets,” IEEE Transactions on Neural Networks 8, no. 1 (1997): 148–54.
44 Joshua B. Tenenbaum, Vin De Silva, and John C. Langford, “A Global Geometric Framework for Nonlinear Dimensionality Reduction,” Science 290, no. 5500 (2000): 2319–23.
45 Jarkko Venna, Jaakko Peltonen, Kristian Nybo, Helena Aidos, and Samuel Kaski, “Information Retrieval Perspective to Nonlinear Dimensionality Reduction for Data Visualization,” Journal of Machine Learning Research 11 (2010): 451–90.
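As a hedged illustration of how such alternatives could be swapped in, the following sketch projects a data matrix to two dimensions with the scikit-learn implementations of MDS and Isomap. The matrix here is a random placeholder (rows as observations, columns as features); nothing in it comes from the actual case study.

```python
import numpy as np
from sklearn.manifold import MDS, Isomap

# Random placeholder standing in for a data matrix to be visualized;
# no real GICA data is used in this sketch.
rng = np.random.default_rng(0)
data = rng.integers(1, 6, size=(52, 24)).astype(float)

mds_coords = MDS(n_components=2, random_state=0).fit_transform(data)
iso_coords = Isomap(n_neighbors=5, n_components=2).fit_transform(data)
print(mds_coords.shape, iso_coords.shape)  # (52, 2) (52, 2)
```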
Moving forward, the GICA method is now illustrated with a case study related to concepts of well-being. The topic was featured in the EIT ICT Labs activity “Well-Being Innovation Camp,” which took place from October 26 to 29, 2010, in Vierumäki, Finland. The seminar participants were mainly from the Aalto University School of Science and Technology (the Macadamia Master’s Program in Machine Learning and Data Mining) and from the Aalto University School of Art and Design, Department of Design (see figure 4).
III.I Introduction to subjectivity and context analysis

A word does not carry in its form any information about its meaning. The surface form of the word “cat” is close to the word “mat,” but its meaning can be deemed closer to “dog” than to “mat.” It is possible, though, to study the relationships of words based on the contexts in which they appear. Let us consider the illustrative example shown in figure 5. The leftmost column lists the words under consideration. There are eight further columns, each indicating a document. The cells in the table contain the frequencies with which a word appears in a document. In this simple example, it is clear already through visual inspection that the words “house,” “building,” “bridge,” and “tower” appear frequently in documents 1 to 4, whereas the words “cat,” “dog,” “horse,” and “cow” are found often in documents 5 to 8.
Fig. 5 An illustrative example of a data set where the number of occurrences of the words in eight different documents is given.

Word       Doc 1  Doc 2  Doc 3  Doc 4  Doc 5  Doc 6  Doc 7  Doc 8
cat          0      1      0      0      7      4      9      7
dog          1      0      0      2      6      3      7      5
horse        0      0      1      0      2      8      5      3
cow          1      1      0      0      4      6      8      2
house        8      3      2      9      0      0      1      1
building     7      1      1      7      0      1      0      0
bridge       3      7      5      1      0      0      0      0
tower        2      9      8      0      0      0      1      0
The SOM serves several analytical functions.46 Firstly, it provides a mapping from a high-dimensional space into a low-dimensional space, thus creating a suitable means for the visualization of complex data. Secondly, the SOM reveals topological structures of the data: two points in close proximity to each other on the map are also near each other in the original space (long map distances, however, do not always correspond to long distances in the original space). The SOM has been used extensively for analyzing numerical data in a number of areas, including various branches of industry, medicine, and economics.47 The earliest case of using the SOM to analyze the contexts of words was presented by Ritter and Kohonen.48 The simple illustrative data set shown in figure 5 can be analyzed using the SOM, resulting in the map shown in figure 6. Relative distances in the original eight-dimensional space are illustrated by the shading: the darker an area on the map, the greater the distance. It is thus clearly visible, for instance, that the words “tower,” “bridge,” “house,” and “building” are separated from the words “horse,” “cow,” “cat,” and “dog.” When richer contextual data is available, more fine-grained distinctions emerge.49

46 Kohonen, Self-Organizing Maps.
47 Ibid.
48 Ritter and Kohonen, “Self-Organizing Semantic Maps.”
49 See for example Honkela et al., “Contextual Relations of Words in Grimm Tales Analyzed by Self-Organizing Maps”; Lagus et al., “Data Analysis of Conceptual Similarities of Finnish Verbs.”
Fig. 6 A map of words as a result of a SOM-based analysis of a term-document matrix.
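To make the procedure behind figures 5 and 6 tangible, here is a minimal, self-contained SOM sketch in Python that trains a small map on the word-document counts of figure 5 and prints the map cell of each word. It is a toy implementation written for this illustration, not the authors’ code and not the full Kohonen algorithm with all its refinements; exact cell assignments will vary with the random seed, but words with similar document profiles should land in nearby cells.

```python
import numpy as np

words = ["cat", "dog", "horse", "cow", "house", "building", "bridge", "tower"]
# Word-document counts from figure 5 (rows: words; columns: documents 1-8).
X = np.array([
    [0, 1, 0, 0, 7, 4, 9, 7],
    [1, 0, 0, 2, 6, 3, 7, 5],
    [0, 0, 1, 0, 2, 8, 5, 3],
    [1, 1, 0, 0, 4, 6, 8, 2],
    [8, 3, 2, 9, 0, 0, 1, 1],
    [7, 1, 1, 7, 0, 1, 0, 0],
    [3, 7, 5, 1, 0, 0, 0, 0],
    [2, 9, 8, 0, 0, 0, 1, 0],
], dtype=float)

rows, cols = 3, 4                                   # size of the map grid
rng = np.random.default_rng(0)
codebook = rng.random((rows * cols, X.shape[1]))    # one model vector per map cell
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

n_steps = 500
for t in range(n_steps):
    lr = 0.5 * (1.0 - t / n_steps)                  # decaying learning rate
    radius = 0.5 + 2.0 * (1.0 - t / n_steps)        # decaying neighborhood radius
    x = X[rng.integers(len(X))]                     # pick a random input vector
    bmu = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))  # best-matching unit
    d = np.linalg.norm(grid - grid[bmu], axis=1)    # grid distance to the BMU
    h = np.exp(-(d ** 2) / (2.0 * radius ** 2))     # Gaussian neighborhood function
    codebook += lr * h[:, None] * (x - codebook)    # move units toward the input

for word, vec in zip(words, X):                     # locate each word on the map
    bmu = int(np.argmin(np.linalg.norm(codebook - vec, axis=1)))
    print(word, "-> cell", divmod(bmu, cols))
```

The shading of figure 6 would correspond to the distances between neighboring codebook vectors, which this toy sketch does not render.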
In the GICA method, statistical analysis of items such as words in their contexts is taken a step further. The introductory section of this text aimed to carefully demonstrate that subjectivity is an inherent aspect of interpretation. In order to capture the aspect of subjectivity, we add a third dimension to the analysis. Namely, we extend the equation items × contexts into items × contexts × subjects, i.e., we consider the contribution of each subject in the context analysis. This idea is illustrated in figure 7.
For the practical analysis of the data, it is often useful to flatten the tensor along one axis to obtain an analysis of the focus items, subjects, or contexts. Such flattening is shown in figures 8 and 9. Recently, Raitio et al. have proposed a method for analyzing the data tensor directly, without flattening it.50
50 J. Raitio, T. Raiko, and T. Honkela, “Hybrid Bilinear and Trilinear Models for Exploratory Analysis of Three-Way Poisson Counts,” Proceedings of ICANN 2012, 22nd International Conference on Artificial Neural Networks 2 (2012): 475–82.
Fig. 7 An illustration of an item-context matrix expanded into a “subjectivity tensor.” In other words, we perform an extension of an a×c-dimensional matrix into an a×c×s-dimensional data tensor in which the data provided by different subjects on focus items and contexts are included. Here a refers to the number of items, c to the number of contexts, and s to the number of subjects.
Fig. 8 The a×c×s-dimensional subjectivity tensor flattened into a matrix in which each row corresponds to a unique combination of a subject and an item, and each column corresponds to a particular context. The number of rows in this matrix is a×s and the number of columns is c. A specific analysis of such a matrix on well-being concepts is shown in figure 14. If this matrix is transposed, i.e., columns are transformed into rows and vice versa, an analysis of the contexts can be obtained. This is demonstrated in figure 16.
Fig. 9 The a×c×s-dimensional subjectivity tensor flattened into a matrix in which each column corresponds to a subject and each row to a unique combination of an item and a context. The number of rows in this matrix is a×c and the number of columns is s. A transpose of this matrix gives rise to a map of persons (see figure 17).
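In NumPy terms, the two flattenings of figures 8 and 9 amount to a transpose and a reshape. The sketch below uses the dimensions of the well-being case described later in this section (a = 4 focus items, c = 24 context items, s = 13 subjects) and a random placeholder tensor in place of the real questionnaire answers.

```python
import numpy as np

a, c, s = 4, 24, 13        # focus items, context items, subjects (well-being case)
rng = np.random.default_rng(0)
# Placeholder subjectivity tensor; the real entries are questionnaire
# answers on a 1-5 scale, indexed [item, context, subject].
tensor = rng.integers(1, 6, size=(a, c, s))

# Figure 8: rows = (subject, item) combinations, columns = contexts -> (a*s, c).
by_subject_item = tensor.transpose(2, 0, 1).reshape(a * s, c)

# Figure 9: rows = (item, context) combinations, columns = subjects -> (a*c, s).
by_item_context = tensor.reshape(a * c, s)

# Transposing these matrices yields the analysis of contexts (figure 16)
# and the map of persons (figure 17), respectively.
print(by_subject_item.shape, by_item_context.shape)  # (52, 24) (96, 13)
```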
III.II Preparation and specifying the topic

The purpose of the preparatory step is to collect the necessary information for a workshop or series of workshops called focus sessions. This preparation is typically organized by a person or group to whom the topic is important but who preferably does not have a strong bias related to it, and who is thus able to respect the importance of multiple points of view. In the preparation, both the topic at hand and the relevant stakeholder groups need to be specified. The representatives of stakeholder groups help in collecting the information needed in the focus session stage (B). The detailed steps of the preparation are described below. The topic needs to be described in some detail to set a context for the process. The topic may be anything, ranging from issues such as nuclear power to others like preventive healthcare. In our illustrative case the topic is well-being. Related, but not full, GICA analyses of subjective conceptions have been conducted in the areas of philosophical education and religious belief systems.51

III.II.I Determining relevant stakeholder groups

When the topic has been fixed, it is important to determine the relevant stakeholder groups and invite representatives of those groups into the process. For the success of the process, it is beneficial to invite people with very different backgrounds concerning their education and experience related to the topic at hand. Inside a company, this might mean inviting representatives from marketing and sales as well as from product development departments. In our case, we had two student groups, one consisting of information and computer science (specifically, machine learning and data mining) students, with design students comprising the other. In a full-scale application of the GICA method, well-being concepts could be considered by stakeholders representing citizens / consumers, patient groups, health care professionals, administrators, and politicians.

III.II.II Collecting focus items from relevant stakeholders and others

The focus items should represent central conceptual themes related to the topic at hand. These items are usually terms that are used in the domain, with the assumption that they are known by all the participants without further explanation. They may be terms toward which we suspect conceptual differences exist among the participants, or terms upon which a shared understanding is of central importance. In any case, the focus items should be chosen in such a way as to maximize the possibility of revealing significant conceptual differences.

51 Anna-Mari Rusanen, Otto Lappi, Timo Honkela, and Mikael Nederström, “Conceptual Coherence in Philosophy Education — Visualizing Initial Conceptions of Philosophy Students with Self-Organizing Maps,” Proceedings of CogSci 8 (2008): 64–70; Ilkka Pyysiäinen, Marjaana Lindeman, and Timo Honkela, “Counterintuitiveness as the Hallmark of Religiosity,” Religion 33, no. 4 (2003): 341–55.
In our illustrative example, the items are chosen from the domain of well-being. Originally the list consisted of eight items (well-being, fitness, tiredness, good food, stress, relaxation, loneliness, and happiness), but at a later stage of the process the list was narrowed down to four items (relaxation, happiness, fitness, well-being).

III.II.III Collecting context items

The next step in the method is to collect a number of relevant contexts against which the previously collected focus items can be reflected. In principle, the context items can be short textual descriptions, longer stories, or even multimodal items such as physical objects, images, or videos. The underlying idea is that between the focus items and the contexts there exist potential links of varying degrees. It is important to choose the contexts in such a manner that they are as clear and unambiguous as possible: differences in the interpretations of the focus items are best revealed if the “reflection surface” of the context items is shared as far as possible among the participants. For this reason, the context items can include richer descriptions and even multimodal grounding. The number of focus items and contexts determines the overall number of inputs to be given. Naturally, if the number of focus items and / or contexts is very high, the task becomes overwhelming for the participants. The number of focus items should therefore be kept reasonable, for instance between ten and fifteen, and the number of contexts should be such that the dimensions suffice to illuminate the differences between the conceptual views of the persons. In other words, there is an important link to the theoretical aspects introduced in part 2: the focus items are positioned relative to the space spanned by the dimensions of the context items.
Fig. 10 Most common items associated by the participants with eight terms related to well-being.

Item                  Frequency      Item                Frequency
friends               33             safety              7
health                23             exercise            7
family                23             delicious           7
sleep                 14             success             6
music                 13             sleeping            6
work                  11             relaxation          6
time                  10             pressure            6
happiness             10             nutrition           6
depression            10             nature              6
stress                 9             home                6
sports                 9             wine                5
healthy                9             satisfaction        5
fresh                  9             physical health     5
food                   9             love                5
darkness               9             hurry               5
sport                  8             healthy food        5
freedom                8             deadline            5
travelling             7             bed                 5
social interaction     7             …                   …
In the well-being workshop, the participants were asked to list concepts related to eight areas of well-being (well-being, fitness, tiredness, good food, stress, relaxation, loneliness, and happiness). The participants listed 744 terms, among which 182 were mentioned by more than one person. Unique items included “homesickness,” “handicrafts,” “grandma’s pancakes,” etc. The terms that appeared more than five times are shown in figure 10. From the set of these thirty-seven terms, twenty-four were finally selected as the context items (see figure 11).
Fig. 11 Context items for the well-being case, selected from the most common terms generated by workshop participants by association.

Time                  Family          Freedom
Travelling            Health          Enjoyment
Sport                 Sleep           Success
Exercise              Music           Nutrition
Work                  Pleasure        Sun
Friends               Satisfaction    Nature
Social interaction    Relaxation      Forest
Sharing               Harmony         Money
III.III Focus session

The topic, focus items, and contexts are presented by the session organizer to the participants. The presentation should be conducted in as “neutral” a manner as possible, to avoid raising issues that refer to value or opinion differences related to the topic. Naturally, such connotations cannot be fully avoided, and therefore some means for creating a generally relaxed and respectful atmosphere should be employed. The presentation of the focus items should be very plain, so that no discussion is conducted around them; in a basic sense, they are simply listed. The contexts, on the other hand, are introduced in greater detail, as they are meant to create common ground. Referring to the theory of concepts (see section 2), the context items serve as the quality dimensions that span the conceptual space. In order to effectively compare the differing conceptions related to the focus items, it is thus important that the grounding dimensions are understood as commonly as possible, although this is of course only possible to a finite degree. Following this step, the participants are aware of the context items that are used in the analysis and should be ready to fill in a questionnaire that is presented to them in the next step.

III.III.I Filling in the Tensor

Participants are asked to fill in a data matrix that typically consists of the focus items as rows and the contexts as columns. Each individual’s task is to determine how strongly a focus item is associated with a context. Here, a graded scale can be considered beneficial. There are several options regarding the method of data collection.
Fig. 12 A fragment of an input form implemented using Google Docs.
It is possible to create a form on paper that is given to the participants to be filled in (such as in figure 8). Filling in the data usually takes place during the session, because it is preceded by the introduction to the contexts. If there are any open questions related to the contexts, these are answered and discussed in a shared manner so that the potential for creating shared ground is maximized. The data can also be collected with the help of technological means. For instance, the participants may have access to a Web page containing the input form, or the same functionality can be provided with mobile phone technology. In our well-being case, we used Google Docs to implement the questionnaire (see figure 12). This kind of Web-based solution makes it easier to continue with the analysis, as the data is already in electronic form. Before further processing, the inputted data must be encoded as a data matrix that captures all of the participants’ answers. There are several possible strategies for achieving this goal, each with its own limitations and considerations that need to be taken into account. For instance, if the data has been gathered in paper form, there must be sufficient human resources available to digitize it. A simple and effective solution is to use a spreadsheet format. Whichever encoding mechanism is employed, the result is a “data sheet” from each participant of the kind depicted in figure 8. Together these sheets form the data tensor, which in our example is 4 × 24 × 13 (concepts, context items, subjects), and each point in the tensor is a number between 1 and 5. The data analysis process is presented in the following.

III.III.II Data analysis and visualization

The data collected in the previous task is analyzed using a suitable data analysis method. A vital factor is the ability to present the rich data in a compact and understandable manner, so that the conceptual differences are clearly visible.
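As a first processing step, the per-participant sheets can be stacked into the 4 × 24 × 13 tensor. The sketch below assumes a hypothetical export in which each participant’s answers are saved as one CSV file with the four focus items as rows and the twenty-four context items as columns; the filename pattern is invented for the illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical export: one CSV per participant, with the four focus items
# as rows and the twenty-four context items as columns, cells on a 1-5 scale.
# The filename pattern "participant_NN.csv" is invented for this sketch.
sheets = [
    pd.read_csv(f"participant_{i:02d}.csv", index_col=0)
    for i in range(1, 14)
]

# Stack the 13 sheets along a new last axis: shape (4, 24, 13).
tensor = np.stack([sheet.to_numpy() for sheet in sheets], axis=2)
assert tensor.shape == (4, 24, 13)
assert 1 <= tensor.min() and tensor.max() <= 5
```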
In the following, we present an example where we look at some details of the data tensor using histograms, and then try to form an overview using the SOM algorithm. As discussed earlier, other similar methods can also be applied. The diagrams in figure 13 represent a small fraction of the data gathered in our well-being case study. Analysis of this data can be very informative for gaining a deeper understanding of individual concepts in terms of their context items. For example, here we can conclude that the strong connection between friends and happiness is generally agreed upon, whereas the connection between money and happiness shows considerable variation, with social interaction and forest falling in between. In the section that follows, we will look at ways to summarize the data tensor from various perspectives in a more holistic sense.

Fig. 13 The distribution of answers for some context items associated with happiness. Among these four contexts, friends seems to be most positively associated with happiness, followed by social interaction. At least in this rather small data set, money has an interesting bimodal distribution with two peaks.
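Plots like those of figure 13 are plain histograms over the thirteen answers for one focus-item / context pair. Below is a minimal matplotlib sketch, assuming the tensor assembled above; the index positions for “happiness” and the two contexts are invented for the illustration.

```python
import matplotlib.pyplot as plt

HAPPINESS = 1                            # row index of "happiness" (assumed)
CONTEXTS = {"friends": 5, "money": 23}   # column indices invented for the sketch

fig, axes = plt.subplots(1, len(CONTEXTS), sharey=True)
for ax, (name, col) in zip(axes, CONTEXTS.items()):
    # Thirteen answers (one per subject) on the 1-5 scale for this pair.
    ax.hist(tensor[HAPPINESS, col, :], bins=range(1, 7), align="left", rwidth=0.8)
    ax.set_title(f"happiness x {name}")
    ax.set_xlabel("answer (1-5)")
axes[0].set_ylabel("number of subjects")
plt.show()
```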
When the subject-focus item-context tensor is available, there are several options for analyzing it. The basic option is to consider all of
the alternatives presented in figures 8 and 9. Alternative approaches include creating a map of 1) the subjects and focus items jointly based on the context items (see figure 14), 2) the context items based on how they were associated with the focus items by each of the subjects (see figure 16), and 3) the subjects based on their responses considering the relationship between the focus and context items (see figure 17).
Fig. 14 Map of the subjects (numbered 1–13) and their views on well-being (Wel), happiness (Hap), fitness (Fit), and relaxation (Rel).
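The first of these alternatives can be sketched as follows: each (subject, focus item) pair is treated as one 24-dimensional data point, namely its grades over all context items, and a SOM is trained on the resulting 52 points. This is a deliberately minimal online SOM, not the implementation behind figure 14; the grid size, the training schedule, and the assumed ordering of the focus items are all illustrative choices.

```python
import numpy as np

# Continuing the sketch: 13 subjects x 4 focus items = 52 points, each a
# 24-dimensional vector of context grades (cf. figure 14).
points = tensor.transpose(2, 0, 1).reshape(-1, n_contexts).astype(float)

rows, cols, steps = 8, 8, 5000
rng = np.random.default_rng(0)
weights = rng.uniform(1.0, 5.0, size=(rows, cols, n_contexts))
grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

for t in range(steps):
    x = points[rng.integers(len(points))]
    frac = t / steps
    lr = 0.5 * (1.0 - frac)                              # decaying learning rate
    radius = 1.0 + (max(rows, cols) / 2) * (1.0 - frac)  # decaying neighborhood
    # best-matching unit: the node whose weight vector is closest to x
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
    h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * radius ** 2))
    weights += lr * h[..., None] * (x - weights)

# Labels such as "9Rel" in figure 14 correspond to each point's
# best-matching unit; nearby labels indicate intersubjectively shared views.
focus_names = ["Wel", "Hap", "Fit", "Rel"]  # assumed to match the row order
for i, x in enumerate(points):
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
    print(f"{i // n_focus + 1}{focus_names[i % n_focus]}: node {bmu}")
```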
In our case, the subjects cannot be identified, nor are they divided into any classes. In an analysis supporting some participatory process, the subjects could be labeled on the map to include stakeholder information. This would facilitate insights into differences between the conceptual views held by different stakeholder groups. In a closely related study, we analyzed the State of the Union addresses. A striking result was that the Democrats and Republicans seem to have a different underlying concept for the word “health.”52
52 Timo Honkela, Juha Raitio, Krista Lagus, Ilari T. Nieminen, Nina Honkela, and Mika Pantzar, “Subjects on Objects in Contexts: Using GICA Method to Quantify Epistemological Subjectivity,” Proceedings of IJCNN 2012, The 2012 International Joint Conference on Neural Networks (2012): 2875–83.
In the present analysis concerning concepts of well-being, one clear finding becomes apparent. Namely, after careful inspection, figure 14 reveals that the views on relaxation are widely scattered on the map, whereas the concepts of happiness and fitness are much more concentrated on the map and therefore shared intersubjectively. Happiness is located on the left and lower parts of the map, fitness on the upper and upper right parts. As a strikingly different case, relaxation is not viewed in a uniform manner by the subjects. For example, for subject 9, relaxation is located in the upper left corner of the map, whereas relaxation for subject 7 is located in the opposite corner. The rest of the subjects are scattered around the map without any obvious discernible pattern.
In addition to considering the value of the context assessment of each subject–focus item pair shown in figure 14, one can also analyze the relationships between the distributions of each context item. The distributions are shown in figure 15.
Fig. 15 Distributions of context items on the map shown in figure 14. In these diagrams, a dark color corresponds to a high value (close to 5) and a light color to a low value (close to 1).
For instance, the distribution of the exercise context on the map coincides very well with the focus items on fitness in figure 14. The distribution of exercise seems to be quite opposite to that of travelling, social interaction, or friends. This seems to indicate that the participants have viewed exercise as separate from the social aspect of life. It is not a surprise that the distributions of pleasure and satisfaction coincide almost fully. The relationships between the context items can be made explicit by creating the map shown in figure 16. As an example of a clear result, one can pay attention to some specific pairs of context items. Each item in the pairs “money–success,” “sharing–social interaction,” “sport–exercise,” and “sleep–relaxation” can be found near one another on the map. They can therefore be considered closely related context items among the participants of this survey.
Fig. 16 Map of context items.
As we are not showing the identity of the persons who participated in the analysis, an informed interpretation of the map shown in figure 17 is not possible here. Remembering that the dark colors on the map denote large distances in the original data space, it can be concluded, for instance, that subjects 6 and 9 are very similar to each other but at the same time considerably different from all the others.
Fig. 17 Map of people.
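For the map of people (figure 17), each subject can be represented by their whole answer sheet; the dark boundaries mentioned above correspond to large distances between these vectors in the original data space. A minimal sketch, again assuming the tensor built earlier:

```python
import numpy as np

# One 96-dimensional vector per subject: the flattened 4 x 24 answer sheet.
subject_vecs = tensor.transpose(2, 0, 1).reshape(n_subjects, -1).astype(float)

# Pairwise Euclidean distances in the original data space. On the SOM
# display, dark colors mark large distances; a pair such as subjects 6
# and 9 being close to each other but far from everyone else shows up
# as one small entry inside otherwise large rows and columns.
diff = subject_vecs[:, None, :] - subject_vecs[None, :, :]
distances = np.sqrt((diff ** 2).sum(-1))  # shape (13, 13), symmetric
print(np.round(distances, 1))
```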
III.IV knowledge to action
The results of the analysis can be utilized in many ways when working with the participants. For example, the map of people can be examined to see whether clear groups of different conceptualizations arise. Differences discovered among the groups can then be analyzed more closely (i.e., which concepts differ, and in which of the context terms, as was done with the relaxation concept). Interactive presentation of such results to the participants and subsequent discussion is likely to clarify the different conceptualizations among the group. Overall, we expect to see heightened mutual respect and increased ease of communication among the participants. Figure 18 summarizes the benefits and areas of use for the method.
Fig. 18 Potential epistemological and social outcomes of the use of the GICA method.
IV Discussion
In addition to the specific approach presented in this text, there are other alternatives for obtaining data for a focus item–context–subject matrix. The subjects may be asked to provide associations to the focus items. This gives rise to a sparse data set that resembles labeling data gathered through crowdsourcing. Furthermore, one can create the matrix by analyzing text corpora written by different authors. In order to conduct the analysis in a meaningful and successful manner, a sophisticated preprocessing phase is needed. Perhaps the most advanced but at the same time most challenging approach would be to apply brain imaging techniques.53
53 See Friedemann Pulvermüller, “Brain Reflections of Words and Their Meaning,” Trends in Cognitive Sciences 5, no. 12 (2001): 517–24.
Conducting a conceptual survey requires considerable resources, and for this reason alternative means for obtaining subjectivity data are useful. Honkela et al. introduce the use of text mining for this task.54
54 Honkela et al., “Subjects on Objects in Contexts.”
The basic idea is to analyze a number of documents stemming from different persons and to compare their use of a set of words or phrases. The comparison is based on analyzing the contexts in which each person has used each word. The more similar the contextual patterns between two persons for a word, the closer their conceptions are considered to be.
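The comparison of contextual patterns can be illustrated very simply: collect, for each person, the words occurring in a window around the target word, and compare the resulting count vectors. This is only a sketch of the idea with made-up sentences; the actual method relies on substantially more preprocessing, as noted above.

```python
from collections import Counter

def context_profile(tokens, word, window=2):
    """Count the words occurring within `window` positions of `word`."""
    profile = Counter()
    for i, tok in enumerate(tokens):
        if tok == word:
            profile.update(tokens[max(0, i - window):i])
            profile.update(tokens[i + 1:i + 1 + window])
    return profile

def cosine(p, q):
    """Cosine similarity of two sparse count vectors."""
    num = sum(p[w] * q[w] for w in set(p) & set(q))
    den = (sum(v * v for v in p.values()) * sum(v * v for v in q.values())) ** 0.5
    return num / den if den else 0.0

person_a = "for me health means public care and shared responsibility".split()
person_b = "for me health means personal fitness and individual choice".split()

sim = cosine(context_profile(person_a, "health"),
             context_profile(person_b, "health"))
print(f"contextual similarity for 'health': {sim:.2f}")
# The more similar the two contextual patterns, the closer the two
# persons' conceptions of the word are taken to be.
```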
One potentially useful practice is to use the GICA method to detect and highlight both potential and actual cases of false disagreement (or equally false agreement; see section 1.4). The scientific method can be used as a notable example. Numerous studies rely on data gathered from surveys to draw conclusions, but there might be hidden differences in the understanding of the wording of the survey. Using the GICA method, the effect of meaning variations on the survey results was demonstrated by Sintonen et al. when participants evaluated food items and their prices in an online survey.55
55 Sintonen et al., “Quantifying the Effect of Meaning Variation in Survey Analysis.”
IV.I GICA as a participatory method
In our view, the new method is highly useful in many contexts, for example: 1) participatory decision-making processes,56 and 2) participatory or user-centered design.57
56 See for example Gene Rowe and Lynn J. Frewer, “Public Participation Methods: A Framework for Evaluation,” Science, Technology & Human Values 25, no. 1 (2000): 3–29; Elizabeth C. McNie, “Reconciling the Supply of Scientific Information with User Demands: An Analysis of the Problem and Review of the Literature,” Environmental Science & Policy 10, no. 1 (2007): 17–38.
57 Douglas Schuler and Aki Namioka, Participatory Design: Principles and Practices (Hillsdale, NJ: Erlbaum, 1993); Peter M. Asaro, “Transforming Society by Transforming Technology: The Science and Politics of Participatory Design,” Accounting, Management and Information Technologies 10, no. 4 (2000): 257–90.
Here, we will consider only the first of these application contexts in more detail. We begin by providing a brief review of participatory methods at a general level. Next, we discuss the so-called Q method, which has been claimed to provide access to differences in individual subjectivities, and relate this method to a general discussion on barriers for successful communication in participatory processes. Finally, we consider the usefulness of our method in the selected application context.
Research across various disciplines and topic areas has identified a number of methods instrumental for involving people in decision making. For instance, Rowe and Frewer list over a hundred different public engagement methods, some examples of which are community-based initiatives, community research, deliberative polling, citizen juries, stakeholder dialogues, scenario workshops, consultative panels, participatory planning processes, participatory development, consensus conferences, stakeholder collaboration, and integrated resource management.58
58 Rowe and Frewer, “Public Participation Methods”; McNie, “Reconciling the Supply of Scientific Information with User Demands.”
Rowe and Frewer have developed a framework for evaluating different public participatory methods, specifying a number of evaluation criteria essential for effective public participation.59
59 Rowe and Frewer, “Public Participation Methods.”
These fall into two types : acceptance criteria, which concern features of a method that make it acceptable to a wider public; and process criteria, which concern features of the process that are liable to ensure that it takes place in an effective manner.60
60 Gene Rowe and Lynn J. Frewer, “A Typology of Public Engagement Mechanisms,” Science, Technology & Human Values 30, no. 2 (2005): 251–90.
Examples of the former are representativeness of participants, independence of true participants, early involvement, influence on final policy, and transparency of the process to the public. Examples of the latter are resource accessibility, task definition, structured decision making, and cost-effectiveness.61
61 Ibid.
In their later typology of public engagement mechanisms, Rowe and Frewer identify key variables that may theoretically influence effectiveness — participant selection method, facilitation of information elicitation, response mode, information input, medium of information transfer, and facilitation of aggregation — and based on these variables categorize public engagement mechanisms into communicative, consultative, and participatory.62
62 Ibid., 251, 265.
The methods listed by Rowe and Frewer that come closest to the method presented in this text fall under the descriptor participatory. Mechanisms listed under this type are action planning, citizen’s jury, consensus conference, deliberative opinion poll, negotiated rule making, planning cell, task force, and town meeting with voting.63
63 Ibid., 277.
Rowe and Frewer furthermore divide these mechanisms into four subtypes. Participation type 1 encompasses action planning workshops, citizens’ juries, and consensus conferences, and is characterized by the controlled selection of participants, facilitated group discussions, unconstrained participant responses, and flexible input from “sponsors,” often in the form of experts. The group output is not structured as such. Participation type 2 includes negotiated rule making and task forces. This subtype is similar to type 1 but with the difference that there is no facilitation of the information elicitation process. Often small groups are used, with ready access to all relevant information, targeted to solve specific problems. Participation type 3 contains deliberative opinion polls and planning cells. This class also possesses similarities to type 1 but with the difference that structured aggregation takes place. In the case of deliberative opinion polling, the selected participants are polled twice, before and after deliberation of the selected issue, and in this process, structured aggregation of all participant polls is attained. Planning cells tend to include various decision aids to ensure structured consideration and assessment and hence aggregation of opinions. Finally, participation type 4, encompassing town meeting with voting, is different from the other subtypes in that selection of participants is uncontrolled, and there is no facilitation of information elicitation, although aggregation is structured.64
64 Ibid., 281–82.
IV.I.I Focusing on subjective differences
Although recognized by most methods of public engagement and every active practitioner, reference to differences in the ways in which participants subjectively experience the world is not explicitly made in the methods listed by Rowe and Frewer. However, this has been the explicit focus of the so-called Q methodology developed at the beginning of the twentieth century by the British physicist and psychologist William Stephenson,65 a methodology increasingly used in social scientific research, including research on participatory methods in environmental decision making.66
65 Steven R. Brown, “The History and Principles of Q Methodology in Psychology and the Social Sciences” (2001), http://facstaff.uww.edu/cottlec/QArchive/Bps.htm.
66 Stentor Danielson, Thomas Webler, and Seth P. Tuler, “Using Q Method for the Formative Evaluation of Public Participation Processes,” Society & Natural Resources 23, no. 1 (2009): 92–96.
The name “Q” comes from the form of factor analysis that is used to analyze the data. Normal factor analysis, or the “R” method, strives to find correlations between variables across a sample of subjects. The Q method, in contrast, looks for correlations between subjects across a sample of variables; it reduces the many individual viewpoints on the subject down to a few “factors,” which represent shared ways of thinking about some issue.67
67 See Wikipedia, s.v. “Q methodology,” http://en.wikipedia.org/wiki/Q_methodology; and Brown, “The History and Principles of Q Methodology in Psychology and the Social Sciences.”
The Q method usually starts with a social scientific researcher collecting a consensus on some issue, i.e., a summary presentation, in the form of statements, of all things people say about that issue. Commonly a structured sampling method is used in order to ensure that the statement sample includes the full breadth of the consensus. Next, data for Q factor analysis is generated by a series of “Q sorts” performed by one or more subjects. A Q sort is a ranking of variables, typically presented as statements printed on small cards, according to some condition of instruction. In a typical Q study, a sample set of participants, a “P set,” would be invited to represent their own views on chosen issues by sorting the statements from agree (+5) to disagree (−5), with scale scores provided to assist the participants in thinking about the task.68
68 Ibid.
The use of ranking is intended to capture the idea that people tend to think about multiple ideas in relation to each other, rather than in isolation.69
69 Wikipedia, s.v. “Q methodology.”
Unlike objective tests and traits, subjectivity is here understood to be self-referential, i.e., it is “I” who believes that something is the case and who registers that belief by placing a statement, e.g., toward the +3 pole of the Q-sort scoring continuum.70
70 Brown, “The History and Principles of Q Methodology in Psychology and the Social Sciences.”
The factor analysis of correlation matrices leads to what Stephenson called “factors of operant subjectivity,” so-called because the emergence of those factors is in no way dependent on effects built into the measuring device. The Q methodology is thus based on the axiom of subjectivity and its centrality in human affairs, and it is the purpose of the Q technique to enable persons to represent their vantage points for purposes of holding them constant for inspection and comparison.71
71 Ibid.
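The arithmetic behind this can be illustrated in a few lines: a Q analysis correlates subjects (rows) across the statement sample, and the leading factors of that correlation matrix are the shared viewpoints. The data below are random placeholders, and a plain eigendecomposition stands in for whatever factoring and rotation a real Q study would use.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_statements = 10, 30
# Each row is one subject's Q sort: rankings on the agree(+5)..disagree(-5) scale.
q_sorts = rng.integers(-5, 6, size=(n_subjects, n_statements)).astype(float)

# "R" analysis would correlate the columns (the variables); Q methodology
# instead correlates the rows, i.e., subjects, across the statements.
corr = np.corrcoef(q_sorts)  # shape (10, 10)

# The few leading eigenvectors of the subject-by-subject correlation
# matrix play the role of the "factors": shared ways of thinking.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]]  # each subject's loading on two factors
print(np.round(loadings, 2))
```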
IV.I.II Barriers for successful communication in participatory processes
The Q method can thus be understood as a problem-solving strategy overcoming difficulties associated with perspective differences among participants in participatory processes. This is also the view of Donner,72 who understands this methodology as filling the gap between qualitative tools used to capture these perspectives, which can be detailed and contextual but also messy, time-consuming, and difficult to administer consistently, and quantitative tools, which can be clear and methodical, but also oversimplified, rigid, and unwieldy.
72 Jonathan C. Donner, “Using Q-Sorts in Participatory Processes: An Introduction to the Methodology,” Social Development Papers 36 (2001): 24.
The Q method, by being a tool that combines the richness of interviews with the standardization of a survey, thus represents an attempt to make such differences “discussible, as an early step in a collaborative effort to help construct action plans that most stakeholders can embrace.”73
73 Ibid.
It allows social scientific researchers to explore a complex problem from a subject’s — the participant’s — point of view, i.e., in accordance with how they see the issue at hand: “Because the results of a Q-sort analysis capture the subjective ‘points of view’ of participants, and because the data is easy to gather, easy to analyze, and easy to present, Q-methodology is good not only as a research tool but also as a participatory exercise.”74
74 Ibid.
From research on the integration of knowledge perspectives we know that overcoming difficulties associated with differences in perspective is no easy task.75
75 Julie Thompson Klein, Interdisciplinarity: History, Theory, and Practice (Detroit: Wayne State University Press, 1990); Henrik Bruun, Richard Langlais, and Nina Janasik, “Knowledge Networking: A Conceptual Framework and Typology,” VEST 18, nos. 3–4 (2005): 73–104; H. Bruun, J. Hukkinen, K. Huutoniemi, and J. Thompson Klein, Promoting Interdisciplinary Research: The Case of the Academy of Finland (Helsinki: Edita, 2005).
There are numerous barriers to such communication across perspectives. For instance, Bruun et al. list the following barriers : (1) structural barriers, which concern the organizational structure of knowledge production; (2) knowledge barriers, which are constituted by the lack of familiarity that people working within one knowledge domain have with people from other knowledge domains; (3) cultural barriers, which are formed by differences in cultural characteristics of different fields of work and inquiry, particularly the language used and the style of argumentation (this category also includes differences in values); (4) epistemological barriers, which are caused by differences between domains in how they see the world and what they find interesting in it; (5) methodological barriers, which arise when different styles of work and inquiry confront each other; (6) psychological barriers, which occur as a result of the intellectual and emotional investments that people have made in their own domain and intellectual community; and finally (7) reception barriers, which emerge when a particular knowledge perspective is communicated to an audience that does not understand, or does not want to see, the value of communication across and integration of knowledge perspectives.76
76 Bruun et al., Promoting Interdisciplinary Research, 60–61.
IV.II Summarizing our contribution and future directions
The Q method attempts to address and mitigate cultural, knowledge, and epistemological barriers. The method thus shares many of the assumptions of the GICA method developed in this text, not least the “axiom of subjectivity.” However, the Q methodology does not explicitly address the potentially different ways in which members of P sets understand the concepts used in the consensus or statement pool. It might be that when reading the statements in the consensus, different participants take the concepts used in them to mean very different things. Therefore, although sophisticated from the point of view of capturing the potentially different ways in which the participants experience their world, the Q method might still mask conceptual differences in underlying worldviews. We therefore propose an additional barrier to successful communication across different perspectives, that of conceptual barriers, which occur as a result of the different ways in which stakeholders conceptualize and make sense of their worlds.77
77 Honkela et al., “Simulating Processes of Concept Formation and Communication.”
Furthermore, based on our brief review of participatory methods and barriers to collaborative action, and of the Q method as the closest approximation so far to capturing subjective differences, we suggest that the GICA method as we have developed it here provides a unique and relatively easily administered way of approaching the subtle yet potentially significant conceptual differences of participants in various kinds of participatory processes. As such, it contributes to a wider project aimed at
decreasing language-related misunderstandings and resulting collapses of meaning and action. We are aware of the fact that conceptualizations are often tightly connected with values.78
78 Nina Janasik, Olli Salmi, and Vanesa Castán Broto, “Levels of Learning in Environmental Expertise: From Generalism to Personally Indexed Specialisation,” Journal of Integrative Environmental Sciences 7, no. 4 (2010): 297–313.
We plan to further extend and enhance the GICA method to address this issue explicitly by allowing both statements related to each topic and the underlying concepts to be analyzed in parallel. This text is the first full presentation of the GICA method in its explicit form. Plans are in place to apply the method to diverse real-world scenarios in order to gain additional understanding of the applicability of the method and to facilitate its further development. We are also considering providing access to computational tools that would assist the community in using the method. In summary, our wish is that the GICA method will be a useful tool for increasing mutual understanding wherever it is needed. In an ideal case, the impacts include better communication and mutual understanding in families, at workplaces and other similar local contexts, as well as in large social settings such as relations between countries. In enterprises, the GICA method can be used to improve constructive participation in a manner that does not merely pay lip service to the concept itself. Last but not least, the GICA method can be used to improve democratic processes. Regarding surveys, we have suggested, based on empirical evidence, that a large amount of “semantic noise” can make statistical analysis of survey answers quite inappropriate,79 and thus the GICA method has great value and potential in the overall process.
79 Sintonen, “Quantifying the Effect of Meaning Variation in Survey Analysis.”
Otherwise, considerations such as sampling at worst only serve as “pseudoscience.” Similar considerations have to be taken into account in democratic processes. Whether a thousand, a million, or a billion people are involved in a given democratic process, simple voting is not at all satisfactory, because again, the “semantic noise” has a much larger effect, for example, than any variations in the voting method. Therefore, theoretical research in this area should focus more strongly on cognitive science and linguistics than on formal grounds. Taking into account subjectivity and multiple points of view is even more important when improved versions of democracy are being built, e.g., based on social media, in which each person can really participate in decision making directly, not only through elected (or unelected) representatives. This scenario places significant demands on the methods used in supporting the communication, and we suggest that the GICA method is one such method.
Acknowledgments
T. H. wishes to express his deep gratitude to the organizers of Metalithikum Colloquy #5 “Coding as Literacy: Self-Organizing Maps” for the invitation to participate in this unique interdisciplinary event, which included delivering a guest lecture as well as conducting insightful discussions. Regarding this book chapter and the underlying work extending over several years, we gratefully acknowledge the financial support of the Academy of Finland (T. H., K. L., T. L.-K., M. P., J. R.), TEKES — the Finnish Funding Agency for Technology and Innovation (J. R.), and the Aalto University School of Science (T. H., K. L., T. L.-K., J. N., and J. R.). We also thank Ilari T. Nieminen for his help in analyzing the workshop data. The substantial editorial help provided by the colloquy organizers is also warmly recognized and appreciated. This text is an updated edition of a report by Honkela, Janasik, Lagus, Lindh-Knuutila, Pantzar, and Raitio.80
80 Timo Honkela et al., “GICA: Grounded Intersubjective Concept Analysis — a Method for Enhancing Mutual Understanding and Participation,” Technical Report TKK-ICS-R41 (Espoo: Aalto University / ICS, 2010).
Further References
Steven R. Brown, Political Subjectivity: Applications of Q Methodology in Political Science (New Haven, CT: Yale University Press, 1980).
Steven R. Brown, “Q Methodology and Qualitative Research,” Qualitative Health Research 6, no. 4 (1996): 561–67.
Tiina Lindh-Knuutila, Juha Raitio, and Timo Honkela, “Combining Self-Organized and Bayesian Models of Concept Formation,” Proceedings of NCPW11 (2009).
Bruce F. McKeown and Dan B. Thomas, Q Methodology (Thousand Oaks, CA: Sage, 1988).
Michael Polanyi, The Tacit Dimension (Garden City, NY: Doubleday and Co., 1966).
Authors
Timo Honkela works as a professor at the Department of Modern Languages, University of Helsinki, and the National Library of Finland, Center for Preservation and Digitisation, in the area of digital humanities. Earlier he was the head of the Computational Cognitive Systems research group at Aalto University. Honkela has long experience in computational modeling of linguistic and socio-cognitive phenomena. Specific examples include leading the development of the GICA method for analyzing subjectivity of understanding, an initiating role in the development of the Websom method for visual information retrieval and text mining, and collaboration with Professor George Legrady in creating Pockets Full of Memories, an interactive museum installation. Lesser-known work includes statistical analysis of Shakespeare’s sonnets, historical interviews, and climate conference talks, and analysis of philosophical and religious conceptions.
Nina Honkela, University of Helsinki, Faculty of Social Sciences, Department of Social Research; Department of Political and Economic Studies, Consumer Society Research Centre, Unioninkatu 40, FI-00014 University of Helsinki, Finland
Krista Lagus, University of Helsinki, Faculty of Social Sciences, Department of Political and Economic Studies, Consumer Society Research Centre, Unioninkatu 40, FI-00014 University of Helsinki, Finland; Aalto University, Department of Computer Science, P.O. Box 15400, FI-00076 Aalto, Espoo, Finland
Juha Raitio, Aalto University, Department of Computer Science, P.O. Box 15400, FI-00076 Aalto, Espoo, Finland
Henri Sintonen, Aalto University, Department of Computer Science, P.O. Box 15400, FI-00076 Aalto, Espoo, Finland; VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT, Finland
Tiina Lindh-Knuutila, Aalto University, Department of Neuroscience and Biomedical Engineering, P.O. Box 12200, FI-00076 Aalto, Espoo, Finland
Mika Pantzar, University of Helsinki, Faculty of Social Sciences, Department of Political and Economic Studies, Consumer Society Research Centre, Unioninkatu 40, FI-00014 University of Helsinki, Finland
IX “Ichnography” — The Nude and Its Model
The Alphabetic Absolute and Storytelling in the Grammatical Case of the Cryptographic Locative
Vera Bühlmann I THEME one, PLOT one: HUMANISM 280 · Blessed Curiosity 280 · The Alphabetic Absolute 285 · The Comic 287 · Mediacy and Real Time 291 · How to Address the Tense-ness of Radioactive Matter in a Universe’s Instantaneity? 296 · The Unknown Masterpiece : The Depiction of Nothing-at-All 303 · The Signature of the Unknown Masterpiece 317 — II THEME one, PLOT two: THE SUMMATION OF INFINITE TERMS IN SERIES 323 · Science, Liberalization, and the Absolute 323 · Two Kinds of Mathesis : General and Universal 326 · Cartesian Limits 328 · Algebra in the Service of Parabolic In-vention 331 — III THEME one, PLOT three: NAMING THAT OF WHICH WE KNOW NOTHING 334 · We Are Leibniz’s Contemporaries 334 · Algebra’s Scope of INFINITARY Discretion 335 · “Nature Is There Only Once”: The Promise of a General Metrics 338 · Symbolisms and Modes of Determination 340 · Psycho-political Struggle around the Cardinality and Ordinality of Sums (Totals) 342 · The Presumptuousness of Universal Measure 345 · Discrete Intellection of Invariances vs. Measuring the Continuity Owed to Constant Values 346
Vera Bühlmann is senior researcher and lecturer at the Chair for Computer Aided Architectural Design (CAAD), Swiss Federal Institute of Technology (ETH) in Zurich. She studied English language and literature and philosophy at the University of Zurich, and holds a PhD from the Institute for Media Sciences, University of Basel. Her work revolves around a “quantum-semiotics” and “natural communication,” and explores how an algebraic understanding of code and programming languages enables us to consider computability within a general literacy of architectonic articulations. She is author of Die Nachricht, ein Medium: Annäherungen an Herkünfte und Topoi städtischer Architektonik (Ambra, 2014). www.monasandnomos.
This article discusses different modes of how the ominous “all” can be plotted as “comprehension” via narrative, calculation, and measurement. The main interest thereby regards how the apparent “Real Time” induced by the logistical infrastructures established by communicational media becomes articulable once we regard “Light Speed” as the tense-ness proper to spectral modes of depicting the real in its material instantaneity. The “real” in such depiction features as essentially arcane, and its articulation as cryptographical. The articulation of the real thereby takes the form of contracts. We suggest taking cryptography at face value, i.e., as a “graphism” and “script,” whose (cipher) texts we can imagine to
be signed according to a logics of public key signatures: while the alphabets that constitute such a script are strictly public, a ciphertext’s “graphism” cannot be read (deciphered and discerned) without “signing” it in the terms of a private key. This perspective opposes the common view that we are living in “post-alphabetical” times, and instead considers the idea of an alphabetic absolute. This bears the possibility for a novel humanism, based not on the “book” (Scriptures) but on the laws of things themselves. The article traces and puts into profile classical positions — e.g., by Descartes, Leibniz, Dedekind, Cantor, Noether, Mach — on the role of “script” in mathematics, the possibility of a general and / or
universal mathesis, and the role of measurement in relation to conceptions of “nature.”
“In 1958 you wrote about alienation produced by non-knowledge of the technical object. Do you always have this in mind as you continue your research?” Anita Kechickian
“Yes, but I amplify it by saying that the technical object must be saved. It must be rescued from its current status, which is miserable and unjust. This status of alienation lies, in part, with notable authors such as Ducrocq, who speaks of “technical slaves.” It is necessary to change the conditions in which it is located, in which it is produced and where it is used
primarily because it is used in a degrading manner. […] It’s a question of saving the technical object, just as it is the question of human salvation in the Scriptures.” Gilbert Simondon
THEME one, PLOT one: HUMANISM
Blessed Curiosity
When Michel Serres was invited to give a talk in celebration of the 150th anniversary of the school he attended, Lycée Saint-Caprais in Agen, France, he used the occasion to share what he calls “a confession.” It is a short and humorous text, full of tender memories about all sorts of more or less innocent mischief, but it also places a ruse that both supports as well as upsets the honorary frame of generational sequentiality in which he had been invited to speak. “God has given us the endless freedom to disobey him, and this is how we can recognize him as our Father,” Serres sets out, and continues: “Scarcely installed in the terrestrial Paradise, Adam and Eve quickly eat the apple and pips, immediately leaving that place of delights and fleeing towards hazy horizons. Only a few months old, the infant tries to say no; those among you who raise children will learn this and know it in overabundance.”1
1 Michel Serres, “La confession fraternelle,” Empan 48 (2002): 11–16; originally a public lecture at Lycée Saint-Caprais, Agen, on the occasion of the school’s 150th anniversary (in 2000). Here cited from the unpublished “Fraternal Confession,” trans. Kris Pender, https://www.academia.edu/11074066/Michel_Serres_-_Fraternal_Confession.
The presumptuous ruse Serres has placed in this “boring preamble of mixed theology and natural history,” as he calls the story of expulsion, the ruse from which he wittily distracts also by the grandness of the opening address in the first sentence, is a small change in the setup of the Great Story :2 when Adam and Eve give in to their human and purportedly corruptive and nonnatural inclinations for curiosities, they already have a
child, in Serres’s account.
2 Editorial note: Especially in the beginning of this text, I use a range of words in capitalized spelling; I do so to indicate that these words are used here as titles — words in sheer formality that are expected to implicate capacities and responsibilities that are not attributable to the authority of a particular referent. Considered as titles, they rather implicate entire narratives that can be plotted in uncountable versions; the criteria for electing such words here were purely contextual.
Thereby Serres purports nothing less than a naturalization of sexuality within God’s own likeness — Adam and Eve have a baby that must have been conceived and born before the disrespectful act had been committed! This mischief introduces into the narrative of the Tree of Life nothing less than an abundance of directions in which it might descend and branch off. What presumptuousness, indeed! One that dares to set out, high-spirited, light-humored, and quick, for nothing less than the Total, the Ultimate Sum, by unsettling the earthly grounds in which the Tree of Life roots. But how could such ground possibly be unsettled? Serres assumes that the Nature of the Human must, as everything else as well, be thought to factor in a Universal Nature — a nature of the universe — whose path of descendance is divine (omnipotent) and decided (lawful) as being undecided : it is a nature capable of developing in an uncountable abundance of directions, progressive ones as well as regressive ones. Such nature then must count as essentially arcane, a secret that can be preserved only in a “crypt,” as Serres refers to it elsewhere.3
3 Michel Serres, “Noise,” trans. Lawrence R. Schehr, SubStance 12, no. 3 (1983): 48–60, here 55. The article makes up chapter 1 of Serres’s 1982 book Genesis, trans. Geneviève James and James Nielson (Ann Arbor: University of Michigan Press, 1999 [1982]).
Of course we know the term crypt from the architecture of churches, but it once meant more generally a “vault, cavern,” being derived from the Greek verb kryptein, “to hide, to conceal,” by nominalizing the adjective that was built from this verb.4
4 Online Etymology Dictionary, s.v. “crypt,” http://www.etymonline.com/index.php?term=crypt&allowed_in_frame=0.
For Serres, there is a path for knowledge to access universal nature, but never a plain, pure, and immediate one. All knowledge is a reduced model of universal nature, a model that does not seek to represent that nature, but rather a model that seeks to keep alive as best as it can that nature’s character : to be secretive. The entire raison d’être of such Knowledge is to serve and obey — unconditionally, absolutely — nature’s secretive character. Such obedience can only be performed through disobedience, through mischief, through the comic. It can be performed by inventing a reduced model of the Secret, and this without the assurance of being initiated to it. Universal Nature’s secretive character can count as neither private nor public, as neither esoteric nor established insight; rather, we can refer to it as constitutive for both in a manner of which Serres maintains that only Law can be.5
5 Serres, “Fraternal Confession,” 8.
Knowledge then embodies Law in the building of a Crypt, a vault, one that is growing deeper and vaster, more intertwined and winding, from the act of being frivolously explored, challenged, tested, strained in the very solidity in which it is built. To keep the Secret that is Universal Nature demands Absolute Obedience without tolerating submission : its secret indeed has one vulnerability, namely that obeying it can be confused with doing so in a servile manner. Serres calls the Evangelist — the
messenger that claims to bring the Good News with no mediation necessary for receiving it — “Satan the Master of the world.”6
6 Ibid., 7.
Other than she who strives to master the universe’s secret by keeping it encrypted, and who spends her time in that very vault that doesn’t cease to challenge and take issue with the earthly grounds where the iconic Tree of Life is rooted, she who strives to master the world “leads you to a very high mountain, shows you all the kingdoms in all their glory and promises to give them to you on condition that you grovel before him.”7
7 Ibid., 6.
If knowledge of the Universe’s Nature is a Crypt, knowledge of the world is the crypt’s Flat Projection in terms that claim the authority to represent the crypt’s arcane. Such flat projection alone can claim to produce “positive” or “negative” knowledge; the crypt on the other hand embodies knowledge that is always already articulated, knowledge that presents insight only by leaving absent what it has intuited. Serres’s seeker of articulate knowledge, whom he addresses as the Researcher, serves the Law, she is an “official” whose duty it is to explore and challenge all the regularities that have been stated as lawful — without ever claiming to represent those regularities with official authority. “We always save ourselves by the law. Freedom comes from laws,” Serres tells his audience.8
8 Ibid., 8.
Law binds and contracts the ambient terror of the jungle, in a manner that allows “a balance between hunting and being hunted, between eating and being eaten.”9
9 Ibid., 6.
Law contracts violence. If those contracts are sound, whoever is subject to it can afford to live and care for all that is vulnerable as the source of all that is improbable and precious. With these elaborations, we can perhaps better appreciate the radicality of Serres’s confession : “I continue to make mischief in order to bear witness in the face of the world that we are not beasts, that therefore we have left or begin to leave the hell of violence, because we are men.”10
10 Ibid., 7.
In Serres’s humorous Confession Story, giving in to the human inclination to be seduced by curiosity ceases to be a tragic act. Rather, it is the Researcher’s Official Duty to enjoy masquerade, to be transgressive by engaging in the challenges that motivate desire and seduction, pleasure and satisfaction, pain and relief. This is comic, and yet it is serious : a researcher “cannot cheat.”11
11 Ibid., 7.
For “to obey, here, consists in submitting oneself to the laws of things as such and to thereby acquire freedom, whereas cheating consists in submitting oneself to the conventional laws of men.”12
12 Ibid.
In Serres’s inversive account where the universe has an active nature, rather than being imagined as either static or dynamic, cheating becomes equivalent to being obedient (to the laws of man),
and disobedience comes to count as blessed rather than cursed : “Things contain their own rules. Less conventional than the rules of men, but as necessary as the body that falls and the stars that revolve; even more, difficult to discover. We can do nothing and should do nothing without absolute obedience to these things, loyal and hard. No expertize happens without this, no invention, no authentic mastering. Our power comes from this obedience, from this human and noble weakness; all the rest falls in corruption towards the rules.”13
13 Ibid.
For the researcher and the comedian, disobedience, as it characterizes the tragic manner of acting, is not thought to be nourished by delusions, to produce regret, anguish, and guilt that can be relieved only by comfort derived from acknowledging the principle impotence to which such “acting” is always already sentenced. Quite inversely, the very possibility for disobedience comes to feature in Serres’s account as that which is capable of preserving the possibility of salvation. Acts of comic disobedience replace the Scriptures as that which preserves and circulates that possibility. What in the Scriptures unfolds between the Two Covers of a Book (or the top and the bottom of an inscription plane, be it stone, clay, papyrus, or parchment) is thereby attributed a different status by Serres : the mediacy of what unfolds between the covers — on the limited inscription plane or the numerous sheets contained in a book — is attributed to be capable of capturing, conserving, and expressing a sense whose extension as meaning
is in principle of vaster magnitude than that which the two covers are officially entitled to contain. In other words — and this, again, is Serres’s adorable humor — if you hold respect and esteem for official representations, then never trust official representations, especially while paying service to the law they represent! In an admittedly twisted but not really complicated way, as I have tried to depict, it is their entitlement as official representations to take care of their capability to compromise themselves. “To compromise” here is an important albeit dangerous term that I am using to translate the German word Bloßstellung, which means something like embarrassing exposure, a kind of personal vulnerability that comes from “lowering one’s guard” (sich Blöße geben). The guards of an official representation would of course be the official order, and what Serres then tells us is that the official representation must in turn have “capabilities of mediacy,” namely the capability to transmit and pass on the virtually abundant activity of possible disobedience — which it is entitled to delimit and protect — to the official order that is predicated to guard and protect, in its turn, the official representations. Like Eve and Adam’s child in Serres’s narrative, and like the unfolding mediacy between two entitled limits, the entitled limits as well must be respected in their divine nature, and this divine nature consists in being endowed with the possibility for disobedience. This very possibility is being guarded in Serres’s narrative, and it is what renders it capable of still preserving the plot of a story of salvation, despite the frivolous masquerade of that plot’s prime characters in which Serres engages as the narrator of that story’s novel articulation : “Contrary to what is sometimes said, this blessed disobedience solves many problems. In accumulating black follies and an experience which helps nobody, each generation blocks history so that we no longer see, in a moment, how to leave it; only children sometimes unblock the situation by seeing things in another way. Animals rarely disobey; genetic automates, some follow an instinct programmed since the origin of their species : that is why they have no history. We change, progress and regress, we invent the future because, deprogrammed, we disobey.”14
14 Ibid., 1.
If Eve and Adam were with child naturally, before their frivolous act, then all those humanisms would be mistaken that purport that humankind has been left alone in the world, with the sole and tragic spirituality of a Regulative Machinery (instead of an arcane architectonic body of laws) that operates obediently and reliably in official generality, and that it is the tragedy of humankind that the very possibility for comfort is a finite good that this machinery must administrate to as best as it can. Because if Eve and Adam were with child naturally, the Tree of Life
follows a sequential order too, which descends, branches off, but doing so in many directions and in no preset manner — history does not distance mankind from its lost original nature that had, purportedly, been corrupted when history begins. The sequential order now includes the possibility for Regression just as much as for Progression. Human nature now is not good nature — the spheres of “nature” and “value” are kept distinct now, they are kept apart by the Encryptions and Decipherments depicting the secret that is the universe’s nature, those Symbolic Building Blocks of the crypt that embodies the law obeyed by the kind of universal human nature of which Serres speaks. But if it is Codes that manifest those “building blocks” of the Crypt, the Great Story that knows the Age of the Tree of Life, what then are those “codes” made of?
The Alphabetic Absolute
I would like to suggest that we can think of this “materiality” as the Alphabetic Absolute. And I do not mean by this, of course, a particular linguistic alphabet to be now declared foundational and unconditional; I don’t even mean an alphabet of language, in any restrictive sense. Rather, I mean alphabets in a generalized sense, as applied in coding — numerical ones, linguistic ones, probabilistic ones, any ones. So what then counts as an alphabet? It is important to distinguish what can be called an alphabet from an inventory of signs, for example. An alphabet does not ever relate to things themselves, but to how one “speaks” when articulating something at stake. I put speak in quotation marks, because with such a generalized notion of the alphabet as I am suggesting, we can say “articulate” instead of “speak,” and thus all kinds of practices that articulate something — by composing elements, caring for junctions, for flexibility and conjugation (interlinking), practices that nest different hierarchies — can all be included in the kind of “speech” measured by an alphabet. This indeed may sound stranger than it is; it is well known for example that the letters for writing words and those with which we count and calculate share the same genealogy: both depend on an abstract place-value system within which they can operate. Numbers are depicted as numbers in the terms of a particular numerical value taken as the base of that system — sixty in the sexagesimal number system, ten in the decimal one, and so on. The letters of a script, on the other hand, are depicted within a finite set of characters that are arranged in linear sequentiality — the very name “alphabet” means exactly this: alpha and beta were the first two letters of the Greek alphabet. Thus, when I suggest speaking of an alphabetic absolute, what I mean is to think of whatever it may be to which one feels inclined to ascribe a status of being impartial and unconditioned (absolute), as being an articulated crypt. The codes that can articulate such an absolute as a crypt need alphabets to build it from — rather than, for example, “notational systems,” because a notational system would already be too specific, for
it would imply a set of rules according to how the letters that operate within a place-value system can be combined. It is the power of an alphabet that many such syntaxes — more inclusively, such grammars — may be applied to it (there are very different languages coexisting, all using the Roman alphabet, for example). It seems to me that only an alphabetic absolute can integrate the kind of unconditional obedience Serres talks about, requiring as it does that one behaves wittily and mischievously. An alphabet does not yet distinguish false from correct usage, as a notational system would. We know the word Literacy in relation to the alphabet for precisely this reason: to be literate is pre-specific (undecided) with regard to whether one speaks / writes poetry, lies, wants to convince with arguments, or persuade with plausibility and opinion. And still, literacy can be measured — in terms of power of expression, imagination, distinction, elegance, being informed, and so on. But there are varieties of different metrics. In this, the Masterful Literate is someone who is literate more or less masterfully, as we are used to call a musician masterful, or an architect, or a doctor, or she who cares for and masters whatever practice. That is why the kind of unconditional obedience Serres talks about ought to be granted to an absolute that is alphabetic, a total of any alphabet conceivable, including all possible “couplings” and “multiplications” that constitute the ciphers articulated in codes. What I have called Serres’s “Officer” is a literate person in just this sense: she is the architect of articulated crypts that hollow existing standards. There is another reason why the Alphabetic seems to deserve a central role here. In all his writings Serres hails a novel humanism where history is not the consequence of a terrifying act of punishment and expatriation.15
15 It is not the place to elaborate on this here, unfortunately, but some of Serres’s major books must at least be mentioned: Hominescence (Paris: Le Pommier, 2001); Le tiers instruit (Paris: Bourin, 1991); Atlas (Paris: Éditions Julliard, 1984); Petite Poucette (Paris: Le Pommier, 2012); Le contrat naturel (Paris: Bourin, 1990); Récits d’humanisme (Paris: Le Pommier, 2006); L’incandescent (Paris: Le Pommier, 2003).
Literally : “God has given us the endless freedom to disobey him, and this is how we can recognize him as our Father,” he maintains.16
16 Serres, “Fraternal Confession,” 1.
Hence, a stance is needed that allows for coexistence of what is disparate.17
17 I borrow this expression of the “disparate” from Gilles Deleuze’s philosophy of asymmetrical synthesis of the sensible: “Repetition is […] the formless power of the ground which carries every object to that extreme ‘form’ in which its representation comes undone. The ultimate element of repetition is the disparate [dispars], which stands opposed to the identity of representation. Thus, the circle of eternal return, difference and repetition (which undoes that of the identical and the contradictory) is a tortuous circle in which Sameness is said only of that which differs.” Gilles Deleuze, Difference and Repetition, trans. Paul Patton (New York: Columbia University Press, 1994), 57.
If the Tree of Life descends without linearly and progressively distancing us “contemporaries” from our origin, then originality is always “there,” and the Universe’s natural kinds, we have said, are many. This peculiar “there-ness” Serres suggests calling “noise” : “We must keep the word
noise, the only positive word that we have to describe a state that we always describe negatively, with terms like disorder.”18 For Serres, seduction, desire, and pleasure, and the existence of sound and fury, are natural forces that forever disturb pureness and harmony. They are the very conditions for the possibility of disobedience, and hence, also for the possibility of a kind of beauty that is beautiful because it can be compromised, embarrassed, exposed, and vulnerable — in short, “naked.” In an article to which I will turn shortly, Serres calls this beauty, which is pure because it can be embarrassed, “la belle noiseuse,” the beautiful querulent.19 The Comic Let us first come back to this aspect of the coexistence of the Disparate. The possibility of salvation, in the terms of “natural morals,” as Serres suggests, depends on the inversion of the idea of illusion and its opposite, truth. Disguise, masquerade, fashioning, and dressing up do not pose a threat to truth; rather, they are the conditions for it to be self-engendering, alive, sexual. A lot depends on recognizing reality as mediacy, and immediacy as illusion, Serres seems to tell us. Curiosity now appears as a stance that is neither sinful nor just, but then what? Curiosity diverts the attention that can be granted. It animates a play of amusement; it is quick and can never be at rest with what attracts it; it is a form of appreciation that depends on no intermediate didactic, appreciation that is possible in an unexpected encounter. All of this binds curiosity to the Comic. Early forms of comedy are said to originate in pagan manners of emancipating from traditional cults of worship; in their rituals of thanksgiving, for example, where particular Gods were celebrated, they began to frivolously dramatize the characters of these gods in masquerade. They would still perform the rituals, but now in a challenging rather than entirely serious manner. Comedy is older than tragedy, and it is purported as a manner of dealing with magnitudes one encounters as evidently “there” but disparate, non-fitting, without knowing how, why, or what else. The very possibility of a thinking of repetition as something that does not reproduce the same depends on the comic; for example, in Serres’s manner of thinking of the sequentiality of time in his tree of life.20 Comedy shares its origin with the carnivalesque, and in many ways it can be said to mark the early stages of coming of age—the youth going through comic situations when challenging the customs, expectations, and orders of their parents. Serres’s assumption of considering a nature and a sexuality of the universe itself has an important and difficult implication : it involves the assumption 18 Serres, “Noise,” 55. 19 Ibid. 20 Gilles Deleuze has devoted an entire study to such a notion of repetition; see Deleuze, Difference and Repetition.
of different kinds of natures, and hence morals of nature, that are all to be considered “universal.” This has consequences for a science that considers a particular system of concepts universal (metaphysics), as well as for one that considers physical nature universal (“modern” science). In either one, the paradigm of a plurality of natural kinds translates into the assumption of categorically different and incompatible magnitudes — magnitudes that are, because they are categorically different, strictly not to be experimented with. Indeed, all attempts, however experimental, to disobey the rule of traditional hierarchies of subordination among the different magnitudes are then perceived outright as evil : we can easily remember the trials of Galileo or Kepler for assuming, in the case of the latter, an elliptical instead of a perfectly circular path of the planets (heavenly bodies), and hence addressing the course of the stars in the category of an imperfect circle. That was sheer frivolity and disrespect in the eyes of the clerics at the time : the orders of the heavens, locus of the divine, could not possibly correspond to a measure that captures imperfect movement, because it would imply that what is moving, in the starry heavens, are magnitudes whose purity is corrupt and imperfect. It would imply that the most perfect and pure order, that of the heavens, would be the order of imperfect magnitudes. While Galileo and Kepler might stand for an endpoint of the reign of a particular dogmatism, we can perhaps see in Dante Alighieri’s Divina commedia an early announcement of what was to come. If today’s media apologists are concerned with the Post-human, and a purported End of History, we can easily see a certain symmetry to the situation for which the names of Galileo and Kepler stand; and again, we have a much earlier literary work that seems to indicate such a development to come, namely Balzac’s monumental Comédie humaine, whose idea, he tells us in the preface, “originated in a comparison between Humanity and Animality.”21 Because “it is a mistake,” Balzac maintained, “to suppose that the great dispute which has lately made a stir, between Cuvier and Geoffroi Saint-Hilaire, arose from a scientific innovation.” At stake is the idea of a Unity of Plan, the idea that “the Creator works on a single model for every organized being.” This issue does not arise from scientific innovations, he insisted; rather, “unity of structure, under other names, had occupied the greatest minds during the two previous centuries.”22 He goes on to name references and their core concepts in addressing the issue at stake : As we read the extraordinary writings of the mystics who studied the sciences in their relation to infinity, such as Swedenborg, Saint-Martin, and others, and the works of the greatest authors on Natural History — Leibnitz, Buffon, Charles Bonnet, etc., we 21 Honoré de Balzac, “L’avant-propos de la Comédie humaine” (1842–48), here cited from the translation “Author’s Introduction,” Project Gutenberg, http://www.gutenberg.org/files/1968/1968-h/1968-h.htm. There are no page numbers provided in this online reference. 22 Ibid.
detect in the monads of Leibnitz, in the organic molecules of Buffon, in the vegetative force of Needham, in the correlation of similar organs of Charles Bonnet — who in 1760 was so bold as to write, “Animals vegetate as plants do” — we detect, I say, the rudiments of the great law of Self for Self, which lies at the root of Unity of Plan. There is but one Animal. The Creator works on a single model for every organized being. “The Animal” is elementary, and takes its external form, or, to be accurate, the differences in its form, from the environment in which it is obliged to develop. Zoological species are the result of these differences. The announcement and defence of this system, which is indeed in harmony with our preconceived ideas of Divine Power, will be the eternal glory of Geoffroi Saint-Hilaire, Cuvier’s victorious opponent on this point of higher science, whose triumph was hailed by Goethe in the last article he wrote.23 As Balzac announces, he himself had been convinced of such a scheme of nature (a unity of plan) long before his contemporaries raised its issue in terms of scientific innovations, and hence in a manner supposedly set apart from the spiritualism entailed by the authors on natural history — as if now it wouldn’t imply unanswerable questions anymore. So Balzac doesn’t refer to this scheme as a frame of reference for explaining particular postulates of scientific accounts. Rather, he takes it as an inspiration for a kind of investigative storytelling : “Does not society modify Man, according to the conditions in which he lives and acts, into men as manifold as the species in Zoology?”24 And further on : “If Buffon could produce a magnificent work by attempting to represent in a book the whole realm of zoology, was there not room for a work of the same kind on society?”25 Somewhat surprisingly perhaps, his Comédie is all set up as a great project of taxonomy and categorization : “The differences between a soldier, an artisan, a man of business, a lawyer, an idler, a student, a statesman, a merchant, a sailor, a poet, a beggar, a priest, are as great, though not so easy to define, as those between the wolf, the lion, the ass, the crow, the shark, the seal, the sheep, etc.”26 But “the limits set by nature to the variations of animals have no existence in society. […] The social state has freaks which Nature does not allow herself; it is nature plus society. The description of social species would thus be at least double that of animal species, merely in view of the two sexes.”27 Furthermore, “animals have little property, and neither arts nor sciences; while man, by a law that has yet to be sought, has a tendency to express his culture, his thoughts, and his life in everything 23 Ibid. 24 Ibid. 25 Ibid. 26 Ibid. 27 Ibid.
he appropriates to his use. […] The dress, the manners, the speech, the dwelling of a prince, a banker, an artist, a citizen, a priest, and a pauper are absolutely unlike, and change with every phase of civilization.”28 In consequence, Balzac decides : “Hence the work to be written needed a threefold form — men, women, and things; that is to say, persons and the material expression of their minds; man, in short, and life.”29 But still, if this introduction is to set up Balzac’s project in clear demarcation from a scientific account, how should it be possible to embark upon such an immense project of taxonomy and categorization in the manner of storytelling? For Balzac the realist writer, such storytelling could only take the form of a natural history — yet a natural history of manners. Manners, if studied historically and empirically, pose entirely new problems for a writer, because they must be considered as what we would today perhaps call “a population effect” or a “property of a collective.” But how to address abstract ideas such as a collective and its properties with enough intuitable distinction and common sense to work as a story? Balzac indeed asks himself : “But how could such a drama, with the four or five thousand persons which society offers, be made interesting? How, at the same time, please the poet, the philosopher, and the masses who want both poetry and philosophy under striking imagery? Though I could conceive of the importance and of the poetry of such a history of the human heart, I saw no way of writing it.”30 The way of writing that he eventually found was one of categorizing typicalities of entire scenes : “Not man alone, but the principal events of life, fall into classes by types. There are situations which occur in every life, typical phases, and this is one of the details I most sought after.” 31 And furthermore, he specifies, the possibility of his writing depended upon setting up a gallery : “It was no small task to depict the two or three thousand conspicuous types of a period; for this is, in fact, the number presented to us by each generation, and which the Human Comedy will require. This crowd of actors, of characters, this multitude of lives, needed a setting — if I may be pardoned the expression, a gallery.”32 Balzac, the great realist author of the nineteenth century, pursued capturing the richness of reality by embracing typification, masquerade, and modeling as means to work out — against our established and well-tested intuition! — the truly fine distinctions that make reality “real.” Such storytelling ceases to lend its services to a representational paradigm; instead, it informs a new paradigm of writing and storytelling that doesn’t fit well with the modern categories of either fiction or documentary, either history or story. Balzac was clearly fascinated by the novel 28 Ibid. 29 Ibid. 30 Ibid. 31 Ibid. 32 Ibid.
methods of population thinking and statistics, and by the analytical capacity of these methods at once to resort to gross generalizations and to reveal infinitesimally fine distinctions. Furthermore, he was well aware that the harnessing of electricity would profoundly unsettle the order of societies : “In certain fragments of this long work I have tried to popularize the amazing facts, I may say the marvels, of electricity, which in man is metamorphosed into an incalculable force; but in what way do the phenomena of brain and nerves, which prove the existence of an undiscovered world of psychology, modify the necessary and undoubted relations of the worlds to God? In what way can they shake the Catholic dogma?”33 It was clear to him that there is something “heretical” about these interests in the kind of abstract possibility that is owed to technology and scientific innovation. This is why I have suggested seeing in this work an early premonition of the themes that preoccupy intellectuals today — themes like an end to history, or post-humanism — that are rather straightforwardly tied up with a certain apocalypticism.
Mediacy and Real Time
Gilbert Simondon, whom I cited at the beginning of this text, follows the same idea when he claims that the grand theme of alienation that haunts modernity, and the so-called industrialization of societies, depends upon finding ways to save the technical object: I believe there are humans in the technical objects, and that the alienated human can be saved on the condition that man cares for them [the technical objects]. One must in particular never condemn them. In the Old Testament, there is a sort of jealousy of Yahweh toward the creature. And we say that the creature transgresses. But is not all creation a transgression? I think transgression, whose origin is the serpent, is the creation of a person. If Adam and Eve had never left the Garden of Eden they would not have become human beings or inventors. Their one son was a shepherd, the other a farmer. Techniques were born there. Finally, technics and transgression seem to be the same. Blacksmiths were once considered cursed.34 Simondon argues that what is called “human alienation” cannot be separated from our custom of degrading the technical object to a passive and servile status. The theme of alienation demands that the grand theme of salvation be articulated on new grounds, he maintains. If, classically, the possibility for salvation is remembered, preserved, and articulated in the Scriptures — their theological as well as hermeneutic readings — we must now make and reserve room for an essentially arcane and enigmatic kind of possibility in new media. Marshall McLuhan, Friedrich Kittler, and 33 Ibid. 34 Simondon, “Save the Technical Object,” 2.
many other new media apologists have suggested that with electronic media we are living in “post-alphabetic” times; this entails that it is no longer the Scriptures that preserve and communicate the possibility of salvation. Without elevating the technical object (its unconcealed and naked “naturalness” alias “pure functionality”) from its servile and passive status, we are living in a terrifyingly inhospitable and infinitely open universe, so Simondon maintains. Kittler, for example, like many others, sees in this perceived inhospitality the pain of a narcissistic wound, which however turns into a new promise of salvation (although he would probably never say that) if only we lose our arrogant narcissism. Then this inhospitality can be perceived as the true recognition of the human, existential predicament, and the tortures of history would “end” : the post-alphabetic age that characterizes the end of the Gutenberg Galaxy is then conceived as the end of history — an end that is at the same time its completion, its infinitesimal self-reference, a dynamics that completes itself by being infinitary. This is a logic we can find developed with subtle care also in Giorgio Agamben’s writing. It is not that within history’s infinitary completion through self-reference there would be no salvation possible — rather, the promise of salvation is now tied up in the burden of bearing an irresolvable paradox, namely that the object of salvation must be unsavable. Salvation does not concern the active recovery of what was lost and the remembering of what was forgotten. In Agamben’s argument, “the lost and the forgotten do not demand to be found or remembered, but to remain such as they are, in their being-thus.”35 McLuhan takes a different path. For him, the end of the Gutenberg Galaxy doesn’t mean the End of History according to the above logic. Where the latter can be characterized as raising an anthropological stance to an absolute status, by addressing history as a political subject, McLuhan remains more committed to physics and science. For him, the post-alphabetic means the implosion of the experimental stage for objective representation in models of a universal “All” that is to be both origin and destiny of any scientific symbolization. In a quantum-magnetic, electronic universe, every medium adds something to reality, McLuhan insists. Along with the new scale, it introduces units, meters, and measures that permit mediating magnitudes with magnitudes, open-ended and infinitarily so : “It is the medium that shapes and controls the scale and form of human association and action.”36 With this view, McLuhan spiritualizes communicative activity such that communication takes on quasi-cosmic dimensions. It is no longer magnitudes that are universal and therefore allow for reliable measurement; rather, in the mediacy that renders the real as real, measurement constitutes magnitudes, not the other way around : “Before the electric speed and 35 Alex Murray and Jessica Whyte, eds., The Agamben Dictionary (Edinburgh: University of Edinburgh Press, 2011), 193ff. 36 Marshall McLuhan, Understanding Media (Cambridge, MA: MIT Press, 1994 [1964]), 9.
total field, it was not obvious that the medium is the message. The message, it seemed, was the ‘content,’ as people used to ask what a painting was about.”37 So on the one hand McLuhan spiritualizes communication in relation to the human scope of action and, hence, in relation to existence, if not to Being itself. But on the other hand he discredits, on objective grounds (by referring to the quantum kind of physics that made possible electromagnetic communication technology that in principle, if not in fact, operates at the speed of light), the very possibility of a prophetic word that supposedly reaches one via artifacts from a categorical beyond of this world; rather, the message (prophetic or not) that can be received, he maintains, is virtually any message : “The electric light is pure information. It is a medium without a message.”38 If virtually any message is no message, then any actual message is a particular modulation of the generic actuality — movement in infinitive form — that McLuhan finds represented in the electromagnetic physicality of whatever it may be that moves at light speed. His dictum that the message needs to be looked for as immanent to the medium can be seen as answering exactly this complex issue : in a quantum-magnetic, electronic universe, he tells us, every medium adds something to reality. This has hitherto been associated with the (potential) tremendousness of a cosmic order, but certainly not with a (potential) prudence of an anthropological one : “This is merely to say that the personal and social consequences of any medium — that is, any extension of ourselves — results from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology.”39 All technology counts, for McLuhan, as an extension of “man” — and this is to be understood not only in terms of an augmentation of corporeal strength and the perceptive faculties, but also intellectually : technology incorporates mathematical principles invented or at least intuited by “the intellect.” With the hereby implied emphasis on the mathematical symbolisms that unlock novel scales of action once they are externalized and embodied in technical “cases” and then reappropriated by our bodies in learning how to use them, McLuhan also insists that mathematics is an articulation of human intellect — however “natural” or “divine” one might specify the possibility of such “intellection.”40 In this, Kittler parts ways with McLuhan. For him, mathematics is the immediate expression 37 Ibid., 13. 38 Ibid., 8. 39 Ibid., 7; also see my book, Die Nachricht, ein Medium: Generische Medialität, städtische Architektonik (Vienna: Ambra, 2014). 40 This is not a polemical remark. With regard to mathematics, the question really is unsettling: Why does mathematics work? Why can we fly to the moon and back with mathematical understanding (and everything that builds upon it)? Stephen Hawking edited an anthology on number theory whose title features a citation by Leopold Kronecker, who once said: “God made the integers; all else is the work of man.” (Cited in Eric Temple Bell, Men of Mathematics [New York: Simon & Schuster, 1986], 477.) The Hawking-edited anthology is entitled God Created the Integers: The Mathematical Breakthroughs That Changed History (Philadelphia: Running Press, 2005).
of “the real,” directly, in terms of physics. He sees no symbolism at work in it : “What distinguishes the post-Gutenberg methods of data processing from the old alphabetic storage and transmission monopoly is the fact that they no longer rely on symbolic mediation but instead record, in the shape of light and sound waves, visual and acoustic effects of the real.”41 With this assumption, he can mock Balzac’s project : Photo albums establish a realm of the dead infinitely more precise than Balzac’s competing literary enterprise, the Comédie humaine, could ever hope to create. In contrast to the arts, media do not have to make do with the grid of the symbolic. That is to say, they reconstruct bodies not only in a system of words or colors or sound intervals. Media and media only fulfill the “high standards” that […] we expect from “reproductions” since the invention of photography: “They are not only supposed to resemble the object, but rather guarantee this resemblance by being, as it were, a product of the object in question, that is, by being mechanically produced by it—just as the illuminated objects of reality imprint their image on the photographic layer,” or the frequency curves of noises inscribe their wavelike shapes onto the phonographic plate.42 For him it is clear : finally, “of the real nothing more can be brought to light than […] nothing.”43 With the nineteenth-century concept of frequency, “the real takes the place of the symbolic,”44 and “literature defects from erotics to stochastics, from red lips to white noise. Marinetti’s molecular swarms and whirling electrons are merely instances of the Brownian motion that human eyes can only perceive in the shape of dancing sun particles but that in the real are the noise on all channels.”45 The end of the Gutenberg Era marks for Kittler the end of storytelling because, as he puts it, once the real takes the place of the symbolic within the physics of electromagnetic spectra, frequencies, and stochastic noise, time turns into “an independent variable” — a “physical time removed from the meters and rhythms” that could make harmony and music. Rather, such physical time “quantifies movements that are too fast for the human eye, ranging from 20 to 16,000 vibrations per second.”46 41 Geoffrey Winthrop-Young and Michael Wutz, “Translator’s Introduction: Friedrich Kittler and Media Discourse Analysis,” in Friedrich Kittler, Gramophone, Film, Typewriter, trans. Geoffrey Winthrop-Young and Michael Wutz (Stanford, CA: Stanford University Press, 1999), xxvii–xxviii. 42 Kittler, Gramophone, Film, Typewriter, 11–12; he quotes from Karl Philipp Moritz, “Die Hahnische Litteralmethode,” in Die Schriften in Dreissig Bänden, ed. Petra and Uwe Nettelbeck (Nördlingen: Greno, 1986), 1:157–58. 43 Kittler, Gramophone, Film, Typewriter, 15. 44 Ibid., 26. 45 Ibid., 51. 46 Ibid., 24.
For both McLuhan and Kittler, media come to stand in for the Kantian forms of intuition, those “conditionings” supposedly innate to the human mind that, according to Kant’s transcendentalism, were the guarantors that everyone can intuit space and time uniformly, if only they have already learned to discipline their faculties of understanding and reason.47 Kant molded his forms of intuition according to notions of space and time informed by physics, not mathematics. Against the rationalism of Leibniz, for example, which cannot do without an idea of beauty that is harmonious and theological, mathematics had to be decoupled from theology for Kant : it should only be legitimate if its postulates can be made the object of physical experimentation. With this, Kant aligns closely with empiricist traditions. But against Newton, whose systematization of methods in physics, The “Principia” : The Mathematical Principles of Natural Philosophy, anchors his Axioms, the Laws of Motion, in a notion of absolute space that he attributed to the cosmos itself, Kant’s transcendentalism introduced a level of mediacy between thought and the real. This “mediacy,” however, is entirely distinct from the “mediacy” thematized with reference to “new media.” The former notion of mediacy was uniform and objective because Kant simply attributed Newton’s prime cosmological assumptions (linear and reversible time, and Euclidean, three-dimensional [plane] geometry) to the human mind instead of the cosmos (as the Forms of Intuition). Mediacy in relation to new media, on the other hand, is new because it makes so-called nonclassical quantum physics its point of departure. The crucial consequence is that the notion of an objective and uniform process of mediation — arguably the key element in Kantian transcendental philosophy, as well as in every epistemology that commits itself to the critical tradition — has lost its very base. When everything happens “instantaneously,” and at “light speed,” how to maintain a critical distance to events then? How to “base” a notion of mediation that is manifold, variate, and unfolds in multiple linear sequences that all link up in a not entirely predictable manner, within the probability space of one linearity, rather than a notion that is schematic, uniform, and goes along a mechanical causal line? How indeed to do so once “mediality” forms into different strata, because what constitutes it (be it the Kantian forms of intuition, the letters in language, the numbers in arithmetic, the forms in geometry) loses its neutral and transparent character — that character captured in the saying “Never trust the messenger if it is not a mechanism”? Contemplating a notion of the real vis-à-vis such mediacy, McLuhan dares to consider a nonapocalyptic reversal in the direction 47 Regarding the problem of aesthetics and judgment for epistemology at large, see Robert Harvey and Lawrence R. Schehr, eds., Jean-François Lyotard: Time and Judgment (New Haven, CT: Yale University Press, 2001); and Jean-François Lyotard, The Differend: Phrases in Dispute, trans. Georges van den Abbeele (Minneapolis: University of Minnesota Press, 1988 [1983]).
of progress that inheres in the paradigm of modern experimental science, whereas Kittler (as well as Agamben, Baudrillard, and many others) considers a self-referential implosion of the real, and anticipates a novel kind of immediacy arising from a totalized notion of mediacy.
How to Address the Tense-ness of Radioactive Matter in the Universe’s Instantaneity?
So what about Serres’s la belle noiseuse? What about his idea of a kind of beauty that is not harmonious, not perfectly adequate or equal, but clamorous and querulous noise, a beauty that is universal, omnipotent, and yet “sexed” in the sense that it is “whole” only because it “desires” itself in all that it can be? A truth, a nakedness, whose beauty is pure only because it is vulnerable and can be embarrassed? Why hold on to the idea that truth must be beautiful, desirable, and natural? Serres too engages in making sense of a reality that counts as universal, in such a manner that whatever happens, happens at light speed, and that its understanding depends upon mediation by spectra. For him, spectra render the real, so to speak. But rather than imagining the universe as a Grand Vacuum, a container of a natural balance (Newton), or a Grand Harmony, the substance of God (Leibniz), or the locus where History can complete itself by referring, dynamically, to nothing but itself (Kittler), Serres inverts the perspective. The universe is ill thought of as a container because it is expanding; thus runs a key passage in a recent lecture of his entitled “From Rotating Revolutions to an Expanding Universe.”48 Unlike Kittler, he shares with McLuhan the view that mathematics works symbolically, not “immediately.” But if media are extensions of man for McLuhan, then this same relation is incomplete if we think of one as a function of the other. Rather, this relation must count as mutually implicative and reciprocal for Serres: one may regard media as extensions of man, but man, equally so, extends media. The real is real because it is mediate, for Serres, in the precise sense that, if we want to consider a universe that is not only dynamic but also expanding, all relations must be thought of as mutually implicative and reciprocal. In all consequence, then, Serres maintains that physics itself is “communicational.” For him, “information circulates universally within and between the totality of all existing things.” He elaborates: Bacteria, fungus, whale, sequoia, we do not know any life of which we cannot say that it emits information, receives it, stores it and processes it. Four universal rules, so unanimous that, by them, we are tempted to define life but are unable to do so, because of the following counterexamples. Crystal, indeed, 48 Michel Serres, “Information and Thinking” (manuscript of keynote address, conference of the Society for European Philosophy and Forum for European Philosophy, “Philosophy after Nature,” Utrecht, September 3, 2014), 1.
rock, sea, planet, star, galaxy: we know no inert thing of which we cannot say that it emits, receives, stores and processes information. Four universal rules, so uniform that we are tempted to define anything in the world by them, but are unable to do so because of the following counterexamples. Individuals, but also families, farms, villages, cities, nations, we do not know any human, alone or in groups, of which we cannot say that it emits, receives, stores and processes information.49 The real, then, must count as a noisy totality of communicative circulation among all existing things. It is to this that the witty ruse he placed at the beginning of the Great Story responds — the ruse that universal nature itself is sexed and that the first act of transgression and pleasure (the disobedience to the father by eating from the forbidden tree) is natural and blessed; the ruse that keeps the beginning of the great story open in its development, indeterminate and yet natural, and contemporaneous to every generation anew. Isn’t this the essence of modernity? There is one key moment on which Serres’s inversion depends : light speed may well be “real time,” but it is not “instantaneity” or “immediacy” — rather, we must assume a universal “tense-ness” (Zeitlichkeit) proper to the totality of quantum-physical matter. Light speed must then be understood in relation to this tense-ness : it manifests the tense-ness proper to the totality of quantum-physical matter in its proper activity. The physical nature of the universe is neither static, mechanical, nor dynamic; it is radiating and active, which today’s science refers to with the term “radioactivity.” Galaxies are born from, and are bearers of, radiation emitted from the activity of nucleosynthesis. From a quantum-physics point of view, this radiation is what we call “light.” In each of the myriad galaxies, light matter is emitted from a sun. And it is this radiation of light that contemporary physics depicts in the technical image of a spectrum. In the totality that is depicted as a (whole) spectrum, light is called white — the sum of all the colors that the spectrum distinguishes according to the variable frequencies of white light.50 Now, if real time refers to the tense-ness of light speed proper to a universe that is not only dynamic but also expanding, Serres insists that there must be a kind of storytelling that corresponds to such universality. There must be a kind of storytelling that “locates” itself in the peculiar tense-ness of this fourfold universal activity. For Serres, thinking itself is this storytelling : “What is thinking, in fact, if not at least carrying out these four operations : receiving, emitting, storing, processing information?”51 Thinking comprises all of the attributes philosophy has endowed 49 Ibid. 50 See Michel Serres and Nayla Farouki, eds., Le trésor: Dictionnaire des sciences (Paris: Flammarion, 1997). 51 Serres, “Information and Thinking,” 1.
it with in the past : judging, reasoning, understanding, conceiving, imagining, remembering, discerning, delineating, measuring, expressing, articulating, etc., but it never strives to master an object (or a subject matter, a theme) by revealing its bare identity. Thinking is storytelling for Serres because its dignity (power) consists in preserving and transmitting truth, not in possessing or subjecting it. She who is a masterful thinker, then, is she who knows how to masterfully not know what she preserves, transmits, and keeps in circulation. I will try to show that such storytelling, for Serres, is intimately tied up with painting : a spectrum is the totality of all colors — the Eigenvector, the generic characteristics of all colors. Thus the question becomes how to “paint articulately” the noisiness that is matter-in-terms-of-a-spectrum. If there can be a kind of storytelling here, it is because, unlike for Kittler, a spectrum counts for Serres as the “elementariness” of geometry, as constituted by symbolism and not by immediate physical expression. Its form is, ultimately, mathematical. The spectrum is a topological homology in time, an apparatus, while a technical image that depicts a spectrum is a snapshot of this apparatus’s dynamics at a certain point in time. Images of spectra do not, properly speaking, represent anything specific; instead they facilitate the transmission and exchange of something arcane that is being conserved and kept invariant in circulation; they facilitate a “technical fiction” that conserves and transmits a “physical plot.” This is not merely a metaphorical manner of speaking, for light in today’s astrophysics indeed facilitates the exchange and circulation of energy quanta (light in quantum physics has particle-like properties because the “packages” [photons] in which light is discerned, measured, and depicted in spectral analysis are distinguished according to varying frequency rates that depend upon the “energy load” they “carry”). And energy is defined ultimately in no qualitative way at all, but solely as a quantitative invariant whose assumption allows for qualifying matter in its specific forms (“matter” as the Other of “light”).52 All one assumes to know about energy is that its total amount in the Universe is invariant — energy cannot be created, nor can it decay.53 The storytelling Serres envisages, hence, must be considered as symbolic or mathematical storytelling, as storytelling that works according to what the information-technological paradigm of communication suggests, to which Serres reverts. But how can it be tied up with painting? Serres wrote an article entitled “Noise” on Balzac’s 1845 short story “Le chef-d’œuvre inconnu,” translated as “The Unknown Masterpiece.”54 I will try to elaborate on this relation between Serres’s kind of storytelling and painting by 52 See Richard Feynman, QED: The Strange Theory of Light and Matter (Princeton, NJ: Princeton University Press, 1985). 53 See Yvette Kosmann-Schwarzbach, The Noether Theorems: Invariance and Conservation Laws in the Twentieth Century (Vienna: Springer, 2011). 54 Honoré de Balzac, “The Unknown Masterpiece,” trans. Ellen Marriage, Project Gutenberg, http://www.gutenberg.org/files/23060/23060-h/23060-h.htm.
discussing the plot of this story. This discussion itself will be “communicational” and “narrative” in the sense that it seeks to “actively” preserve the issue at stake in the plot “depicted.” “Actively” means that I will add something to how both of them retold that story; this is what each of them did as well. Balzac tells the story of two historical figures, painters, who both tried to tackle the same problem : whether or not perfect beauty can be discerned from the relation between nakedness and the model in nude drawing or painting. But in his story of these two historical characters, Balzac “doped” the historical “data” to be documented by adding a fictional character, a third painter whom he calls Frenhofer, as a symbolic operator that acts upon and complicates the documented “plot” (of a real event) and that allows Balzac to dramatize that plot fictionally. With this “tactical move,” Balzac’s realist account turns into storytelling (rather than being documentary-like), and it raises a novel aspect from the historical “plot”; namely, the issue of a categorical difference between drawing / sketching (working with lines) and painting (striving to work with color alone). Serres, in turn, retells that plot, and how Balzac communicates it, by applying a tactical move once more : he in turn “dopes” the plot by endowing it with an aspect that neither Balzac nor the two historical painters raised. He introduces the element of a theoretical term from architecture — “ichnography.” Within the categorical term of ichnography, the depiction of that same plot (how beauty can be discerned from the relation between nudity and the model) comes to “conserve” and “transmit” again all that has been told, and then some more. Balzac’s interest was in how this can be enriched in distinctiveness by extracting a notion of drawing from painting (rather than interpolating a notion of painting from drawing). Serres, for his part, goes from Balzac’s planarity to voluminosity (he introduces the architectural terms for planning that keep the three dimensions distinct from one another by introducing the infinitesimal into each one separately — namely, ichnography, orthography, and scenography).55 In Serres’s account, the notion of ichnography is capable of establishing a contractual kind of writing, as we will see. Painting affords a kind of writing that cannot be reduced to any other form of writing, because its encryption constitutes a graphism that is not “whole” — it needs to be doubly articulated to be a graphism. It needs to be articulated in terms of form to correspond to the substance’s expression of what the form contains, and it needs to be articulated in terms of content to correspond to what the articulation of the form expresses.56 It is a kind of writing that literally inscribes “nothing,” by placing a signature whose subject 55 For a rendering of this classical triad into the paradigm of computational architecture, see Ludger Hovestadt, “Toward a Fantastic Genealogy of the Articulable,” in Domesticating Symbols: Metalithicum II, ed. Vera Bühlmann and Ludger Hovestadt (Vienna: Ambra, 2014), 46–93. 56 For the theory of double articulation see Louis Hjelmslev, Prolegomena to a Theory of Language (Madison: University of Wisconsin Press, 1961 [1943]).
does not, properly speaking, exist. This is why, in Serres’s account, the unknown masterwork is indeed a masterwork — because it is both at once “unknown” and “signed.” It “has” a master, but no master can “own” it : by leaving the trace of something unknown that is absent, the signature marks a void that is universal not in the sense of a Great Vacuum, but in the sense of a vault or Crypt. It is a writing capable of remembering what has not yet happened, and even what might not ever happen. It is a kind of writing that transmits between generations without needing the assumption of a linear order of descendance and sequentiality. In giving us this notion of ichnography, Serres need not, as McLuhan does, spiritualize communication and announce a novel age of speech based on post-alphabetic presentism (the global village). Nor need he, as Kittler or Agamben do, totalize History and submit to it as the subject of an entirely generic kind of humanism. I will try to show in my retelling of the plot (whether beauty can be depicted from the relation between nakedness and its model) that, with Serres, we can expect the dawn of an alphabetic absolute from exactly those developments that lead the former two to announce a post-alphabetical era. The storytelling Serres envisages, we said, must be considered as symbolic, or mathematical storytelling, as storytelling that works according to what the information-technology paradigm of communication suggests, to which Serres reverts. This mathematicness, this symbolism, Serres links to painting via this notion of ichnography. My point is that ichnography introduces a categorical aspect into how we can “paint,” how we can depict something entirely in terms of “color,” which links the canvas of a painting to the spectrality of light as color in its purity prior to the painting that takes a snapshot (a technical image) of this spectrality. Via the categorical aspect that ichnography introduces, both spectrum and painting are regarded as forms of writing in Serres’s peculiar “graphism” that is not “whole” without being “read”; a graphism that needs to be doubly articulated by both the writer and the reader; a graphism, hence, that is essentially contractual: like a contract that expresses a mutually agreed assurance of what is not going to happen. In order to assure what is not going to happen, a contract tries to articulate all possible aspects of something the parties agree (by signing it) is not going to take place. All the while, and this distinguishes a contract from an order, the parties to a contract are not subject to an external authority that is to be held responsible for guaranteeing that this “something” (which is not supposed to happen) be “represented” in an adequate manner. A contract is signed if both parties withdraw from the stance that could claim legitimate authority over the other. In this sense, Serres’s theory of the “light speed” of “real time” that media reality is approximating with its electronic communication-technology infrastructures can be said to agree with what McLuhan and Kittler (and others) mean by characterizing our time as a post-alphabetic age. But we have to look carefully.
In the paradigm referred to as the Gutenberg Galaxy, writing was meant to have an authoritative status firmly tied up with, and legitimated via, the role of an author in relation to her statements, counting on her authenticity and sincerity with regard to knowing how to render the representation of an object (of discourse) plainly, in an uncorrupted, a-subjective manner. In science and philosophy, this author-driven legitimation framework manifests in argumentative discourse and in the technical precision of experimental practice. But in art, it manifests — more straightforwardly perhaps than in the former two — in the attempted act of capturing in painting, drawing, or sculpture a model’s “neutral nakedness” — the very plot depicted and doped by Balzac and Serres in different manners. Instead of truth, it is nakedness that here ought to be called “neutral.” Just as an experimental scientist strives to capture truth in its nonbiased, uncorrupted quality, any artist is striving — against all odds — to encounter, to glance at, to capture and preserve, by drawing, painting, or sculpture, a model of purity in a manner that strips the pure off the model’s live and finite body. Such a successful act of capture would preserve beauty in its pureness. Isn’t that why we call nude paintings / drawings / sculptures, at least in German, by the term Akt? Nudity cannot be worn; nudity cannot be represented — just like “actuality” in the Greek sense of infinitive activity, energeia, which can never be referred to without imposing form upon it (de-fining it), and hence corrupting its infinitive-ness by putting it into proportion, by applying regularity and measure. New media theory (as opposed to the theory of mediation that forms the backbone of transcendental idealism) readily declares the very possibility of such an act of capture impossible. All acts of capture are mediated, whether by aesthetic categories, history, or a cruelty of the real itself, alias History. Hence it is this very notion of a legitimate authority tied to an alphabetic order that Serres also wants to dispose of. But what characterizes Serres’s stance as unique is that he suggests replacing the concept of authority with a concept of mastership whose subject, however, is indefinite because it is never wholly present nor wholly absent.57 It is the subject of his novel humanism — a humanism whose dignity (power, nature) consists in how different generations succeed or fail in preserving their mark of distinction : the possibility for mischief, the possibility for blessed disobedience. If generations indeed build together on a pyramid 57 That is why the law must remain undecided in how to address this subject of Serres’s novel humanism. This aspect is worked out by Serres in his book The Natural Contract, trans. Elizabeth MacArthur and William Paulson (Ann Arbor: University of Michigan Press, 1995), where he makes the strong case that the fragility of the earth, as we begin to experience it in our concern for the planet’s climate, needs to be addressed primarily in the terms of law and philosophy together with logic and science — a constellation, he argues, that ecology does (can) not provide.
See also my article, “Cosmoliteracy: The Alphabetization of Nature” (lecture manuscript, conference of the Society for European Philosophy and Forum for European Philosophy, “Philosophy after Nature,” Utrecht, September 3, 2014), http://monasandnomos.org/2014/09/08/on-michelserres-book-the-natural-contract-1990-cosmoliteracy-the-alphabetization-of-thenature-of-thought/.
of shared knowledge, as a popular way of thinking about science suggests, then the “mastership” that organizes the subject of this humanism consists in masterfully not knowing what is being kept safe by this structure of collective architecture, whose beginning — arché — never ceases to happen in real time as long as this knowledge is considered to be universal knowledge in the sense discussed above : knowledge that demands obedience without submission, and that is embarrassed, humiliated, and exposed if “served” in the submissive manner of a false modesty that claims merely to represent it without contributing, thereby occluding its clarity or adding to it. For Serres, the pyramid of knowledge does not store a resource; rather, it is a crypt that keeps originality itself as the secretive well of a power of invention that can be sourced continuously, without ever growing distant in time. Thus, in my own retelling of the plot of the story, I will furthermore “dope” the way this plot can be told. I will attempt to endow Serres’s notion of ichnography with a grammatical case that is capable of addressing the locus in quo of the pyramid, the crypt, that is being built on the distributed and discrete base of ichnographical — architectonic — writing. I will call this grammatical case “the case of the cryptographic locative.” Of this locative I want to postulate that it is capable of addressing, and hence articulating, the locus in quo where the plots of Serres’s Great Story are being preserved — that is, the locus in quo of knowledge. Grammatical cases can be seen as categories that organize the instantaneity of a “real time” that pertains to an alphabet — they articulate all possible relations that can be expressed in an alphabet-based language (the possessive, the dative, the nominative, the accusative, or whatever cases a language may distinguish).58 Cryptography can now be seen as articulating the space “in between” different “alphabets” in a “comical” way, not unlike the way light and colors articulate the space in between different things. Hence, we can imagine the totality of the cases that can be expressed by the grammatical categories as building a spectrum, just as we think of the totality of all colors as building a spectrum. The cryptographic locative then articulates this spectral mediacy of the totality of grammatical cases. It articulates this mediacy (the “nakedness” of pure grammatical relations) by (1) depicting the sum total of the possible cases (the topological homological invariances) they specify in “analogue” manner — i.e., in the technical image that depicts a spectrum where frequency amplitudes are the sole criteria of distinction; and (2) by establishing “digital” communication channels on the spectrum basis of this totality of all cases. In this way, the cryptographic locative attributes a locus to what is real without ever actually having happened and taken place. In other words, it demarcates traces of an encounter between the 58 There are languages in use today that distinguish as many as twenty-something different cases. See Louis Hjelmslev, La catégorie des cas: Étude de grammaire générale (Munich: Fink, 1972).
real and the symbolic, and it is capable of preserving a kind of possibility that can never be fully known or exhausted. I would like to think of this grammatical case of the cryptographic locative as indexing what happens in the peculiar tense-ness proper to the radiating, emitting, and absorbing communicational activity of “real time” — the universal activity that leaves on our planet traces of some of all that happens “at the speed of light” in the galaxy to which the earth belongs.
The Unknown Masterpiece: The Depiction of Nothing-at-All
Serres introduces his article “Noise” with the words: “The story I am going to tell happened in the beginning of the seventeenth century, a time of noisy quarrels whence came the body of reason, beauty, genius that we admire today.”59 But at the same time, Serres’s storytelling has nothing to do with keeping records of events: “The story I am going to tell and that Balzac tells could not have happened, never happened.”60 I would like to consider taking this setup for how Serres’s story is to be encountered literally; that is, under the assumption of an alphabetic absolute. With such consideration, I want to ponder the possibility of addressing the fictional in a particular manner that neither opposes it to the real, nor subsumes either to the terms of the other, and that hence effectively does not subject one to the regime of the other. My interest is moved by Serres’s statement that in this story, we can witness a meeting between the real and the symbolic. He challenges our imagination: “Who has ever seen a meeting between the real and the symbolic in the story?” Balzac did witness such a meeting, Serres claims in the continuation of his text; he can know this, he says, because of the manner in which Balzac signed his text. Let us first recall in broad strokes the plot of Balzac’s story. There are three painters : young Nicolas Poussin, the middle-aged court painter Frans Pourbus (whom Balzac calls Porbus) — both of whom were real seventeenth-century painters — and Balzac’s invented older artist, Maître Frenhofer. Frenhofer visits Porbus at his lodgings, where he meets young Poussin as they are both arriving. Porbus lets both of them in, assuming on no particular grounds that Poussin is with Frenhofer. Frenhofer and Porbus realize only later that neither of them actually knows Poussin. A conversation then ensues about Porbus’s latest work, a painting of the Virgin Mary, during which Frenhofer criticizes the painting for lacking life. When Poussin objects, the older artists grow aware of his anonymity and challenge him to prove his right to be in the studio with them by producing a sketch. This Poussin does in a manner that sustains their interest in him, and he is officially welcomed 59 Serres, “Noise,” 48. 60 Ibid.
into their company. To illustrate his own emphasis on life and movement, Frenhofer then applies his own artistic touch of color to Porbus’s Virgin Mary, making the figure appear alive, as he had insisted he could. Later on, they discuss a painting by Frenhofer’s own master, whose name is Mabuse, and who is absent from their meeting. It is a painting of Adam. Frenhofer makes the same critique of his master’s painting as well : that it lacks liveliness. Then he begins to talk about a painting of his own that he has been working on for ten years, and that no one has so far seen. Like Porbus’s own painting, it is an attempt at capturing perfect beauty in paint — beauty that is engendered without ever having been received in an act of conception : a Mary that will have been without ever actually being “here” or “anywhere” — i.e., beauty as pure nakedness, beauty in the temporal form of a future past that could only be real if it were capable of bracketing out the present in a manner capable of preserving its actuality indefinitely, toward both past and future — in other words, a present tense that never actually happens. Pure nakedness, the painters well know, cannot possibly be embodied by a model that poses for a painting. Serres now stresses the generational setup of Balzac’s story, while “anchoring” all protagonists in one shared spatiotemporal “climate” : “Balzac depicts three painters, contemporaries and successors. It took place in bad times when stubborn men without any hope were keepers of the sacred flame, men who were certain that they had to keep it alive,” Serres tells us.61 Hence the continuity between the generations is established by the “sacred flame” — all the protagonists knew that “they had to keep it alive.”62 Poussin is the young one, Porbus the adult one, Frenhofer the old one, and Mabuse, Frenhofer’s master, is absent. All of them aspire to achieve one and the same goal in their work, namely “to keep the sacred flame” without knowing how.63 All of them find inspiration in their models, who are also their partners in life. Poussin lives with Gillette, “a perfect beauty. Go to Greece or Turkey, go anywhere, you won’t find her match.”64 Porbus, the adult, lives with Marie, “an image that is alive in spots and not in others. A mixed set.”65 Frenhofer, the old one, lives with Catherine Lescault, “a courtesan, that beautiful noiseuse who does not exist.”66 All strive to keep the flame by taking their loved ones as models for their painting. But : “The tree’s direction is one way for men, as the brush loses power as time goes by. For women, it is the other way as beauty wins its calm presence as time goes by. Time goes one way for the maker [facteur], the other way for the model. Nicolas, 61 Ibid., 48. 62 Ibid. 63 Ibid. 64 Ibid., 49. 65 Ibid., 49. 66 Ibid.
while drawing, lives next to being itself; the old man, the creator, has lost it. Porbus is in the middle, uneasy, undecided, floating around. His picture fluctuates and doubts, it passes the river of time.”67 After this depiction, Serres stops and begins anew. “Let us try to forget the simplistic cascade in which what he makes visible in turn makes visible a picture that in turn makes visible what …” Serres invites his readers.68 What cascade? The three men follow each other, according to the order of Mabuse, just as priests are consecrated time after time, according to the order of Melchizedek. The three painters follow each other, according to the order of representation, the proper name of the dead man cannot fool us. All three have turned around to see their own pictures while, naked and forgotten, beauty cries behind them. As for the three women, they follow each other according to the order of being. Not according to the order of appearance but according to the scale of being.69 So how to begin anew? How to mobilize one’s doubt, one’s strength, to live up to a commitment that one needs to achieve (keeping the sacred flame) without knowing how? “The tree of life comes out of the picture, just as the tree of representations, obviously, goes into it. Why these two times, these two directions, these two ladders, these two trees, do they form a cross? Is this a very old, very absurd way of thinking?”70 The story that Serres sets out to tell, and that he claims happened in the seventeenth century, in the noisy quarrels of that time — while at the same time being a story that did not happen, and even more that could never have happened — introduces a manner of narration, of storytelling, that can do without these two times. It is a story of time in generational terms that does not mold the tree of life, iconically, into the form of a cross. In the picture, according to Serres, the tree of life and the tree of representations leave traces of an encounter. Traces in which, literally, nothing can be seen, because nothing is being depicted — “But sooner or later he’ll notice that there’s nothing on his canvas!” Poussin will comment when glancing at Frenhofer’s completed masterpiece at the end of Balzac’s story.71 And Frenhofer himself will despair : “I’m an imbecile then, a madman with neither talent nor ability. […] I’ve created nothing!”72 Nothingness cannot possibly be mastered according to Balzac’s story; hence Frenhofer cannot possibly identify with his masterpiece by seeing in it the completion he has achieved. Instead, he views it as a failure, destroys it together with his entire oeuvre, and dies that same night. 67 Ibid. 68 Ibid., 50. 69 Ibid., 49. 70 Ibid., 50. 71 Balzac, “The Unknown Masterpiece.” 72 Ibid.
By suggesting that we take this “nothing-at-all” in a literal manner, do I not, in my reading of Serres, positivize what needs to be negated, for the sake of any ethics — if I may say so, a practice of keeping the sacred flame — that might once have been? The ethics we are looking for would have to be formulated in a strange tense that conjugates a kind of mightiness that will once have been without ever actually having been, as I specified earlier on. That is, an ethics, a form of life, or rather : the temporal mode of a form of life that cannot possibly be inferred from something that actually did happen. Are we not asking, thereby, for a practice that is, oddly, disembodied? In this peculiar story Serres narrates, which centers on a painting he claims capable of somehow capturing “a meeting between the real and the symbolic”73 — and this without being capable of actually depicting it — does Serres not lead us astray, leaving us behind somewhat lost, trying to grasp an empty center, dangerous and unsettling like the inner eye of a tornado, an empty center that swallows up and noisily distributes what appears to have been relatively peacefully at rest? Is it not a particularly violent destruction that I am trying to contemplate here? Thinking along these lines, we would be forgetting that this painting at stake, just like its painter — le chef-d’œuvre inconnu, the unknown masterpiece, and its fictional master (Frenhofer is the only character in the story that is entirely invented by Balzac) — exists only as a formulation that is fictional. Is fiction then that strange locus in quo that is capable of hosting as its “cases” formulations in that strange tense that conjugates a kind of mightiness that will once have been without ever actually having been? In other words, what would it mean to say that the character of fiction does not apply to mightiness itself — thereby distinguishing fictional mightiness as false pretense, as fake at best and a crime at worst, because of its impotence due to its character as invention, against a kind of nonsymbolic mightiness that must count as “real” and therefore “true” and powerful — but to the temporal tense of a symbolic mightiness in which the fake actually exerts real power? One cannot deny a sequential order of time, Serres seems to be saying, by foregrounding the generational setup of Balzac’s story. But its sequentiality does not follow directions : “The tree of life comes out of the picture just as the tree of representation goes into it.”74 Serres seems to maintain that we would be capable of rethinking time in a manner that is neither continuous, nor fragmented, nor linearly progressing, if only we begin to value (discern, estimate, rate), in our stories (narrations), a life of the fictional — of that which is invented or imagined in the mind — just as we value the liveliness of all things real. Balzac witnessed a meeting between the real and the symbolic, and he did so in the story. If we read this “in the story” as a fictional locative, then it will be a locative that is not empty of meaning but rather 73 Serres, “Noise,” 48. 74 Ibid., 50.
one that can sustain any meaning. It would be a cryptographic locative, that is, because it is symbolic — empty neither in the sense of demarcating, nihilistically, the reality of a non-place, nor in the sense of a determinate and positively locatable location, a place of the negative. Such “emptiness,” I want to suggest, is the emptiness of a cryptological code that is pure capacity — relative strictly to the meaningfulness with which one is capable of endowing the symbolic any-structure of the meaning transmitted. A phonetic alphabet, like the Roman one, for example, can be viewed as such a code : it comprehends a finite stock of elements that are ordered in a particular sequentiality, the characters expressed by letters, and in terms of these letters all words that can in principle be uttered — meaningful once, now, in the future, or even never — can be expressed. There is a certain materiality to the utterances of articulated speech, and a distinction between literal and figurative speech, truth and fiction, argument and rhetoric, can be applied to them only retrospectively. In that sense, the formal character of the alphabet is that of a code system, just as the diverse and so-called probabilistic alphabets with which engineers are computing today, or the many phonetic alphabets that preceded the Greek one (which is usually referred to as “the first” phonetic alphabet in history).75 My claim then is that the cryptographic locative can express “nothingness” in a “literal” manner, because the letters of the alphabets it uses are the atoms of a materiality of articulated speech — a materiality that presents itself in no form, a materiality that is furious, unorganized, yet not inarticulate, a materiality that Serres calls “noise.”76 75 There have been “phonetic alphabets” — meaning scripts that do not provide inventories of things with the letter series they express, but rather a metrical system to note how one speaks about the things one strives to inventorize — as early as 2000 BCE. However, most of them wrote only in consonants, producing a kind of “extract-text” that can be read by many cultures even if the ways they articulated and pronounced the read sequences of letters were so different that the speakers could not understand each other in speech — though, based on such scripts, they could in writing. For the political implications of different scripts, and the different literacies they produced, see Harold Innis, Empire and Communications (Toronto: Dundurn Press, 2007 [1950]). Still today, for example, the Arabic language struggles with its tradition as a pure consonant script. Mohammad’s prophecy has been recorded in the Koran in a consonant script, and already by the early Renaissance there were many different ways of reading the prophecy — giving rise to different Islamic cultures. See the article by Suleiman Mourad and Perry Anderson, “Rätsel des Buches: Zur Geschichte des Korans und der historischen Dynamik des Islams,” trans. Florian Wolfrum, Lettre International 106 (Fall 2014): 118ff. Greek phonetic script introduced for the first time the means to write down explicitly a manner of speaking (vocalization) that has not actually been spoken by any one people in particular : it is a script applying vowels together with consonants, invented artificially in order to establish a common tongue that could be learned easily by all parties contracted in networks of trade relations in the Mediterranean area.
With regard to the Greek vowel alphabet, see Innis, Empire and Communications, and Eric Havelock, Preface to Plato (Cambridge, MA: Harvard University Press, 1982 [1963]) for a discussion of how this prehistoric genealogy of the phonetic alphabet relates to the “mysterious” leap into new levels of abstraction produced and witnessed by the Greek culture in antiquity. 76 See Michel Serres, The Birth of Physics, trans. Jack Hawkes (Manchester: Clinamen Press, 2001 [1977]).
A cryptographic locative cannot possibly work within a scheme of representation because it calls for an infinite base, which, following Serres, we can learn to call “ichnography.” He seems to be telling us that it is the infinite base of an ichnography that is being narrated in fiction, and that constitutes fiction as a locus in quo where the real and the symbolic can meet. Let us now pursue this line with greater care. The term “fiction” comes from the Latin fictionem, “a fashioning or feigning.” It is a noun of action from the past participle stem of fingere, “to shape, form, devise, feign,” originally “to knead, form out of clay,” from PIE *dheigh-, “to build, form, knead,” which is also the source of Old English dag, “dough.”77 Since the late sixteenth century, fiction also demarcates “prose works of the imagination” in distinction to dramatic works of the imagination. From that same time onward, there is also a legal sense of the word, according to which law was characterized as “fiction.”78 Related words include the Latin fictilis, “made of clay, earthen,” and fictor, “molder, sculptor,” as well as (ascribed to Ulysses) “master of deceit,” drawn from fictum, “a deception, falsehood, fiction.” What strikingly distinguishes the notion of “fiction” from that of “illusion” is, as we can see in this genealogy of the term, that it was used in a sense that could perhaps be characterized as “uncritical” : unlike a fiction, an illusion makes plain that it operates within the realm of the apparent, and hence presumes, for its very identity, a certain distance and mediacy related to the faculty of understanding, and this faculty’s capacity for judgment. Such mediacy is inherently problematic in relation to fiction, because fiction does not operate within a representational framework. This is exactly the point Serres makes so strongly in his narrative mode of “storytelling.” Let us carefully and slowly try to understand how this might work. The masterpiece painting around which the plot in our story unfolds is Frenhofer’s painting of his imaginary mistress, Catherine Lescault, also called “the beautiful noiseuse.” This painting “is not a picture,” Serres tells us, “it is the noise of beauty, the nude multiple, the abundant sea, from which is born, or isn’t born, it all depends, the beautiful Aphrodite.”79 Beauty, in its pure nakedness, is neither to be seen in a woman, a female God, nor in a feminized reification of nature that would characterize physics in its objectivity. Such beauty can only be imagined in statu nascendi, born from the foam of a noisy sea, as the fictional impersonation of the anadyomene : “We always see Venus without the sea or the sea without Venus, we never see physics arising, anadyomene, from metaphysics.”80 The schema that associates an active principle, form, or intellect imposing itself upon receptive and nurturing nature is 77 Online Etymology Dictionary, s.v. “fiction (n.),” http://www.etymonline.com/index.php?term=fiction&allowed_in_frame=0. 78 Ibid. 79 Serres, “Noise,” 54. 80 Ibid.
thwarted in Serres’s account. Considering the fictional as distinct from the illusionary, he must not see in form a schema or outline of the true that needs to be substantiated — filled with materiality — in order to constitute knowledge. Rather, form itself is a figuration of the unknown rising as the anadyomene : form is “information that is phenomenal,”81 and it “arises from chaos-white noise.”82 He continues : “What is knowable and what is known are born of that unknown.”83 Serres refers to “that unknown,” the anadyomene, also as “chaos-white noise” — with that, he separates that unknown from an unknown that would merely host the impossible as the negative of the possible, or the improbable as the negative of the probable. In the unknown, Serres considers that “there is nothing to know.”84 I want to suggest that (1) if we consider Serres’s understanding of a story as the locus where the real and the symbolic can meet, then (2) we can reason and make sense of this “nothing” as something neither positive nor negative, but (3) as the any-capacity proper to an alphabet that constitutes a cipher. What I would like to read into and extract from Serres’s text is that the question of “mediacy” can be approached in a different manner once we can develop a less counterintuitive and less disturbing idea of (1) such “nothingness” that is, essentially, “anythingness”; (2) its communicability into “somethingness” through encryption; and (3) the “originality” of the “secret somethings” that are being sourced from such a symbolic nature as “nothingness/anythingness.” We can develop such an idea from looking at how mathematics deals with the zero. My assumption thereby is that the zero in mathematics entails all the problems we have encountered with regard to the nothingness that Frenhofer has painted in Balzac’s story, that nothingness of which Serres, in his reading of Balzac’s story, insists (against Balzac) marks the completion of the unknown masterwork, not its failure. So what is a mathematical “cipher”? The notion designates, on the one hand, the zero in mathematics, and, on the other, it is a generic name for numerical figures (as Ziffer is in German). Let’s begin with how we refer to the zero. Of course we have an encoding for it with our symbolic notations of numbers. This may sound rather unspectacular, but we need to consider more precisely what it entails. We have in mathematics, or more precisely in algebra alone, an intermediate level of notational code and ciphering between “notational signs” and what they “indicate.” This intermediate level is introduced because algebra operates in abstract symmetries (equations) : algebra is the art of rendering the terms in which a formula (an equation with unknowns) is expressed into mappings of possible solutions for the unknowns. 81 Ibid. 82 Ibid. 83 Ibid. 84 Ibid., 48.
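To give this an elementary face (my own illustration, not an example from Serres’s or Balzac’s text) : one and the same “identity” stated by a quadratic equation can be articulated in several ways and rendered into the mappings of its solutions, for instance

\[
x^2 - 5x + 6 = 0 \quad\longleftrightarrow\quad (x-2)(x-3) = 0 \quad\longleftrightarrow\quad x = \frac{5 \pm \sqrt{5^2 - 4\cdot 6}}{2} \in \{2,\,3\}.
\]

Each articulation (the expanded terms, the factorized partition, the solved mapping) expresses the same absent “identity” differently; no single one of them exhibits it all at once.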
From a mathematical point of view, the mappings rendered by the articulation of a formula (an equation) are varying expressions of one and the same thing — while that “one and the same thing” itself remains “absent.” Neither of the articulable expressions of the terms (articulated in how the terms of the equation are factorized, partitioned) is ever capable of expressing explicitly and exhaustively all at once whatever it may be that is being articulated in a formula (the “identity” expressed in an equation). There is a constitutive level of mediateness involved, which never lets the mathematician forget that what one seeks to express by stating its identity in terms of a formula must be considered as being of a vaster extension than any one discretion of its symbolic expressions can ever be. In other words : a function is always derived from an equation that has been rendered solvable. We can conceive of this “rendering solvable” as “mediation” that is peculiar to the relation of algebraic “idempotency” and its capacity to express “identity” inversely. And we can conceive of any version of algebraically articulated “identity” as the symbolic establishment of a tautological relation in a manner that is not “absurd” — precisely because of this tautological character that expresses one and the same thing differently. Like the allegorical elephant in the room full of blind people eagerly describing to each other what they perceive to be “present,” the algebraically articulated “identity” becomes more and more distinguished and rich in qualities as the quarrel of “getting it right” goes on. Every claim, if it is to persuade, has to establish a code that can be shared. Now in what sense can we say that every code is constituted by a “cipher”? The establishment of a code requires a projection space in which a structure is doubled up and mirrored around a neutral point, such that a fixed order of reference can be assigned between the doubled-up structures. Cipher is another word for this neutral point, which we commonly call the zero. A code is always participating in the game of encryption. An easy example illustrating the cryptographic or cryptological relation between a code and a cipher is a code for encrypting texts.85 One takes a finite set of ordered elements, in this case the alphabet, duplicates it, and mixes up the order of the elements in the duplicate version. Perhaps one uses another notation system like numbers or figures, or perhaps one may also decide to introduce further elements to the duplicate version that are not contained in the duplicated one in order to raise the difficulty of “breaking the code” — that is, of figuring out the structure of the transformations applied between the two. 85 It depends on how we treat the relation, whether primarily analytically as in cryptology, or primarily synthetically as in cryptography.
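A minimal sketch of this duplication-and-permutation in Python (the names and the shuffled duplicate are mine, purely illustrative) :

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def make_code(seed):
    """Duplicate the ordered alphabet and mix up the order of the
    duplicate; the fixed pairing between the two structures is the code."""
    duplicate = list(ALPHABET)
    random.Random(seed).shuffle(duplicate)
    return dict(zip(ALPHABET, duplicate))

def encrypt(plaintext, code):
    return "".join(code.get(ch, ch) for ch in plaintext.lower())

def decrypt(ciphertext, code):
    inverse = {v: k for k, v in code.items()}  # the mirrored structure
    return "".join(inverse.get(ch, ch) for ch in ciphertext)

code = make_code(seed=42)
secret = encrypt("the beautiful noiseuse", code)
assert decrypt(secret, code) == "the beautiful noiseuse"
```

Breaking such a code means reconstructing the permutation from the ciphertext alone; establishing it means agreeing on the pairing around which the two alphabets are correlated.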
The establishment of a code depends upon a place-value grid or frame within which it is possible to locate and correlate the positions occupied by values. This allows for remaining undecided with regard to the substance of the value, or algebraically, the one partition scheme that determines judgments (prime parts, Ur-teile). Thus considered, “values” have an essentially cryptic character — one that can only be clarified by giving a “figure” and associating a “face” to it as we learn to “enfamiliarize” and “decipher” it. I put decipher in quotation marks to highlight that here (as in the allegorical space with the elephant and the blind people), we are speaking about a mode of deciphering that has to invent the code that makes that very decipherment possible — a kind of deciphering that does not hack or intrude into a “secret,” but one that renders communicable what we might perhaps best call “an arcane regularity” — a regularity that remains arcane, even while being rendered communicable, sharable, public.86 As one comes to “master” such regularity, one literally “masters nothing,” in a manner in which “nothingness” must not be addressed in either positive or negative terms. We have to understand the secret at stake in a sense that is chemico-physical, as a secretion, from the Latin secretionem, “a dividing, separation, a setting apart.”87 In other words, the secret is not something initially clear, pure, or plain, whose possibility of discretion has been rendered occult, difficult, exclusive. What Serres suggests in his reading of “The Unknown Masterpiece” as the beautiful querulent is that such assumed purity, clarity, or plainness is, in fact, initially noisy — a mixture of heterogeneous factors, factoring in something that can never be known exhaustively and as a whole. A secret in that sense turns into a well or source that is, essentially, public : no one can control all the articulations of the circulating secret that can be “sourced,” set apart and rendered communicable, by learning to master its well — which, for Serres, is nothingness as primary noisiness.88 With this, we come close to the second genealogical lineage of the notion of the cipher, one which departs from and builds upon the first one (cipher as zero) : in number theory, the cipher not only stands for the zero, but also for the numerical figures as they are expressed in the terms of a common base like the hexadecimal number system, or today the decimal number system. Such positional systems are organized in what is today called logarithmic tables — a term introduced by John Napier in the seventeenth century, expressing what he called “ratio-numbers,” or numbers put in proportionate notation, from logos, proportion, and arithmos, number. 86 See my article “Arché, Archanum, Articulation: The Universal and Its Characteristics,” in Bühlmann and Hovestadt, Domesticating Symbols, 112–77. 87 Online Etymology Dictionary, s.v. “secretion (n.),” http://www.etymonline.com/index.php?term=secretion&allowed_in_frame=0. 88 This manner of thinking strikes me as so interesting because it suggests the counterintuitive or at least apparently paradoxical idea that there might be a kind of mastership that, through privacy, produces and renders distributable public goods — commons — rather than accumulating them and claiming them as private property, on the grounds that one (more so, or differently so) masters it. See also my article “Articulating a Thing Entirely in Its Own Terms or What Can We Understand by the Notion of Engendering?,” in EigenArchitecture: Computability as Literacy, ed. Ludger Hovestadt and Vera Bühlmann (Vienna: Ambra, 2013), 69–127.
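The sense in which such a positional system is itself a code can be recalled with an elementary arithmetical illustration (mine, not the text’s) : one and the same number is spelled out by different ciphers depending on the base of the proportionality,

\[
2015 \;=\; 2\cdot 10^{3} + 0\cdot 10^{2} + 1\cdot 10 + 5 \;=\; 7\cdot 16^{2} + 13\cdot 16 + 15 \;=\; \mathrm{7DF}_{16}.
\]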
The decision regarding the base in which the proportionality is set up characterizes the notion of numbers as a particular code. It is within algebraic number theory that the positional logic of such notational systems itself is being thematized, in a manner that in the nineteenth century usually took the form of placing numbers on one infinite line — the so-called number continuum. Richard Dedekind and Giuseppe Peano introduced a general procedure for identifying numerical domains as number classes embedded and nested within each other as well as within that continuum (the rationals, reals, integers, etc.). The application of this procedure (called the Dedekind Cut) requires further and further levels of relative abstraction attributed to the algebraic symbols in whose bonds or relations numbers are now being expressed — numerical values are here subjected to symbols used as jokers, as placeholders with a “naked” or “pure” capacity to render countable an any-meaning that might not even yet be articulated. Algebraic symbols are at work in identifying the positional logics of these purely symbolic domains, up to the situation we have today where number theory is understood by many as the very object of cryptology / cryptography / cryptanalysis rather than as part of natural philosophy, as Frege, Russell, Whitehead, Husserl, and others have regarded the advent of Universal Algebra.89 Today, on an ordinary basis (in all electronic things and infrastructures) there are entirely abstract numerical bodies at work that are called “fields” in English,90 as well as a great diversity of abstract constructs that build upon them — with beautiful names such as “rings,” “lattices,” “sheaves,” and so on. In the perspective outlined here, these “names” of “algebraic things” (symbolic “things”) name secretions of nothingness — secrets rendered communicable because they are extracted from the inverse of what Western philosophy has been centering around for more than two millennia, namely the fantastic inception of the idea of universal, eternal, enduring, and persisting essentiality — that is, the notion of universal substance.91 89 Whitehead introduced this term to express that from the point of view of mathematics there is a multiplicity of systems of symbolic reasoning that cannot be decided in terms of supremacy on the basis of mathematical consistency criteria alone. See Alfred North Whitehead, A Treatise on Universal Algebra with Applications (Cambridge: Cambridge University Press, 1898). 90 The term “field” is a rather unfortunate and, arguably, even misleading translation of the German term Zahlenkörper, with which Dedekind introduced these symbolic numbers. The translation is unfortunate because the notion of the field suggests that no local organization differentiates one against another; fields are subject to the uniform forces of electromagnetism where all “locality” is but a function of this uniformness. The term “body of numbers,” on the other hand, puts all its emphasis on a certain “autonomy” or “self-maintenance” of such a local organicity. 91 Especially interesting contemporary studies in relation to this: François Laruelle, Principles of Non-Philosophy, trans. Anthony Paul Smith (London: Bloomsbury, 2014); as well as Jean-Luc Nancy’s interest in a notion of “exscription,” e.g. in “Exscription,” in The Birth to Presence, trans. Brian Holmes et al. 
(Stanford, CA: Stanford University Press, 1993), 319–40; and “Corpus,” in ibid., 189–207.
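The Dedekind Cut just mentioned can be given a concrete face with a minimal sketch (my own illustration; the names are hypothetical). The irrational √2 is never named or produced; it is fixed entirely by the predicate that separates the rationals lying below it from all the rest, that is, by the negation of its complement :

```python
from fractions import Fraction

def in_lower_cut_sqrt2(q: Fraction) -> bool:
    """Lower set of the Dedekind cut for sqrt(2): a rational belongs to
    it iff it is negative or its square stays below 2. The cut alone
    determines the number; the number itself never appears."""
    return q < 0 or q * q < 2

# The cut locates sqrt(2) to any desired precision without naming it:
assert in_lower_cut_sqrt2(Fraction(14142, 10000))        # 1.4142 lies below
assert not in_lower_cut_sqrt2(Fraction(14143, 10000))    # 1.4143 lies above
```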
If number theory could give us an inverse of universal substance instead of its axiomatic elements, as Frege, Russell, Whitehead, Husserl, and others were trying to establish92 — would that not help in coming to terms with those developments in nineteenth- and twentieth-century science that so trouble modernity’s grand idea of a Natural Philosophy? I am referring of course to all the issues already discussed in relation to the notion of “mediacy” and “media” : (1) to the centrality of “radioactivity” in physics, and its counterintuitive understanding of a quasimateriality of invisible light, or more precisely, the interactivity among particles in their emission and exchange of light that contains energy; (2) the associated “birth and death” of countless galaxies in an expanding Universe in astrophysics; (3) the depiction and technical control of such radiating activity via technical images called “spectra”; and (4) the spectrum-based, quantum-physical “substrate” of our contemporary form of technics in communication and computation.93 Let us return to the plot of the story. We have already seen that Frenhofer’s masterpiece is characterized as depicting nothing-at-all. More concretely now, what does it in fact depict? “‘The old fraud’s pulling our leg,’ Poussin murmured, returning to face the so-called painting. ‘All I see are colors daubed one on top of the other and contained by a mass of strange lines forming a wall of paint.’ ‘We must be missing something,’ Porbus insisted.”94 The “secret” is not something initially clear, pure, or plain, whose possibility of discretion has been rendered occult, difficult, and exclusive, as Porbus and Poussin consider (“‘There’s a woman under there,’ Porbus cried”).95 What Serres suggests in his reading of “The Unknown Masterpiece” is that such assumed purity, clarity, or plainness is, in fact, initially “noisy” — a mixture of heterogeneous factors, factoring in something that can never be known as a whole. But what, then, did Frenhofer depict? How could he possibly paint noise as noise? By producing a “fake” painting, a painting that lacks an original. “The Unknown Masterwork is a fake. It happens in a placeless space, is signed by a nameless author, is told in a timeless time. No, there is nothing beneath, not even a woman.”96 And Serres continues to spell out how he thinks of the unknown that he understands Frenhofer to 92 See the lesser-known and early writings of Edmund Husserl in his dissertation Beiträge zur Theorie der Variationsrechnung (1882) as well as his habilitation Über den Begriff der Zahl: Psychologische Analysen (1887); Gottlob Frege, Die Grundlagen der Arithmetik: Eine logisch mathematische Untersuchung über den Begriff der Zahl (1884); Bertrand Russell’s dissertation An Essay on the Foundations of Geometry (1897); Alfred North Whitehead’s A Treatise on Universal Algebra with Applications (1898); and Ernst Cassirer’s Descartes’ Kritik der mathematischen und naturwissenschaftlichen Erkenntnis (1899). 93 As a great overview and introduction into these topics for the layman, I suggest referring to the respective articles in Serres and Farouki, Le trésor.
have painted : “If the masterwork is improbable or impossible it is not unknown and there is nothing to know.”97 But if Serres’s reading maintains that this very masterpiece is indeed a masterpiece, because it depicts beauty stripped from any model that could “wear” it, instantiate or represent it, beauty in pure nakedness, beauty as unknown beauty, then these characterizations counter his argument, or don’t they? If the masterpiece is declared impossible or improbable, then it would not be unknown — because the impossible is merely the negation of the possible, and the improbable is the negation of the probable. Both are statements uttered from the stance of the always already initiated, for whom there can be no genuine secret in the chemico-physical sense introduced above in which there can be nothing to know. For according to this sense of the unknown as a genuine secret(ion), there must always and still be something new to know, as Serres adds to his critique of impossibility and improbability as frames in which to refer to the unknown that Frenhofer has painted. “Or else : is there still something new to know now?” he asks.98 But if there is neither a model nor a frame in whose terms we might refer to the kind of Unknown Serres seems to be talking about, then what? Are we not at a hopeless loss with such purport? “The picture that is discovered at the end of the story is the ichnography,” we are told by Serres — the ichnography, with a definite article. But how can Serres’s proposed resolution, that of ichnography, mean something different from a frame of reference? Let us attend to the full passage that Serres continues with : “The picture that is discovered at the end of the story is the ichnography. The beautiful noiseuse is not a picture, is not a representation, is not a work, it is the fount, the well, the black box, that includes, implies, surrounds, that is to say buries, all profiles, all appearances, all representations, the work itself.”99 The term “ichnography” is usually rendered into English as “groundwork,” or “ground plan,” and into German as Grundriss. It is a term that has played a crucial role in architectural theory ever since the first theoretical treatises on architecture (that we know of) were composed by Vitruvius in the first century BCE. It never comes alone, but always in association with two complementing terms : those of orthography and scenography. All three are terms that refer to particular kinds of draftings that help the architect to learn, develop, and refine building as a practice (or even as an art). In technical terms, orthography means plans that elevate the schemata of the ground plan into upright position (depicting the voluminosity of the building in profile), and scenography means plans of the multiple views on a building in profile. The German terms are respectively Grundriss, Aufriss, and Seitenriss. I mention 97 Ibid., 48. 98 Ibid. 99 Ibid., 54.
this because the German terms, unlike the English ones, hold on to a distinction that keeps the practice of the draftsman, and hence the timelessness of geometry, separate from the dynamics that unfold in time as is inherent to the notion of the “plan.” This is an important distinction, because it helps to understand that there has been a dramatic element in architecture ever since it has been theorized : scenography introduces storytelling and a quasi-rhetorical aspect of expression to building as a practice. There is a tension at work within architecture that is not unlike the one in philosophy between rhetoric and argumentation, whose vectors rotate around that big idea called Truth. Is there in architecture then also a kind of “truth” at stake? It surely couldn’t be the same as in philosophy, it seems. But then, on the other hand, from the first treatises on architecture, it was all about a building’s “adequateness” or “proportionality” — a temple’s adequateness to the gods that are being worshiped; a villa’s adequateness to the social and political power of the master whose oikos (property) it is to accommodate; an aqueduct’s adequateness to its purpose (transporting water); and perhaps the most immense “task” to be fulfilled by architecture, namely that of a city’s adequateness in con-forming to “the” order of “the” cosmos. The three different kinds of drafting, serving the architect in refining her able-ness as an architect, also introduce thereby a contractual dimension into the power relations that organized the practice of building in an “adequate and proportionate” manner. They each come with different kinds of categories that allow one to differentiate, discrete, compare, and argue about the “worth” of particular buildings via recourse to the work of the architect as draftsman. Thus, without necessarily being very familiar with the corpus of architectural theory, we can easily imagine the disputes about what exactly was meant by ichnography, orthography, and scenography (as well as the relations between them that could be derived from these attributed meanings together with the network of consecutiveness that results from those relations). It doesn’t seem to be overstressing the point to say that these three terms capture the invariant “topic” of architectural theory. Architectural theory encrypts and encodes its own “identity” in the terms of these “categories” — not at all unlike metaphysics, which has been doing the same with the philosophical categories.100 In Serres’s account, the Unknown Masterpiece is “not a picture, is not a representation, is not a work, it is the fount, the well, the black box, that includes, implies, surrounds, that is to say buries, all profiles, all appearances, all representations, the work itself” — it is “the ichnography,” the crypt of the arcane source of all secrets that can be articulated. This is what Poussin and Porbus don’t expect to see in the painting. They “run 100 This arguably holds at least until the twentieth century, with Gottfried Semper and his notion of “style” in architecture perhaps as a (provisionally?) last rearticulation of this conceptual legacy in an attempted systematic manner.
toward the canvas, move away, bend over, right and left, up and down, they look for the habitual story-line, the usual scenography. And they stand so as to see an oblique profile. As if by chance, they shall have a spot where a straight form will appear. Scenography, orthography. And they look, as is their wont, for a space where there is a phenomenon, a space and an incarnation, a cell and knowledge. A representation. And thus, they do not see the ichnography.”101 Because there is no habitual story line depicted, they too look for something that lies buried — “’There’s a woman under there,’ Porbus cried”102 — but they look for it as if there would have to be “a space where there is a phenomenon, a space and an incarnation, a cell and knowledge.”103 But Frenhofer’s painting “is not a picture, is not a representation, is not a work,” Serres tells us, “it is the fount, the well, the black box, that includes, implies, surrounds, that is to say buries, all profiles, all appearances, all representations, the work itself.” The ichnography is the crypt of the arcane source of all that can “secrete” only insofar as it must be deciphered from all profiles and perspectives — there is no continuous mapping from orthography and scenography to ichnography. “Once again, what is this ichnography? It is the set of possible profiles, the totality of all the horizons. Ichnography is what is possible, or knowable or producible, it is the fount of phenomena. It is the complete chain of the metamorphoses of the marine god Proteus, it is Proteus himself.”104 With his insistence that “the ichnography” be “the totality of all the horizons,” where no continuous mapping from the phenomena (profile and perspective, orthography and scenography) to the ground (foundation or reason, ichnography) is possible, Serres relates Balzac to Leibniz. “Balzac saw the ichnography. I think he figured out that he had seen it. Since he signed his name to it.”105 I will come back to this role of the signature in a moment. In contrast to Balzac, Serres continues, “Leibniz never saw the ichnography. He undoubtedly demonstrated that it was invisible. He was aware of it, he demonstrated that it is unknowable.”106 And furthermore : “Leibniz drowns everything in the differential and under the innumerable thicknesses of successive integrations. The mechanism is admirable. No one ever went as far in rational mastery, even into the smallest nooks and crannies. The straight direction of reason that must turn away from this chaos is the ascent of these scalar orders. The path is ahead, it is infinite, the perfect geometrizing remains inaccessible. It is divine, it is invisible.”107 Porbus and Poussin 101 Ibid. 102 Balzac, “The Unknown Masterpiece.” 103 Serres, “Noise,” 54. 104 Ibid. 105 Ibid. 106 Ibid. 107 Ibid., 55.
followed the path that Leibniz had thought infinite, Serres maintains. “Having broken in, they contemplate the divine work of geometry without understanding.” Why? “Because they expected another picture, one that would have been like an extrapolation, part of the chain of forms. The last, the first representation, why couldn’t it be a representation too?”108 But “ichnography is not harmony, it is noise itself.”109 Leibniz’s system turns around “like an iceberg” in Serres’s purport of an unknown that is “the beautiful noiseuse […] beauty denuded of her appearances, of the dress of representation.”110 Like Leibniz, Serres too is after an infinite base. Yet it “cannot be structured by rigorous and lucid reason. It is immersed in white noise, in the mottled clamor of the confused.”111 The totality of the rational is not itself rational, Serres maintains.112 And further, the culminating phrase : “Balzac paints the vision that is the opposite of divine architecture.”113

The Signature of the Unknown Masterpiece

But once again, how shall such painting be possible? How can Serres claim that “Balzac saw it, knew it”?114 Indeed, how can he? “I can show that he saw it. I can really show that he figured out that he had known it: since he signed it.”115 We have to come back now to this crucial notion of “signature,” and the role it plays in relation to the architectonic dimension of a “contract” the architect enters as the “draftsman.” For it is this very dimension, the contract the architect enters, that secularizes the role of the architect in the precise sense of this word: the secular means “living in the temporality of the world, not belonging to a religious order.”116 The unknown as the fount of the possible that Serres purports allows the architect, as well as the geometer, to preserve, within the contract that is the contract of the draftsman, the possibility for disobedience. For Serres, the spectrum — the totality of all colors, the canvas of the successful completion of a masterpiece (in Serres’s understanding of mastership, whose master is the subject of his novel humanism) — is the element of geometry : it is metaphysics, and not physics. It is the crypt of physics, physics as encrypted reality of all that is “mediate.” Geometrizing was the inaccessible object of metaphysics and still is. White noise is geometrizing. A field of inquiry thought closed is open. The noisy, anarchic, clamoring, mottled, striped, 108 Ibid., 56. 109 Ibid. 110 Ibid. 111 Ibid. 112 Ibid., 56. 113 Ibid. 114 Ibid., 55. 115 Ibid. 116 Online Etymology Dictionary, s.v. “secular (adj.),” http://www.etymonline.com/index.php?term=secular&allowed_in_frame=0.
streaked, variegated, mixed, crossed, piebald multiplicity is possibility itself. It is a set of possible things, it can be the set of possible things. It is not strength, it is the very opposite of power, but it is capacity. This noise is the opening. The Ancients were right to think chaos a gaping abyss. The multiple is open and from it is born nature always being born. We cannot foresee what will be born of it. We cannot know what is in it, here or there. No one knows, no one has ever known, no one will ever know how possibilities co-exist and how they co-exist with a possible relation. The set is criss-crossed with possible relations.117 Physics as encrypted reality of all that is “mediate” is physics as that which is computable. It is important to see that computable solutions — encrypted algebraic “identities” — do not stand for something; they are not representations. The articulation of a formula resolves the involved terms (their factorization) into mappings (functions) that can stand in for rather than stand for. It is true, they demarcate a case, because they are inferred from a generalization, but they do not demarcate a case by representing it; rather, they demarcate a case categorically, by depicting the syntax of a function according to whose rules we articulate the terms of an equation. My point is that we can think of their categorial demarcation of a case according to the grammatical case of the locative. They demarcate a case whose place is “nowhere” — but this “nowhere,” being a function to “somewhere,” is locative rather than representative. They stand in for the unknown parts and aspects of that which has been articulated in a formula — not unlike in language, where words stand in for whatever absent thing they may present to our minds when we depict the sense of words. These mappings can stand in for their own “original,” so to speak — that is, they can articulate “the original” as an unknown, as something not mastered, because they articulate “the original” in a tautological manner (in the form of an equation). This does not need to be seen as an absurdity. The mappings rendered by the articulation of a formula (an equation) are varying expressions of one and the same thing — while that “one and the same thing” remains absent. Neither one of the articulable expressions of the terms (articulated in how the terms of the equation are factorized, partitioned) is ever capable of expressing explicitly all that there is to it at once. In other words : that which is being expressed is of a vaster extension than any one discretion of its possible symbolic expressions can ever be. I shall explain what I mean. What is ichnography? What is this masterwork where the term “master” [chef] means less a unique and rare success than it does capital, stock, fount, I mean ichnography? Well, the Greek term ichnos means footprint. Moving 117 Serres, “Noise,” 56.
toward the canvas, they saw, in a corner of the canvas, a bit of a naked foot that arose from the chaos of colors, tones, and vague shadings, a kind of form-less fog; it was a delicious, living foot! They stood there in complete admiration in front of this fragment that had escaped from the unbelievable yet slow and progressive destruction. The foot appeared there like the torso of some Venus sculpted in marble from Paros, a Venus arising from out of the rubble of a city in flames. Here then is the signature with the very name of ichnography. The beautiful noiseuse is the flat projection.118 We can see from this how encrypted expressions always have a “transcendent” referent. Their power consists in “presenting” this transcendent referent symbolically while leaving it absent, just like words are capable of evoking something absent into presence. We can regard a cipher (an alphabet) as a symbolic body of a self-referential relation whose identity is being articulated, not represented — yet articulated in a split, linked, double, and parabolic manner, or more precisely, in a symbolic manner :119 neither form nor content, neither substance nor expression can be considered without reference to each other. They stabilize each other rather like the planets of a solar system than by occupying schematic positions that would be thought of as existing prior to the birth of a particular solar system. The way that they refer to each other constitutes natures (in the plural) of the universe — the universe being, according to contemporary astrophysics, galaxies that differ in “kind” but not in “nature.” The astrochemical elements are considered by today’s science as the products of nucleosynthesis (the sun), and they are the main “referent” of whatever is organized in the technical “format” of a spectrum : what is measured in a spectrum are the frequency rates of the different types of light emitted by the sun (solar radiation).120 All, in such a manner of thinking about the universe, is universal in character. And as such, Serres maintains, it is essentially noisy, or in statu nascendi, anadyomene, as he says, physics born from metaphysics.121 Serres chose a mythical manner of formulating here, but there is a sense to what he is saying that is empirically supported, and we can decipher it from his insistence that geometry depicts white light. If as nonexperts we turn to a thesaurus of modern science, we can read that the white spectrum depicts all that moves at light speed; all that moves at light speed is of universal nature, in the sense that it is matter in its subparticle “state.” 118 Ibid., 55–56. 119 Literally “that which is thrown or cast together,” from assimilated form of syn-, “together” + bole, “a throwing, a casting, the stroke of a missile, bolt, beam,” from bol-, nominative stem of ballein, “to throw.” Online Etymology Dictionary, s.v. “symbol (n.),” http://www.etymonline.com/index.php?term=symbol&allowed_in_frame=0. 120 Cf. the respective articles in Serres and Farouki, Le trésor. 121 Serres, “Noise,” 54.
Isn’t this what Serres calls “metaphysics” — that which “secretes” all that is sound and solid, as if out of the foam that is left behind by the furious clamor of incandescent and radiating matter (a sun)? Let us recapitulate. The nature of the universe for Serres is secretive communication. Knowledge of the universe’s nature consists in knowing how to keep its secretions secret, by building reduced models, crypts, that strive to duplicate it such that there can be communication — literally, “a sharing with, a making common”122 — of the bare beauty of universal nature through its models, the crypts. While modeling, building the crypt, is a kind of contractual architecture (a contract whose basis is the work of draftsmen) that proceeds in terms of symmetry (the object agreed upon in the terms of a contract is articulated algebraically, tautologically, and what is agreed upon is the inverse of the thus articulated object — as far as the parties can imagine it), the communication of such bare beauty that can only be modeled must, on the other hand, proceed in terms that are asymmetrical. This is why I have suggested that the practice of modeling is an act of comic dramatization (it has to deal with incommensurate magnitudes). The asymmetrical communication that models afford in turn affords the nature of the universe to be universal; that is, capable of descending and branching off in all sorts of directions. Such asymmetrical communication affords a universe that is expanding, but in no preset manner. It is important that keeping the secret in a crypt requires asymmetrical communication — or else there would have to be a Master Code(x), and those who serve its law would have to keep the channels of communication “safe” such that the Master Key could be shared solely among those initiated to that master code, while excluding whoever is not. Those who keep the secret then would not articulate Universal Nature; rather, they would act as Universal Nature’s representatives. Within Serres’s narrative, instead of a Master Code(x), we have Code in whose terms the totality of all colors (a white spectrum) has been depicted. And this “code” is not a “codex.” Rather than referring to universal nature as (immediate) law, by duplicating the authority of universal nature in order to claim to be acting as its representative, the code at stake refers to universal nature only mediately, in the terms discernible from a spectrum. A code that has thus been depicted (as a spectrum, a painting of the ichnography) carries the signature of someone who serves that law by obeying it without submitting to it, because the subject of such a signature has to be authenticated as one who obeys the (unknown) rules of things themselves. Someone like that acts disobediently, comically, toward all official representations. 122 From the Latin communicationem (nominative communicatio), noun of action from the past participle stem of communicare, “to share, divide out; communicate, impart, inform; join, unite, participate in,” literally “to make common,” from communis. Online Etymology Dictionary, s.v. “communication (n.),” http://www.etymonline.com/index.php?term=communication&allowed_in_frame=0.
But could there possibly exist such a signature, given that its subject could not possibly be “one” or “whole”? Or could it? Wouldn’t such asymmetrical communication require the subject of such a signature to be of a split personality? A symbolic persona? An animal whose sex is to be universal? If Serres dopes Balzac’s story by introducing into it the notion of “ichnography,” I want to dope Serres’s story by introducing into it the notion of “a public key signature.” The subject of such a signature indeed is a “split” subject, a “sexed” subject that desires and is never fully “whole”; it is, on the one hand, “anyone,” and on the other hand it is “me.” Let us see the principle behind it : Public-key cryptography, also known as asymmetric cryptography, is a class of cryptographic algorithms which requires two separate keys, one of which is secret (or private) and one of which is public. Although different, the two parts of this key pair are mathematically linked. The public key is used to encrypt plaintext or to verify a digital signature; whereas the private key is used to decrypt ciphertext or to create a digital signature. The term “asymmetric” stems from the use of different keys to perform these opposite functions, each the inverse of the other — as contrasted with conventional (“symmetric”) cryptography which relies on the same key to perform both.123 123 Wikipedia, s.v. “Public-key cryptography,” http://en.wikipedia.org/wiki/Public-key_cryptography (last modified March 20, 2015). For an accessible introduction see the online lecture by Raymond Flood, “Public Key Cryptography: Secrecy in Public,” held at Gresham College, London, November 11, 2013, online at https://www.youtube.com/watch?v=I3WS-5_IbnM.
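The principle can be sketched with textbook RSA on toy primes (my own illustration; the passage names no particular algorithm, and real keys are vastly larger) :

```python
# A toy RSA key pair (illustrative only, in no way secure).
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # Euler totient of n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent, mathematically linked to e

message = 42

# The public key encrypts; only the private key decrypts.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# The private key signs; anyone holding the public key verifies.
# Each key performs the inverse of the other's function.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

The asymmetry lies entirely in which of the two linked exponents one holds; the modulus is common and public.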
With this, we could invert our usual perspective, and consider that all “text” be, naturally so, ciphertext; encryption then doesn’t obscure “plain text,” rather plain text is what “secretes” from ciphertext. Whatever message a private key can unlock from a ciphertext that is transmitted distributively, and signed by a public-key signature, would be strictly private. Such decipherments, then, appear to be plaintext — but the plainness of such a decipherment is but that of a model. The apparent plaintext that is contained in a ciphertext can only be articulated “authentically” by placing it in the locus where that peculiar mightiness of a possible future past (will have been) can be conjugated. We can refer to this locus by ascribing to the practice of cryptography its own grammatical case, the case of a locative. The locus of a cryptographic locative is fictional, but that doesn’t mean that it is an illusion. Quite differently, the locus addressable by the grammatical case of a cryptographic locative is the territoriality of the subject of Serres’s novel humanism. Fictitiously, it builds a reduced model of universal knowledge, a model that is official not because it represents a lawful regularity (with lesser or greater authority) but rather because it serves the law by helping to keep the secret that is the essence of universal knowledge. If the subject of a public-key signature is humanity at large, which guards its own nature and origin in the care with which it articulates the reduced models — the plaintexts, the private because deciphered “message” — of the ciphertext (universal nature as it manifests in all things existent and / or object to thought), then this subject never ceased to become what it already is. Let us recapitulate : what an alphabetic absolute and its ichnographic bases — the Crypts — would oblige a researcher to is modeling. But the relation models maintain to ideas is not one that would “realize” them. The authenticity of models does not depend on their capacity to represent. Rather, it depends on their obedience to the laws of things themselves, laws that can be deciphered only after they have been encrypted, laws whose statements are ultimately arcane. The obedience that makes a model authentic is an obedience that doesn’t develop strength and concentrate power, but it still produces capacity. It develops the capacity to source phenomena : “ichnography is what is possible, producible, knowable.”124 This capacity is the very opposite of power and strength,125 for it is capacity in dealing with sums and products of infinite terms. Every model generalizes. But if the Genus is a spectrum rather than a common denominator, then the discretion of “data” points must be rationalized and proportionalized discretely and fictitiously, and data “points” must be treated as many-valued indexes into numerous possible encryptions of the ichnography : the set of all possible profiles, the totality of all horizons. Every model informs a genus and is informed by a genus. How so? Because the genus is a sum of infinitely many terms (the genus as a spectrum) only because the model is universal in kind. With regard to the universality of its kinds, the genus can be considered real without ever being born or existent. A model’s kind is universal, self-sufficient, and hence also circular, but actively so : it strives to complete itself in comprehending all that it encompasses. Hence the model is not only kindred but also sexual (the symbolic “markedness” that endows the model with “inclinations” [desire and potency]). But it is the nature of this sexuality to be modal. The contingencies and necessities that determine a model can do so in n-many manners — constrained only by the ichnography. In other words, as a model is conceived, the sex of its universal kind is omnipotent and undecided. It is an “organ” of the universal kind. The genus is what specifies models — what limits their strength in developing a capacity that is the very opposite of power. Every model generalizes. But if the researchers that raise them are committed to the alphabetic absolute, the models continue to maintain an intimate relation with the singularity of ideas regarding the great secret that is the universe’s omnipotent nature. These ideas are singular in how 124 Serres, “Noise,” 54. 125 Ibid., 56.
they demand to be encountered intimately, with pride and grace, in a play of seduction and conquest that never strives to possess the secretive sense — precisely because as a secret sense it is “private.” An encounter between the real and the symbolic, between the generic and the singular, is possible if the plots of stories are told in the cryptographic locative where no one can find their way by being shown “the right path.” Our researcher committed to the alphabetic absolute learns to masterfully not know the meaning of this sense. This obliges her to assume :

1 That the design of models is always pre-specific and that it needs to focus on the witty and polite eloquence in which the model is articulated, such that it is then capable of raising the wealth of that in which the specific is richer than its genus : namely, differences.

2 That the genus of universal kinds exists only in the conjugatable tense-ness proper to a fictional locus; the genus is the “temporalizer” of “real-time,” that is, “reality at the speed of light.”

This theme of the summation of infinite terms has indeed been central in the philosophical discussions that accompany the modernization of science; that is, the attempt to decouple science via a natural philosophy from its theological background. Let us now attend to this theme by tracing how it has been dramatized, and by attempting to foreground — at least in a tentative, indexical manner — some aspects of the larger sociopolitical implications at stake thereby.

THEME one, PLOT two: THE SUMMATION OF INFINITE TERMS IN SERIES
Science, Liberalization, and the Absolute

I will base the tracing of this theme here on treating an article written by Eberhard Knobloch entitled “Beyond Cartesian Limits: Leibniz’s Passage from Algebraic to ‘Transcendental’ Mathematics” as a particular ciphertext.126 At stake is not an evaluation of Knobloch’s own argument or interest pursued in his article; rather, the points he makes, and how he lines them up, are treated in my article in a thesaurus-like manner, as indexes to larger themes. My “decipherment” of his “articulation” attempts to trace and communicate these larger themes. So what is at stake? “This article deals with Leibniz’s reception of Descartes’ ‘geometry,’” Knobloch tells us,127 and he specifies his own point of departure for discussing this reception by exposing five notions he holds as fundamental for Leibnizian mathematics : “calculus, characteristic, art of invention, method, and freedom.”128 Based on what these 126 Eberhard Knobloch, “Beyond Cartesian Limits: Leibniz’s Passage from Algebraic to ‘Transcendental’ Mathematics,” Historia Mathematica 33, no. 1 (February 2006): 113–31. 127 Ibid., 113. 128 Ibid.
notions entail, according to Knobloch, “Leibniz criticized Descartes’ restriction of geometry to objects that could be given in terms of algebraic (i.e., finite) equations.”129 He explains : “The failure of algebra to solve equations of higher degree led Leibniz to develop linear algebra, and the failure of algebra to deal with transcendental problems led him to conceive of a science of the infinite. Hence Leibniz reconstructed the mathematical corpus, created new (transcendental) notions, and redefined known notions (equality, exactness, construction), thus establishing ‘a veritable complement of algebra for the transcendentals’ [here Knobloch cites Leibniz again] : infinite equations, i.e., infinite series, became inestimable tools of mathematical research.”130 So let me suggest a model of how what these indexes draw together might be unpacked, and then begin with a kind of cryptographically inferential storytelling. The model is this : (1) at stake was the legitimization of working with infinite series in mathematics; (2) the grounds or reason with reference to which such “legitimization” was attempted by Leibniz was methodological, not cosmological (as was, arguably, the motive for Descartes’s trust in analytical geometry); (3) the point where this difference between Leibniz and Descartes manifested was the metaphysical question of how the identity of an object can be determined by mathematical description, hence, in their respective doctrines of substance : Descartes’s Dualism, and Leibniz’s Monism; (4) this difference, hence, is one that concerns the status of the subject (Descartes) or subjectivity (Leibniz) rather than one that would concern the status of the object (Descartes) or objectivity (Leibniz) only. Let me begin with my storytelling then : if for Leibniz, the intelligible identity of an object can be discerned by infinitary means (his “infinite equations”), then thought itself is considered as actively creative — human thought, for Leibniz, is capable of participating in divine intellection because it is akin to that power. Descartes’s cogito, on the other hand, is not akin to divine intellect — it is its creature (that is why he insists on the discretion of intelligible identity by “finite equations”). For Descartes, Leibniz’s view is presumptuous and immoderate, but so is Descartes’s view to Leibniz (Leibniz is outraged that for Descartes, his own mind was the limit of science).131 129 Ibid. 130 Ibid. 131 Knobloch cites Leibniz: “Descartes’s mind was the limit of science” (ibid.).
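What such “infinite equations” look like can be recalled with one of Leibniz’s most celebrated results, the arithmetical quadrature of the circle (a standard example, added here for illustration) : a magnitude that, as was proven only much later, satisfies no algebraic equation with rational coefficients is nonetheless discerned exactly by an infinite series,

\[
\frac{\pi}{4} \;=\; 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots \;=\; \sum_{k=0}^{\infty} \frac{(-1)^{k}}{2k+1}.
\]

The identity is exact, yet no finite partial sum ever exhausts it; the “identity” of the circle’s quadrature is discerned only through the summation of infinitely many terms.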
I would like to suggest that the point of departure between the two positions can be found in the respective role they attribute to “the absolute” — the absolute understood as that for which there can be no reason, nothing causal, and hence that which is truly free. The term literally means “unrestricted; complete, perfect,” and also “not relative to something else,” from the Latin absolutus, the past participle of absolvere, “to set free, make separate.”132 So when Knobloch specifies that Leibnizian mathematics was based on five fundamental notions including “freedom,” this is, arguably (and at least in this point), not something in which Leibniz differs much from Descartes. Only, freedom for Descartes still fell into the domain for which theological authority alone was responsible : an individual can be set free, absolved, as a member of the Christian community; this is why science, for him, shall have no dealings with the infinite.133 This, for Leibniz, is not so straightforward. Between Descartes (born 1596) and Leibniz (born 1646), Europe underwent the immense torture of the Thirty Years’ War, during which, amid spreading disease and famine, some 30 percent of its entire population died. On the one hand, and considered in a more abstract manner, the theological conflict between the Catholics and Protestants was responsible for this catastrophe. This conflict concerned, essentially, the character of Christian freedom and the authority of the clerics to absolve (set free) the members of the spiritual community. But considered in a more pragmatic manner, of course, this conflict was instrumentalized for the political project of rearranging the distribution of political power on the continent. Both Leibniz and Descartes were looking for political freedom through mathematical, rationalist science. Yet in the political landscape where Leibniz was making his living as a diplomat, he was in the service of whichever powerful monarch happened to value science, while Descartes had made his living as an esteemed and praised servant to the project of reformation. Where Descartes’s liberation consisted in linking up mathematics with physics, thereby exposing physics as inherently value-free, thus liberating nature from its status as a plaything of authorities with their competing theological claims on the cosmos — whose nature one supposedly studies in physics — Leibniz’s liberation was no smaller in scope, but it was oriented differently. He began to link up mathematics with theology, thereby liberating spiritual capacity for quickness from its disputed status between frivolity and diabolic possession as a plaything of authorities with their competing political ideas, thus suggesting that divine justice is manifest in nature.134 It seems important to remember 132 Online Etymology Dictionary, s.v. “absolute (adj.),” http://www.etymonline.com/index.php?term=absolute&allowed_in_frame=0. 133 See for example Jules Vuillemin, Mathématiques et métaphysique chez Descartes (Paris: Presses Universitaires de France, 1960). Significantly, this point is also what was later turned against Descartes’s philosophy by the Jesuits, who banned his writing by setting it on the index of prohibited books in 1663, on the accusation that his linkage between algebraic geometry and physics left no space for God in nature. See also Gábor Boros, René Descartes’ Werdegang: Der allgütige Gott und die wertfreie Natur (Würzburg: Königshausen & Neumann, 2001). 134 This, again, can be seen as the point of divergence between Leibniz and Descartes — Nature for Descartes was free of value, whereas for Leibniz, it was the expression of uncorrupted value. See Leibniz’s Théodicée (1710), Spinoza’s Tractatus Theologico-Politicus (1670), as well as Wittgenstein’s continuation of this line in his Tractatus Logico-Philosophicus (1918).
here that among the great traumas abetted by the war was the outbreak of a major wave of witchcraft persecutions all throughout Europe.135 Both were afraid, we can imagine, of the arrogance of human reason vis-à-vis the infinite and divine; and both tried to provide a place for the absolute to be accommodated with respect to mathematical science. With a context painted thus, I can now articulate my model of the plot in Knobloch’s article in a new version and suggest that (1) Descartes’s faith credited geometrical constitutions as the sine qua non reference for his general method (mathesis) that seeks to respect the absolute through an attitude of “reservation”; and (2) Leibniz’s faith credited algebraic computation as the methodology (mathesis universalis) to respect the absolute through “discretion.” Let me try to substantiate this speculative model by accumulating data in support of it. Two Kinds of Mathesis: General and Universal Algebraic computation as a methodology can treat an object in an infinitary manner. It can discrete an identity whose mathematical formulation as an equation may involve the summation of infinite series — at the cost of regarding the mathematical description of an object as an interpretation (an “exegesis”) of divine law’s manifestation in nature. As such, it cannot stand on its own. Science that proceeds in this manner cannot be set apart from theology — but the idea of God, as well as that of Scripture, is thereby altered. An algebraic computational description is to be understood as the conservation of an articulation whose original judgment has never explicitly been spoken or sentenced; its articulation is implicit in the singular and infinitesimally differential character of all things natural. Hence Leibniz’s mathesis requires a novel script, which he calls a characteristica universalis. Such a characteristica is not an alphabet of vowels and consonants. It is a script entirely free from the postulation of being the descendant and heir of an original and supposedly pure language that had once been spoken, at some point distant in time (an Adamitic Language);136 his characteristica is to introduce the novel script of a language of the universe that never ceases to originate the statements it captures and conserves.137 Surprisingly so, Leibniz’s spiritual conservatism is no less “modernist” 135 This wave of persecution was eventually stilled with the help of the Jesuit Friedrich Spee’s influential book Cautio Criminalis seu de Processibus contra Sagas Liber (Juridical Caution and Concerns against the Witch Trials), first published anonymously in 1631. 136 See Umberto Eco, Die Suche nach der vollkommenen Sprache (Munich: Beck, 1995 [1993]). 137 This is how Serres can say, “The idea of an order through fluctuation [the dynamic order as it can be discerned by infinitesimal calculus] is not simply a new idea, it is the idea of novelty/news itself, it is its definition.” I. Prigogine, I. Stengers, S. Pahaut, and M. Serres, Anfänge: Die Dynamik — von Leibniz zu Lukrez (Berlin: Merve, 1991).
than Descartes’s analytical constructivism — both invent and engender technical devices that are to reform universal science and liberate it from oppression by provincial and feudal authorities; but Descartes’s devices are geometric measuring tools, and Leibniz’s devices are tools that support arithmetic “measuring tools.” Descartes’s mathesis may well use characters of a novel kind, but it doesn’t require a novel script. It doesn’t aim at articulating and expressing nature immediately in its dynamics (by bypassing any assumed geometric constitutions of nature); rather, it aims at representing nature in general forms. Descartes’s mathesis continues to put its faith in Euclid’s geometric elements, and it regards the algebraic usage of the alphabet as a neutral and transparent “auxiliary device.” For Descartes, the order of nature is built of general finite forms; that is why we can measure and represent their differences by his algebraic, analytic methods that allow one to do geometry with numbers and characters. For Leibniz, on the other hand, it is the methodology of how one can discern nature that is algebraic, not simply the method. And nature, the object of such a methodology, is not mediately constituted by a supposed static order. Leibniz regards nature in the implicate and infinitesimally differential terms of Divine Law whose sentences are formulated in the novel script he was trying to find (theodicy). He did not regard nature anymore in analogy to a cosmology (which would claim to be the pure interpretation of divine order and the laws that constitute it, hence supporting the idea of linear progress rather than Leibniz’s intertwined and complex foldings). For Descartes, mathematical description of a natural object was possible only in the form of an arithmetic construction of the geometric constitutions. Hence, nature had to be free of value for him — a finite and mediate realm between God and Knowledge. The object that can be described mathematically had to be ascribed the finite character of res extensa, extension that fits in the coordinated space, rooted in the (theological) absolute as its point of origin (the zero point of the spatial coordinates). Such extension cannot possibly be infinite — otherwise no objective knowledge would be possible at all. Leibniz, on the other hand — who must have refuted Cartesian skepticism as a pious hope, and the two-substance metaphysics Descartes developed in response to this skepticism as arrogant (he accuses Descartes of having set the limits of science by taking reference to his own mind) — wanted to use mathematics also for giving characterizations of an object’s intelligibility, and this intelligibility was granted by an object’s individuality (indivisible, continuous character) as it persists through time. For Leibniz this objective individuality cannot be grasped by the coordination of an object’s extension in space alone; it needed the complement of an extension in time; hence he needed to interpret the characters applied in algebra in a manner that supports a novel script.
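To give this contrast a minimal formal shape (my own illustration, not Knobloch’s): for Descartes, a curve such as the circle is exhausted by a finite algebraic equation, whereas the transcendental magnitude that measures the circle yields, for Leibniz, only to an “infinite equation,” the series of his arithmetic quadrature cited further below:

\[
x^2 + y^2 = r^2
\qquad \text{versus} \qquad
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
\]

The description on the left closes after finitely many terms; the one on the right determines its object only through the law of an unending progression, an extension in time, as it were, rather than in space alone.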
Let us return more closely now to the points Knobloch makes in his text, and use them as the reference points upon which we can project these ideas such that we can ponder their plausibility.
Cartesian Limits Knobloch’s first paragraph is entitled “From the Theory of Equations to Linear Algebra.” He argues that in the eyes of Leibniz, algebra as applied by Descartes suffered from two imperfections : that the algorithmic solution of the general algebraic equation of nth degree was still unavailable, and that the geometric interpretation of algebraic equations did not suffice to comprehend transcendental problems in geometry. Transcendental problems in geometry refer to problems where values like the number pi are involved — values that cannot be obtained as solutions of finite algebraic equations. Leibniz believed, like most of his contemporaries (as Knobloch tells us),138 that the algorithmic solution to the general algebraic equation of nth degree was well within reach — yet as it turned out, it was not until Gauss, in the mid-nineteenth century, that this could actually be asserted, namely in the so-called fundamental theorem of algebra;139 but this assertion was not a simple victory, it was also a disappointment. For on the one hand, the algorithmic solvability of the general algebraic equation of nth degree (provided the solutions are allowed to range over the complex number domain) was possible, but on the other hand, such algorithmic solvability introduces a degree of arbitrariness into the results of computed calculations that is proportional to the degree of the equation (the fundamental theorem of algebra states : algorithmic computations for an n-dimensional problem space — an equation of nth degree — operate within a solution space that accommodates n solutions). Regarding the second of the abovementioned “imperfections” attributed by Leibniz to Cartesian analytic geometry, namely its insufficiency in comprehending transcendental problems in geometry, Leibniz himself had made important contributions. In order to treat transcendental problems in geometry, Leibniz extended the Cartesian move to treat geometric elements in numerical terms, and suggested an algebraic treatment of number theory itself. Like this, he could find a way around what in geometry is the problem of the inaccountability of working with elements whose arithmetic number value is irrational — a problem that features in several guises throughout the history of mathematics, first and most famously perhaps with regard to the irrational number that sums up 138 Knobloch, “Beyond Cartesian Limits,” 114ff. 139 See Harel Cain, “C. F. Gauss’s Proofs of the Fundamental Theorem of Algebra,” online at the website of the Einstein Institute of Mathematics, Hebrew University of Jerusalem, http://math.huji.ac.il/~ehud/MH/Gauss-HarelCain.pdf.
the value of a square’s diagonal, namely the square root of 2. Leibniz maintained that it is not irresponsible to compute with such numbers in symbolic notations — as long as they cancel each other out in the final form a system of equations takes before it yields the function that will compute the result. With this suggestion, Leibniz applied the same kind of thinking that he used for inventing his calculus, where he allowed the impossible quantities of infinitesimals — Leibniz called them “fictitious quantities” — to feature in the equations (as long as they cancel each other out throughout the process of resolution). To put it drastically : the ratio between two voids, 0 / 0, does make a “difference” in Leibniz’s mathematics that can qualify and describe dynamic change — even though this difference is entirely symbolic (it can be expressed in neither negative nor positive terms). We will see in a moment how this apparent absurdity is acceptable to Leibniz on the empirical grounds of its applicability in statistical problems in mechanics that only Lagrangean analytical mechanics finally managed to systematize roughly one hundred years later, in the eighteenth century.140 These problems involve a distributed “cause” that factors in a particular physical effect that can be observed and studied, for example in the curves produced by spirals. Let us look at the way Knobloch expresses the difference between Descartes and Leibniz with regard to geometry :141 while Leibniz accepted Descartes’s axiom that “exactness implies geometry,” he rejected the other axiom of Descartes, namely to restrict geometry to analytical lines. But what must we imagine counts as an analytical line for Descartes? The curves of spirals are Knobloch’s example of a non-analytical line.142 Descartes maintained that no exact description of such curves is possible, because “they depend on two motions that must be considered independent from one another. Human beings are not able to give their determined proportion.”143 Knobloch specifies that such curves would have to be described by “divine art, by means of an intelligence whose distinct thoughts are realized in time intervals which are smaller than any arbitrarily given time. This, he [Descartes] thought, did not apply even to angles.”144 There is one particular aspect where Leibniz took
140 See Isabelle Stengers’s first volume of her “Cosmopolitics” project for an elaborate account of the development from Cartesian mechanics via Leibnizian dynamics and its assumption of a vis viva, which transformed throughout the development of thermodynamics from metaphysical registers into physical ones, culminating in Schrödinger’s concept of negative entropy as a measure of biological life — as well as volume 2 (The Invention of Mechanics: Power and Reason) and volume 3 (Thermodynamics: The Crisis of Physical Reality). Isabelle Stengers, Cosmopolitics I (Minneapolis: University of Minnesota Press, 2010 [1997]). 141 Knobloch, “Beyond Cartesian Limits,” 116ff. 142 Ibid., 117. 143 Ibid., 116. 144 Ibid.
issue with Descartes’s notion of exactness,145 namely the latter’s confusion of trochoids with spirals and helical curves. A trochoid is the curve described by a fixed point on a circle as it rolls along a straight line.146
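In modern coordinates (a standard parametrization, not Leibniz’s own notation), a circle of radius \(a\) rolling along a straight line, with the tracing point fixed at distance \(b\) from its center, describes the trochoid

\[
x(\theta) = a\theta - b\sin\theta, \qquad y(\theta) = a - b\cos\theta,
\]

where \(b = a\) gives the common cycloid, \(b < a\) a curtate and \(b > a\) a prolate trochoid. The two motions whose proportion Descartes held to be humanly indeterminable, the rotation of the circle and its translation along the line, are here composed in a single pair of equations.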
Such curves counted for Leibniz as analytical curves, while for Descartes they did not. Leibniz’s infinitesimal calculus introduced time as a conservative magnitude that is capable of animating (temporalizing) the spatial dimensions of Cartesian constitutions. This combination of conservation and constitution marks the turn from mechanics to dynamics in physics. It eventually manifests itself in the distinction between force and energy.147 The aspect that keeps being confused in this profiled view of the two mathematician-philosophers who both worked analytically is that Descartes was reducing analytics to geometry, while Leibniz was breaking analytics open (symbolizing it, encrypting it) toward an arithmetics that operates with symbolic numbers. For Descartes, this gesture would have been presumptuous, arrogant, and inexact. Yet Leibniz’s calculus can map movement into an empty form and compose with its sequences (Integrals and Differentials), each of which is characterized by variable values, just like Descartes’s geometry can project formal extension into a neutral container space (Coordinate Space) and 145 See Henk J. M. Bos, Redefining Geometrical Exactness: Descartes’ Transformation of the Early Modern Concept of Construction (Vienna: Springer, 2001). 146 See Eric W. Weisstein, “Trochoid,” MathWorld — A Wolfram Web Resource, http://mathworld.wolfram.com/Trochoid.html. The image depicted here is taken from Wikipedia, s.v. “trochoid,” http://en.wikipedia.org/wiki/Trochoid. 147 Leibniz began to speak of the vis viva in distinction to the mechanical forces, but only in the beginning of the nineteenth century was the term “energy” introduced by Thomas Young instead of Leibniz’s vis viva. See Jennifer Coopersmith, Energy, the Subtle Concept: The Discovery of Feynman’s Blocks from Leibniz to Einstein (Oxford: Oxford University Press, 2011).
build analytical forms with its variable elements. As Knobloch explains, “Descartes adhered to a mathematically fixed, closed, static realm of geometry, while Leibniz adhered to a mathematically changing, open, dynamical realm of geometry in which the classification of lines depends on our current knowledge.”148 Knobloch also mentions a “dialogue” by Leibniz that is worth discussing here (I put dialogue in quotation marks because it features Leibniz with a fictitious interlocutor) : “But you will say, then we have realized the quadrature of the circle. I deny that we have a quadrature as desired. You say : ‘By means of the circular trochoid, a moving curve, we have exactly a straight line which is equal to the circumference of the circle.’ ‘Yes,’ Leibniz answered, ‘if exactly, then certainly also geometrically. But not only a geometrical but also an analytical quadrature of the circle is required.’”149 The problem of the quadrature of the circle is the classical locus where mathematicians have attempted to find a geometric measure for an infinite thought to actually exist (such as in the attempts to prove the existence of God mathematically). I read Leibniz’s discussion here, in his imaginary dialogue, as a statement against the interpretation of his mathematics in the service of such a project. Yes, he agrees, his treatment of a moving curve must be seen as a geometrically exact quadrature of a circle, but this cannot be taken as a proof for anything beyond the consistency of the mathematical description, he maintains, because unlike Descartes, Leibniz did not restrict the mathesis of his analytical procedure to geometrical lines; he invented linear algebra, a method of calculation where geometry is subjected to permutations. His critique of Cartesian finite geometry is that it deprives science of a necessary aid — the usage of symbols for quantities that had to count (for a particular time being, at least) as fictitious quantities. “For Leibniz the boundary between geometric and nongeometric lines is not fixed once and for all. It can happen that a nongeometrical line becomes geometrical, hence a way of describing it is found […] and that a nonanalytical line becomes analytical.”150 The necessary aid of which Leibniz speaks is the symbolic manners of expressing infinite series as mere auxiliary constructions in the service of a science of invention. Wouldn’t that be a good definition for what remains invariant — the secret that is being kept — in the variate forms of analytics? Algebra in the Service of Parabolic Invention But how to produce such auxiliary constructions? Leibniz proceeded by attempting an arithmetic quadrature of the circle. This involves an algebraic treatment of numbers—the packaging of numbers in the notations of symbolic code, such that they support the algebraic 148 Knobloch, “Beyond Cartesian Limits,” 118. 149 Cited in ibid., 116. 150 Ibid., 118.
procedures of substitution and elimination of factors in the resolution of an equation. “Consider the problem of dividing a ratio into a ratio of two irrational numbers,” Leibniz proposes.151 This has been an important aspect for learning how to compute curves that plot the dynamics of growth, and it eventually cast off all remainders of meditative contemplation once a properly mathematical conception of growth was found that could be attributed, as a dynamic potential, to the base of a logarithm. The computation of growth dynamics has eventually become of central interest with the evolution of thermodynamics and system theory. But as far as Leibniz is concerned, his interest was not (primarily) a science of biological or economic growth, but above all what he called “a new algebraic art” (novae artes algebraicae).152 Leibniz’s “new algebraic art” was for him at one and the same time an art of invention and a universal scientific method for the study of nature. The study of nature, we must recall, is the “mathematical exegesis” of the judgments of divine justice as they manifest in nature. In both aspects, it owes its exactness to geometry, but also in both aspects, it owes its universal character and its growing capacity for clear and distinct thought to a symbolic and crypto-analytic treatment of the infinite — an algebraic treatment, as his term “new algebraic art” suggests. With this, he broke with the largely unchallenged philosophical doctrine that subjects arithmetic combinatorics either to syllogistics, or to geometry (as Descartes did). For Leibniz, geometry provided methods to ponder infinitely the order of the Universe; for him, this order was neither static nor schematic. From his point of view, geometry did not provide an inventory of elementary forms, with which the postulated syllogistic truths of vocal-alphabetical script could be scrutinized in relation to the empirical manifestation of divine judgments in nature. His aspiration was not to measure the infinite. The infinite was to him a source for the creation of intelligible signs, signs that are not meant to be derived from a distant Original Language (vocal language) witnessed by tradition. The signs that can be created, so Leibniz imagined, are the signs of a symbolic language that is not meant to be — or to ever have been — spoken at all; its originality never ceases to be actual. Such is, for Leibniz, the nature of his universal characteristica, a symbolic language that can host any syntax, any vocabulary, any morphology that might be meaningful — now, in the past, or in the future. The rules of combination in his art of invention are an algebra of active thought, not syllogistic rules that would apply to how ideas can be represented to the mind via notions. The infinite gives birth to transcendental problems in geometry, those problems that allow the 151 Cited in ibid. 152 Cited in ibid., 122; originally from G. W. Leibniz, Sämtliche Schriften und Briefe, Reihe 7, Band 3, 1672–1676, Differenzen, Folgen, Reihen, ed. S. Probst, E. Knobloch, and N. Gädeke (Berlin: Akademie Verlag, 2003).
pursuit of rational intellection to prosper more and more toward universality. Everything this depends on gravitates around the question of how sums of infinitely many terms can be computed. Knobloch emphasizes two convictions that guide Leibniz’s ideas of how rationality is involved with the infinite :153
1. There is no exception to Euclid’s common notion: the whole is greater than any part.
2. The same rules hold in the domain of the infinite as in the domain of the finite (law of continuity).
Another way to put the same question is whether there is (exists) a whole that is represented by the sum of an infinite series. But what is an infinite series for Leibniz? He assumes at first, and later finds confirmation for this assumption, that some irrational numbers can be expressed in the terms of a series of rational numbers.154 Even when he calculated parabolic areas by subtracting one infinite series from another, as Knobloch elaborates, Leibniz took the result of this computation to mean (1) that “this is quite wonderful and demonstrates that the sum of the series 1, 1/2, 1/3, … is infinite,” and then (2) that “this argument leads to the conclusion that the infinite is not whole and only a fiction. For otherwise, a part would be equal to the whole.”155 This treatment of the infinite is an engagement with the fictitious — yet for Leibniz, the fictitious is not the negative or the opposite of the real. Thus the fictitious for Leibniz is real in the sense that it triggers effects, even though it does not, properly speaking, actually exist. Infinite terms are summed up, but the totals of these sums are fictitious — this does not mean that they are less real, but it does mean that they are not representations of “wholes.” In a third version of our theme, I want to ponder the question of whether and how we are still Leibniz’s contemporaries today with regard to how we deal with the problem of summing up infinite terms. Let me briefly outline a postulatory matrix of six vectors with which I would like to consider this question.
1. Mathematics at large—including pure (or theoretical) as well as applied mathematics—is currently not a general focus for intellectuals.
153 Knobloch, “Beyond Cartesian Limits,” 122; cf. also Mark van Atten, Essays on Gödel’s Reception of Leibniz, Husserl and Brouwer (Vienna: Springer, 2015), here ch. 2 “A Note on Leibniz’s Argument Against Infinite Wholes,” 23–32. 154 “There cannot be any doubt that some series are equal to irrational numbers though they consist of rational numbers. This must be investigated.” And so he did, which led his research to an arithmetic quadrature of the circle in the series pi / 4 = 1 – 1/3 + 1/5 – 1/7 + …; see G. Ferraro and M. Panza, “Developing into Series and Returning from Series: A Note on the Foundations of Eighteenth-Century Analysis,” Historia Mathematica 30 (2003): 17–46; and Knobloch, “Beyond Cartesian Limits,” 123. 155 Cited in Knobloch, “Beyond Cartesian Limits,” 124.
2. The millennia-old and inherited understanding of mathematics as the art of learning (intransitively so: learning in general, not learning something in particular) has largely been stripped of its spiritual character; in its flirtation with the unbound, the infinite, the immense and colossal, mathematics has been stigmatized as pathologically immoderate, and its technical-artistic character has been broken up and subjected to techno-logical regimes that are to watch over and guarantee mathematics’ entirely pragmatic services to its diverse fields of application.
3. Some of these techno-logical regimes proceed in the age-old pursuit that relates mathematics to learning,156 but they subject this learning to an objective; they specify its constitutive intransitivity (not to be aimed at something in particular) to particular goals, aims, and values.157
4. Objectivity, thereby, cannot be considered as free of subjective interests; it becomes instrumental for competing interests in mobilizing their respective support; this instrumentalism needs to be complemented with the virtual and spiritual mode of an expectationalism.
5. There is an agoratic reality to objective interests that characterizes them as possibly incommensurate.
6. In response to this, among intellectuals an increasing uneasiness is registered toward a certain capitalization of knowledge (keyword: Corporate Science) that results from the predominance of applied science or, intimately related, “techno-science.”
156 For example in the research on neural networks and computational agency. Matteo Pasquinelli has aptly characterized such “teleology” in “The Eye of the Algorithm: Cognitive Anthropocene and the Making of the World Brain,” November 5, 2014, http://www.matteopasquinelli.com/eye-of-the-algorithm/. The article was published in German in Springerin, October 2014. 157 Perceptive to this, Pasquinelli also exposes algorithms as functions of capital. See Matteo Pasquinelli, Gli algoritmi del capitale (Verona: Ombrecorte, 2014).
THEME one, PLOT three: NAMING THAT OF WHICH WE KNOW NOTHING
We Are Leibniz’s Contemporaries So, having outlined our postulatory matrix of the larger context, are we still Leibniz’s contemporaries, having difficulties sorting out how the domain of mathematics is being governed in the interplay between algebra, geometry, arithmetics, logics, and poetics? Before global politics of science took this turn toward applications in the postwar era of the mid-twentieth century, intellectuals were not ignorant of the very logics behind those symptoms that we can empirically register today (vectors 1 to 6 above). While the trust in a certain sobriety that applied mathematics (science) promises against the danger of ideas is more than well understandable today, after the services that abstract reasoning has delivered 156 For example in the research on neural networks and computational agency. Matteo Pasquinelli has aptly characterized such “teleology” in “The Eye of the Algorithm: Cognitive Anthropocene and the Making of the World Brain,” November 5, 2014, http://www.matteopasquinelli.com/eye-of-the-algorithm/. The article was published in German in Springerin, October 2014. 157 Perceptive to this, Pasquinelli also exposes algorithms as functions of capital. See Matteo Pasquinelli, Gli algoritmi del capitale (Verona: Ombrecorte, 2014).
to political programs of social cleansing in fascist nationalisms, a certain tragedy inherent to this promise was perfectly lucid already then: Hermann Weyl, for example, coined an important phrase that captures the entanglement of scientific values and salvational / utopian (de)(il)lusions that is irreducible even in the sheer pragmatism of applied science. He wrote: “In these days the angel of topology and the devil of abstract algebra fight for the soul of every individual discipline of mathematics.”158 Even if we refrain from relating to the domain of mathematics at large, and instead withdraw theoretical ambitions positivistically to particular regimes of application, a hegemonic competition about identifying the larger good, the meaningfulness and worth of intellectual efforts, remains virulent and cannot be neutralized. Or perhaps it is even worse, and the self-imposed deprivation from making sense of mathematics’ intellectual power in its entanglement with the Christian narrative of the fall of humanity from innocence and paradise, or in its entanglement with the Enlightenment narrative of emancipation and peaceful cosmopolitics in the overcoming of mankind’s animality, by acquiring a fully human nature through education, has not relaxed but, contrarily so, stirred up and initiated the attributive “localization” of — forgive me for putting this so bluntly — “where the evil might reside” that apparently inhabits, unsettles, and seduces an unchecked intellect. Considering ourselves Leibniz’s contemporaries with these questions helps to sober up blind, violent, yet impotent activism. Algebra’s Scope of Infinitary Discretion Let us refer once again to Knobloch’s article, which he starts with a quotation by Paul Valéry that praises algebra: “Is there a more divine human idea than naming that of which we know nothing? I can engage that of which I am ignorant in the constructions of my mind; I can turn an unknown something into a component of the machinery that is at work in my thinking.”159 It is difficult to imagine a better way to express the scandal of algebra. Its claimed aspiration, as formulated by Valéry, is to be able to name that of which we know nothing; but now, to name is to address something in its identity. How could that possibly be done without imposing upon it an identity that cannot be its genuine one? Algebra claims nothing less than to be able to “identify” a something without subjecting the infinity of its impredicativity to a finite order. There is a classical problem to which Knobloch turns when attempting to work out Leibniz’s critique of Descartes, namely the problem of the possibility of measuring a circle. In the history of mathematics, there is a long tradition of thinking of the infinite through the figure of the circle because the 158 Hermann Weyl, “Invariants,” Duke Mathematical Journal 5, no. 3 (1939): 489–502. 159 “Quelle idée plus digne de l’homme que d’avoir nommé ce qu’il ne sait point ? Je pus engager ce que j’ignore dans les constructions de mon esprit, et faire d’une chose inconnue une pièce de la machine de ma pensée.” Paul Valéry, Alphabet (Paris: Librairie générale française, 1999 [1904/05]), here cited in Knobloch, ibid., 113; my translation.
circle can never be exhaustively measured. A circle comprehends everything it encompasses — it is therefore the one element that is central to theoretical geometry. It can provide a common compass between things that appear as incommensurate or immense, nonmeasurable, because it expresses self-reference as a relation, and is capable of handing us this relation as a device to operate with. In mathematics, the circle is the stepping stone to theoretical geometry: what counts as the “Thales moment” in the history of mathematics refers to Thales coming up with his famous theorem that states how quantities might be extended or diminished to infinity while keeping their ratio as well as their proportionality. The theorem postulates a “logos” as “a manner of speech” that articulates an identical relation through the totality of its possible variations. Such a relation of identity is called an “invariance” because it collects all combinatorial manners of how variants of this self (whose identity is being determined by the invariance formulated in the theorem) might relate to themselves. Thales’s insight was, without determining his own “self” or that of the pyramid that he wanted to measure, to draw one circle around himself and one around the pyramid, and to postulate that theoretically everything that is comprehended by one circle (encompassing himself, whatever he might be) must be commensurate with what is comprehended by the other circle (encompassing the pyramid, whatever it might be).160 Hence he could establish a space of similarity where the sun’s effects on either one of these “impredicate selves” could be counted upon as being in proportion to each other — a proportion that pertains to the mediacy of the abstract locus established by the two circles, and is not a proportion rooted in an immediate cosmic order (as it arguably characterizes many readings of Plato’s Timaeus).161 Thales established a 160 See Michel Serres, “Was Thales am Fusse der Pyramiden gesehen hat,” in Hermes II: Interferenz, trans. Michael Bischoff (Berlin: Merve Verlag, 1992 [1972]), 212–39; also my article “Arché, Arcanum, and Articulation.” 161 Many readings leave unthematized that both of Plato’s spheres, the corpuscular inner circle of becoming as well as the ideal outer circle of being, have the same elementary “materiality” — this aspect is crucial because it provides for the possibility that the cosmos can at all be “counted” (otherwise it would not be a cosmos, an order). It is an “elementary materiality” of a partitioned whole, and I suggest calling it “numerosity.” I am citing from Benjamin Jowett’s translation, provided by MIT Classics (http://classics.mit.edu/Plato/timaeus.html; np). In Plato’s account, both circles that make up the cosmic animal are engendered by the Demiurge out of the soul of the universe, which he assumed was made up of three irreducible elements: the same, the other, and essence.
He mingled these “into one form, compressing by force the reluctant and unsociable nature of the other into the same.” Plato continues that “when he had mingled them with the essence and out of three made one, he again divided this whole into as many portions as was fitting, each portion being a compound of the same, the other, and the essence.” This partitioned whole “he took and stretched it and cut it into two,” and “crossed, and bent [it] such that the ends meet with ends.” Like this, two intertwined circles are created, an inner and an outer, and we are told that the Demiurge had them revolve around the same axis. The motion of the outer circle is called the Same, and the motion of the inner circle that of the Diverse. To the outer circle belong the intelligible forms that characterized Being, and to the inner belong the visible and corporeal bodies that characterized Becoming.
logos of similarity whose metrics is operative in the efforts of describing, representing by naming; this peculiar logos is not itself descriptive or representational to anything external to what it can account for immanently to itself. It is a metrics that discretes and renders commensurable. Such a logos of “speaking mathematically” constitutes the symbolic auxiliary structure that supports and enables “logistic mobility,” in the sense that it mobilizes all that pertains to the particular logos it constitutes — i.e., to all that can be reasoned within the grounds of such a logos. The relation of identity established thereby — identity as an invariance — is “autogenic,” “self-engendering,” of all that might once be recognized as being comprehended by this identity within the scope of infinitary discretion. It is an impredicate identity, and its impredicativity cannot simply be grasped as the negative of predicativity. Bertrand Russell introduced these terms in relation to the problem of “completed infinities,” as it emerged from a conflict between Cantor’s introduction of the transfinite on the one hand, and, on the other, the interest in exploiting this theory of the transfinite for the development of a logics that can be foundational for mathematics at large. Such a logics, again, was imagined by Frege as a kind of logos in the sense I’ve used this term above — it was the logics of a Begriffsschrift (a Concept Script).162 The idea thereby was that we might treat every word theoretically in this “Thales gesture” : a concept gives the definition of the word, but this definition ought to be considered as a circle (i.e., as infinitely comprehensive). As such, it can accommodate all one might ever find out about the reference of words. Such a script would do away with literacy, poetics, the rhetorical power of speech, and articulation; it would capture and conserve bare truth values of words. The promise of such a concept script was threefold : (1) to abstract from the role of the alphabet and its mystical, mythological, and theological character that so preoccupied late nineteenth-century philology in its quest for reconstructing a truly original language that could accommodate all the vernaculars;163 (2) to break the link between 162 Gottlob Frege, Begriffsschrift: Eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (Concept Script: A Formulaic Language of Pure Thought Modeled According to Arithmetics, 1879). This contrasts strongly with the plotting of the same theme by Frege’s contemporary George Boole, who did not depart from the idea of one homogenous arithmetic, as Frege arguably does, but tried to come to terms with arithmetic “locality” in relation to symbolic sets of numbers (fields, rings, ideals, etc.). Boole’s plot was entitled An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities (1854). In light of our computational paradigm today, for which Boole’s Algebra plays a constitutive role on the physical hardware level for the control of electrical power, it is important to remember that what electro-engineers call “Boolean Algebra” today has not all that much to do with Boole’s Algebra as a philosophical topos (which for Boole it was). See Theodore Hailperin, “Boolean Algebra Is Not Boole’s Algebra,” Mathematics Magazine 54, no.
4 (September 1981): 172–84; as well as Walter Carnielli, “Polynomizing: Logic Inference in Polynomial Format and the Legacy of Boole,” available at ftp://logica.cle.unicamp.br/pub/e-prints/vol.6,n.3,2006.pdf. 163 See Umberto Eco, Die Suche nach der vollkommenen Sprache.
meaning and oral traditions, in favor of an ever incomplete universal meaning — such meaning can hardly come from a particular oral tradition without introducing aspects of racism and fascism into philosophy and science; (3) to liberate (or at least to relax) the pursuit of knowledge (and with it the political role of science) from its theological entanglements with eschatological ideas (as it pertains to the idea of scientific progress that constitutes modern values at large).
“Nature Is There Only Once”: The Promise of a General Metrics By modern values I mean here the principle of discrediting the authority of traditions as a means to allow for their coexistence. The strong distinction between culture and nature grows from these values—the famous statement by Ernst Mach, that nature is there only once (“Die Natur ist nur einmal da”),164 captures the stance of many physicist-philosophers between the last decade of the nineteenth century and the first three decades of the twentieth century.165 To make a long story very short, Russell’s problem of impredicativity, that of the possibility of a “completed infinite,” threatens to thwart this threefold promise of a script that does not accommodate, conserve, and host the meaning and customs of oral traditions, but one that only accommodates facts of universal value. Such a script would have been a truly modern technique of how to conserve and exchange knowledge without participating in a hegemonic struggle between different customs that would, inevitably, select and impose particular cultural values over others on arbitrary and contingent grounds. Tied to the idea of such a script was that it could establish a reality of facts adequate for developing international law based on universal human rights and universal customs of conduct. With such a script we would have a cultural technique with which the circle of conquest and revolution, expansion and seclusion, might be broken, bringing, at one distant point in the
164 Ernst Mach, “Die ökonomische Natur der physikalischen Forschung,” lecture, Imperial Academy of Sciences, Vienna, May 25, 1882. The entire paragraph goes as follows: “Wollten wir der Natur die Eigenschaft zuschreiben, unter gleichen Umständen gleiche Erfolge hervorzubringen, so wüßten wir diese gleichen Umstände nicht zu finden. Die Natur ist nur einmal da. Nur unser schematisches Nachbilden erzeugt gleiche Fälle. Nur in diesem existiert also die Abhängigkeit gewisser Merkmale voneinander.” In English translation (my own): “If we wanted to attribute those properties to nature, which yield the same results given the circumstances are the same, we wouldn’t know how to find these same circumstances. Nature is there only once. Only our schematic representation produces cases that are comparable. It is only in the latter that a certain causal dependency between cases exists.” 165 I owe much of the perspective I am outlining here to a talk given by quantum physicist Françoise Balibar at the Society for European Philosophy’s conference “Philosophy after Nature” in Utrecht in 2014. Her paper features Mach’s statement as a title, and explores the particular notion of precision that originates in this context, and that is so much at odds with the physics of quantum science.
future, global peace for a cosmopolitical world society.166 Physics, with its central axiom that there is only one nature, came to play the role of a paradigm in this context. But the conflict exposed by Russell with the problem of a completed infinity insists in Mach’s own formulation : for it remains indifferent about what such uniqueness might mean. Does it mean that nature is a singular phenomenon? Then how can one possibly account for the regularity that supports the pursuit of science in the first place, and whose mastery allows for the development of all the technics around which civilizations evolve, prosper, or collapse? If the uniqueness of nature means that there is nothing that compares to it, then how can it be “one,” how can one account for the uniformity and homogeneity that is to distinguish things natural from things cultural? This problem — how Nature’s uniqueness can be reconciled with the postulate of the existence of natural laws assumed as responsible for the regularities that can be discerned and conserved from studying nature — was treated by recourse to the analysis of functions as analysis based on calculus.167 Crucial for this understanding of uniqueness was the concept of single-valued functions as one-to-one or many-to-one mappings. In the theory of functions, eindeutig (unique) means that the character of whatever is qualified as eindeutig is uniquely or unambiguously determined. According to the then-prevalent view, “the concept of function constitutes the general schema and model according to which the modern concept of nature has been moulded in its progressive historical development.”168 With this import from mathematics to physics, something crucial happens : what in the former is a mode of determination (Bestimmungsweise is Riemann’s term for this) is being epitomized and — illegitimately so — attributed to the thing itself that is being determined in this particular mode (namely, in this case, that of single-valued functions). The hope for a manner of conserving universal meaning, with all its apparently entirely secular and political implications, which Russell saw threatened by the problem of a completed infinite, and which he strove to render solvable by introducing the distinction between predicativity and impredicativity, depends upon the transparency and neutrality between the (mathematical) manner of determination of a natural thing in terms of functions, and the attribution of this determination to the thing itself. In short, it depends upon the propagation of metaphysical univocity masked as mathematical 166 It is no coincidence that these same decades (late nineteenth to mid-twentieth century) were also when theories of societies were formulated, and where sociology appeared with its claim as Universalwissenschaft (universal science) based on the aspiration that it can mediate between the positive (natural) sciences and the fields of hermeneutic and cultural knowledge. 167 Ernst Cassirer, Substanzbegriff und Funktionsbegriff: Untersuchungen über die Grundfragen der Erkenntniskritik (Berlin: Verlag von Bruno Cassirer, 1910). 168 Ibid., 27; my translation.
uniqueness. As Françoise Balibar, who crucially inspired my interest in Mach’s dictum here, noted somewhat dryly : “Univocity has migrated from a characteristic of some mathematical objects (functions of a certain type) to Nature itself.”169 After a pause she continued : “One hundred years later, this sounds silly.” First, Balibar elaborates, “we know now that functions only form a class of mathematical objects, a very restricted one for that matter, associated to the ideas of number and quantity.” And secondly, she continues, “not only mathematics have changed but physics too. It has evolved by enlarging its mathematical toolbox to vectors, quaternions, tensors, matrices, numbers of all kinds, geometrical objects, n-dimensional spaces, etc., for which unique determination is not an issue.”170 Symbolisms and Modes of Determination In mathematics, these novel tools had already been invented and formalized in the nineteenth century — they had been available to Mach, Frege, Russell, and company. They found application mainly in the novel fields of electro-engineering and operations research, and a little later in information science. It is a significant symptom that this entire branch of nonclassical mathematics has largely been ignored by the guild of physicist-philosophical intellectuals devoted to developing a social theory for modern society. In referring to these mathematical concepts as nonclassical, I borrow another distinction from Balibar, who used that term to indicate how “nonclassical” determinations, as opposed to “classical” ones, make no reference to geometry at all; or more precisely, they do not refer to geometry in a way that is not already mediated by the manner of determining the elements and axioms that constitute a particular geometry. She rightly points out that this distinction originates in nineteenth-century mathematics, specifically so in Riemann’s introduction of a notion of multiply extended manifolds in his habilitation paper Über die Hypothesen, welche der Geometrie zugrunde liegen (1854), as well as in Boole’s algebra with which, from his early work in The Mathematical Analysis of Logic (1847) to his later An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities (1854), he generalized algebra from “being conversant to numbers and quantities” only, and raised it to a status of being conversant to symbols in general. I call it, a bit unjustly perhaps, a significant symptom because the claim of Nature’s uniqueness, without considering how to account for valuing planar geometry (Euclidean coordinated 169 I am citing from my own notes taken during her lecture in Utrecht. Balibar’s manuscript will be forthcoming in Rosi Braidotti and Rick Dolphijn, eds., Philosophy after Nature (expected 2016). 170 Ibid.
into three dimensions and one point of origin by Cartesian geometry) above any other of the projective geometries that can be formulated with — from a purely formal point of view — equivalent consistency, is strictly at odds with the values of modern, secular science and the latter’s emancipation from metaphysics and theology. Russell himself was deeply aware of this, as his PhD dissertation from 1899 testifies. It was clear to him that the aspired universality of statements that characterize nature in her uniqueness depends upon the indisputable universality of a metrics. Speaking of manners of mathematical determination contradicts this possibility. It introduces into mathematical determination problems of a similar kind to those that pertained to linguistic determination, and that led in the domain of the latter to the dangerous quest for an original language that would be — if not a straightforward Adamitic language — then in any case a language with “objective” claims to being superior to all others, which are merely derivative of it. It is clear that together with such a language, a people conversant in that language could also claim “objective” superiority of their culture over any other. So a generic universality of metric geometry (geometry as providing such a universal metrics) was what Russell set out to defend against the topological geometries developed by Riemann (and Grassmann’s Exterior Geometry).171 The threat he saw resided in the loss of scalability that would result from the abstinence from a universally valid metrics. Topological geometry constitutes metrics, and it does so in many different ways; measured quantities, hence, cannot simply be scaled up from the local to the global, or inversely, derived from the global and applied to the local. Russell’s supervisor in this research was Whitehead, with whom Russell later collaborated in a joint project of establishing first principles for mathematics, their Principia Mathematica (1910–13). Whitehead too was well aware of the dilemma around metric geometry and topological geometry. But he considered all aspects on the level of their algebraic formulation — for him, the topological geometries were simply algebraic articulations of geometry (of which planar geometry was one species). He was one of the first to aim at systematizing all the nineteenth-century findings in algebra of non-single-valued mathematical objects (vectors, matrices, scalar products, algebraic integers, etc.). He collected them under the aspect of “universality” — only, and in this he differed substantially from Russell, he attributed universality to algebra (not to a particular geometry and its metrics). The term universal algebra was in fact coined in Whitehead’s book A Treatise on Universal Algebra 171 Hermann Grassmann, Die lineale Ausdehnungslehre (Leipzig, 1844); and its English translation A New Branch of Mathematics, trans. Lloyd Kannenberg (Chicago: Open Court, 1995). See also Hermann Grassmann, Geometrische Analyse geknüpft an die von Leibniz erfundene geometrische Charakteristik (1847), http://quod.lib.umich.edu/cgi/t/text/text-idx?c=umhistmath;idno=ABN8108.
from 1898.172 His aim in this study was to present “a comparative study of the various Systems of Symbolic Reasoning.” Those Systems of Symbolic Reasoning, as Whitehead calls them, had been looked upon “with some suspicion” by mathematicians and logicians alike — as he puts it: “Symbolic Logic has been disowned by many logicians on the plea that its interest is mathematical, and by many mathematicians on the plea that its interest is logical” (vi). In short, for Whitehead it was not clear how the characters of a script can be distinguished depending on whether they are used in descriptions of things cultural (language based) or in descriptions of things natural (mathematics based). The superiority of nature’s uniqueness over any culture in particular depends on this distinction. The broad reservations among analytical philosophers against Whitehead’s move (apparently) backward, namely from epistemology to metaphysics, are surely related to this. Developing a new metaphysics hardly lends itself to mobilizing short-term activism, and it arguably contradicts the view that societies can be “planned” and “theorized” purely pragmatically — something Russell, in distinction to Whitehead, appears to have believed in.173 Psycho-political Struggle around the Cardinality and Ordinality of Sums (Totals) This may sound like an ivory-tower problem, and reading something subjective about Whitehead’s and Russell’s personal commitments to politics out of it may sound like an illegitimate and far-fetched culmination, an attempt to make something irrelevant appear more relevant. But if we consider the characterization given in the classic textbook The Development of Mathematics (1950) by E. T. Bell, we find a lucid translation of what I see to be at stake. Bell writes: “Cayley’s numerous successes, quickly followed by those of the prolific Sylvester, unleashed one of the most ruthless campaigns of totalitarian calculation 172 This difference also explains how Russell and Whitehead parted ways as philosophers after the crisis of the “completed infinite” was explicit (the problem of impredicativity). While Whitehead returned to the legacy of metaphysical philosophy and attempted to engender a metaphysics of processes and multiply dimensional extension — completely at odds with modern values in the eyes of many — Russell did not dare to look for a different manner of how the same hopes might be supported. This also came to be the point of conflict between Russell and his younger disciple, Ludwig Wittgenstein. The latter expressed at one point the problem pertaining to a complete infinite, in relation to a logics or a script of purely universal values, by challenging his readers: “Ask yourself whether our language is complete; — whether it was so before the symbolism of chemistry and the notation of the infinitesimal calculus were incorporated in it; for these are, so to speak, suburbs of our language. (And how many houses or streets does it take before a town begins to be a town?) Our language can be seen as an ancient city: a maze of little streets and squares, of old and new houses, and of houses with additions from various periods; and this surrounded by a multitude of new boroughs with straight regular streets and uniform houses.” Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe (London: Wiley-Blackwell, 1991), 16. 173 See Bertrand Russell, The Scientific Outlook (London: Routledge, 2009 [1954]).
in mathematical history.”174 Arthur Cayley and James Joseph Sylvester were leading mathematicians in the field of so-called quantics, the study of algebraic form. An algebraic form comprehends a quantity that remains invariant throughout the application of particular transformations. The quantity to which a particular combinatorics applies would be an invariant quantity. The form or concept of such a quantity can only be given as an algebraic form (as a group or ring ranging over a particular field or module). An invariant quantity is not the same as a constant quantity: a constant is expressed explicitly by one particular value (as, e.g., the speed of light in a vacuum); hence (from a purely operational, formal point of view) it must be understood as one manner of determining a quantity of which we can say that it is invariant. Of such a quantity, we cannot give an explicit value. For example, one particular invariant quantity that physics cannot dispose of is energy. While the assumption underlying dynamics as a science today is that the total amount of energy in the universe cannot be altered, no one could actually give a particular value that expresses how much this total amount is supposed to be. It is the distinctive mark of algebraic forms, and the invariant quantities they express, to render a diversity of solutions that are all equally possible within one particular solution space. The solution space, thereby, is mathematical and cannot be directly identified with the physical situation we seek to describe mathematically. In short, to speak of invariants as constants necessitates a metrics according to which the value can be determined. To put it polemically: it would be Russell’s stance, not Whitehead’s. So let us think more closely about why quantics, the study of algebraic forms, might have been associated by Bell with “one of the most ruthless campaigns of totalitarian calculation in mathematical history.” Bell continues: Such misdirected foresight was not peculiar to the algebra of quantics in mathematics since 1850. In the accompanying theory of groups, for example, especially permutation groups, there was a similar panic. Once the means for raising unlimited supplies of a certain crop are available, it would seem to be an excess of caution to keep on producing it till the storehouses burst, unless, of course, the crop is to be consumed by somebody. There have been but few consumers for the calculations mentioned, and none for any but the most easily digested. Nevertheless, the campaign of calculation for the sake, apparently, of mere calculation did at least hint at undiscovered provinces in algebra, geometry, and analysis that were to retain their freshness for decades after the modern higher algebra of the 1870s had been relegated to the dustier classics.175 174 E. T. Bell, The Development of Mathematics (New York: Dover, 1992 [1950]), 429–30. 175 Ibid.
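A textbook instance may make the notion of an invariant quantity concrete (my example, chosen for brevity, not Bell’s): in the study of quantics, the discriminant of a binary quadratic form is an invariant of linear substitutions of its variables. Writing

\[
f(x,y) = ax^2 + 2bxy + cy^2, \qquad x = \alpha x' + \beta y', \quad y = \gamma x' + \delta y',
\]

the transformed form has a discriminant that satisfies

\[
b'^2 - a'c' = (\alpha\delta - \beta\gamma)^2 \, (b^2 - ac),
\]

so that for substitutions of determinant one the quantity \(b^2 - ac\) is strictly invariant: no particular value of it is fixed, only its identity across all admissible transformations, which is precisely what distinguishes it from a constant.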
Bell describes how abstract, symbolic Algebra appeared like an “undiscovered continent” on the horizon. Those who pushed the application of the symbolic method without dedicated political or economic commitment were “adventurers,” whom Bell calls “illegitimate Kings” striving for “profit.” Masses of young mathematicians were recruited, he writes, who mistook the “kingdom of quantics” for the “democracy of mathematics.” Quantics was the name of the branch that studied algebraic forms before it turned into a general theory of invariances. Leading algebraists were accused of mobilizing “totalitarian” regimes of calculation by recruiting mathematicians for theory without applications and use. If we recall the decision communicated by the French Academy of Science shortly after the French Revolution, that the classical mathematical problems like squaring a circle should no longer be credited within institutional science because they consume the workforce of mathematicians for metaphysical interests, it seems understandable that the attractiveness of studying algebraic forms for their own sake (intransitively so, to make a link back to our earlier discussion of how mathematics ceased to be the non-teleological art of learning) was perceived as an offense or even deceit of Enlightenment values. In effect, it was stigmatized as a political threat. While this is well understandable in a situation where societies undergo unprecedented change through industrialization, and where guidance for this process is most urgently needed, it nevertheless seems somewhat inconclusive to accuse algebra of “totalitarianism.” Let us consider how the idea of a totality relates to the one term that is at the heart of the impredicativity problem : the idea of a completed infinite. A total is a sum. It is an arithmetic concept and it relates to the operation of addition. But depending on how we think about the status of the mathematical symbols that are being “summed up,” a “total” is something very different in kind. Cantor’s distinction of how we can clarify our notion of numbers in his transfinite mathematical universe, that between ordinality and cardinality, is helpful to see how a “total” can be different in kind. We can think of the symbols being added up (1) as placeholders for cardinal values, i.e., as indexes to something like the corpuscular or the magnitudinal aspect of Cantor’s countable, transfinite universe. In the terms of such a sum, the totality of a completed infinite would refer to a constant quantity. On the other hand, (2) we can think of those symbols as placeholders for ordinal values — i.e., as indexes to the immaterial and multitudinal aspect of Cantor’s countable Universe. In that case, the completed infinite of a total would refer to an invariant quantity. A completed infinite can be thought of as an invariant or as a constant, and they differ in kind, so we claimed. But how so? I would like to come back to the one aspect in science that is indisputably treated as an invariant : energy. The laws of physics, since the advent of thermodynamics, make one fundamental assumption,
namely that the total amount of energy in the universe is invariant. Energy can neither be created nor consumed. What can change, form, and/or dissolve is the organization of how energy is compartmentalized, captured, and stored in the chemical and climatic metabolism of the elements. No assumption needs to be made regarding the explicit figure to which this total of energy in the universe is thought to amount. Rather, it is an assumption that provides a proportionality for making calculations with respect to subsystems within the universe — like those pertaining to a solar system, or more narrowly to the ecosphere of planet Earth, for example. For those subsystems, all calculations depend on naming an explicit value for the stocks of energy that are being traded and transformed within such a dynamic system. The assumption of such constant values as defining the limits of the system is what allows us to identify systems in the first place — as an epitomized ideality. We can take the latter as an illustration of a completed infinite in terms of cardinals, and the former as an illustration of a completed infinite in terms of ordinals. The total amount as a universal invariance in terms of ordinals is an indispensable hypothesis that can never be verified by finite, empirical means; any attempt to represent its validity would be presumptuous in the literal sense of this word176 — it would, illegitimately so, foreclose and take for granted that which can be named “algebraically” (in the sense of Paul Valéry), but for which no one measure exists. But the assumption of this hypothesis is a cornucopia out of which can be engendered numerous manners of ordering, and hence also of counting, the relationally nested infinities (totals) that stratify topoi of relative locality and globality.

The Presumptuousness of Universal Measure

The locus classicus for thinking about this is the so-called continuum problem in mathematical number theory. We can refer to its illustration of the real numbers as an infinitesimally continuous line as an attempt to picture the idea of nested infinities: each number class comprehends an infinity — for one, the double infinities of the integers (to the negative and to the positive), and then the orthogonal infinities of the real numbers that space out between any single one of the integer segments. It is an illustrative case for our context because the two founding intellectuals of modern set theory, Richard Dedekind and Georg Cantor, each explained the meaning they respectively attributed to the concept of number classes, and to the mathematical idea of sets, with recourse to the continuum problem.

176 From the Latin prae, “before” + sumere, “to take,” meaning “the taking of something for granted,” as is attested in English from c. 1300. Online Etymology Dictionary, s.v. “presumptuous (adj.),” http://www.etymonline.com/index.php?term=presumptuous&allowed_in_frame=0.
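The difference between the two kinds of total distinguished above can also be stated in the arithmetic Cantor himself developed (a minimal illustration in standard transfinite notation, not a quotation from Cantor): sums of cardinals are indifferent to the order of their terms, sums of ordinals are not.

\[
1 + \aleph_0 = \aleph_0 + 1 = \aleph_0
\qquad \text{(cardinal addition)}
\]
\[
1 + \omega = \omega \;\neq\; \omega + 1
\qquad \text{(ordinal addition)}
\]

A total of cardinals behaves like a magnitude that simply is what it is; a total of ordinals is inseparable from the manner of ordering by which it is composed.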
First steps toward an idea of sets came with Dedekind’s research into algebraic number theory. “In the context of his work on algebraic number theory,” José Ferreirós recounts in his Stanford Encyclopedia article on the early beginnings of set theory, “Dedekind introduced an essentially set-theoretic viewpoint, defining fields and ideals of algebraic numbers. […] Considering the ring of integers in a given field of algebraic numbers, Dedekind defined certain subsets called ‘ideals’ and operated on these sets as new objects. This procedure was the key to his general approach to the topic […] Thus, many of the usual set-theoretic procedures of twentieth-century mathematics go back to his work.”177 We need not go into details here, but we can see at once that what inspired the controversial illustration of how sets are thought to establish a transfinite realm where they nest among each other, all comprehended and accommodated in the one exhaustive line of real numbers imagined as an infinitely capacious continuum, was an idea quite different in character. It was that of thinking about the integers as a ring rooted in a field of algebraic numbers. The character of this image is discrete, not continuous; discrete here meaning that the domain of the rational numbers that the integers establish is only partial and not coextensive with a “rationality” that would pertain to the “natural numbers” as such. In effect, numerical calculation is confronted by multiple “rationalities.” A rationality provides a common denominator, and the promise of “rationalization” is that it makes things measurable and comparable without qualifying them in subjective manners. We have seen earlier how the advent of algebraic geometries in topology threatened to thwart the salvational hope for one universal metrics applying to the singularity of nature (in distinction to having to achieve such a metrics by “mutual acculturation”). Dedekind’s algebraic number theory is complicit with topology in thwarting hopes for this salvational promise. Together with topology, algebraic number theory appears like the fall from what Cantor imagined as a paradise (recalling David Hilbert’s famous words that “no one shall drive us from the Paradise which Cantor has created for us”). With it, we not only have to deal with many geometries (in topology), but also with many arithmetics (in distinct algebras or, as Whitehead called them, systems of symbolic reasoning).

177 José Ferreirós, “The Early Development of Set Theory,” in Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (Winter 2012), http://plato.stanford.edu/archives/win2012/entries/settheory-early/.
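What it means to operate on ideals “as new objects” can be glimpsed in the standard textbook example (a sketch in textbook conventions, not Ferreirós’s or Dedekind’s own wording). In the ring $\mathbb{Z}[\sqrt{-5}]$, the number 6 factors into irreducible elements in two genuinely different ways:

\[
6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}),
\]

so unique factorization fails for elements. It is restored at the level of Dedekind’s ideals:

\[
(6) = \mathfrak{p}^2\, \mathfrak{q}\, \bar{\mathfrak{q}},
\qquad
\mathfrak{p} = (2,\, 1+\sqrt{-5}), \quad
\mathfrak{q} = (3,\, 1+\sqrt{-5}), \quad
\bar{\mathfrak{q}} = (3,\, 1-\sqrt{-5}),
\]

and this decomposition into prime ideals is unique. The “rationality” within which a number decomposes is thus not given absolutely; it depends on the discrete, symbolically prepared ground over which one operates.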
Discrete Intellection of Invariances vs. Measuring the Continuity Owed to Constant Values

We can see in Dedekind and Cantor the two archon minds in pursuit of philosophies respective to the two different kinds of completed infinities, one in terms of ordinals, one in terms of cardinals. The one in terms of ordinals, that whose total amount figures as an invariance and can be named only algebraically (i.e., in an encrypted manner, not “immediately” or “naturally”), is related to what in philosophy is called, since Leibniz, the assumption of an “actual” infinity. An actual continuum cannot be translated into the illustration of a continuous line; it can be expressed only in a discrete illustration like that of a ring whose consistency depends on a symbolically prepared ground, a specific field of algebraic numbers. For an actual infinity, the relation between problems and their solutions is not absolute (either solvable or not). Instead, the mathematical formulation of a problem forms, together with the identification of a particular domain over which the solutions ought to range (which we could call a particular rationality), a particular solution space. Such a solution space can be of diverse capacity, depending on the complexity and level of abstraction that informs and is considered in the formulation of a problem. On the other hand, if we assume an infinity completed in terms of cardinals, whose total amount can be expressed as an explicit value (i.e., as measured, with no need of being rendered denominable and decipherable by encryption), we need not deal with a diversity of competing solution spaces, each of which can be — and must be — discreted in numerous ways. Instead, we find ourselves in one real space that determines with quasi-material force which formulations of a problem are solvable and which are not.

Let us return now to Ferreirós’s portrait of the disputes between our two protagonists — let’s call them Dedekind the interpretant of the oracle’s voice (an algebraist computing how the reality of judgment can be postponed by “dis-cipherment” [Verrätselung]), and Cantor the constructivist prophet (a geometer of paradise). “In late 1873, came a surprising discovery that fully opened the realm of the transfinite. In correspondence with Dedekind, Cantor asked the question whether the infinite sets N of the natural numbers and R of real numbers can be placed in one-to-one correspondence. In reply, Dedekind offered a surprising proof that the set A of all algebraic numbers is denumerable (i.e., there is a one-to-one correspondence with N).”178 (A sketch of the counting argument behind Dedekind’s proof is given below.) Both were thinking about the set of all sets, the universal set. But Cantor wanted that set to be “real,” not “actual” (as Dedekind did), and he wanted the reality of this most powerful set to be coextensive, in cardinal terms, with the set of natural numbers. Dedekind, on the other hand, ascribing a discrete actuality to any one of the conceivable infinities (sets), also wanted this most powerful set to be coextensive with the set of natural numbers, but in the manner of ordinal values. For him, the universal set was the total of all sets of algebraic numbers; it was the power set of the actual infinities that algebraic number sets are capable of comprehending. Hence, he also called his universal set a totient, highlighting thereby its operational and ordinal character.

178 Ibid.
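The counting argument behind Dedekind’s proof, in its now-standard form (a reconstruction, not a quotation): every algebraic number is a root of a polynomial with integer coefficients, and these polynomials can be exhausted in finite stages. For $p(x) = a_n x^n + \dots + a_1 x + a_0$ with integer coefficients and $a_n \neq 0$, define the height

\[
h(p) = n + |a_0| + |a_1| + \dots + |a_n|.
\]

For each value of $h$ there are only finitely many such polynomials, and each polynomial of degree $n$ has at most $n$ roots. Listing the roots by increasing height (in any fixed order within each height) therefore enumerates all algebraic numbers; the set A is denumerable, i.e., in one-to-one correspondence with N.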
For Dedekind, the universal set comprehends “the totality of all things that can be objects of my thought.”179 Set theory for Dedekind is a means of discreting (from the German ermessen) the materiality of ideality, we could perhaps say. For Cantor, on the other hand, it is, inversely so, a means of measuring (from the German vermessen) the ideality of materiality. Philosophically speaking, we have very different kinds of humanisms here. Dedekind is famous for his saying: “We are of divine sex [Wir sind göttlichen Geschlechts] and without doubt possess creative power not merely in material things (railroads, telegraphs), but quite specially in intellectual things.”180 Cantor’s humanism, on the other hand, would insist vehemently that humankind does not share the same sex as the divine — or rather, that the divine is omnipotence without sex, whole and self-sufficient — while human beings do not possess creative powers at all. Humans are the Divine’s creatures, its toys or possessions. Human thought can only reproduce the reasoning of divine order; it cannot actively participate in it. This is different for Dedekind. For him, the intellect is coextensive with the universe (the universal set). For Cantor, the totality of the divine intellect’s creation — Nature, which is there only once — is coextensive with the universe (the universal set, the set of all sets). For Cantor, nature is vaster than what the human intellect can grasp; for Dedekind, the collective intellect of all that is of the divine sex is vaster than what actually manifests in physical form, and vaster than any individual can ever grasp. For him, mathematics is still the art of learning, not the technique of how we can know.

Some decades later, Kurt Gödel expressed in his own manner what seems to be the same problem: “Either mathematics is too big for the human mind,” he suggested, “or the human mind is more than a machine.”181 Surely what Gödel meant was a mechanical machine, not a thermodynamic or a quantum-physical (electronic) one; but that would be a discussion that leads us astray here. The sole point I wish to make by mentioning it is that the strength of our feelings for or against technics — whether we adore it and see in it a manifestation of natural truth, or whether we hate it and see in it mere manipulation and an impoverishing degeneration from natural truth — depends on how we think about the nature of number and the nature of intellect. So considered, the difference between the algebraist and the geometer is a theological one. That is why Cantor’s attempt to prove Dedekind wrong, by means of mathematics, makes such a strong point, but not about either one of them being objectively “right” or “wrong.” Ferreirós explains:

179 Richard Dedekind, Was sind und was sollen die Zahlen? (Braunschweig: Vieweg, 1888).
180 Richard Dedekind, “Brief an Weber,” in Gesammelte Mathematische Werke, vols. 1–3, ed. R. Fricke, E. Noether, and Ø. Ore (Braunschweig: Vieweg, 1930–32), 3:488–490; my translation.
181 Cited in David Bergamini, Mathematics (New York: The Life Science Library, 1963), 53.
“A few days later [after Dedekind sent him the proof that the set of all algebraic numbers is denumerable], Cantor was able to prove that the assumption that R is denumerable leads to a contradiction. To this end, he employed the Bolzano-Weierstrass principle of completeness mentioned above. Thus he had shown that there are more elements in R than in N or Q or A, in the precise sense that the cardinality of R is strictly greater than that of N.”182

182 Ferreirós, “Early Development of Set Theory.”
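Compressed into a few lines, and in a simplified modern variant rather than Cantor’s own construction of 1874, the argument runs as follows. Suppose $\mathbb{R} = \{x_1, x_2, x_3, \dots\}$ were an exhaustive enumeration. Choose closed intervals $I_1 \supset I_2 \supset I_3 \supset \dots$ with $x_n \notin I_n$ for every $n$ (split the current interval into three closed thirds; at least one of them avoids $x_n$). Completeness, in the Bolzano-Weierstrass sense, guarantees the nested intervals a common point:

\[
\eta \in \bigcap_{n=1}^{\infty} I_n
\quad\Longrightarrow\quad
\eta \neq x_n \ \text{for all } n,
\]

so no sequence exhausts R: its cardinality strictly exceeds that of N, and hence that of Q and A.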
The strong point of Cantor’s attempt to disqualify Dedekind concerns the power that abstractions hold over how one thinks about what it means to think. And this is a problem at the very heart of the idea that nature, in its uniqueness, can be purified against culture and the latter’s diversity.
image references

WHAT MAKES THE SELF-ORGANIZING MAP (SOM) SO PARTICULAR AMONG LEARNING ALGORITHMS? — TEUVO KOHONEN
p. 25: Teuvo Kohonen

I. ELEMENTS OF A DIGITAL ARCHITECTURE — LUDGER HOVESTADT
p. 31: The First Six Books of Euclid’s Elements, Oliver Byrne (1847), http://publicdomainreview.org/collections/the-first-six-books-of-the-elements-of-euclid-1847/ · p. 32: Jacques Ozanam, Konstruktion einer Strecke [construction of a line segment], http://commons.wikimedia.org/wiki/File:Fotothek_df_tg_0003359_Geometrie_%5E_Konstruktion_%5E_Strecke_%5E_Messinstrument.jpg · p. 36: Image source: Thomas Stanley, The History of Philosophy (1701), online: http://2.bp.blogspot.com/_uSDC36W8_wk/R6_KhJHMz5I/AAAAAAAAAIM/9XFaKdffsNY/s1600-h/textsecret227.jpg · p. 57.1: Laurence Shafe, Veduta di Paestum (2010), http://www.shafe.uk/home/art-history/classical_tradition/classical_tradition_slides_15-10-2003/veduta_di_paestum/ · p. 57.2: Image source: Filippo Coarelli, https://quadriformisratio.wordpress.com/2013/07/01/romaquadrata/ · p. 58: D. Herrmann, Satz des Pythagoras [Pythagorean theorem], http://de.wikipedia.org/wiki/Claudius_Ptolemäus#/media/File:Satz_ptolemaeus.png · p. 65.1: Basilica San Piero a Grado, Pisa, http://upload.wikimedia.org/wikipedia/commons/e/ee/Basilica_San_Piero_a_Grado,_Pisa,_interno.jpg · p. 65.2: St Mary vault, Lübeck, Germany, http://upload.wikimedia.org/wikipedia/commons/e/e8/Germany_Luebeck_St_Mary_vault.jpg · p. 67: Notre Dame de Paris (1345), http://upload.wikimedia.org/wikipedia/commons/7/7d/Notre-Dame-de-Paris_-_rosace_sud.jpg · p. 68: Image source: http://217.160.164.18/typo3-dom/index.php?id=17313 · p. 69.1: Ptolemy’s astrology, Giordano Ziletti (1575), http://www.hps.cam.ac.uk/starry/ptolemyastrologylrg.jpg · p. 69.2: Klaudios Ptolemaios, Codex Seragliensis (ca. 100), http://www.hs-augsburg.de/~harsch/graeca/Chronologia/S_post02/Ptolemaios/pto_gcod.html · p. 73: Image source: Pélerin (1505), http://www.webexhibits.org/sciartperspective/i/raphael5_diagram_small.jpg · p. 74: Image source: Basilica of Santa Maria Novella, http://static.panoramio.com/photos/large/27409205.jpg · p. 75: Image source: http://fotothek.biblhertz.it/bh/a/bhpd30167a.jpg · p. 80.1: Bramante, Santa Maria Presso San Satiro, Milano (1476), https://s3.amazonaws.com/classconnection/518/flashcards/6496518/jpg/21594-bramante_santa-maria-presso-san-satiro-14BE026A5D42AE3B612.jpg · p. 80.2: Image source: http://www.studfiles.ru/html/2706/5/html_ho9NnbmMAS.tYi5/htmlconvd-7FwvtX_html_m7fa42a9b.jpg · p. 81: Image source: Borromini, https://s3.amazonaws.com/classconnection/426/flashcards/6371426/png/screen_shot_2014-10-15_at_33903_am-14914CBA5AD4D36A4BC.png · p. 87: Isaac Newton, http://en.wikipedia.org/wiki/Isaac_Newton#/media/File:GodfreyKneller-IsaacNewton-1689.jpg · p. 88: Gottfried Wilhelm Leibniz, http://www.essentiallifeskills.net/images/leibniz.jpg · p. 89.1: Carl Friedrich Gauß, http://de.wikipedia.org/wiki/Carl_Friedrich_Gauß · p. 89.2: Triple right triangle on a sphere, http://www.math.cornell.edu/~mec/tripleright.jpg · p. 89.3: Venn diagram, http://numb3rs.wolfram.com/412/images/VennDiagram.gif · p. 90.1: Cotton mill, http://www.newstatesman.com/sites/default/files/styles/fullnode_image/public/blogs_2015/03/cotton_mill.jpg?itok=Ypi1byne · p. 90.2: Joseph Nicéphore Niépce, Point de vue du Gras (1826), http://img27.fansshare.com/pic121/w/nic--phore-ni--pce/1200/8523_view_from_the_window_at_le_gras_joseph_nicephore_niepce.jpg · p. 92.1: Richard Dedekind, image source: http://de.wikipedia.org/wiki/Richard_Dedekind · p. 92.2: Stereographic Projection in 3D, http://upload.wikimedia.org/wikipedia/commons/8/85/Stereographic_projection_in_3D.png · p. 92.2: Symmetry Group, http://westongeometry.pbworks.com/f/1364870853/symmetry-group.jpg · p. 93.1: La rotonde de la Villette, Paris (1788), http://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Barrière_Saint-Martin.jpg/280px-Barrière_Saint-Martin.jpg · p. 93.2: Opera House Paris, cross-section model, https://baketravelandrun.files.wordpress.com/2010/12/img_4273.jpg · p. 109: Le Corbusier, Villa Savoye in Poissy, France (1931), http://webs.ono.com/chiisita/Villa_Savo.jpg

III. PRE-SPECIFIC MODELING: COMPUTATIONAL MACHINES IN A COEXISTENCE WITH CONCRETE UNIVERSALS AND DATA STREAMS — VAHID MOOSAVI
Fig. 1: Vahid Moosavi · Fig. 2: Vahid Moosavi · Fig. 3 (left): Image source: Walter Christaller, Die Zentralen Orte in Süddeutschland (Jena: Gustav Fischer, 1933), here from: http://purestform.tumblr.com/post/3460146694/central-place-theory-walter-christaller-via · Fig. 3 (right): Image source: http://www.spacesyntax.com · Fig. 4: Image source: Francis Rebaudo et al., “Agent-Based Modeling of Human-Induced Spread of Invasive Species in Agricultural Landscapes: Insights from the Potato Moth in Ecuador,”
Journal of Artificial Societies and Social Simulation 14, no. 3 (2011): 7. · Fig. 5: Image source: R. Marñon et al., “The Dynamics of Circular Migration in Southern Europe: An Example of Social Innovation,” ESD Working Paper Series, vol. 1 (2011), online: http://www.rafamara.com/blog/wp-content/uploads/2010/12/system-dynamics-Circular-Migration.png · Fig. 6: Image source: http://2.bp.blogspot.com/-8QCW6OvefwI/Td04LHeRBTI/AAAAAAAACiY/vEpb8iQlfKY/s1600/red+circle.jpg · Fig. 7: Image source: http://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif · Fig. 8: Image source: http://commons.wikimedia.org/wiki/File:Dedekind_cut_sqrt_2.svg · Fig. 9: Vahid Moosavi · Fig. 10: Vahid Moosavi · Fig. 11: Image source: http://commons.wikimedia.org/wiki/File:Rational_Representation.svg · Fig. 12: Image source: http://commons.wikimedia.org/wiki/File:Dedekind_cut_sqrt_2.svg · Fig. 13: Vahid Moosavi · Fig. 14: Vahid Moosavi · Fig. 15: Vahid Moosavi · Fig. 16: Vahid Moosavi · Fig. 17: Vahid Moosavi · Fig. 18: Vahid Moosavi · Fig. 19: Image source: L. Ermann et al., “Towards Two-Dimensional Search Engines” (2011), arXiv:1106.6215 · Fig. 20: Vahid Moosavi · Fig. 21: Vahid Moosavi · Fig. 22: Vahid Moosavi · Fig. 23: Image source: James Hughes et al., “Quantification of Artistic Style Through Sparse Coding Analysis in the Drawings of Pieter Bruegel the Elder,” Proceedings of the National Academy of Sciences 107, no. 4 (2010): 1279–83. · Fig. 24: Vahid Moosavi

IV. SOM. SELF. ORGANIZED. — ANDRÉ SKUPIN
p. 170: André Skupin

VIII. GICA: GROUNDED INTERSUBJECTIVE CONCEPT ANALYSIS. A METHOD FOR IMPROVED RESEARCH, COMMUNICATION, AND PARTICIPATION — TIMO HONKELA
Fig. 1–18: Timo Honkela et al.
SERIES EDITORS Prof. Dr. Ludger Hovestadt Chair for Computer Aided Architectural Design (CAAD), Institute for Technology in Architecture (ITA), Swiss Federal Institute of Technology (ETH), Zurich, Switzerland Dr. phil. Vera Bühlmann Laboratory for Applied Virtuality at the Chair for Computer Aided Architectural Design (CAAD), Institute for Technology in Architecture (ITA), Swiss Federal Institute of Technology (ETH), Zurich, Switzerland Layout and Cover Design: onlab, D-Berlin, www.onlab.ch Copyediting and Proofreading: Max Bach, Sophie Chapple, Leah Whitman-Salkin Printing and binding: Strauss GmbH, D-Mörlenbach Typeface: Korpus, binnenland (www.binnenland.ch)
Library of Congress Cataloging-in-Publication data A CIP catalog record for this book has been applied for at the Library of Congress. Bibliographic information published by the German National Library The German National Library lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de. This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in databases. For any kind of use, permission of the copyright owner must be obtained. This publication is also available as an e-book (ISBN PDF 978-3-0356-0639-3; ISBN EPUB 978-3-0356-0649-2).
© 2015 Birkhäuser Verlag GmbH, Basel P.O. Box 44, 4009 Basel, Switzerland Part of Walter de Gruyter GmbH, Berlin / Boston Printed on acid-free paper produced from chlorine-free pulp. TCF ∞ Printed in Germany
ISSN 2196-3118 ISBN 978-3-0356-0627-0 987654321 www.birkhauser.com