“Behavior and Culture in One Dimension provides an engaging, highly readable exploration of the foundational role of one-dimensional patterns or ‘sequences’ in the origin and evolution of complex systems on earth, among them, living systems. These one-dimensional sequences (including, for example, RNA, DNA, linguistic sequences, computer code) serve to organize, harness, and control (three-dimensional) physical systems so that the systems exhibit functional, organized activity at larger spatiotemporal scales including ecologically relevant scales. The book should attract a variety of readerships including popular science readers and students of evolution, ecological science, and language. For language scholars, such as myself, it offers a unique, persuasive, and impactful perspective on the kind of thing that language is, offering valuable insight on how language use in the world (written as well as spoken) can do the work it does. I highly recommend this book.”
Carol A. Fowler, Haskins Laboratories, U.S.A.

“Over the past 60 years Howard Pattee proposed foundational ideas for understanding the nature of life. With spectacular clarity and force his former student Dennis Waters examines and extends Pattee’s work to produce a vibrant framework for thinking how the physical/biological world constructs life itself. Get ready to think and then think again. This book is true scholarship in its finest form.”
Michael Gazzaniga, SAGE Center for the Study of Mind, University of California, U.S.A.

“Dennis Waters’ Behavior and Culture in One Dimension explores the implications of a deceptively simple idea—the concept of a sequence—and shows how much of the complexity of the biological and human world is dependent on it. With DNA at one end of his account and written language at the other, he shows how sequences have played midwife to the emergence of complex life and human civilization.”
Terrence Deacon, University of California, U.S.A.
“Behavior and Culture in One Dimension pursues the bold and intriguing claim that DNA, language, and computer code are not simply metaphorical allies. Waters builds the case that systems of linear sequences have properties in common that allow them to constrain activity in three dimensions. He’s after a universal organizing principle that is independent of the embodiment of the sequence—human language, animal communication, behavior by parasites, bacteria, and civilizations are all in his sights. The neglect of language has long been seen by cognitive science as the Achilles heel of ecological psychology. An approach to language that respects the ecological emphasis on natural law is sorely needed and that is very much what Waters provides.” Claudia Carello, University of Connecticut, U.S.A.
BEHAVIOR AND CULTURE IN ONE DIMENSION
Behavior and Culture in One Dimension adopts a broad interdisciplinary approach, presenting a unified theory of sequences and their functions and an overview of how they underpin the evolution of complexity. Sequences of DNA guide the functioning of the living world, sequences of speech and writing choreograph the intricacies of human culture, and sequences of code oversee the operation of our literate technological civilization. These linear patterns function under their own rules, which have never been fully explored. It is time for them to get their due.

This book explores the one-dimensional sequences that orchestrate the structure and behavior of our three-dimensional habitat. Using Gibsonian concepts of perception, action, and affordances, as well as the works of Howard Pattee, the book examines the role of sequences in the human behavioral and cultural world of speech, writing, and mathematics. The book offers a Darwinian framework for understanding human cultural evolution and locates the two major informational transitions in the origins of life and civilization. It will be of interest to students and researchers in ecological psychology, linguistics, cognitive science, and the social and biological sciences.

Dennis P. Waters received his Ph.D. from Binghamton University in 1990. He became a publishing entrepreneur, founding technical news services like GenomeWeb.com. After retiring, Waters resumed his Ph.D. research into how one-dimensional patterns of DNA, language, and code guide the three-dimensional world. He is a visiting scientist at Rutgers University.
Resources for Ecological Psychology
A Series of Volumes Edited by Jeffrey B. Wagman & Julia J. C. Blau
[Robert E. Shaw, William M. Mace, and Michael Turvey, Series Editors Emeriti]
Global Perspectives on the Ecology of Human-Machine Systems (Volume 1) Flach/Hancock/Caird/Vicente
Local Applications of the Ecological Approach to Human-Machine Systems (Volume 2) Hancock/Flach/Caird/Vicente
Dexterity and Its Development Bernstein/Latash/Turvey
Ecological Psychology in Context: James Gibson, Roger Barker, and the Legacy of William James’s Radical Empiricism Heft
Perception as Information Detection: Reflections on Gibson’s Ecological Approach to Visual Perception Wagman/Blau
A Meaning Processing Approach to Cognition: What Matters? Flach/Voorhorst
Behavior and Culture in One Dimension: Sequences, Affordances, and the Evolution of Complexity Waters

For more information about this series, please visit: https://www.routledge.com/Resources-for-Ecological-Psychology-Series/book-series/REPS
BEHAVIOR AND CULTURE IN ONE DIMENSION Sequences, Affordances, and the Evolution of Complexity
Dennis P. Waters
First published 2021
by Routledge
52 Vanderbilt Avenue, New York, NY 10017
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 Dennis P. Waters

The right of Dennis P. Waters to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested

ISBN: 978-0-367-70839-9 (hbk)
ISBN: 978-0-367-70329-5 (pbk)
ISBN: 978-1-003-14821-0 (ebk)

Typeset in Bembo by Apex CoVantage, LLC
CONTENTS

Preface xi

Introduction: Sequences, Sequences, and Sequences 1
0.1 The World in One Dimension 1
0.2 The Linguistic Model in Biology 3
0.3 Shoulders to Stand On 5
0.4 What to Expect From This Book 7

1 The Problem of Sequentialization 12
1.1 Our World as Sequences See It 12
1.2 Persistence of One-Dimensional Patterns 14
1.3 Sequences, In and Out of Time 15
1.4 Sequences are History 18
1.5 Collapsing Three Dimensions to One 21
1.6 Low-Energy Sequences 23
1.7 Matter Matters, Except When It Doesn’t 25
1.8 Laws are Not Rules, and Vice Versa 27
1.9 Sequential and Self-Referential 30
1.10 Sequences Made Explicit 33

2 The Emergence of Constraint 39
2.1 Sequences at the Dinner Table 39
2.2 Meet the Constraint 40
2.3 Alternatives and Decisions 43
2.4 Classification, and Then Reclassification 45
2.5 Sequences are Invisible Boundaries 47
2.6 Constraining Individuals in a Collection 49
2.7 Levels of Description and Control 51
2.8 Loosely Coupled and Nearly Decomposable 54
2.9 Hierarchical Patterns in Sequences 56

3 The Grammar of Interaction 60
3.1 Interactions, Ordinary and Specific 60
3.2 The Folding Transformation 62
3.3 How the Improbable Becomes Probable 65
3.4 Molecular Pattern Recognition 67
3.5 Enzymes, Robots, and Perception 69
3.6 All That the Environment Affords 71
3.7 Affordances Great and Small 73
3.8 The Eye of the Enzyme 76
3.9 Genes Go to Pieces 79

4 The Grammar of Extension 87
4.1 Where Does the Body End (and the World Begin)? 87
4.2 Puppet Masters and Their Puppets 89
4.3 The Affordances of Extension 91
4.4 Social Constraints and Affordances 93
4.5 Situating the Grammar of Interaction 95
4.6 How Does Language Acquire Children? 99
4.7 The Grammar of Extension 102
4.8 The Extended Genome 105

5 The Grammar of Abstraction 111
5.1 Freedom’s Just Another Word 111
5.2 Shape Shifting Constraints 114
5.3 Blue- and White-Collar Sequences 117
5.4 When Is a Sequence Not a Sequence? 120
5.5 Who Constrains the Constraints? 123
5.6 The Grounding of Abstraction 126

6 The Conundrum of Replication 133
6.1 What Is Self-Replication? 133
6.2 Von Neumann and the Problem of Complexity 135
6.3 Turing’s Pristine Sequences 137
6.4 Enter the Self-Reproducing Automaton 139
6.5 Theory vs. Reality in Self-Reproduction 142
6.6 Lessons Learned From von Neumann 144
6.7 Description, Construction, and the Central Dogma 146
6.8 The Central Dogma of Civilization? 148

7 The Threshold of Complication 155
7.1 What Makes a Threshold? 155
7.2 And Before DNA There Was? 156
7.3 Living by RNA Alone 158
7.4 Entangling the Central Dogma 161
7.5 A Civilized Threshold 163
7.6 Get It in Writing 165
7.7 Stability, Random Access, and Self-Reference 167
7.8 What Does the Thermostat Say? 171
7.9 Hand Me the Pliers 174
7.10 Complication Great and Small 176

8 The Institution of Sequences 184
8.1 Of Constraints and Chimeras 184
8.2 The Fallibility of Constraints 188
8.3 A Recipe for Learning 191
8.4 Configured by Memes 194
8.5 The Microbial Sharing Economy 196
8.6 A Network Lovely as a Tree 198
8.7 Constraints, Institutionalized 201
8.8 The State of Sequences 203

9 The Continuum of Abstraction 212
9.1 Universal at Both Ends 212
9.2 The Universality of One-Dimensional Patterns 216
9.3 Feeling Special? 218

Appendix: Just Enough Molecular Biology 221
A.1 Why Mess With Molecules? 221
A.2 One-Dimensional Patterns in Biopolymers 221
A.3 DNA Replication: Complementary Base-Pairing 223
A.4 Transcription: DNA to Messenger RNA (mRNA) 225
A.5 The Genetic Code: Mapping RNA to Amino Acids 226
A.6 How Translation Happens: Transfer RNA (tRNA) 228
A.7 Constructing the Finished Protein: Ribosomes 229

References 231
Index 259
PREFACE
It’s not uncommon for new Ph.D. graduates to turn their dissertations into books, but very few wait three decades to do it. In my case I think the wait was justified. When I graduated in 1990, many of the fields that inform Behavior and Culture in One Dimension were in their infancy. Steven Pinker and Paul Bloom were just planting the flag for the revitalized study of language evolution.1 Richard Dawkins and Jared Diamond had just organized the first conference comparing evolution in biology and the social sciences.2 The Human Behavior & Evolution Society was less than two years old. High-throughput genomics technology was about to reveal the complexity of how sequences of DNA orchestrate the living world. Hardly anyone had heard of bioinformatics or affordances or the RNA world. All of this subsequent research has made for a better book.

I never finished high school and dropped out of college to pursue a career in media, which led me to New York, where a chance visit to a newsstand introduced me to the work of Howard Pattee at what is now Binghamton University. I paid him a call, which convinced me at age 30 to drop what I was doing (or not doing) and move to upstate New York to resume my education. While in the Ph.D. program at Binghamton I started a marriage, a family, and a small technical publishing business. These have kept me busy in the intervening decades. Howard has remained a friend and mentor throughout.

This leaves me with 30+ years’ worth of thank-yous, which I shall try to compress into a small space. The biggest go to the three individuals who read every word of early drafts, sometimes more than once, and encouraged me to continue. They are Howard Pattee, Carol Fowler, and Laura Waters, my wife. Others have read and commented on various sections. They are Laura Hyatt, Tom Neubert, Bernadette Toner, Joel Pisetzner, and Hannah Waters, my daughter.
For many years of helpful conversation, I thank Claudia Carello, Michael Turvey, Bill Mace, and Bert Hodges of the Center for the Ecological Study of Perception and Action at the University of Connecticut. Likewise, to the Biosemiotics and Distributed Language communities, especially Joanna Rączaszek-Leonardi, Stephen Cowley, Sune Steffensen, and Marcello Barbieri.

For answering questions and other helpful interactions I thank Thomas Hackl, Robert Boyd, David Hughes, Edward Holmes, Morten Christiansen, Penny Chisholm, Sarah Tishkoff, Christopher Chyba, Bernd-Olaf Küppers, Dick Lipton, Charles Bennett, and David Gifford. For early career advice, thanks to Ted Cloak, the late David Rindos, the late Donald Campbell, and Steve Straight. And as always, warmest appreciation for my Pattee Ph.D. program colleagues from long ago: Michael Kelly, Peter Cariani, and Eric Minch.

For logistical support, I am grateful to Richard Hunter, Lena Struwe, Paul Schindel, and Masha Finn. And for unwavering belief in my entrepreneurial instincts, which have given me the freedom to pursue this research, I thank Michael Ridder. My friend and agent Susan Cohen has tolerated hours of head-scratching conversation about what to do with a book like this. Thanks to Carol Fowler, Jeffrey Wagman, and Julia Blau for suggesting the book for the Resources for Ecological Psychology series, and to the team at Routledge: Alex Howard, Cloe Holland, Lucy Kennedy, and Ceri McLardy.

My family has been loving and supportive through it all. My wife Laura (for our entire marriage) and my offspring Hannah, Jacob, Emily, and Jonah (for their entire lives) have heard me muttering on and on about sequences and constraints. Their patience is a marvel of the world.

The spark for this work undoubtedly came from my grandfather, Chester C. Waters, who died when I was seven years old, and of whom I have only the vaguest memory. Nonetheless, when I was a young teenager I received part of his library, which included most of the foundational works of the General Semantics movement, plus many volumes of their ETC. journal from the 1940s and 1950s.
I never became a card-carrying general semanticist,3 but reading this material at a young age got me curious about how language works—I mean how it really works—and curious I have remained.

As I write this, a modest sequence of 29,903 bases of RNA known as SARS-CoV-2 has spread rapidly around the world, overriding in its wake many of the sequences that govern the everyday functioning of our civilization. A central theme of this book is that certain sequences have the power to reclassify other sequences, sometimes to momentous effect. SARS-CoV-2 is an example I wish I did not have to offer.
Notes
1. Pinker & Bloom 1990
2. Gibbons 1990
3. OK, to be honest I did have lunch with Neil Postman once and did have correspondence published in ETC (Waters 1979).
INTRODUCTION Sequences, Sequences, and Sequences1
0.1 The World in One Dimension

The recent discovery of thousands of planets orbiting stars in our galaxy points to the likelihood of billions and billions of such planets across the universe. The odds are getting better that Earth is not the only planet that is physically and chemically capable of supporting something like life as we know it. But should we find “life” elsewhere in the universe, how would we know it? How would we conclude that some bag of chemicals is actually alive?

This book will argue that the answer is to be found in one-dimensional patterns, the linear arrangements that we call sequences. On Earth, sequences of DNA guide the functioning of the living world, sequences of speech and writing choreograph the intricacies of human culture, and sequences of code oversee the operation of our literate technological civilization. The persistence and diversity of life and civilization are made possible by one-dimensional patterns orchestrating three-dimensional activity. Physics and chemistry play crucial roles as well, but sequences are something special. They operate according to their own rules, and those rules have never been fully explored. It is time for them to get their due.

Back in the day, of course, our Earth was like all those other planets, capable of supporting life but not yet doing so. It was a stark place. Volcanoes erupted. Continents collided and mountains emerged. Water flowed and winds blew. Sediments deposited and oceans evaporated. As with every planet we know about, the behavior of the prebiotic Earth could be described and explained by the primeval processes of physics and chemistry following their universal and inexorable laws. Nothing to see here, move on.

What changed, then? When the first sequences arrived on Earth, probably in the form of RNA molecules, forces of nature like gravity and electromagnetism
continued without interruption, and matter and energy were neither created nor destroyed. So, in one sense nothing changed. But sequences did guide matter and energy to become organized into things like cells and cities and computers, harnessing and channeling the forces of nature to maintain that organization and elaborate upon it. And in that sense, everything changed.

Which brings us to today. As literate humans, you and I occupy a complex habitat overseen by three kinds of sequences. One-dimensional molecular arrangements of RNA, DNA, and protein enable the diversity of the living world, sequences of sounds in speech and characters in text allow human culture to flourish, and patterns of zeros and ones in computer code make our high-tech civilization possible. Geophysical events like eruptions, hurricanes, earthquakes, tsunamis, landslides, and floods are still with us, but they are not what makes our planet interesting. Most of the time we attend to a world colonized and orchestrated by sequences.

What, then, are these sequences that have profoundly changed Earth’s environment? How did linear patterns of tiny molecules allow life to emerge from a background of ordinary physics and chemistry? How did sequences of vibrations in the air, marks on paper, and voltages in silicon allow civilization to emerge from a background of Tennyson’s “Nature, red in tooth and claw?” These are the questions that animate this book.
Fundamental to advancing our understanding is a simple but profound question asked by biophysicist Howard Pattee: “How does symbolic information actually get control of physical systems?”2 It is easy to envision a one-dimensional arrangement of molecules or letters or zeros and ones, but it is not so easy to envision how these patterns actually get control of the physical world, how they guide matter to behave one way rather than another.3 A fertilized egg packs into a one-dimensional pattern of DNA much of the three-dimensional information needed to make a chicken. How is it possible to get three dimensions of chicken out of a single dimension of genome? Further, the sequences of instructions that you need to assemble a bookcase are made of the physical substances of ink and paper, but no detailed study of the ink and paper will help you understand the meaning of the sequences. Something more is needed for the symbolic information (the instructions) to actually get control of a physical system (you).

On Earth or any other planet, we have learned that life does not require new laws of physics or chemistry, just some special, idiosyncratic infrastructure. In essence we start with physics and chemistry and we get biology for free. Further, human culture does not require new biological principles, just more special but different infrastructure. We start with biology and get civilization for free. In each case the special infrastructure that makes everything possible is built around the one-dimensional arrangements we call sequences.

The goal of this book is twofold. First, I would like to persuade you that sequences are indeed different and worthy of study in their own right. To
create a unified account, I will show that sequences of DNA, language, and code have at least as much in common with one another as they do with the ordinary world of physics and chemistry. Second, I will offer a preliminary sketch of how they do what they do, their rules of operation. This will require an interdisciplinary march through physics, molecular biology, linguistics, anthropology, computer science, and other related disciplines. I have found this march exhilarating and I hope you will too.
0.2 The Linguistic Model in Biology

That different kinds of sequences share common properties is not an idea new to science, but it is an idea that has never been fully developed by scientists. It has had its greatest impact through a powerful metaphor which has animated the field of molecular genetics from its inception in the middle of the 20th century. Molecular biologist and Nobel laureate François Jacob calls this metaphor the linguistic model in biology, the recognition that the linear pattern of the DNA molecule is like a text, with an alphabet, punctuation, a one-dimensional layout, and an internal grammar.4 “One measures the value of a model by its operational efficiency,” Jacob says, “and in this sense, the linguistic model has played a long role in recent progress in genetics.” This is evident, writes cognitive scientist Steven Pinker, “in the reliance of genetics on a vocabulary borrowed from linguistics. DNA sequences are said to contain letters and punctuation, may be palindromic, meaningless, or synonymous, are transcribed and translated, and are even stored in libraries.”5

There exists a substantial but scattered literature in which biologists, linguists, and social scientists have pointed out and commented upon the linguistic model in biology.6 However, to date no one has produced a unified account, one that identifies and explains the organizing principles of systems of sequences. Here are a few samples of what researchers have said:

C. H. Waddington (evolutionary biologist, 1960): “The analogy often made between the DNA chain and writing can be used in reverse; and language can function as we are used to thinking that DNA does.”

John Platt (biophysicist, 1962): “The analogy between genetic information-transfer and a complex instruction-manual would seem to give us a coherent and fairly accurate schema for relating various kinds of phenomena.”

Clifford Geertz (anthropologist, 1966): “Though the sort of information and the mode of its transmission are vastly different in the two cases, this comparison of gene and symbol is more than a strained analogy of the familiar ‘social heredity’ sort. It is actually a substantial relationship.”

Roger Masters (social scientist, 1970): “One means of isolating the underlying similarities between the genetic code and speech, without erroneously
overstating their identity, is to specify the broad class of systems to which both human and DNA languages belong.”

Roman Jakobson and Linda Waugh (linguists, 1979): “Obviously we are not yet in a position to explain this salient correspondence, as long as for linguists the origin of language and, similarly, for geneticists the genesis of life remain unsolvable problems.”

John Maynard Smith (evolutionary biologist, 2000): “Linguists would argue that only a symbolic language can convey an indefinitely large number of meanings. I think that it is the symbolic nature of molecular biology that makes possible an indefinitely large number of biological forms.”

Wolfgang Raible (linguist, 2001): “Both systems have to cope with the same fundamental problem: how can a one-dimensional medium transmit the information required for the construction of three- (or even more-) dimensional entities?”

None of these scholars has taken what to me is the obvious next step, as Masters puts it, to “specify the broad class of systems to which both human and DNA languages belong” (emphasis mine).7 Before we can tackle this, however, we must first acknowledge that such a broad class of systems exists, that sequences of DNA in the cell and sequences of text in human culture are not two different things. Rather, they are two different examples of one sort of thing: a system of sequences. As Jacob writes, “two hitherto separate observations can be viewed from a new angle and seen to represent nothing but different facets of one phenomenon.”8 For sequences, this book is that new angle.

The linguistic model in biology is not the only metaphor out there, however.
DNA sequences have been usefully compared not only to language but also to code, to the sequences of zeros and ones in Turing Machines, computer software, and information storage.9

David Baltimore (biologist, Nobel laureate, 1984): “DNA could easily be seen as the embodiment of principles first realized in the late 1930s by Alan Turing and enshrined in the Turing machine, which responds to signals encoded on a tape.”

Charles Bennett and Rolf Landauer (physicists, 1985): “A single strand of DNA is much like the tape of a Turing machine.”

David Gifford (computer scientist, 1994): “As we begin to understand how biological systems compute, we will identify a naturally occurring universal computational system.”

David Searls (computational biologist, 1997): “Biological transformations can be seen as analogous at a number of levels to mechanisms of processing other kinds of languages, such as natural languages and computer languages.”

Nick Goldman (computational biologist, 2013) and colleagues: “DNA [is] an attractive target for information storage because of its capacity for
high-density information encoding, longevity under easily achieved conditions and proven track record as an information bearer.”

I don’t wish to waste time arguing whether DNA is or is not really a language or a cell is or is not really a computer. Rather, I think it will be more productive to look at genes, human language, and computer code for what they unquestionably are: sequences of interchangeable physical elements whose linear arrangements somehow manage to orchestrate the three-dimensional behavior of objects in the world. We can readily see their differences, but what properties and behaviors do they have in common? How can we produce a unified account of their function and evolution? What is the new angle?
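The vocabulary genetics borrows from language and computing — transcription, translation, a tape read symbol by symbol — can be made concrete in a few lines of code. The following toy sketch (mine, not the author's) treats a DNA coding strand as a one-dimensional string and "reads" it the way the cellular machinery does; the miniature codon table is a small but accurate subset of the standard genetic code, and the function names are illustrative choices only.

```python
# Toy illustration: DNA as a one-dimensional symbol string, read
# sequentially like the tape of a Turing machine. The table below is a
# small, real subset of the standard genetic code (mRNA codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def transcribe(dna):
    """Transcription: rewrite the DNA coding strand as mRNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(mrna):
    """Translation: read the mRNA three symbols at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "Stop":
            break
        protein.append(residue)
    return protein
```

For example, the coding strand ATGTTTGGCAAATAA transcribes to AUGUUUGGCAAAUAA and translates to Met-Phe-Gly-Lys: a one-dimensional pattern specifying, step by step, the parts of a three-dimensional molecule.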
0.3 Shoulders to Stand On

This book relies upon the work of researchers in many fields, but a small group are key influences worthy of specific introduction. One companion throughout will be Howard Pattee, not least because he was my Ph.D. advisor.10 Pattee trained as a physicist at Stanford University and his early career was focused on x-ray microscopy.11 In the late 1950s, his research interests expanded into biophysics and theoretical biology, and he helped to found the biophysics program at Stanford. Since then Pattee has been working to clarify the question of how living systems differ from the physical world from which they arose, how, as he says, molecules become messages.12 He has assembled a kit of conceptual tools for answering this question by focusing on the simplest case: the role of sequences in guiding the operation of the cell.13 This book exists because I found his toolkit to be equally useful in explaining how sequences have come to orchestrate human behavior and choreograph our literate technological civilization.

In Chapter 1 I open Pattee’s toolkit to demonstrate how the behavior of sequences differs from the behavior of the physical world: time, space, matter, and energy. In Chapter 2 you will see how an idea he borrowed from classical physics—the concept of constraint—helps to explain the operation of sequences, and how constraints are organized into control hierarchies. In Chapter 3 I focus on Pattee’s view that the enzyme molecule is the simplest case of interaction, of pattern recognition and behavioral response in the living world.

Among other biologists, Richard Dawkins, Francis Crick, Sydney Brenner, Jacques Monod, and Carl Woese are important influences. In his classic book The Selfish Gene, Dawkins provides the first gene’s-eye view of natural selection,14 which I hope to generalize into a sequence’s-eye view of life and civilization.
His book The Extended Phenotype provides a framework for thinking about the grammar of language and human behavior in Chapter 4.15 Dawkins also brings us the meme, which will be taken up in Chapter 8.

Besides winning the Nobel Prize for figuring out the structure of DNA, Crick thought deeply about the role of sequential patterns in living systems.
He will appear at many points, and his Central Dogma of molecular biology will play an important role in Chapter 6.16 Brenner, another Nobel laureate, also contributes several important concepts, especially underdetermination in biological systems, what he calls don’t-care conditions. Woese’s and Nobel laureate Monod’s observations on the organization of the living world can be found throughout.

Two psychologists, James Gibson and David Olson, are worth special mention. Gibson’s book The Ecological Approach to Visual Perception and its concept of affordance are introduced in Chapter 3, and the affordance turns out to be the key to explaining how grammar works; language is a constraint on perception and behavior.17 Olson’s book The World on Paper and his other studies of literacy are central to Chapter 7’s discussion of the evolution of complexity,18 as are anthropologist Jack Goody and his books like The Logic of Writing and the Organization of Society.19

The works of two computer science polymaths, John von Neumann and Nobel laureate Herbert Simon, also play a valuable supporting role. Von Neumann’s Theory of Self-Reproducing Automata is at the heart of Chapter 620 and Simon’s work on hierarchies and complex systems permeates the book, and stands out in Chapter 2.21

Among philosophers, David Hull stands out with the essential concept of the interactor in Chapter 3 and Kim Sterelny, another philosopher of biology, provides us with a salient definition of interaction. John Searle’s book The Construction of Social Reality helps to explain abstraction and regulation in Chapters 5 and 8.22 Linguists Morten Christiansen and Nick Chater make important contributions to the discussion of language evolution in Chapter 4, as does neuroanthropologist Terrence Deacon. Linguist Charles Hockett’s design features of language come up throughout the book as we try to make sense of the differences between the rules that govern sequences and the laws that govern the physical world.
Other fields have too many researchers to call out by name, but the book would not have been possible without the many contributors to the fields of cultural evolution, genomics, astrobiology, bioinformatics, parasitology, the RNA world, economics, and animal communication.

Looking ahead, Chapter 1 shows how sequences differ from the physical world and why they need to be studied in their own right. Chapter 2 introduces constraints, which are how sequences actually get control of physical systems, and hierarchies, both physical and functional. Chapters 3, 4, and 5 show how complex behaviors can be governed by sequences through the internal organization of the sequences themselves, what we call grammars. Chapter 3 introduces the grammar of interaction, Chapter 4 the grammar of extension, and Chapter 5 the grammar of abstraction. Chapter 6 takes on the question of replication, showing that copying sequences and expressing sequences logically must be distinct processes. Chapter 7 tackles
the evolution of complexity, showing how sequences guided both the emergence of life and the emergence of civilization. Chapter 8 demonstrates how sequences constrain populations and institutions, and Chapter 9 offers a simple model for visualizing the emergence of abstraction.
0.4 What to Expect From This Book Behavior and Culture in One Dimension is interdisciplinary, and it makes claims, especially about human language, that many readers will find provocative. It also omits topics that some readers might reasonably expect. Thus, I would like to give you a sense of what to expect before you get started. You may not agree with my decisions, but at least they will be explicit. Let’s address in turn interdisciplinary research, provocative claims, and deliberate omissions. Interdisciplinary research is risky business. It entails importing technical concepts from many specialized fields and then tying them together, often metaphorically. Settling on the right level of detail is tricky. How much molecular biology is necessary to make a point? How much is sufficient to satisfy relevant experts that I have done my homework? A psychologist might be put off by more molecular biology than is needed, while a molecular biologist might be put off by omission of the nuances of the field. In this, the book can be at once too scholarly for some and not scholarly enough for others. This challenge is baked into all interdisciplinary research, and the more interdisciplinary the research, the more prominent the challenge. You can judge for yourself whether I have struck the right balance. If you need a quick review of the basics of molecular biology, visit the Appendix. The book is animated by one big, provocative claim, one grand unified theory, that sequences of DNA in the cell and sequences of text in human culture are not two different things. Rather, they are two different examples of one sort of thing. But what sort of thing is that? How can we characterize this thing independently of its two exemplars? Answering these questions, unpacking the details of this one big claim, requires further subsidiary claims, which in turn can branch into even more claims. 
In navigating this tree of argument, some readers may accept the big claim while reserving judgment on the rest. Others may be intrigued by a specific claim that relates to their field, while still thinking that the overall project is flawed. I hope you will find this book sufficiently laden with new angles and novel arguments in many fields that you will encounter a few things to latch onto. As noted earlier, the obvious parallels between genetics and linguistics date to the earliest years of molecular biology in the 1950s. Still, some scientists may consider this all to be just a metaphor. To these I would reply that the role of metaphor in discovery has been extensively studied by philosophers of science and found to be of great value.23 Other scientists may consider Jacob’s linguistic model in biology to be valid, but still a tired metaphor, trivially true but conceptually empty. Indeed, it could reasonably be asked why, if several generations
of researchers have noted this metaphor, no one has taken it much beyond the “isn’t this interesting?” stage. Why the question has remained underdeveloped for so long I do not know, but I can speculate. How sequences do what they do, how one-dimensional patterns of symbols actually get control of physical systems, is understood better at some levels than at others. At the highest level, that of human engineering, we can explain in physical terms how sequences of zeros and ones govern the behavior of computers and robots. At the lowest level, in the cell, we can explain in biochemical terms how sequences of DNA are transcribed and translated into three-dimensional proteins that guide metabolism and build cellular structure. But extrapolating from this concrete knowledge into the vast sequential world in between, the human cultural world of speech, writing, and mathematics, is challenging. I can readily see how fixating on this question could be career-limiting for almost any researcher in almost any discipline. As David Hull writes, “officially we are all supposed to value interdisciplinary research, but in reality just about every feature of academia frustrates genuinely interdisciplinary work. Those of us who are engaged in it are the last hired and the first fired.” 24 As to the question of deliberate omissions, to avoid making Behavior and Culture in One Dimension much longer and more technical than it already is, I have chosen to sidestep nuance, ambiguity, and controversy unless it is germane to the point being made. This is not the place to relitigate the intramural disputes of perceptual psychology, linguistics, evolutionary theory, cognitive science, anthropology, astrobiology, and the rest. In building my argument, I do not want to befuddle non-specialists by hashing through these abundant controversies. I am happy just to import the relevant concepts and let the reader follow the endnotes and references to whatever depth is desired. 
I should also point out that while this book has plenty to say about complexity, function, organization, and evolution, it has very little to say about brains, minds, intentions, or human consciousness. This omission may give you pause, but I have concluded that explaining the details of brains and consciousness is not a necessary part of the study of sequences. Without question, humans require brains (and bodies!) in order to interpret and replicate sequences, but there is no need to assume that we require exactly the brains and bodies that we have. As we shall see, sequences do have certain hardware requirements in order to function and evolve, but any mechanism that fulfills these requirements will get the job done. Other brain-like equipment might do just as well as the kind we have.25 The same is true of consciousness. We assume we have it, but we don’t know how the sequences of language fit into the puzzle. Is consciousness a prerequisite for speech? Is speech a prerequisite for consciousness? Or is consciousness just a side effect of other more fundamental processes, what philosophers like Daniel Dennett call epiphenomenal?26 Consciousness, like the brain itself, may
be personally important to us, but to build a theory about the sequences of language we can consign it to the big blue bucket I have labeled true but not important.27 As linguist Nikolaus Ritt says, “there is a sense in which languages are insensitive to the existence of their users as conscious beings.”28 Another reason not to spend too much time worrying about brains is that we cannot observe sequences in the brain. We hardly know what it means for sequences to exist in the brain—or even if they exist there as sequences. But we do know that sequences exist in the world; they are observable material patterns, pulses of air or marks on paper or voltages in a chip.29 “Language studies are uniquely positioned to contribute to our knowledge of human behavior,” writes linguist William Croft, “because language is an ‘externalized’ entity that can be studied more concretely than other aspects of mind, culture, and society.”30 This book is about the means by which one-dimensional patterns organize three-dimensional behaviors. Genetic and linguistic sequences can be potent constraints, but we can specifically disavow genetic determinism or, more generally, “sequence determinism.” Chapters 2 and 3 explore this and related questions in detail but suffice it to say at this introductory stage that what sequences do is change probabilities, making specific behaviors more or less likely. Sometimes they can increase a probability to the verge of certainty or decrease a probability to the very edge of impossibility. However, they lack the inexorable causality of the laws of physics. Thus, I have tried to avoid claiming that sequences cause behaviors; rather, I try to say that they constrain or guide or orchestrate or choreograph them. As Nobel laureate physicist Max Planck writes: “There must be an unfathomable gulf between a probability, however small, and an absolute impossibility.”31 The same gulf exists between high probability and deterministic causality. 
Finally, this is a book published in a scholarly series about ecological psychology, the field founded by James Gibson. Though its interdisciplinary scope carries it well beyond ecological psychology as such, it remains relevant to the field in two ways. First, it demonstrates the universality of Gibson’s affordance concept, extending it to the molecular level, to evolution, to animal communication, and to human institutions. Second, it provides a new angle for understanding human language in Gibsonian terms by showing that perception and behavior are the sine qua non of grammar. This circles us back to Pattee’s question: How does symbolic information actually get control of physical systems? Sequences are largely inert; physical systems are anything but. It is not enough to claim that sequences orchestrate the behavior of physical systems. The trick is to explain the mechanisms which allow this to happen, that bridge the gap between the static world of sequences and the dynamic world of matter and energy. Our point of departure in Chapter 1 will be to compare the world of sequences with the ordinary physical world of time, space, matter, and energy.
Notes
1. This chapter title is inspired by the autobiographical paper of the same name by biochemist Frederick Sanger, two-time Nobel laureate and inventor of techniques and instruments to determine the sequences of proteins and nucleic acids (1988). Sanger’s sequences, sequences, and sequences were protein, RNA, and DNA, whereas mine are DNA, human texts, and binary code.
2. Pattee 2007
3. At a slightly higher level, science writer Roger Lewin puts the question this way: “What we have to do is find out how the genes get hold of the cell” (1984).
4. Jacob 1977b
5. Pinker 2006. Even critics of the linguistic model concede that it is “so pervasive that it almost seems impossible that, short of pathological convolution, the experimental results of genetics can even be communicated without these resources” (Sarkar 1996).
6. Molecular biologist Gunther Stent drew the distinction between structurists and informationists in the early decades of the field: “There have existed, and there still exist, two schools of molecular biologists—structurists and informationists, three-dimensionists and one-dimensionists” (1968). For a broad history and critique of the role of informational concepts in biology, see Lily Kay’s Who Wrote the Book of Life? (2000). Küppers 1990, Maynard Smith 2000, Oyama 2000, Roederer 2005, Sarkar 2005, Keller 2009, and Barbieri 2015 are also valuable.
7. Additional references on the linguistic model in biology include Kalmus 1962, Sankoff 1973, Ratner 1974, Lees 1979, Abler 1989, Sebeok 1991, Sereno 1991, Ji 1997, and García 2005.
8. Jacob 1977a
9. The analogy between cells and computers led computer scientist Leonard Adleman to show that a molecular computer built around DNA can solve certain problems beyond the practical reach of conventional computers (1994). (Those interested in computer security may note that Adleman is the “A” in the RSA encryption algorithm.) In practice, using the parts list of a cell to build physical computers has proven difficult to scale; it remains not much more than a curiosity in computer science (Rambidi 2014). Recently, however, there has been interest in using DNA for large-scale data storage, like a molecular flash drive (Bancroft et al. 2001; Church et al. 2012). The idea is that the sequences of zeros and ones found in a digital computer are mapped to sequences of DNA, where they are stored and subsequently read out.
10. Some of the ideas in this book were originally roughed out in the dissertation Pattee supervised (Waters 1990).
11. See, for example, Pattee 1953.
12. Pattee was a participant in all four of the influential Serbelloni meetings on theoretical biology organized by Waddington in the late 1960s and early 1970s (Waddington 1968, 1969b, 1970, 1972b).
13. Most of the Pattee papers referenced in Behavior and Culture in One Dimension can be found in the collection Laws, Language and Life (Pattee & Rączaszek-Leonardi 2012). For a discussion of how Pattee’s work can be applied in cognitive neuroscience, see Gazzaniga 2018. For his relation to ecological psychology, see Turvey 2019.
14. Dawkins 2006
15. Dawkins 1982
16. Crick 1958, 1970
17. Gibson 1979
18. Olson 1994
19. Goody 1986
20. Von Neumann 1966
21. Simon 1969, 1973, 2005
22. Searle 1995
23. Boyd 1979; Brown 2003; Hallyn 2000
24. Hull 2002b
25. Anyone who assumes a priori that the human brain is the most sophisticated object on Earth has most likely never met a ribosome or spliceosome. These are the huge, complicated, ubiquitous molecular aggregates central to the functional expression of gene sequences. But as with brains, we can imagine other possible ribosomes and spliceosomes which could perform comparably to the ones we have, so long as they fulfill the functional requirements of sequence processing.
26. Dennett 1991. “Part of the problem of explaining consciousness is that there are powerful forces acting to make us think it is more marvelous than it actually is,” he says (Dennett 2003).
27. A good theory is “a model that tells you what is important, as distinguished from what may be true but unimportant,” as Pattee puts it (1980).
28. Ritt 2004
29. Linguists will recognize this as focus on linguistic performance, rather than linguistic competence as put forward by Chomsky 1965.
30. Croft 1991. As psychologist Monica Tamariz writes, “Only public, observable information can be replicated by humans” (2019b).
31. Planck 1925
1 THE PROBLEM OF SEQUENTIALIZATION
1.1 Our World as Sequences See It In the beginning there were no sequences. The Earth of four billion years ago was devoid of one-dimensional patterns. Everything was predictable, humming along in accordance with the laws of nature. But then sequences appeared, and nothing was the same again. The Earth we inhabit today is overrun with sequences and their products. First to emerge on our lifeless planet were the sequences of RNA, DNA, and protein that govern the function of living things. Later a certain kind of living thing, a social primate, developed sequences of speech and eventually of writing. Today all of these molecular and linguistic sequences are routinely transformed into one-dimensional patterns of zeros and ones that flow through wires and over the air. All day, every day, we swim in an ocean of sequences. And like the proverbial fish who does not know what water is, we are largely oblivious to their profound influences. But on the sequence-free Earth of four billion years ago there was just the ordinary stuff of matter and energy, all obeying the predictable and inexorable laws of nature. Matter and energy are still with us and still obey the same laws but, thanks to sequences, they are much better organized across time and space, into liver cells and violins and universities. That’s a big change. How did it happen? To appreciate how sequences achieved their hegemony, we must first understand how they differ from the ancient sequence-free physical world from which they arose. What are the properties of sequences—systems of sequences, really, because sequences never travel alone—that distinguish their behavior from that of ordinary matter governed by the laws of nature? Despite the long tenure
of sequences on our planet, this is a relatively recent question, unasked before the modern age of molecular biology. This chapter will set out exactly why sequences are different and why they are worthy of study in their own right. In the decade after Nobel laureates James Watson and Francis Crick figured out that the DNA molecule was in fact a sequence of smaller molecules,1 the field of molecular genetics struggled with the problem of what Crick calls sequentialization. “It is this problem, the problem of ‘sequentialization’, which is the crux of the matter,” he writes.2 DNA was known to be a sequence of nucleotides, as was its cousin RNA, and proteins were known to be sequences of amino acids. What was not known was how these sequences related to one another. By the late 1960s, researchers had worked out a fundamental explanation of the genetic code, which shows how the one-dimensional patterns of nucleotides in DNA are mapped to the one-dimensional patterns of amino acids in proteins.3 That is what a code is, a mapping from one kind of sequence to another. In Morse Code, for example, unique sequences of dots and dashes map to the letters, numbers, and symbols of the alphabet. In the genetic code, three-letter sequences of DNA (codons) map uniquely to each of the 20 individual amino acids found in protein sequences. But Crick’s problem of sequentialization can be viewed more broadly. How did Earth’s surface become sequentialized? How did systems of sequences emerge? How did they create and propagate their own complex world of coordinated matter and energy, a world that exhibits functional coherence and temporal endurance in ways that could never be predicted from the laws of nature, but which in no way contradict those laws? How did inanimate matter give rise to life and life in turn give rise to civilization? But first things first. What is a sequence, anyway? 
In everyday usage, it is one thing following another: a sequence of steps to bake a pie or a sequence of stages in the development of an embryo or a sequence of floats and marching bands in a parade. Sequence is rooted in the Latin sequor (“to follow”), best known today for describing a lapse in logic: a non sequitur (“it does not follow”). The steps required to bake a pie form an instructive sequence; the sequences of text in a recipe tell us what actions to perform and in what order. Navigational directions to a destination, the user manual for a kitchen appliance, instructions for erecting a tent, and lessons in how to tango are all examples of instructive sequences. They tell us how to behave, how to move in time and space; their one-dimensional arrangement specifies the order of steps to be performed in an activity. The sequence of stages in embryonic development, however, is a descriptive sequence. We observe an event as it unfolds and, based on our perceptions, generate a one-dimensional pattern to record how it changes over time. Almost all observations of the natural world—most of science, from how stars form to how horses canter—take the form of descriptive sequences. The point is that
descriptive sequences do not tell us how to behave; they result from our perception of how the world behaves.4 The sequences of human language can both describe and instruct. This may seem obvious, but it is also crucial. A single set of letters and words, and a single set of grammatical rules for combining them, can not only create a record of something that has happened in the world but can also guide worldly activities like baking a pie. We may not think twice about this, but try to envision a world that requires one language for instruction and a completely separate language for description. How cumbersome would that be? Imagine the instructions for assembling a bookcase where the text describing the parts is in Estonian and the text telling us how to put them together is in Hindi. Instead, we have unitary languages in which instructive and descriptive sequences complement one another. We can receive instructions that guide our behavior: what to do, when to do it, how to do it, etc. We can also perceive the world and create a record of what we observe, what scientists call measurement. Since a single language can do both, the interplay between description and instruction gives systems of sequences extraordinary power to organize the world. See something, do something.
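For readers who think in code, the idea that a code is a mapping from one kind of sequence to another can be sketched in a few lines of Python. This is an illustrative sketch only: the table below is a hand-picked subset of the standard genetic code (the full code maps all 64 codons to 20 amino acids plus stop signals), and the names CODON_TABLE and translate are mine, not drawn from the biology literature.

```python
# A code maps one kind of sequence to another: here, three-letter DNA
# codons to single-letter amino acid abbreviations (partial table).
CODON_TABLE = {
    "ATG": "M",  # methionine (also the usual start signal)
    "TTT": "F",  # phenylalanine
    "AAA": "K",  # lysine
    "GGG": "G",  # glycine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop signal
}

def translate(dna: str) -> str:
    """Map a one-dimensional DNA pattern to a one-dimensional protein pattern."""
    protein = []
    for i in range(0, len(dna) - 2, 3):        # read one codon (three letters) at a time
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "*":                   # a stop codon ends translation
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGTTTGGGTAA"))  # -> MFG
```

Morse code could be sketched the same way, with a dictionary from letters to dot-dash strings; the structure of the mapping, not its content, is what makes both of them codes.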
1.2 Persistence of One-Dimensional Patterns When we think of sequences, we envision events in a fixed order in time: first this happens, then that happens, etc. This is true regardless of whether the sequence is descriptive (“I saw this, then I saw that”) or instructive (“Do this, then do that”). The later steps are usually dependent in some way upon completion of the earlier steps. Embryologists observe that the retina develops only after formation of the neural tube. The developmental sequence is fixed: it makes no sense to speak of the retina emerging before the neural tube. Likewise, in baking a pie the crust is made before the pie is placed in the oven; trying to make the crust after baking does not produce good results.5 However, the sequence of floats and marching bands in a parade does not quite fit the model.6 The pattern of the parade sequence persists even when the parade is not moving; it exists independently of time. Unlike the performative steps of a mechanical procedure or the observed stages of a natural process, the sequential ordering of a parade is a one-dimensional pattern of interchangeable elements (bands, floats, drill teams, etc.). Further, if the Navy Band follows the Chamber of Commerce float, in no way does this imply that other arrangements are impossible. It could just as easily be the other way around, with the Navy Band in the lead. This is how I will use sequence in this book: sequences are persistent, one-dimensional patterns composed of interchangeable elements. Many sequential arrangements describe and coordinate activities like baking a pie or building an organism, but it is their persistence, their stability in time, that accounts
for their power. A pie recipe is made up of sequences of letters and numbers forming a pattern we call a text. It comprises individual elements drawn from an alphabet, elements that have no inherent meaning in and of themselves but which gain meaning when combined and recombined into one-dimensional patterns. Before life emerged on Earth there were no sequences, no one-dimensional patterns either to describe what was going on or to guide the behavior of matter. The forces of nature that governed earth, air, fire, and water operated of their own accord. Once sequences appeared, however, strange behaviors emerged that are difficult to explain satisfactorily with the universal laws of nature. If you drop a $20 bill, the laws of physics can tell you that it will fall to the ground, but they cannot tell you that someone else will quickly pick it up.7 To be sure, sequences—whether DNA molecules, ink on a page, or voltages in a chip—are nothing other than ordinary physical objects that obey the laws of nature like everything else. There are no special laws or supernatural activity that somehow exempt sequences from the physical and chemical processes that control the universe. In this sense, sequences are quite ordinary, just more stuff. But in another sense, in their ability to orchestrate the behavior of matter, they are extraordinary. We must stand ready to look at sequences both ways, as ordinary matter and as something much more. “Central to the notion of a ‘message’ is the difference between the structure of a physical object which follows necessarily from its physical makeup and that which does not,” explains David Hull.8 You and I may readily acknowledge that this book is made of paper, ink, and glue, but unless we are bookbinders by trade, the details of its fabrication are of little interest. The pattern of sequences in the book is what interests us. 
Sequences lead us to ignore their own physical details and attend instead to their persistent one-dimensional arrangement.
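The working definition above, a persistent one-dimensional pattern of interchangeable elements, can be restated as a minimal Python sketch using the parade example from the text; the list contents are illustrative.

```python
# A sequence: a persistent, one-dimensional pattern of interchangeable elements.
parade = ["Chamber of Commerce float", "Navy Band", "drill team"]

# The pattern persists whether or not the parade is moving, and because the
# elements are interchangeable, no particular ordering is physically necessary:
reordered = ["Navy Band", "Chamber of Commerce float", "drill team"]

assert sorted(parade) == sorted(reordered)  # same elements...
assert parade != reordered                  # ...but a different one-dimensional pattern
```

The point of the sketch is that the identity of the sequence lies entirely in the arrangement, not in the elements themselves or in any motion through time.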
1.3 Sequences, In and Out of Time Time, space, matter, and energy are the great quartet of physics. So unusual are sequences that their behavior can be contrasted across all four domains. We shall take them in turn, starting with time. When a gym class runs a 100-meter dash, the student who finished ahead of all the rest always wins. Likewise, when a linguistics class takes its final examination, the student who completed the test ahead of the others always gets the best grade. Uh no, wait, that’s not right. The student who finished the test first might easily get the worst grade, and the student who finished last might get the best. Why is this? Why should time be decisive in foot races but not in finals? The answer is central to how sequences function. It is to be found in the difference between behaviors that call for processing sequences, like taking a test, and behaviors that entail only dynamic motion, like running a race. Taking a
test is rate-independent and running a race is rate-dependent.9 Rate dependence means that the result of an activity depends on how quickly it is performed. Rate independence means the opposite: the result does not depend on speed. Think of our parade. Whether the marchers are standing still, walking at a normal pace, or double-timing, their order, their pattern, their sequence, remains the same. The motion of the marchers may be faster or slower, but their relationship to one another does not change. This relationship, their one-dimensional arrangement, is rate-independent. Moving faster does mean the parade will finish sooner; its motion is rate-dependent even though its arrangement is not. Physical laws like gravity and electromagnetism predict how events develop over time. When we describe the everyday physical world, we are all about rates and how the fundamental forces of nature affect them. Here’s a clue. You can tell an activity is rate-dependent if it contains the time variable “t” in its denominator: miles per hour, cycles per second, beats per minute. Not so with sequences. When you ditch your old phone and get an upgrade, every app will run faster, but there is no effect on the app’s function. The speed at which a processor calculates is a rate, but the solution to the problem being calculated is not affected by changing that rate. The parade may be standing still or moving briskly, but the order of floats and bands is unaffected. “Rate independence is a general property of information,” explains Howard Pattee. When a cell is synthesizing protein or when a computer is executing a program or when I am writing a paper the informational aspects of these processes do not depend in any explicit sense on the rate of protein synthesis, the rate of computation, or the rate of my writing, respectively. If we say that information has meaning, then the meaning is independent of the rate at which the information is created, transmitted, or interpreted. 
We cannot imagine a useful concept of information that would require the type of protein to change with the rate of its synthesis, the plot of a novel to change depending on the rate we read it, or the value of π to be affected by the rate of its computation.10 We describe the rate-dependent behavior of the material world using the laws of physics, like Newton’s laws of motion. But physical laws are inadequate to fully describe the rate-independent behavior of systems of sequences. The result of a sequential procedure like a calculation is always the same whether it takes a second, a minute, or an hour to do the work. Newton is fine for foot races but no good for finals. My favorite everyday example of the distinction between rate independence and rate dependence is the anxiety-provoking test we take to obtain a driving license. There are actually two tests: a written exam that calls for sequence processing and a road test that calls for real-world dynamic behavior (i.e., driving).
So which exam is rate-dependent and which rate-independent? Any would-be driver could complete the written test in half of the allotted time and still pass; it is rate-independent. But completing the road test in half of the allotted time would terrify the examiner and lead to certain failure; it is rate-dependent. Why do governments consider both tests necessary to certify a novice driver? Well, you could design a longer and more challenging written test, but no matter how long or challenging, the test would leave unanswered the crucial question of whether your sensorimotor system is up to the task of operating an automobile. By the same token, in principle it is possible for a road test to assess your knowledge of every single rule in the traffic code, but such a test would take a very long time to administer. Once you have shown you know to apply the brake at a red light, you still would have to show you also know to apply the brake at a stop sign or when a school bus stops or an ambulance passes. Energy is saved and efficiency promoted by collapsing that rate-dependent detail into a rate-independent written test. Kim Sterelny notes that chefs will often line up the ingredients for a dish in the order in which they are to be used.11 With a sequence of ingredients on the kitchen counter, the chef substitutes the spatial dimension of their arrangement for the temporal dimension of cooking, mirroring a procedure with a pattern. And should the chef change her mind and decide that the pepper should be added before the salt, she can rearrange the ingredients. By changing the spatial dimension, she adjusts the constraints on the temporal dimension.12 The arrangement of ingredients is rate-independent, but cooking itself is rate-dependent. The chef can arrange her ingredients and leave the kitchen for 20 minutes. When she returns their arrangement will not have changed. 
But if she leaves for 20 minutes while the onions are frying in the skillet, she will find a very different state of affairs upon her return.13 How quickly she reads the recipe is not important, but how quickly she executes the steps is. At a global scale, consider two means by which nation-states resolve their differences. They can negotiate a treaty, which is a rate-independent activity that yields a text of linguistic sequences. It is rate-independent because the outcome does not depend on the length of either the treaty itself or of the negotiations that produced it. Or two nation-states can go to war, which is a dynamic, rate-dependent process in which small differences in timing can have a dramatic effect on the outcome.14 The rate independence of sequences results from their ability to freeze out the temporal ordering of events. Written sequences collapse the temporal dimension into a spatial dimension.15 Among other things, this endows texts with the property of random access; you can refer to sequences already written in a way that you cannot refer to utterances that have only been spoken.16 It says so right here. We say, “The cat is on the mat” and “The king is in his counting-house,” using the prepositions on and in to indicate spatial relationships. We also say, “The cat is going to the veterinarian on Tuesday” and “The king will depart for
France in a week,” using the same prepositions to indicate temporal relationships. This is one way that sequences collapse the temporal into the spatial. A series of actions that play out in real time is captured in a stable one-dimensional arrangement in which the dimension of time is replaced by one dimension of space. But sequences cannot do this on their own. As we shall see in Chapter 2, they need some help.
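The contrast this section has drawn between rate-independent sequence processing and rate-dependent execution can itself be sketched in Python. The function below is a hypothetical illustration of mine, not an example from the text: the same sequential procedure yields the same result whether it runs fast or is artificially slowed down.

```python
import time

def compute_sum(numbers, delay=0.0):
    """A sequential procedure: the result depends on the sequence of steps,
    not on the rate at which the steps are performed."""
    total = 0
    for n in numbers:
        time.sleep(delay)   # slow down the rate-dependent execution
        total += n          # the rate-independent pattern of steps is unchanged
    return total

fast = compute_sum(range(10))              # runs in microseconds
slow = compute_sum(range(10), delay=0.01)  # runs orders of magnitude slower
assert fast == slow == 45                  # same answer at any rate
```

The wall-clock duration of the computation is rate-dependent, like the parade's motion; the answer it produces is rate-independent, like the parade's order.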
1.4 Sequences are History The rate-independent pattern of a sequence can describe or guide some rate-dependent behavior in the world, but this is not the only way in which sequences have a peculiar relationship with time. Sequences of DNA are essential to biological evolution, and evolution implies change over time. As Hull notes, “There is something about the structure of a particular molecule of DNA which depends on the sequences of selection processes which gave rise to it.”17 In other words, sequences contain a record of their own history. Linguists Morten Christiansen and Simon Kirby call the study of the origin of language “the hardest problem in science,”18 although they might get some pushback on that claim from those biologists who study the origin of life.19 Explaining the origin of complex systems of sequences, whether genomes or texts, is a hard problem indeed. Fortunately, researchers studying the origin of life and those studying the origin of language are studying what is tantamount to the same problem. This leads us to the second temporal difference between sequences and the physical world. Rate independence also allows sequences to display unusual properties in historical time. “In its evolutionary role the gene [sequence] inhabits eternity, or at least geological time,” writes Richard Dawkins. “Its companions in the river of evolutionary time are other genes, and the fact that in any one generation they inhabit individual bodies can almost be forgotten.”20 The one-dimensional patterns of sequences are more than rate-independent. They are, in principle, eternal. What Dawkins says about genes is also true of the sequences of language. As individuals, you and I have learned to listen, to speak, to read, and to write, but the systems of linguistic sequences we have mastered were here before we were born and will be here after we die. In the river of time, the fact that these sequences have inhabited our individual bodies can almost be forgotten. 
What, then, is the difference between the history of a system of sequences and the history of an ordinary physical system? The chief distinction is that the dynamic laws of physics are reversible in time. With precise enough measurements, we can not only predict the state of a physical system at any time in the future but also infer its state at any time in the past. Astronomers can tell you when solar eclipses will take place in the future, as well as when and where they took place years, centuries, or millennia ago.
The Problem of Sequentialization 19
Not so with sequences. In 1965, biologist Emile Zuckerkandl and Nobel laureate biochemist Linus Pauling wrote a paper called “Molecules as Documents of Evolutionary History.” “Of all natural systems,” they say, “living matter is the one which, in the face of great transformations, preserves inscribed in its organization the largest amount of its own past history.”21 In other words, the one-dimensional patterns of sequences provide a partial record of how they evolved. “The genealogical history of an organism,” says Carl Woese, “is written to one extent or another into the sequences of each of its genes.”22

There is a great gap between the complex systems of linguistic and genetic sequences we observe today and everything else in the physical world.23 Contemporary systems of sequences must have evolved from less elaborate precursors, but there is little evidence remaining of what those precursors were like, or of the specific steps involved in getting from those precursors to where we are today. In English, and in the genome, it appears that every part is necessary for the proper functioning of the system, and yet we know that at some earlier time not all parts were present. As linguist Derek Bickerton says, “Language must have evolved out of some prior system, and yet there does not seem to be any such system out of which it could have evolved.”24

Systems of sequences are complex and interdependent. Complexity makes modeling difficult, and interdependence magnifies the difficulty because we cannot determine which elements are necessary and which are contingent. Also, no intermediate forms survive. No proto-languages are spoken anywhere and no proto-organisms have been collected by scientists. The languages of contemporary hunter-gatherers and the genetics of ancient lineages of bacteria are already complex.
Researchers have developed software to analyze patterns in sequences that yield clues to their history.25 These computational tools demonstrate the convergence of two independent strains of sequence studies, historical linguistics and molecular evolutionary genetics. Etymologists studying English analyze patterns in the language to tell us which words have a Germanic pedigree and which a Romance, and molecular biologists analyze patterns of DNA to tell us how much of our genome originated with Neanderthals. Linguists Quentin Atkinson and Russell Gray put it this way: “Researchers using computational methods in evolutionary biology and historical linguistics aim to answer similar questions and hence face similar challenges.”26 This is because they are studying equivalent problems.27 Molecular biologists developed their computational tools to probe the masses of data originating from genome sequencing projects. Evolutionary biologists adapted these tools to address problems like the relatedness of different species or the nature of the common ancestor of all life.28 Linguists adapted the same tools to address questions like the relatedness of modern languages or the nature of extinct languages like Proto-Indo-European.29
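The convergence can be made concrete with a toy sketch. The code below is illustrative only, not one of the tools cited in the text: a single edit-distance function compares cognate words just as readily as DNA fragments, which is why the two fields could share their methods. The three words are real cognates of "mother"; the DNA strings are invented for the example.

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn sequence a into sequence b (Levenshtein distance)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

# Cognates across three Indo-European languages...
print(edit_distance("mother", "mutter"))   # English vs. German
print(edit_distance("mother", "madre"))    # English vs. Spanish

# ...and the very same function applied, unchanged, to DNA fragments.
print(edit_distance("ATGGCCT", "ATGGCGT"))
```

Smaller distances hint at closer relatedness; tabulating such distances across many sequence pairs is one input to the tree-building methods both disciplines use.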
FIGURE 1.1 Two phylogenetic trees show the convergence of sequence analysis methods in evolutionary biology and historical linguistics. The tree on the left shows the evolutionary relationships among the DNA sequences of the domestic dog and its relatives, including which are most closely related and how far back in time they diverged.30 The tree on the right shows the relationships among the sequences of Indo-European languages, including which are most closely related and how far back in time they diverged.31
One thing we have learned from these analyses is that, in both linguistic and biological evolution, the patterns of commonly used sequences tend to be conserved; in Dawkins’s river of evolutionary time they are unusually persistent. The cell contains common genes and short DNA sequences for which the fundamental arrangement has remained largely unchanged for hundreds of millions of years. By the same token, the most frequently used words in a language retain their forms longer than those that are less common. In English, for example, only three percent of modern verbs are irregular, but of the ten most common verbs, all are irregular.32 Using the historical record to describe the previous states of a system of sequences is not a lawful process but rather a statistical one. The tools of computational biology cannot provide precise answers to historical questions, but only estimates within a range of probabilities. Further, none of these software tools is of use in trying to explain the history of an ordinary physical system because physical systems are governed only by the laws of nature. The laws
of nature may retain no explicit record of their history, but their reversibility allows us to calculate their previous states if we so desire. Predicting the future of sequential systems is also probabilistic. The trajectories of physical systems can be predicted with great precision; this is how we land space probes on comets. But no matter how much behavioral detail we accumulate, we cannot forecast the direction of either biological or cultural evolution.
1.5 Collapsing Three Dimensions to One

Time in the physical world and time in the world of sequences could not be more different. Sequences are rate-independent, and their one-dimensional patterns can persist and evolve over very long periods of time. Now we turn our attention to sequences in space.

A modern truck factory is a marvel of automation. Robots controlled by software grasp parts, weld joints, apply paint, and orient the subassemblies under construction to the correct position. This robotic motion takes place in three dimensions and the final product of the assembly line is a functioning, three-dimensional machine. However, the software that governs all of this activity is a one-dimensional arrangement of zeros and ones. We see a similar phenomenon in the cell, where a gene, a one-dimensional sequence of DNA, is transcribed and translated into a functioning, three-dimensional enzyme. Or, in everyday life, when a set of one-dimensional instructions guides you in erecting a 3-D tent, or a one-dimensional recipe tells you how to bake a 3-D pie. Sequences not only allow the dimension of time to be collapsed into a dimension of space, but space itself can be collapsed from three dimensions down to one.

Building trucks and baking pies require instructional sequences, one-dimensional patterns that guide the behavior of physical systems like robots and people. But descriptive sequences also collapse three dimensions into one. All processes of inspection, examination, and measurement capture 3-D arrangements in one-dimensional sequences. A medical textbook like Gray’s Anatomy reduces the dimensional complexity of the human body into sequences of text. But the relationship of sequences to space can extend well beyond the local context.
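The collapse of three dimensions into one can be sketched in a few lines of code. This is a hypothetical miniature: the six-symbol alphabet and the program string are invented for illustration. A one-dimensional sequence of symbols is interpreted, element by element, into a path through three-dimensional space.

```python
# An invented instruction alphabet: each symbol names a unit step
# along one of the three spatial axes.
MOVES = {"N": (0, 1, 0), "S": (0, -1, 0),
         "E": (1, 0, 0), "W": (-1, 0, 0),
         "U": (0, 0, 1), "D": (0, 0, -1)}

def run(program):
    """Interpret a 1-D string of symbols as a path of 3-D coordinates."""
    x = y = z = 0
    path = [(x, y, z)]
    for symbol in program:
        dx, dy, dz = MOVES[symbol]
        x, y, z = x + dx, y + dy, z + dz
        path.append((x, y, z))
    return path

print(run("NNEU")[-1])  # four symbols yield a point in 3-D: (1, 2, 1)
```

Everything spatial about the outcome is encoded in nothing more than the order of the symbols: the same four symbols in a different order trace a different path.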
Charles Hockett noticed this 50 years ago when he compiled a list of 16 of what he called the design features of the sequences of speech.33 His purpose was not to describe human speech in the abstract but rather to contrast our vocalizations with other forms of animal communication. Nonetheless, several of his design features describe fundamental properties of all sequences, not just language, and his list will reappear regularly throughout this chapter. One design feature he calls displacement: “Messages may refer to things remote in time or space, or both, from the site of the communication.”34 Sequences can extend their three-dimensional effects over long spatial distances. By deploying
one-dimensional patterns, aerospace engineers can orchestrate the three-dimensional behavior of a spacecraft landing on a distant comet. The sequences of a science or history textbook can describe objects and events well outside the perceptual range of either author or reader. Sequences make possible both “action at a distance” and “perception at a distance.”

Unlike Hockett, I prefer not to use the term refer to describe what sequences do. Rather, I would say that sequences describe and/or govern things remote in time or space, or both. If the text of a web page describes a three-dimensional object residing in a faraway warehouse, I can deploy sequences to guide that object to my doorstep. Describing or guiding three-dimensional activities in one place by one-dimensional sequences in another place does not require suspension of the laws of physics. Sequences accomplish their description and instruction at a distance without invoking any new laws.

These, then, are two ways for us to think about the relationship between the three dimensions of the world and the single dimension of a sequence. First, a sequence can describe and/or govern local three-dimensional activities, like building a truck. Second, it can extend its influence over arbitrarily large distances and times, like ordering a book from Amazon or landing a probe on a comet. A third important spatial relationship exists not between sequences and the world but rather is internal to sequences themselves, and that is the patterned arrangement of space within and between sequences.

In the physical world, if you increase the spatial distance between two entities then you decrease the force between them. The force of gravity between a planet and a moon is inversely proportional to the square of the distance between them; the same holds true for the attraction between the opposite poles of two magnets.
This is called distance decay; the dynamic (rate-dependent) relationship between two objects is determined by how close they are to one another. But when we look at the spatial distances between elements in a sequence, we find something different. The distance between any two elements does not change the interpretation of the sequence. Fixed and proportional fonts may change the aesthetics of a text, but not its semantics. The only significant spatial property in a sequence is adjacency: some element precedes or follows some other element. “Talking or writing is not carried out with respect to some measured space,” says linguist Zellig Harris. “The only distance between any two words of a sentence is the sequence of other words between them . . . the only elementary relation between two words in a word sequence is that of being next neighbors.”35 Of course, when taken as physical objects sequences are not immune to the distance decay of the fundamental forces, but when they function as sequences these details are ignored. This is because we consider sequences to be discrete entities as opposed to the continuous entities of the everyday world. In physics, forces and distances can be larger or smaller without limit, but since sequences are built from discrete elements drawn from a preset alphabet, the shortest possible sequence is one
element in length; fractional elements are meaningless. As psychologist Marc Hauser and colleagues note, “There are 6-word sentences and 7-word sentences but no 6.5-word sentences.”36

Another Hockett design feature is duality of patterning, which describes the hierarchical relationship among the spatial patterns within systems of sequences. At the lowest level there exists an alphabet comprising a small number of meaningless elements, letters in the case of writing, nucleotide bases in the case of the genome, zeros and ones in the case of computer code. These are organized into linear patterns, which have a hierarchical structure that we call a syntax or grammar.37 The elementary level is meaningless, but when lower-level elements are arranged into patterns at higher levels, meaning emerges and the sequences can be interpreted functionally.

Duality of patterning gives systems of sequences their creative power. The spatial rearrangement of primitive elements gives us new words and new genes, and the spatial rearrangement of words and genes gives us new recipes and new metabolic and developmental pathways. This leads us to another Hockett design feature, openness, meaning that “new linguistic messages are coined freely and easily.”38 Rearranging the elements into different spatial patterns yields new options for description and instruction that form the basis for evolution and increasing complexity.

When thinking about sequences and space, we look to spatial relationships both outside and inside the sequence. We look outside to observe one-dimensional patterns describing and orchestrating the three-dimensional behavior of physical systems, and to witness that behavior extending great distances into three-dimensional space. We also look at spatial arrangement of patterns within the sequence. Later on, we will see how this internal grammar reflects the structure of the three-dimensional world.
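The creative power of a small alphabet is easy to quantify. The sketch below is illustrative: with an alphabet of k meaningless elements there are k**n distinct sequences of length n, so the space of possible patterns grows exponentially, and, as the text notes, there are no fractional lengths in between.

```python
from itertools import product

alphabet = "ACGT"  # four meaningless elements, as in DNA

# Every possible length-3 pattern built from the alphabet.
triplets = ["".join(p) for p in product(alphabet, repeat=3)]
print(len(triplets))   # 4**3 = 64 (the size of the genetic codon table)
print(triplets[:4])    # ['AAA', 'AAC', 'AAG', 'AAT']

# Openness: each added position multiplies the possibilities by four.
for n in range(1, 6):
    print(n, len(alphabet) ** n)
```

The same arithmetic applies to letters forming words and to zeros and ones forming machine code; only the value of k changes.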
1.6 Low-Energy Sequences

You may not realize it, but English contains many idioms that relate sequences to energy. Talk is cheap and actions speak louder than words. We often use idioms like these without much reflection on their deeper meaning, but I invite you to consider in more detail what expressions like these are really getting at. You may conclude that it is easier said than done.

How we think about energy in the world of sequences is nothing like how we think about energy in the everyday world of ordinary matter. After all, what are we implying about energy in the sequences comprising our talk, or words, or what is said? The implication is that sequences are low-energy affairs compared with behaviors in the real world. Talk is cheap; it does not cost much in the way of energy. Discussing an activity consumes less energy than performing it; it is easier said than done. Conversely, actions speak louder than words. This is because they require more energy.
Like the valve that opens the floodgates of a dam or the switch that activates a jackhammer or the trigger that fires a shotgun, a sequence consumes very little energy when compared with the magnitude of the dynamic processes it can unleash. The energy involved in transcribing and translating a sequence of DNA into an enzyme is tiny compared with the energy the enzyme can harness in a metabolic pathway.39 Likewise, the effort you make in asking a dining companion to pass the pepper is much less than the effort he makes in passing it. Anthropologists Robert Boyd and Peter Richerson put it succinctly: “Energetically minor causes have energetically major effects.”40

The asymmetrical relationship between the energy needed to interpret, replicate, or store a sequence and the energy needed to behave in the world is well-known to researchers who study animal communication. “By a communication from animal X to animal Y,” writes biologist J.B.S. Haldane, “I mean an action by X involving a moderate expenditure of energy, which evokes a change in the behavior of Y involving much larger quantities of energy.”41

Hockett lists this energy asymmetry as one of his design features. He calls it specialization. “The direct-energetic consequences of linguistic signals are usually biologically trivial; only the triggering effects are important,” he writes. “Even the sound of a heated conversation does not raise the temperature of a room enough to benefit those in it” (emphasis his).42 Sequences may be physical objects, but as physical objects they are largely inert. “A communicative act is specialized,” says ethologist Stuart Altmann, “to the extent that its direct energetic consequences are biologically irrelevant to anything but communication.”43 The pen is mightier than the sword because of the forces it can release, not because of its inherent energy level. And as you know, sticks and stones may break your bones, but words can never hurt you.
A soprano at the Metropolitan Opera might shatter a wine glass by hitting a resonant high note, but that is due to physics, not to the subject matter of her aria. Much superstition and magical thinking is built around the imagined efficacy of spoken sequences to intervene directly in the physical world, in the West perhaps inspired by Psalm 33:6, “By the word of the Lord were the heavens made.” Spells and incantations are common plot devices in fairy tales, and they persist in folklore, but our literate technological civilization doesn’t put much stock in them. “Rarely do we shout down the walls of a Jericho or successfully command the sun to stop or the waves to stand still,” writes psychologist B. F. Skinner. “Names do not break bones. The consequences of such behavior are mediated by a train of events no less physical or inevitable than direct mechanical action, but clearly more difficult to describe.”44 To change the world, sequences require the intervention of some physical mechanism, an interactor. We will meet interactors in Chapter 3. As we did with space, we can look at the relationship of sequences and energy both outside and within the sequence. To the physical world outside,
sequences are energetically inert; they don’t do much on their own. With a little help, though, they can orchestrate enormous external forces. However, within any sequence the energy differences among elements are vanishingly small. Think of floats and marching bands in a parade: it is equally possible for the Navy Band to precede or follow the Chamber of Commerce float. One pattern does not require more energy than the other; there is no energetic limitation on the order of elements.

Put simply, no element of a sequence requires significantly more effort to process than any other element. The energy consumed in interpreting, replicating, or storing any sequence of length 20 or 30 or 100 is essentially the same as for any other sequence of the same length.45 One name for this property is energy degeneracy, and it is common to all systems of sequences.46 As a result, there is no physical reason why any sequence of a certain length is more likely to occur than any other sequence of that length; all are equally likely, or equiprobable. It requires no more energy to speak, write, hear, or read the word thumb than it does to speak, write, hear, or read the word crumb. Even if there is an energy difference, it is so small that it does not affect the probability that you will use one word rather than the other.

Suppose a computer needed twice as much energy to store a zero rather than a one, or that it were twice as hard for you to produce the sound of th rather than sh, or it required more cognitive effort for you to read u instead of a. In these hypothetical cases, the sequences containing the most energy-intensive elements would become disfavored solely on the basis of their physics.47 Energy degeneracy has important implications for the mechanisms responsible for interpreting, copying, and storing sequences.
Whatever they may be doing—replicating a genome, reading code from a thumb drive, or repeating orders from the commanding officer—these mechanisms need not worry about unusual energy demands.48 They can process whatever is thrown at them with the same expenditure of effort. This is especially important in evolution. If you swap one element for another in a sequence—in other words, if you introduce a mutation—the changed sequence is no more difficult to replicate than the original. When it comes to sequences and energy, the points to remember are (1) the direct energetic effects of sequences are not important, and (2) sequences themselves are energy-degenerate. Sequences do have their own internal dynamics as purely physical objects, but the small amount of energy involved is of no interest to us. Rather, we are interested in the larger energy of the systems that sequences orchestrate.
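Energy degeneracy is visible in any digital medium. The toy check below is an illustration, not a measurement: two equal-length words, and two equal-length bit patterns before and after a "mutation," occupy exactly the same storage, so no pattern is physically favored over another.

```python
# Two five-letter words cost the same to store, whatever their pattern.
a, b = "thumb", "crumb"
print(len(a.encode("utf-8")), len(b.encode("utf-8")))  # 5 5

# Swapping one element for another (a "mutation") leaves the
# storage cost of the sequence unchanged.
original = "0110100110010110"
mutated = original[:7] + ("1" if original[7] == "0" else "0") + original[8:]
print(len(original) == len(mutated))  # True: same cost, different pattern
```

A real storage device does consume energy, but it consumes the same amount for any pattern of a given length, which is the point of the parade example above.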
1.7 Matter Matters, Except When It Doesn’t

Having looked at time, space, and energy, we have seen that sequences have unusual properties and behaviors with respect to each. How will this play out
with the final member of the great quartet of physics, matter? We can start with a quick lesson from another field that deals with material things, and that is economics. A fundamental concept of economics is scarcity. If there is only one pie, there is no way you and I can both eat the entire pie. Economists call a pie a rival good, meaning if I eat it you cannot and vice versa. A pie can also be an excludable good, meaning I can lock it in a cabinet to keep you from eating it. Every physical object, everything that is made of matter, is rival and excludable. A pie cannot be in two kitchens at the same time, and a photon of light cannot energize two plant cells at the same time. However, two kitchens can each have a copy of the recipe for the pie and two plant cells can each have a copy of the gene for chlorophyll. I can have the recipe for the pie in my kitchen and you can have it in yours. Neither of us is disadvantaged even though the note card displaying my recipe is physically different from the note card displaying yours. The note cards themselves are rival and excludable, but the sequential patterns of the recipes are not.49 Economists call sequences public goods.50 They are non-rival and, unless copyrighted or protected with a paywall, non-excludable. “Knowledge is considered to be a public good,” write molecular biologist James McInerney and colleagues. “It is difficult to prevent the spread of knowledge and facts don’t get ‘used up’ if many people know them.”51 Most of what economists call knowledge is embodied in sequences. Another way to think about matter and sequences is that the meaning and function of a sequence do not depend upon its medium of expression. Texts can take on a wide variety of material structures. Besides paper, they have been pressed into clay tablets, engraved in stone and metal, and written on animal hides. 
And the sequences of zeros and ones used by computers have been stored on everything from punch cards to photographic film to magnetic wires to optical discs. “Written languages are not associated with their particular physical representations,” says Pattee. “We do not consider the type of paper, ink, or writing instruments as crucial properties of language structure.”52 Sequences possess their own material reality, but in order to function as sequences they must rely on external mechanisms in ways that the physical world does not. The laws of physics are incorporeal; they are disembodied and quite able to function without any mechanism to make certain they are obeyed.53 Earth and its moon experience gravitational attraction, but there is no grav-o-matic machine making this happen. We have already established that, taken by themselves, sequences are inert. Unlike gravity, they are reliant upon external mechanisms for their expression, interpretation, and replication. The pre-existence of a complete set of these mechanisms is a requirement for sequences to express their functions and replicate their patterns. “We inherit not only genes made of DNA but an intricate structure of cellular machinery made up of proteins,” writes evolutionary biologist Richard Lewontin. “An egg, before fertilization, contains a complete
apparatus of production deposited there in the course of its cellular development.”54 This is the matter that matters.

Were humanity to become extinct, our libraries filled with books might still exist, but the books would be mere material objects, no different from the shelves on which they rest. The sequences of human language are useless in the absence of human bodies and brains. Likewise, the genetic code loses its functional value without a molecular starter kit in the reproducing cell, and the binary sequences of computer code have no effect without a computer to run them. Sequences can choreograph the three-dimensional world only when they are part of a larger system that includes sequence processing machinery. The closest thing we have to a pure sequence life form is a virus, which has to borrow another cell’s molecular equipment to function and replicate. Viruses will be discussed in Chapter 8.

Developmental biologists like to point out that most machines do not have to function until after they are constructed: a truck does not have to run until it comes off the assembly line. But living things must function while they are being constructed. It is amazing that a caterpillar converts itself into a butterfly; perhaps more amazing is that it does so while remaining alive during the entire process.55 This continuity of function through evolution and development is only possible because the sequence processing equipment is always present.

Sequences and matter have a complementary relationship. Although sequences depend upon real material structures for their function, the material nature of the sequences themselves is not relevant to that function. It’s the pattern that counts.
1.8 Laws are Not Rules, and Vice Versa

If a physicist traveled to another planet, she would expect Newton’s laws of motion to operate there as they do here, but she would not expect to hear Finnish being spoken. If an earthly legislative body decreed that for one day each year the speed of light must be 50 percent slower, that piece of legislation would not be obeyed; light would be a scofflaw. If an asteroid struck Earth and wiped out all life, any remaining molecules of DNA would continue to behave according to their chemical and physical properties, but their ability to choreograph the complex dance of other molecules would vanish.

In this chapter we have been studying the contrast between how the physical world functions and how sequences function. Systems of sequences demonstrate unusual behaviors with respect to time and space, matter and energy. Pattee says it all boils down to what he calls the difference between laws and rules.56 This is an important distinction, but it can be confusing at first because in everyday conversation most people use law in two different senses. We talk about the law of the land, meaning rules and regulations that are created by governments, but we also call gravity a law of nature that is fundamental to how the universe operates.
To Pattee, laws—like the laws of physics—have three characteristics. First, they are universal, which means they hold true at all places and times, even on distant planets.57 Second, they are inexorable, which means they cannot be evaded or modified, even by determined legislatures. Finally, they are incorporeal, which means they are obeyed without the need for any external mechanism. A piano falling from a crane does so without any help.

Pattee’s rules are the opposite of laws. They do not hold true at all places and times, they can be evaded or modified, and they are dependent upon external mechanisms in order to function. In other words, rules are local, arbitrary, and structure-dependent. Life and civilization are more like rules than laws, as are the behavioral mandates issued by governments. They are not found everywhere, they do not have to be exactly as they are, and they apply only if there is equipment around to execute them. However, the molecular sequences of DNA, RNA, and protein that make life possible depend on the absolute reliability of the laws of physics. If those laws were not universal, not inexorable, or not incorporeal, then sometimes life would work and sometimes it wouldn’t. It certainly would not be life as we know it. Life is only possible because certain aspects of the physical environment, such as energy from the sun, appear and disappear on a reliable, predictable schedule.

Here’s the thing. No matter how robust and reliable the living world appears to be, it does not share the absolute properties of physical law. “A mature physicist, acquainting himself for the first time with the problems of biology,” says Nobel laureate biophysicist Max Delbrück, “is puzzled by the circumstance that there are no ‘absolute phenomena’ in biology. Everything is time bound and space bound. The animal or plant or micro-organism he is working with is but a link in an evolutionary chain of changing forms, none of which has any permanent validity.”58

However regular they seem to us, the regularities of biology and culture are contingent, not necessary. They are like rules, not laws. Our biological regularities are far from universal, having arisen on a single lonely planet among likely billions, and within a temporal window that represents one-quarter of the age of the universe. And they are hardly inexorable; the diversity of the living world is largely due to chance, to what Crick calls frozen accidents.59 As for incorporeality, as we have seen, biological regularities depend on the continuity of living matter. DNA transcription, translation, and replication cannot take place without an ensemble of helper molecules in a functioning cell, and language cannot survive without humans to speak it. So, in comparison with the universality, inexorability, and incorporeality of physical law, biological regularities are 0-for-3, none of the above.

This leads to a quandary. It is one thing to say that the sequences of the living world are reliant upon the lawful behavior of the underlying physics and
chemistry. But it is quite another to say that the speech and text that govern human civilization are dependent in the same way upon the lawful behavior of the underlying biology. This is because we just saw that the living world is not lawful; it is rule-ful, time bound and space bound. In one sense, life is no more than a set of arbitrary molecular constraints foisted upon the lawful behavior of the underlying physical world. Biology may float on a layer of physics and civilization on a layer of biology, but the complicating factor is that we know physics is lawful and biology is not. The distinction between laws and rules thus has the potential to trip us up in our effort to provide a unified account of sequences. How can we describe them as two examples of one sort of thing when one is built on the laws of physics and the other on the non-laws of biology?

The answer seems to be that, although the regularities of biology may be arbitrary and rule-like, they can rise to the stature of physical law in explaining how sequences behave. Natural selection can simulate a law, says Delbrück.60 Evolutionary change may be a necessary component of the living world, but so too are stability, reproducibility, and predictability. Evolution does not require the absolutes of physics, merely the predictable good-enoughs of natural selection. So coherent and persistent is the living world that its contingent regularities appear lawful for all practical purposes. It is a testament to the coordinating power of DNA sequences that ecosystems operate as if biological consistency were just as lawful as gravity or electromagnetism. Many organisms depend not only on the predictability of the physical environment, like gravity and solar energy, but also on the predictable presence of other organisms in that environment.
Green plants may rely on the astrophysical regularities of planetary motion and nuclear fusion, but herbivores depend on a reliable supply of green plants and carnivores on a reliable supply of herbivores. Predator-prey and parasite-host relationships come into being only because predators can bank on a supply of prey, as can parasites of hosts. As observers, we may recognize the contingent nature of these relationships, but from the perspective of natural selection the availability of food sources is as lawful as gravity. While saying so might make a physicist wince, inexorability is in the eye of the beholder. “Many of the most beautiful regularities of the natural world [are] only approximate regularities,” says Herbert Simon.61 As a result, our higher-level sequences of speech and text can rely on the biological regularities of our bodies and brains in the same way that sequences of DNA can rely on the regularities of the physical world. It’s a hierarchy; the rules of the upper level organize and coordinate the lawful behavior of the lower level; genes coordinate the underlying physics and language the underlying biology. Speech is coupled to the biology of human bodies and brains and thus behaves more like a law. To persist across the generations, sequences of human speech
30
The Problem of Sequentialization
require nothing more than a reliable supply of new speakers. As reproducing biological creatures, we are generally happy to maintain that supply. However, the sequences of written language are not as tightly coupled to our underlying biology. Literacy behaves more like a rule. Texts are a cultural invention not shared equally by all members of our species. To facilitate what Jack Goody calls the reproduction of the readers, we divert significant economic resources to social institutions of instruction and learning.62 We will return to this point in Chapter 7. The distinction between laws and rules is readily seen in human institutions. “Even though social systems observations are many hierarchical levels away from the measurement of electrons,” says Pattee, “there is still the need to distinguish the dynamical ‘laws’ of social organizations from the descriptive rules that we call goals, plans or policies. Here, the concept of law does not have the degree of inexorability or universality as does physical law, but relative to the goals of the individual it has similar effects. Certainly, one characteristic of social dynamics is its rate-dependence as opposed to the rate-independence of our descriptive social plans and policies.”63 We will take a closer look at institutional behavior in Chapter 8. Although the regularities of biology are not quite as reliable as the laws of physics, they are reliable and lawful enough for natural selection to take place. And in the case of humans they provide sufficient stability and reliability for the sequences of speech and writing to perform their organizational roles in society. Systems of sequences may be layered atop other pre-existing systems of sequences, like a laminate, but the lowest level is grounded in physical law.
1.9 Sequential and Self-Referential

In a famous episode in Homer, the hero Odysseus orders his crew to tie him to the ship’s mast so that he can hear the sweet song of the sirens without surrendering to their fatal allure. He stuffs his crew’s ears with beeswax and further orders them to ignore any of his future pleas to untie him while within earshot of the sirens. In essence he says: “I hereby order you to pay no attention to certain of my future orders.” Odysseus may not realize it, but there is a big difference between these two commands. Strapping one object to another object, like a man to a mast, is a physical real-world behavior. But his second order doesn’t describe anything physical in the real world; it is speech talking about speech. With one vocal sequence he negates the effects of future vocal sequences. What he does not realize is that sequences can not only describe and govern activities in the physical world, like baking a pie, but they can also interact with
one another. When one sequence affects the interpretation of another sequence, it is called self-reference. Hockett has a design feature for this, which he calls reflexiveness: “One can communicate about communication.”64 Zellig Harris puts it this way: “Every natural language must contain its own metalanguage, i.e., the set of sentences which talk about any part of the language.” He continues: “Observably the grammar which describes the sentences of a language can be stated in sentences of the same language.”65 By whatever name, this self-referential, self-regulatory property is crucial to the evolution of complexity. Sequences describe and govern not only behaviors in the world but also the effects of other sequences. Like Odysseus, they can modify or regulate one another’s expression, meaning, or function. When sequences become untethered from directly describing or instructing activities in the physical world and instead begin regulating the activities of other sequences, we are witness to the emergence of abstraction. The set of rules by which sequences regulate each other is part of grammar. Note again the difference between the sequential world and the physical world. Interactions between physical objects are described by laws like gravity, electromagnetism, and conservation of momentum. Interactions between sequences are not governed by such laws but rather by rules that are specific to the sequences involved. These interactions require explanation in their own right. The DNA sequences that comprise the genome can be broadly classified into two groups: structural genes and regulatory genes. Structural genes oversee the metabolism and physical architecture of the cell, while regulatory genes manage the behavior of other genes. Regulatory genes turn other genes on or off or cause them to be expressed to a greater or lesser degree. They are gene sequences that refer to other gene sequences, the genetic metalanguage. 
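The division of labor between structural and regulatory sequences can be caricatured in a few lines of code. This is a deliberately crude sketch: the gene names, tasks, and on/off logic below are invented for illustration and are not real genetics.

```python
# Toy model: regulatory sequences do not act on the physical world;
# they toggle the expression of other sequences. All names invented.

structural = {
    "metA": "build membrane protein",      # does things in the world
    "enzB": "catalyze sugar breakdown",
}

# Each regulatory "gene" refers to another gene -- a metalanguage step.
regulatory = [("repress", "metA"), ("activate", "enzB")]

def express(structural, regulatory):
    state = {gene: True for gene in structural}   # default: expressed
    for action, target in regulatory:             # sequences acting on sequences
        state[target] = (action == "activate")
    return [task for gene, task in structural.items() if state[gene]]

print(express(structural, regulatory))
# metA was repressed, so only enzB's task remains
```

The point of the sketch is structural: the regulatory list never mentions membranes or sugar, only the names of other sequences.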
The distinction between sequences that describe the physical world and sequences that refer to other sequences is clear in our language too. To Odysseus, a siren may represent physical danger, but siren is a noun.66 Factory managers divide the workforce into blue-collar workers and white-collar workers. Like structural genes, blue-collar workers are those who do things, who interact with the physical world. They help the factory make what it makes. Like regulatory genes, white-collar workers manage the activities of blue-collar workers. But white-collar workers can have their activities managed by other white-collar workers, who in turn are managed by still other white-collar workers. This is how bureaucracies and systems of sequences undergo open-ended evolution. The unbridled capacity of sequences to reference each other to create new interpretations is Hockett’s design feature of openness.67 We use sequences of language to describe things in the physical world like moose and wolves, but we can also use the sequences of language to describe things like money, property, points scored in games, and political offices.68 These and many other abstract categories exist only by virtue of sequences
interacting with other sequences. The nouns moose and marriage both use the same alphabet and are treated in the same way grammatically, yet moose describes something in the physical world and marriage describes a set of rights and obligations specified by a system of interacting sequences. You can point to a moose, but you cannot point to marriage; you can only define it using other sequences. Earlier we saw that we do not need separate languages to describe and to instruct. A single language can do both, and that is a powerful feature. We can now add to this the fact that we do not need one language to refer to the world and a second language to refer to the first language. There is no way to determine by physical inspection whether a sequence is referring to something in the world that is not a sequence, like moose, or to something in the world that is a sequence, like the word marriage. “The metalanguage [is] a not immediately distinguishable part of the language,” writes Zellig Harris.69 Language and metalanguage both use the same alphabet, the same grammar, and the same material structures for interpretation and replication. For self-reference to occur, the first step is to locate the snippet of sequence being referred to. This is what computer scientists call random access. Systems of stable sequences (think writing) are more amenable to self-reference than systems of unstable sequences (think speech). Writing makes it easy for sequences to interact with one another. It is easier to refer explicitly to part of a written text than to part of an utterance spoken last week or last year. 
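The contrast between a stable, randomly accessible sequence and an ephemeral one can be made concrete in code. In this sketch, a stored string stands in for writing and a one-shot generator stands in for speech; the example text is arbitrary.

```python
# Stable sequence (think writing): supports random access by position.
text = "In the beginning was the word"
assert text[21:24] == "the"    # jump straight to any stored span

# Ephemeral sequence (think speech): a one-shot stream, consumed in order.
def utterance():
    for word in "In the beginning was the word".split():
        yield word

stream = utterance()
heard = list(stream)           # you must take it as it comes...
assert list(stream) == []      # ...and once heard, it is gone
```

A written sequence can be revisited and cross-referenced at leisure; the spoken stream exists only in the order, and at the moment, of its delivery.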
“It is certainly easier to perceive contradictions in writing than it is in speech,” says Goody, “partly because one can formalize the statements in a syllogistic manner and partly because writing arrests the flow of oral converse so that one can compare side by side utterances that have been made at different times and at different places.”70 Odysseus was able to get away with self-referential speech because his vocal sequences were clear and narrowly circumscribed, time bound and space bound. Had his instructions become overly complex by adding more and more contingencies, then the sailors would have had trouble keeping everything straight, leading to disputes about what he actually said or meant. There is some threshold of complexity beyond which he would have had to write down his instructions to have any hope that they would be carried out correctly. Decisions by the United States Supreme Court are paragons of the kind of written self-referential interaction Goody has in mind. They are sequences that do nothing except constrain the effects of other sequences, namely, the decisions of lower courts, which are themselves collections of sequences often a level or two removed from the physical world. A lower court may issue a ruling that is subsequently overturned by an appeals court. That ruling may then itself be overturned by the Supreme Court. Each level regulates the effects of
the level below. In principle this kind of self-reference can continue without limit. New, higher levels can emerge without destroying existing sequences and functions at lower levels.71 We have seen how the behavior of sequences diverges from the behavior of the everyday physical world. Self-reference puts the icing on the cake. A system of self-referential sequences can become a one-dimensional world unto itself, describing and orchestrating the physical world through the interactions of its blue-collar workforce. We will meet this workforce in Chapter 3.
1.10 Sequences Made Explicit

Sequences of code have become our cultural operating system.72 Patterns of zeros and ones are ubiquitous, processing away in the background of our homes and workplaces, and as we travel from place to place. However, just as we must differentiate between the world of physics and the world of sequences, we must also differentiate between types of sequences. There are important distinctions between sequences of speech and writing, on the one hand, and sequences of mathematics and code on the other. We will look at the origin of these differences in Chapter 7. Sequences of software and those of human writing have many features in common. They have small alphabets, syntactic rules for how to combine and parse them, and straightforward methods for faithful replication. They are rate-independent and energy-degenerate. They require physical mechanisms to interact with the world, they are capable of self-reference, and their combinatorial power allows for open-ended increases in complexity. It is possible in principle for all extant human texts to be converted to binary code. Despite these similarities and connections, we users of natural language appreciate that the binary sequences and Boolean logic which govern the operation of digital computers differ from the sequences of speech and writing that otherwise govern human interaction. Mathematicians and computer scientists call them formal systems to distinguish them from naturally evolving systems of sequences like human spoken and written language.73 Among his other accomplishments, mathematician Alan Turing showed that all operations of formal systems are reducible to sequences of zeros and ones.74 In this sense, computers exhibit their own universality. 
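The claim that any text can be carried by sequences of zeros and ones can be demonstrated directly. The following minimal sketch uses UTF-8 as the intermediate encoding, which is a choice of convenience, not the only possibility.

```python
def to_bits(text: str) -> str:
    """Render a text as a sequence of zeros and ones (via UTF-8 bytes)."""
    return "".join(format(byte, "08b") for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Recover the original text from its binary sequence."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

sentence = "Please pass the pepper, Peter."
assert from_bits(to_bits(sentence)) == sentence  # lossless round trip
print(to_bits("Hi"))  # -> 0100100001101001
```

The round trip is lossless: nothing about the sentence survives in the binary string except the sequence itself, which is exactly Turing's point.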
Everyday discussions of this subject often take the form of questions like “Will robots rule the world?” or “Can a smartphone beat a human at ‘Jeopardy!’?” or “Why do Alexa and Siri get so confused?” This is not the place to probe these particular questions, but we can take a quick look at four important ways in which formal and naturally occurring systems of sequences differ. First, to get a computer to do anything you must provide a detailed, unambiguous instruction. A robotic arm, for example, cannot be left to its own devices. Every single move must be spelled out in detail. By contrast, a human
teenager is given the high-level instruction to go wash the car with the expectation that she will figure out what needs to be done. A robotic car wash would need explicit instruction concerning the details of how to manage different body types, wheel sizes, windshield angles, etc. It takes an awful lot of code to tell an inflexible machine how to be flexible.75 Second, binary sequences interact with human beings only through keyboards, printers, and other input/output devices. To interact with the physical world, they need specialized sensors and effectors, like video cameras and pincers, programmed with a high degree of detail. The sequences of human language evolved in a social species possessing bodies that could already see, move, and manipulate. To make language possible, we did not need to invent sensors and effectors; we already had them. Third, binary sequences are syntactically inflexible; they lack fault-tolerance unless it is explicitly built in. The ungrammaticality of “This sentence no verb” does not keep people from getting the joke, but a missing syntactic element in software often produces a fatal error. A misplaced semicolon in a line of code in an application can be enough to crash it, whereas a misplaced semicolon in a human text may do nothing worse than annoy prescriptive grammarians. Finally, an algebra student can solve an equation for x without any knowledge of what x stands for; formal systems of sequences lack implicit semantics. Sequences of zeros and ones have no meaning unless some programmer has assigned that meaning. Human speech and writing, on the other hand, are grounded and embodied in the world around them.76 They have their semantics built in because they connect to our bodies, our environments, and our cultural and evolutionary histories. Sequences of code lack both ontogeny and phylogeny; they are created de novo through the work of the software engineer. 
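The algebra student's situation can be put in code. The sketch below solves a*x + b = c by formal manipulation alone; the function names and the sample coefficients are invented for illustration.

```python
def solve_linear(a: float, b: float, c: float) -> float:
    """Solve a*x + b = c for x by formal manipulation alone.

    The procedure neither knows nor cares what x stands for:
    dollars, meters, or moose all solve identically.
    """
    if a == 0:
        raise ValueError("no unique solution when a == 0")
    return (c - b) / a

assert solve_linear(2, 1, 7) == 3.0     # 2x + 1 = 7
assert solve_linear(5, 0, -10) == -2.0  # 5x = -10
```

The symbol x never acquires a meaning anywhere in the computation; only a human interpreter can supply one.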
Mathematics is the ultimate abstraction, a set of sequences and rules for their manipulation that connect to the physical world only as we instruct them to. We might envision a continuum of abstraction, anchored at one end by the material world governed by universal physical laws, and anchored at the other by the abstract sequences of software and mathematics. Between these extremes we find the naturally evolving sequences we are studying in this book, which are entangled with the physical world and yet are abstract enough to coordinate the behavior of that physical world in novel ways not predicted by the laws of nature. It is time to look at how that coordination happens.
Notes

1. Watson & Crick 1953
2. Crick 1958
3. Judson 1996
4. Whether you view any sequence as instructive or descriptive depends on what you plan to do with it. A cell would interpret a sequence of its own DNA as instructive; the sequence governs some behavior of the cell itself, although a subjective observer, like a researcher studying that cell, might find the same DNA sequence descriptive.
5. This is called path dependence; the correct ordering of steps is crucial. Path-dependent systems “cannot shake free of their history” (David 2001). Every day you and I deal with the modern archetype of path dependence, the QWERTY keyboard. Although no one would design an awkward keyboard layout like this today, we remain locked into a path originally established by the physical constraints of designing manual typewriters. We cannot shake free.
6. True, if you are marching in the parade you view the sequence as instructive and comply with it by taking your assigned position. Or if you email a friend about watching the parade you offer a descriptive sequence reporting on what you witnessed.
7. McCloskey 2016
8. Hull 1982
9. Pattee 1978
10. Pattee 1979
11. Sterelny 2003
12. Mathematician Emil Post, a founder of number theory, also observes this subtle relationship between the temporal and spatial. With a sequence, “its linear order can be easily associated with time.” He continues, “The process itself [of creating a sequence] is temporal but we are to think of the result of the process as spatial. We are then to think of these symbols as created in the flux of the universe, and preserved there through time” (2004, emphasis his). Post is talking about the creation of sequences, what I have been calling description. But the principle is the same: the temporal dimension of an event is measured and described in the spatial dimension of a sequence.
13. Looking at the activities of everyday life through the lenses of rate dependence and rate independence is illuminating. In American football, for example, “each play is a highly rate-dependent social dynamics in which the outcome is exceedingly sensitive to small changes of rate,” writes Pattee. “The huddle, on the other hand, is a social policy activity in which the outcome is not sensitive to rate (excluding ‘delaying the game’ penalty). The policy in this case acts as a constraint on the dynamical behavior; but it is also clear that one team’s choice of policy is based on both its model of the game dynamics and its model of the other team’s policy” (1978).
14. As philosopher John Searle points out, even war itself exhibits its own rate-dependent and rate-independent elements: “War as a social fact can exist no matter how it came about, but under the U.S. Constitution, war as an institutional fact exists only if it is created by an act of Congress” (1995).
15. Linguist George Zipf says: “One of the functions of speaking is to convert the spatial-temporal arrangements of experience into purely temporal arrangement” (1935). With writing that temporal pattern reverts to the spatial. Speech is an event; writing is an object. Speech has a mix of rate-independent and rate-dependent qualities, which will be discussed in Chapter 7.
16. Linguist Wallace Chafe points out a listener cannot listen any faster than a speaker can speak; speaking and listening are held hostage by their rate-dependent simultaneity. However, most readers read much faster than most writers write, creating a temporal asymmetry: “The abnormal quickness of reading fits together with the abnormal slowness of writing to foster a kind of language in which ideas are combined to form more complex idea units and sentences” (1982).
17. Hull 1982. “Evolution, as a historical process, bears the decisive mark of the historical uniqueness of its constraints,” says Bernd-Olaf Küppers (1990).
18. Christiansen & Kirby 2003
19. Physicists Sara Walker and Paul Davies call life merely a “hard problem” (2017).
20. Dawkins 1990
21. Zuckerkandl & Pauling 1965
22. Woese 2000
23. “The theory of biological evolution is a historic theory,” says chemist Günter Wächtershäuser. “If we could ever trace this historic process backwards far enough in time, we would wind up with an origin of life in purely chemical processes. The science of chemistry, however, is an ahistoric science striving for universal laws, independent of space and time or of geography and the calendar. This then is the challenge of the origin of life: to reduce the historic process of biological evolution to a universal chemical law of evolution” (1997).
24. Bickerton 2009
25. Platnick & Cameron 1977; McMahon & McMahon 2003; Searls 2003; Atkinson & Gray 2005; Barbrook et al. 1998.
26. Atkinson & Gray 2005
27. Hull asks: “Why do biologists and linguistics [sic] confront such similar problems and suggest such similar solutions? One possible answer is that they are dealing with fundamentally the same sort of process” (2002a).
28. Ranea et al. 2006
29. Gray & Atkinson 2003
30. Lindblad-Toh et al. 2005
31. Gray & Atkinson 2003
32. Lieberman et al. 2007
33. Hockett 1959, 1960a, 1960b, 1966; Hockett & Altmann 1968
34. Hockett 1966
35. Harris 1968
36. Hauser et al. 2002. By the same token, the difference between any two similar sequences (e.g., “comb” and “womb”) is never less than one single element. In the physical world, however, two objects can be arbitrarily similar, a fact well-known to counterfeiters.
37. Duality of patterning achieves its greatest power in alphabetic writing systems, where the primitive elements lack inherent meaning. Logographic systems (e.g., Chinese) and pictographic systems (e.g., hieroglyphics) have grammars, but their primitive elements are not meaningless in themselves.
38. Hockett 1966
39. By the same token, the energy needed to process sequences is greater than the energy needed to store them quiescently. “Whereas the energetic cost of possessing genes is trivial,” write biochemist Nick Lane and botanist William Martin, “the cost of expressing them as protein is not and consumes most of the cell’s energy budget” (2010).
40. Boyd & Richerson 1985
41. Haldane 1953
42. Hockett 1966
43. Altmann 1967
44. Skinner 1957
45. “It is not essential that the various memory states be completely degenerate,” says Pattee, “but only that their energy difference does not introduce strong intersymbol interference within the memory” (1966).
46. Another term to describe this property is “equipollent,” which is used by linguist Roy Harris (no relation to Zellig Harris): “The letters of the alphabet are not hierarchically or relationally ordered. True, there is a traditional ‘alphabetical order’: but that is something externally imposed on the system. It derives neither from the structure of the system nor from its function. ‘A, B, C, D, E, F . . .’ is not like ‘1, 2, 3, 4, 5, 6 . . .’, nor like ‘January, February, March, April, May, June . . .’. The letters of the alphabet are independent and equipollent characters” (1986). He goes on to say: “the entire architecture of the alphabetic system rests on the application of these two principles of equipollence and free sequential combination” (1986).
47. Just as there is no physical reason why reading, writing, or storing “womb” should require more energy than reading, writing, or storing “comb” or “tomb,” there is no physical reason why a sequence means one thing rather than another. In the genetic code, the RNA sequence AGC maps to the amino acid serine, but this is an accident of history, not a requirement of physical law. It might just as easily have mapped to a different amino acid. “There is no known physical, chemical, or logical reason,” says Pattee, “why equivalent alternative [genetic] codes could not exist in principle” (1972b). Energy-degeneracy is the property that makes possible the arbitrariness of sequences.
48. What does it mean to be arbitrary? As Pattee observes, “the mathematical connotation of an arbitrary choice is that there exists no significance to the choice except that it must be made decisively” (1972b). It’s like driving on the left-hand or right-hand side of the road; it makes no difference which side you choose as long as you stick with the choice you have made.
49. Energy-degeneracy is a functional requirement for any informational molecule, according to biochemist Steven Benner and colleagues: “Its physical properties [must remain] largely unchanged even after substantial change in its sequence, so that the polymer remains acceptable to the mechanisms by which it is replicated” (1999). As origin of life researcher Gerald Joyce writes, “the polymer must be replicated in essentially the same manner regardless of its sequence. Variation will arise owing to inevitable copying errors, and those variants too must be amenable to replication” (2002).
50. DeLong & Froomkin 2000. Excludability is enforced among human sequences by intellectual property rights and institutional safeguards against plagiarism, which themselves are sequences. If I own a sequence, I can control whether and how you can use it. In the cell, immune systems do the opposite. They keep sequences that are not your own from using you.
51. McInerney et al. 2011; Erwin 2015
52. McInerney et al. 2011
53. Pattee 1972b
54. Pattee 1978
55. Lewontin 1992
56. In a similar fashion, Zellig Harris points out that even though human languages change over time, “as far as we can see, they do so without at any point failing to have a grammatical structure” (Harris 1968).
57. Pattee 1978. “To state that something obeys physical law is a tautology,” says Pattee, “if you understand what the concept of physical law means” (1982b).
58. Delbrück 1949
59. Crick 1968
60. Delbrück 1949
61. Simon 1973
62. Goody 1986
63. Pattee 1978
64. Hockett 1966
65. Harris 1968. Altmann also views self-reference as a meta property that enables sequences to constrain the effects of other sequences, building a hierarchy that can grow essentially without limit: “Metacommunication is a stochastic process, each of whose components is a stochastic process. That is, when we say that metamessages affect the interpretation of other messages, we mean that the sequential contingency between message and response is, in turn, contingent upon a metamessage” (1967).
66. John Searle suggests that we “distinguish between language-independent facts, such as the fact that Mt. Everest has snow and ice at the summit, and language-dependent facts, such as the fact that ‘Mt. Everest has snow and ice at the summit’ is a sentence of English” (1995, emphasis his).
67. Hockett 1966. Noam Chomsky uses the phrase discrete infinity to describe the generative capacity of linguistic sequences (1980).
68. Searle 1995
69. Harris 1968
70. Goody 1977
71. Pattee 1972b. In real-world practice there are always limits on how far self-reference can continue. The United States Supreme Court is such a limit.
72. Balkin 1998
73. Computer scientist Allen Newell calls them physical symbol systems (1980).
74. Turing 1936
75. Hofstadter 1979
76. Harnad 1990; Clark & Chalmers 1998
2 THE EMERGENCE OF CONSTRAINT
2.1 Sequences at the Dinner Table

How does symbolic information actually get control of physical systems? Let’s start by looking at an everyday interaction between sequences and a human-shaped physical system. Imagine you are dining with friends and you would like someone (let’s call him Peter) to pass the pepper grinder to you. How do you arrange for Peter to do this? I can think of three possible ways. Most likely you would simply say: “Please pass the pepper, Peter,” relying on a sequence of pulses in the air to induce Peter to modify his body’s trajectory to comply with your request. “Language is an efficient way to change another’s behavior,” writes psychologist Charles Catania. “By talking, we can change what someone else does.”1 This is one way that sequences get control of physical systems. You could also wave your hand at him to get his attention and then point to the pepper. In this case, you hope that he will attend to your hand-waving and understand from the context that you want him to pass the pepper. Anthropologist Thom Scott-Phillips calls this ostensive-inferential communication: ostensive because you are trying to demonstrate what you want Peter to do, and inferential because you are relying on Peter to figure out from the context how to behave.2 We share such gestural systems with our non-human primate relatives. But there is also a purely mechanical solution that is possible in principle, even if fanciful in practice. You could design and construct a prosthetic harness of cuffs, belts, pulleys, and motors that would attach to Peter’s body and coordinate the trajectories of his arms and hands so that he would pass the pepper to you. In effect this would convert Peter into a puppet. He would not control his movements; the harness would. My point is that all three solutions—harness, gesture, and request—are functionally equivalent; each guides the motion of Peter’s arm, hand, and fingers to
deliver the pepper to you. In all three scenarios Peter’s body is constrained to move in a specific, highly coordinated way. Strangely, though, the harness is the simplest of the three to explain in purely physical terms. It is a machine that requires energy to function and which operates in accordance with the laws of nature. This may be an absurd way to get the pepper to you, but its engineering principles are readily understood. More difficult to explain is how a sequence of pulses in the air leads Peter to behave as though his trajectory were being controlled by a machine. The pulses in the air are not machines and we know they entail little energy, yet their real-world effects are equivalent to an energy-consuming machine, and a complicated one at that.3 “Language,” writes Mark Pagel, “equips you with something akin to a television remote control device, capable of sending invisible digital signals that reprogram your listener.”4 Thinking about pepper-passing this way entails a novel perspective on the function of speech. As noted in Chapter 1, it has physical effects without causing them directly. Peter is complicated, as is the culture in which he is embedded, but living systems like Peter and rituals of civilization like suppers nonetheless obey the laws of nature. When we perceive Peter moving his arm toward the pepper, we do not assume supernatural forces are at work. If we think about it at all, we assume the coordinated muscle contractions and nerve impulses that cause his arm to move are due to natural forces. However, as Richard Dawkins says, “If we are going to explain the universe in terms of blind physical forces, those blind physical forces are going to have to be deployed in a very peculiar way.”5 Of course, it is always possible that Peter would spontaneously pass the pepper to you without being asked, but that is unlikely. It is also possible that Peter might not hear you, might ignore your request, might resist the harness, etc. 
so that you do not get the pepper. Your speech cannot force him to move; it is inert. Each of our three methods starts with an action that is possible but improbable—he might pass the pepper without being asked—and converts it into an action that is probable though not certain. In the words of philosopher Gottfried Leibniz, such requests incline without necessitating; they make Peter’s action more likely but do not have the wherewithal to make it mandatory.6 So, how likely is it that you will get the pepper? If you stay silent, it is unlikely, though not impossible. If you speak up, getting the pepper becomes likely, though not certain. Here, in two quick sentences, we have somehow transcended the world of physical law. Under the inexorable laws of nature, actions are either certain or they are impossible. Physical law has no way to account for patterns of behavior that are somewhere in between, like life and civilization.
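The notion of inclining without necessitating can be restated as a shift in probability rather than a change from impossible to certain. In this sketch the two probabilities are invented purely for illustration; nothing in the chapter assigns Peter actual numbers.

```python
import random

# "Incline without necessitating": a request shifts the odds of an
# action without guaranteeing it. Probabilities are invented.
P_SPONTANEOUS = 0.05   # Peter passes the pepper unprompted
P_ASKED = 0.90         # Peter passes it when asked

def passes_pepper(asked: bool, rng: random.Random) -> bool:
    return rng.random() < (P_ASKED if asked else P_SPONTANEOUS)

rng = random.Random(0)
trials = 10_000
silent = sum(passes_pepper(False, rng) for _ in range(trials)) / trials
spoken = sum(passes_pepper(True, rng) for _ in range(trials)) / trials
print(f"silent: {silent:.2f}, asked: {spoken:.2f}")
# neither outcome is 0 or 1: the request makes the action likely, never certain
```

The contrast with physical law is the whole point: the simulation never produces a frequency of exactly zero or exactly one, only the in-between values that laws alone cannot describe.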
2.2 Meet the Constraint

Even in a system much less complex than Peter, the laws of nature by themselves cannot always predict a system’s trajectory. If an ordinary tennis ball
is falling toward Earth, it obeys the law of gravity and we can calculate its future trajectory with nothing more than knowledge of its initial position and velocity. What was its height above the ground and how fast was it moving at the outset? Physicists call these the initial conditions. Plug the initial conditions into the equations of motion and the future behavior of the system is determined. But is it? What if some obstacle gets in its way, like a tabletop? If there is a f lat surface between the falling tennis ball and the ground, then the ball will strike it and bounce off in a different direction. If you know in advance that the tabletop is there, and you know something about its orientation and composition, then you can still figure out the future trajectory of the tennis ball. But you need to incorporate that extra bit of knowledge into your calculation; the initial conditions and the law of gravity by themselves are not enough. Despite their power and universality, the laws of physics cannot account in advance for the presence of that tabletop. In physics, obstacles like this are called constraints; the tabletop acts as a constraint on the tennis ball’s range of motion. We can still calculate the tabletop-constrained trajectory of the tennis ball, but we have to include a description of the tabletop, how this particular constraint limits the ball’s movement. The tennis ball cannot go through the tabletop; therefore, its movement is restricted to the area above the tabletop. Behavior can be constrained in many ways. “The beads of an abacus are constrained to one-dimensional motion by the supporting wires,” writes physicist Herbert Goldstein in his text Classical Mechanics. Gas molecules within a container are constrained by the walls of the vessel to move only inside the container. 
A particle placed on the surface of a solid sphere is subject to the constraint that it can move only on the surface or in the region exterior to the sphere.7 Without the wire, the beads of the abacus can move in any direction; with the wire they move in only one dimension. Without the container, the gas molecules can move an unlimited distance in any direction; inside the container they can still move in any direction, but no farther than the container walls. Without the sphere a particle can move anywhere, but in the presence of the sphere it still moves anywhere, except inside the sphere. Constraints are all around us. Some are simple, like the tabletop or the abacus, and some are complex. Some have a fixed position, while others change over time. In fly fishing, the line constrains the motion of the lure. In the Winter Olympics, the bobsled run constrains the motion of the bobsled. In naval aviation, the deck of the aircraft carrier constrains the motion of the F-35 as it lands. In the living world, the spider’s web constrains the motion of the fly. In horse racing, the whip constrains the speed of the horse.8
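The difference between predicting from laws alone and predicting from laws plus a constraint can be made concrete in a few lines of code. This is a minimal sketch of the falling-ball example, not from the text; the heights, time step, and restitution coefficient are illustrative assumptions.

```python
G = 9.8  # gravitational acceleration, m/s^2

def height_unconstrained(h0, t):
    """Law plus initial conditions alone: free fall from rest at height h0."""
    return h0 - 0.5 * G * t**2

def height_with_tabletop(h0, t, table_h, e=0.8, dt=0.001):
    """Same law, but a tabletop at height table_h acts as a constraint:
    the ball cannot pass through it, so it bounces (restitution e)."""
    y, v = h0, 0.0
    for _ in range(int(t / dt)):
        v -= G * dt
        y += v * dt
        if y < table_h:   # the constraint: motion restricted to y >= table_h
            y = table_h
            v = -e * v    # the extra knowledge the law alone cannot supply
    return y

# The law alone says the ball has fallen below 1 m after half a second;
# with the tabletop constraint at 1 m, it never can.
print(height_unconstrained(2.0, 0.5))       # about 0.775 m
print(height_with_tabletop(2.0, 0.5, 1.0))  # above 1.0 m: the ball bounced
```

The law of gravity is identical in both functions; only the second incorporates the description of the tabletop, and that description is what changes the trajectory.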
But let us not forget the important caveat given by Leibniz: constraints incline without necessitating. Constraints can be quite potent, but they still operate only within some limited range of energy. If that range is exceeded, the constraint fails. A tabletop may constrain the motion of a falling tennis ball, but it has little effect on the trajectory of a falling piano. A backboard constrains a basketball, but not a cannonball. An ordinary light switch constrains the flow of household electrical current but not the surge from a bolt of lightning. The wire of an abacus constrains the motion of the beads, but not if the beads are hotter than the melting point of the wire. If constraints are a prison, they are a minimum-security prison. In Pattee’s world of laws and rules, they are like rules, local and context-dependent, time bound and space bound. The trajectory of this discussion has taken us some distance from the dinner table where we began. Before we return, however, I want to be clear about how constraint will be used in this book. Per Merriam-Webster, constraint means “the state of being checked, restricted, or compelled to avoid or perform some action” or “a constraining condition, agency, or force.” The latter is not far off. In this book, a constraint is an external condition that modifies the trajectory or the rate of some dynamic behavior. Sequences, whether linguistic or genetic, are external conditions that modify behaviors. Complex they may be, but constraints they remain. And these external conditions are not supernatural; they are real physical things. “Constraints, unlike laws of nature, must be the consequence of what we call some form of material structure, such as molecules, membranes, typewriters, or table tops,” explains Pattee.
“These structures may be static or time-dependent, but in either case it is important to realize that they are made up of matter which at all times obeys the fundamental laws of nature in addition to behaving as a constraint.”9 Now let’s get back to the dinner table. Sequences are energetically inert. Unlike the prosthetic harness, which requires motive power, your vocal sequence does not supply enough energy to cause Peter to move in a certain way. “No amount of semiotic information, thought, or discourse alone can cause the body to move,” says Pattee. “It takes some physics.”10 In this case Peter provides the mechanism and the energy, and your spoken sequence functions as a constraint, guiding his motion along a trajectory that fulfills the functional requirement of passing the pepper.11 Unlike the falling balls, inclined planes, and rolling discs of classical mechanics, the constraints governing life and human culture are improbable and complex. But they remain constraints nonetheless. The most peculiar of all constraints are sequences, in which changes in the arrangement of one-dimensional patterns can effect changes in the dynamic behavior of complex, three-dimensional physical, biological, and social systems. However, from the perspective of the laws of nature, the principle is the same. Some physical object or event has caused a change in behavior in a way which cannot be predicted by the laws and initial conditions. No matter how complex, it is still a constraint.
2.3 Alternatives and Decisions

Our literate technological civilization affords us an unprecedented array of choices. We choose how to earn a living, how to spend the proceeds, how to organize our families, how to participate in society, and how much time to spend on various pursuits. Much of our time, perhaps too much, is taken up just making decisions about these choices. Although choice is omnipresent in human civilization, it is absent from the laws of physics, where actions are either certain or impossible. Physical law is inexorable; the deterministic universe affords moving objects no opportunity to make decisions. Our planet has never decided whether to orbit its star. The positive pole of a magnet does not decide to attract the negative pole of another. When you drop a $20 bill, it does not decide to fall to the floor. In these cases, there are no alternatives from which to choose; the laws determine the trajectory and that’s that. “Physical laws must give the impression that events do not have alternatives and ‘could not be otherwise,’” writes Pattee, quoting physicist Eugene Wigner.12 Choice, however, is a general property of the living world. Wolves and moose do not need to decide between red and white wine, but they do have to decide between fighting and fleeing, moving and resting, or feeding and reproducing. Even an enzyme in a cell, upon colliding with a nearby molecule, must decide whether its neighbor is the correct shape to be its substrate. Where does all of this choice come from? Why are there any choices at all in an otherwise deterministic universe? The prebiotic Earth lacked sequences, and it also lacked alternatives. As it turns out, there is a deep connection between sequences and alternatives, and the connection lies in this idea of a constraint. Let’s return to a simple example. The trajectory of a falling tennis ball can be predicted from the laws and the initial conditions unless some constraint like a tabletop intervenes.
Then the ball will follow an alternative trajectory. Without a constraint there is but one trajectory. With a constraint there can be as many trajectories as there are possible constraints. So, when we talk about alternatives we are really talking about constraints. Any choice, any decision among alternatives—even the existence of alternatives—all require constraints. There is always more than one way to skin a cat. If you’re trying to get something done, there are many possible trajectories that yield the correct functional outcome. This set of trajectories is called an equivalence class and the related decision process is what we call classification. Enzymes classify molecules into substrate and non-substrate. Animals classify organisms into food and non-food. Religious institutions classify people into believers and non-believers. At the dinner table you have a very clear criterion for classification: does Peter’s trajectory deliver the pepper? This serves as a reminder that, although we are discussing Peter’s trajectory, there is not just one true and perfect trajectory but rather a large set of possible trajectories—an equivalence class—which will get you the pepper. And as long
as you get the pepper, you don’t care whether Peter passes it with his left or right hand, hands it to you or tosses it to you, or grasps it with thumb and forefinger or with his whole hand. Any one of these gets the job done, and you are indifferent to the details as long as the goal is achieved. Your vocal sequence underdetermines Peter’s motion; it operates by ignoring the niggling details of how he does it and focusing on the functional outcome. All you want is the pepper. Compare this with the deterministic design of our imaginary prosthetic harness. To configure the harness so that it causes Peter to pass the pepper, you must specify one particular trajectory. The implicit decisions about how he grasps the pepper, how high he lifts it, etc. must be made explicit in advance. The harness does not know which hand Peter should use until you tell it. There are many possible trajectories, but each slightly different trajectory requires a slightly different harness design. Here’s another example. Imagine you are meeting Peter for supper and you instruct him to meet you at a particular restaurant at a particular time. You don’t care whether Peter walks, runs, drives, takes the bus, calls an Uber, or rides a bicycle to arrive at the appointed place at the appointed time. Your sequence specifying the time and place defines the functional outcome but not the detailed rate-dependent dynamics. From your perspective, the travel modes available to Peter constitute an equivalence class. His trajectory is underdetermined.13 How Peter gets to dinner or which hand he uses to pass the pepper are examples of don’t-care conditions. “In an evolutionary system, not everything becomes fixed because there are ‘don’t-care conditions,’” says molecular biologist and Nobel laureate Sydney Brenner. “If you want to do things specifically in biology you have to pay for it in sequence information.
But if it doesn’t matter, why bother to pay for it?”14 Brenner’s concern is manifest in the work of Frederick Winslow Taylor, who founded the field of scientific management in the early 20th century.15 “Each man receives in most cases complete written instructions, describing in detail the task which he is to accomplish, as well as the means used in doing the work,” writes Taylor. “This task specifies not only what is to be done, but how it is to be done and the exact time allowed for doing it.” That much detail requires a lot of sequence. Any time we get overly specific about behavior, we pay for that specificity with a longer sequence, which requires more space to store and more time and energy to process. This is the difference between “please pass the pepper” and “please pass the pepper with your left hand, holding it about midway up the grinder with your thumb and forefinger.” Living systems are reluctant to pay for things they don’t need. “Good biological as well as good engineering design makes maximum use of natural (noninformational) constraints and laws of nature,” says Pattee, “so that the control information can be kept to a minimum.”16 Don’t-care conditions are a good thing. They mean, says John Maynard Smith, that “the laws of chemistry and physics do not have to be coded for by the genes: they are given and constant.”17
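The logic of equivalence classes and don’t-care conditions can be sketched in code. Everything here, the trajectory fields and the classifier, is invented for illustration; the point is that only the functional outcome is classified, and every other detail is deliberately ignored.

```python
def delivers_pepper(trajectory):
    """Classify a trajectory by its functional outcome only;
    every other field is a don't-care condition."""
    return trajectory["endpoint"] == "your hand"

trajectories = [
    {"hand": "left",  "grip": "whole hand",       "endpoint": "your hand"},
    {"hand": "right", "grip": "thumb+forefinger", "endpoint": "your hand"},
    {"hand": "right", "grip": "whole hand",       "endpoint": "the floor"},
]

# The first two differ in every don't-care detail yet fall into the same
# equivalence class; the third shares details with both but fails the test.
equivalence_class = [t for t in trajectories if delivers_pepper(t)]
print(len(equivalence_class))  # 2
```

To make Brenner’s point in these terms: constraining the `hand` and `grip` fields as well would require a longer classifier, a longer “sequence,” with no gain in function.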
“Two roads diverged in a yellow wood,” writes Robert Frost, “And sorry I could not travel both / And be one traveler.” To make a decision, to say yes to this and no to that, you need a way to know that there is a difference between this and that. “To be able to discriminate is to be able to judge whether two inputs are the same or different, and, if different, how different they are,” writes cognitive scientist Stevan Harnad. “Discrimination is a relative judgment, based on our capacity to tell things apart and discern their degree of similarity.”18 To classify, you need a special kind of constraint. You need perception.
2.4 Classification, and Then Reclassification

People have mixed feelings about details. Sometimes the devil is in the details, meaning large consequences can result from small differences, as in for want of a nail, the kingdom was lost. Then again, when you click accept on Facebook’s license agreement without reading it, you demonstrate that you don’t care about the details of the fine print. Sometimes the details matter and sometimes they don’t. Decision requires classification, meaning we need to pay attention to the relevant and ignore the irrelevant, the true-but-unimportant. But how do we perceive which is which? Consider an intersection with a traffic light. There’s plenty of physics here: the mass, velocity, acceleration, and momentum of vehicles. Plus, there’s a constraint, an electro-optical device that precisely regulates the timing of vehicle behavior. Traffic flows are highly variable, yet when you arrive at a traffic light and plan to drive straight ahead, you don’t care about the physical details of the cross traffic. You base your stop-or-go decision strictly on the color of the signal. Even if there is no cross traffic, the constraint of a red light overrules what your perceptual system is telling you; you wait for the green. However, if you arrive at a red light and you are turning right, in many cases you can stop and then make the right-on-red. Rather than pay attention to the higher-level constraint of the red light, you attend instead to the lower-level physics of the traffic flow, your visual acuity and reflexes, and the capacity of your truck to accelerate. Somewhere in the text of the motor vehicle code is a rule, a sequential constraint, that says you must stop at a red light. Somewhere else is a rule that says if you are turning right, you may proceed cautiously after stopping even if the light is red. The second rule reclassifies the first.
The intersection is a system which can be described in two completely different ways: first, as the higher-level constraint of the traffic signal and, second, as the lower-level details of traffic flow. If you are going straight, the details of the cross traffic do not matter; the red signal is paramount. However, if you are turning right the red signal does not matter, only the physics of the traffic flow.19 A key property of constraints is that they can spontaneously evolve new levels that reclassify the constraints at lower levels. Today’s uppermost level can
become subject to constraint by a new uppermost level, thereby becoming tomorrow’s lower level, and so forth. The generic right-on-red rule can itself be reclassified by a sign reading No Right Turn on Red, and an adjacent sign reading 3 PM-6 PM Only is a further reclassification. In principle this can continue without limit. What makes this possible is self-reference, just as when Odysseus issues orders about his orders: the right-on-red rule refers to the stop-on-red rule, and the no-right-on-red sign refers to the right-on-red rule. Returning to the dinner table, you have asked Peter to pass you the pepper and he is about to comply. But after making your request you realize you do not want the pepper after all, and so you correct yourself with the sequence: “I’m sorry. I meant the mustard.” By speaking this sequence, you reclassify the constraints on Peter’s trajectory, so you get the mustard instead of the pepper.20 By the same token, Peter might have responded to your request by saying “There is one by your elbow.” In this case, Peter reclassifies the functional constraints imposed by your original sequence. Instead of constraining the trajectory of his arm, hand, and fingers, your sequence constrains his perceptual system and vocal tract to produce a new sequence that creates a new hierarchical level. His spoken sequence, in turn, constrains your trajectory to reach for the pepper next to your elbow. The function of your original sequence is fulfilled; you still get the pepper. The details are reclassified, but the global outcome remains unchanged. This is what is really going on when Odysseus is tied to the mast. His order for the sailors to ignore his future instructions reclassified those future instructions, establishing a new level of control. He also told them to ignore any future instructions telling them to ignore his previous instructions.
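The layering of rule upon rule can be sketched as nested reclassification. The function below paraphrases the traffic example; the parameter names and the 24-hour-clock convention are assumptions of the sketch, not anything from the motor vehicle code.

```python
def may_proceed(color, turning_right, no_right_on_red_sign, sign_hours, hour):
    """Each rule level refers to, and reclassifies, the level below it."""
    # Level 1: you must stop at a red light.
    allowed = (color == "green")
    # Level 2: the right-on-red rule reclassifies level 1.
    if color == "red" and turning_right:
        allowed = True  # may proceed cautiously after stopping
    # Level 3: a "No Right Turn on Red" sign reclassifies level 2 ...
    if no_right_on_red_sign and color == "red" and turning_right:
        # Level 4: ... and an adjacent "3 PM-6 PM Only" sign
        # reclassifies level 3 in turn.
        if sign_hours is None or sign_hours[0] <= hour < sign_hours[1]:
            allowed = False
    return allowed

print(may_proceed("red", True, False, None, 12))     # True: right on red
print(may_proceed("red", True, True, (15, 18), 16))  # False: sign in effect
print(may_proceed("red", True, True, (15, 18), 20))  # True: outside the hours
```

Nothing stops the stack from growing: each new level is simply one more constraint whose text refers to the constraints below it.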
The establishment of new hierarchical levels by reclassifying lower-level elements happens all the time in human interaction but also in the living world. The open-ended reclassification potential of self-reference makes evolution possible. “Any hierarchical organization that did not have the potential for establishing new levels of function and control would hardly be of biological interest,” says Pattee (emphasis his).21 In the cells of multicellular organisms, energy management is overseen by organelles like mitochondria and, in green plants, chloroplasts. Evolutionary biologist Lynn Margulis discovered that these organelles are what are called endosymbionts.22 Once upon a time these were free-living bacteria—they still have their own small genomes—but they have been incorporated into the metabolism of the larger cells where they reside. In forming the endosymbiotic partnership, their free-standing genomes were reclassified and are now constrained by the genomes of their hosts. Then there’s speciation. When two genetically identical populations become reproductively isolated, each may evolve in its own separate direction, like the finches Darwin studied in the Galapagos Islands. Eventually they diverge to the point that we recognize them as distinct species. Since each population begins with the same pool of genetic sequences, speciation means that the system has evolved a new level of constraint over the existing set of elements. This might
be due to natural selection or genetic drift, but in either case it represents a reclassification of the elements at lower levels. Most elements retain their original structures and functions; only a few mutated sequences may be needed to establish a new level of control. The opposite of speciation is merger, which also requires establishing a new level of control. When two corporations or universities merge, enormous quantities of text are generated to put a new management level into place to supervise the elements of both original enterprises, thereby constraining their functions as a unified entity. Every day we encounter the complex set of constraints embodied in the sequences of our legal system. Although separated by billions of years of evolution from the earliest living things, legal systems are an example of these principles at work. They make explicit their own decision rules for classifying lower-level details. When the law says that certain groups of people cannot be discriminated against, it is defining its own set of don’t-care conditions. “The principle of ‘equality under the law’ is at the foundation of most legal forms of hierarchical control,” writes Pattee, “and this is little more than an abbreviated statement that legal controls should apply, not to individuals, but to equivalence classes of individuals, where most details must be ignored.”23
2.5 Sequences are Invisible Boundaries

You can also think of a constraint as a boundary that limits the range of motion available to a system. In some cases, it is a physical boundary, as when a container filled with a gas limits the motion of the gas molecules. In other cases, it is an invisible fence of the sort used to constrain dogs from running into the street. Or it can be sequential, as when your vocal sequences place a boundary on Peter’s motion as he passes the pepper. Simple constraints like tabletops and even less-simple ones like aircraft carrier decks are relatively easy to imagine. You can visualize the constraint in action, the ball bouncing off the tabletop or the fighter jet landing on the aircraft carrier. It is not so easy to visualize how the constraint works when “Please pass the pepper” guides Peter’s motion. The constraint is harnessing the laws of physics, but how? To probe further we will turn to an important paper called “Life’s Irreducible Structure,” written 50 years ago by chemist and economist Michael Polanyi.24 Polanyi will shift our discussion of sequences operating as constraints from the level of the dinner table to the level of the cell. Physicists sometimes use the term boundary condition when they refer to a constraint.25 This is Polanyi’s term of choice. Just as we can think of our vocal sequence as a boundary condition on Peter’s behavior, we can also think of a sequence of DNA as a boundary condition on the behavior of molecules in a cell. “Physics is dumb without the gift of boundary conditions, forming its frame,” says Polanyi.26
But before he gets to cells, Polanyi begins by discussing machines, which are mechanisms designed and built by humans to perform specific functions, but which depend upon the laws of nature for their operation. Machines are special-purpose constraints. Observe a machine like a microwave oven and your attention turns to its design and function, and not so much to the inexorable laws of electromagnetism and molecular motion on which it depends. The boundary conditions and the function interest you but the laws do not; they are assumed without much thought. “The machine as a whole works under the control of two distinct principles,” writes Polanyi. “The higher one is the principle of the machine’s design, and this harnesses the lower one, which consists in the physical-chemical processes on which the machine relies.”27 A microwave oven relies on physical-chemical processes, like the fact that microwave energy can increase molecular motion in certain substances and that household electrical current can be converted to microwave energy. If these processes were not universal and inexorable, the microwave oven would not work. Of greater interest, however, is the machine’s design that allows us to warm a cold cup of coffee quickly and safely. Polanyi calls this dual control. The design of the machine determines the function, but it needs the laws of nature to make it work; the design is a boundary condition on the laws. The laws are the physical-chemical processes, and the constraints are the machine’s design; changing the design means changing the constraints. Dual control also holds true in living systems. Organisms obey the laws of nature in their operation, but we take those laws for granted. More interesting are the structural and functional constraints that harness the laws. “The organism is shown to be, like a machine, a system which works according to two different principles,” says Polanyi. 
“Its structure serves as a boundary condition harnessing the physical-chemical processes by which its organs perform their functions.”28 Like machines, living things need both special boundary conditions and the laws of nature in order to function. But neither is reducible to the other. Function can only be understood by reference to the boundary conditions. The laws say nothing about function. The boundary conditions, on the other hand, are impotent without the laws. “The existence of dual control in machines and living mechanisms,” says Polanyi, “represents a discontinuity between machines and living things on the one hand and inanimate nature on the other.”29 The blind physical forces are deployed in a very peculiar way. If an organism’s structure is a boundary condition that harnesses the laws of nature, what should we then say about the DNA sequences that guide development of that structure in the first place? Polanyi’s point is that the sequences themselves constitute a boundary condition. “DNA itself is such a system,” he writes, since every system conveying information is under dual control, for every such system restricts and orders, in the service of conveying its
information, extensive resources of particulars that would otherwise be left at random, and thereby acts as a boundary condition. In the case of DNA this boundary condition is a blueprint of the growing organism.30 In more detail, a sequence of DNA is a boundary condition that constrains the sequence of amino acids in a protein, which in turn is a boundary condition on the structure and function of the cell. The amino acids that constitute the protein are the extensive resources of particulars that would otherwise be left at random. Without the one-dimensional pattern specified by DNA, amino acids would randomly form useless blobs rather than functioning proteins. “Every system conveying information is under dual control,” says Polanyi. Thanks to molecular biology, we can readily see how dual control arises in the cell, but it is not so easy to see how it happens with human language, though it clearly does. After all, the patterned sequences of human speech and writing are nothing if not systems conveying information. Just as DNA sequences harness the extensive resources of particulars of physics and chemistry to build organisms, the sequences of speech and writing harness the extensive resources of particulars of human populations to build civilizations. Returning to the dinner table, we see that Peter is under dual control. When he passes the pepper, Peter the organism supplies the mechanism and the motive power, but the sequence of your request is a boundary condition on his range of motion. Just as surely as a physical harness, your sequence guides the trajectories of his arm, hand, and fingers, briefly configuring him as a special-purpose machine. You cannot explain what he is doing without reference to both your sequence and the laws it constrains. “On the one hand the boundary conditions encode the whole secret of living matter,” says physicist Bernd-Olaf Küppers.
“On the other, the concept of boundary conditions is completely anchored in physics, and there is nothing mysterious about it.”31
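Polanyi’s dual control can be caricatured in code: one function stands in for the inexorable physical-chemical process, another for the design that harnesses it. The power rating, safety cutoff, and heat-capacity figure below are invented for the sketch, not drawn from the text.

```python
def temperature_rise(power_watts, seconds, grams, c=4.2):
    """The 'law': energy absorbed raises temperature
    (joules over grams times J/g-degC). It holds whether
    or not any machine exists."""
    return power_watts * seconds / (grams * c)

def microwave_oven(cup_grams, target_rise):
    """The 'design': boundary conditions harnessing the law so that
    a cold cup of coffee is warmed quickly and safely."""
    POWER = 800        # a design choice, not a law
    MAX_SECONDS = 120  # a safety cutoff, also a design choice
    t = 0
    while t < MAX_SECONDS and temperature_rise(POWER, t, cup_grams) < target_rise:
        t += 1
    return t

# Neither level reduces to the other: the law says nothing about coffee,
# and the design is impotent without the law.
print(microwave_oven(250, 40))  # seconds needed to warm 250 g by 40 degC
```

Changing `POWER` or `MAX_SECONDS` changes the constraints and hence the function, while `temperature_rise` stays fixed, which is the asymmetry Polanyi’s dual control describes.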
2.6 Constraining Individuals in a Collection

Let’s take a closer look at what happens when you ask Peter to pass the pepper, not just at the level of gross anatomy but also at deeper levels of dual control. Peter’s body comprises billions and billions of cells. As in most multicellular organisms they are differentiated into retinal cells, muscle cells, bone cells, blood cells, etc. Contrast this with a collection of billions of nearly identical bacterial cells in a petri dish, all doing more or less the same thing at the same time. We would call a collection like this a colony, not an organism. And we would not expect a bacterial colony to be sufficiently well-organized to pass the pepper. We would never call Peter a colony. Though the cells of Peter’s body have identical genomes, each does its own thing while other cells are doing other things. To get you the pepper, cells in his perceptual system, his nervous system,
and his musculoskeletal system behave in a coherent, rate-dependent fashion. How does the high-level collection of cells we call Peter manage to constrain and coordinate the detailed behavior of the individual member cells of his own collection? Pattee states the problem more generally: “How do structures that have only common physical properties as individuals achieve special functions in a collection?”32 Put another way, why are we organisms rather than colonies? This question can be approached in two ways: top-down and bottom-up. The top-down approach asks: how does the collection of cells called Peter achieve coherent high-level function by imposing boundary conditions on the behavior of its own cells, each of which is genetically identical? How does the sequence of your request manage to constrain these billions of detailed cellular behaviors? The bottom-up approach asks: how do cells with indistinguishable genomes differentiate their behaviors when placed into a collection? How are identical arrangements of genomic DNA able to orchestrate and coordinate a variety of cellular behaviors that are anything but identical? This is what molecular biologist and Nobel laureate Jacques Monod calls “a heterogeneous response in a genetically homogeneous population.”33 Imagine a muscle cell in the index finger of Peter’s left hand. To get you the pepper, the fibers in this cell must contract at exactly the right time. Your vocalization does not directly constrain the cell; it does not have ears to hear you.34 But the cell does have receptors to respond to local boundary conditions, intercellular signals like electrical impulses, hormones, and neurotransmitters, and it generates internal constraints from the sequences of its own genome. The local boundary conditions are being produced by nearby muscle cells subject to their own sets of constraints.
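Monod’s phrase can be given a toy rendering: every cell runs an identical rule set, its “genome,” yet the local boundary condition selects which behavior is expressed. The signal names and responses here are invented for the illustration.

```python
GENOME = {  # identical in every cell of the collection
    "motor neurotransmitter": "contract",
    "growth hormone": "divide",
    None: "rest",  # no local signal present
}

def cell_behavior(local_signal):
    """Each cell obeys the same internal constraints; the local
    boundary condition decides which behavior is expressed."""
    return GENOME.get(local_signal, GENOME[None])

# Identical genomes, heterogeneous response:
signals = ["motor neurotransmitter", None, "growth hormone", None]
print([cell_behavior(s) for s in signals])
# ['contract', 'rest', 'divide', 'rest']
```

The heterogeneity comes entirely from the distribution of local signals, not from any difference in the cells themselves, which is the point of the bottom-up question.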
Any organism-level behavior, whether simpler or more complex than passing the pepper, is the result of a cascade of local and not-so-local constraints on individual cells. The top level may specify the function, but at the bottom level the laws of nature make it go. Social scientist Donald Campbell describes the conundrum this way: “All processes at the higher levels are restrained by and act in conformity to the laws of lower levels, including the levels of subatomic physics.” And yet, “all processes at the lower levels of a hierarchy are restrained by and act in conformity to the laws of the higher levels.”35 He calls this downward causation. This is clearly a more complex form of dual control. Campbell recognizes downward causation as a problem of hierarchy. There are two levels in the hierarchy; the upper level imposes boundary conditions on the lower, in the form of specific instructions to specific cells to behave in specific ways at specific times. Meanwhile the elements of the lower level are constraining each other in a way that makes the upper level possible. Constraints are constraining other constraints. Many kinds of hierarchies exist in the natural world, but most interesting to us are the coherent collections of elements governed by the sequential constraints found in living things and in human civilization. Pattee calls
them control hierarchies, and they are present throughout the living world, from molecules within single cells to single cells within multicellular organisms to individuals within societies. “What is the basis for this coherence?” asks Pattee. “How do coherent organizations arise from chaotic collections of matter? How does one molecule, ordinary by all chemical criteria, establish extraordinary control over other molecules in a collection?”36 We often think of hierarchies as larger things containing smaller things, like nested Russian matryoshka dolls. But control hierarchies are about behavior rather than structure. In a military hierarchy, “generals issue orders to their field commanders,” writes psychobiologist Henry Plotkin, but “the generals are not in any sense made up of their officers.”37 In such hierarchies, he writes, the upper level constrains “the activities of lower-level entities precisely to allow the emergence of those higher-order levels of organization.”38 The upper level governs the detailed behavior of the lower level, which makes the upper level possible in the first place. Control hierarchies are not confined to molecules, cells, and organisms; they are also fundamental to how societies are organized. “As isolated individuals we behave in certain patterns,” says Pattee, but when we live in a group we find that additional constraints are imposed on us as individuals by some ‘authority.’ It may appear that this constraining authority is just one ordinary individual of the group to whom we give a title, such as admiral, president, or policeman, but tracing the origin of this authority reveals that these are more accurately said to be group constraints that are executed by an individual holding an ‘office’ established by a collective hierarchical organization.39 Not only are control hierarchies present throughout the living world, they define the living world. 
If physics is all about explaining those universal regularities of nature over which we have no control, then biology is all about explaining the time bound and space bound events constrained by hierarchical boundary conditions.40 What are the characteristics of control hierarchies? How do they function and evolve? What roles do sequences play? To approach these questions, it will be useful to start by thinking about hierarchies more generally.
2.7 Levels of Description and Control

Everywhere we see larger things made up of smaller things. The objects we encounter in our daily lives are made of atoms and molecules, although we usually pay attention to the objects themselves and not to their microscopic components. Our everyday environment is also part of a planet that is part of a solar system that is part of a galaxy that is part of a galactic cluster, but in
The Emergence of Constraint
daily life our awareness of astronomy rarely ventures beyond sunrise and sunset or whether the moon is full. Humans have evolved to concern ourselves with objects and events at our scale, more-or-less our size and more-or-less our speed. In our level of this spatial hierarchy we are like Goldilocks, interested in nothing too big or too small, too fast or too slow, only what’s just right.

The most powerful of the four forces of nature is the strong force that holds atomic nuclei together. Despite its potency we don’t worry about it because the strong force works over distances smaller than the diameter of atoms. Aside from holding matter together, it has no macro effects. At the other extreme, the weakest force in the universe is gravity, which operates over long distances. It intrudes into our lives through our gravitational relationship with our own planet. But we don’t concern ourselves with the gravitational attraction among pots and pans in the kitchen. And, except for the moon and the tides, we mostly ignore the large-scale attractions, like our planet’s gravitational relationship to its star or our solar system’s gravitational relationship to the rest of its galaxy.

As we ascend each level in the structural hierarchy of the physical world, we find classes of objects under the influence of scale-appropriate forces. The everyday scale into which we are born is made up of much smaller things like atoms and molecules subject to their own forces. In turn, it makes up even larger things like galaxies subject to their own forces. Our Goldilocks world is isolated from the details of both the microscopic world below and the cosmological world above.

The microscopic world moves faster than we do. The incessant jiggling of atoms and molecules is way too rapid for us to perceive without the help of measuring instruments. Meanwhile, the motion of the cosmological world above seems far more leisurely.
Distant stars and galaxies appear to move too slowly for us to perceive without other measuring instruments. For all practical purposes, says Pattee, when we perceive the world “the faster motions one level down are averaged out and the slower motions one level up are constant.”41 We enjoy the illusion of stability.

If you walk into a room and notice it’s a little chilly, you may decide to turn on the heat. But what do we mean by chilly and heat? In a room filled with air, individual gas molecules are bouncing around in accordance with the inexorable laws of kinetics, some faster and some slower. But when you say the room feels chilly you are not responding to the detailed motions of individual molecules— “Oh, that one’s fast, and so is that one”—you are responding to the average of their motions. That average is what we call temperature.

The chilly room offers two different ways of describing what is happening. We can describe the detailed motions of the molecules or we can describe their statistical average, their temperature. The choice is ours, depending on our needs. “Which description we use depends entirely on which aspects of the behavior of the system we are interested in,” writes theoretical biologist Robert Rosen. “Two apparently independent system descriptions are appropriate
to different activities being simultaneously manifested by the same system.”42 Such systems are not under dual control, but we can say they are subject to dual description. The first way of describing our chilly room captures the details of the lower level (molecular motion) and the second describes the regularities of the upper level (temperature). “The idea of a hierarchical organization simply does not arise,” says Rosen, “if the same kind of system description is appropriate for all of them.”43 Like Peter at the dinner table, the upper-level description of pepper-passing involves an enormous simplification compared with the lower-level details of contracting muscle fibers.

What is the difference between a control hierarchy like Peter and a descriptive hierarchy like a room filled with air? To measure the warmth of the air in the room, you ignore the detailed motion of the air molecules and produce a statistic called temperature that averages out the details. To make the room warmer or cooler, you do not worry about the behavior of each individual molecule.44 Rather, you turn on a furnace or an air conditioner to adjust the average motion until you are comfortable. Some molecules move faster or slower as a result, but you don’t care which molecules they are as long as their average motion yields the new temperature you want.

Compare this with the control hierarchy operating when Peter passes the pepper. The upper level called Peter is not concerned with the average behavior of his collection of cells. A successful outcome depends not on their average behavior, but on the specific coordinated behavior of individual elements of his collection many levels down. Certain nerve cells fire and certain muscle cells contract in a coherent way in order for Peter to pass the pepper. It may all be subconscious, but Peter very much cares about which cells are doing what.
As Monod says, “the elementary phenomena of cell physiology are not reducible to statistical laws but rather to mechanisms whose construction and complex and precise circuits are not unlike those of a machine.”45 The elements of the Peter collection are not interchangeable air molecules. Rather, they are differentiated cells and protein molecules. And their motion is not constrained by something simple like the walls of a room. Rather, the constraint is the result of the sequence-guided boundary conditions of intracellular gene expression and intercellular signaling. “In a control hierarchy,” says Pattee, “the collective upper level structures exert a specific, dynamic constraint on the details of motion on individuals at the lower level, so that the fast dynamics of the lower level cannot be averaged out.”46

While we can measure Peter’s body temperature, we would never think of his body temperature as his function. Function has little to do with averages and everything to do with specifics.47 We do not think of temperature as a function of the air in the room and Peter’s global behavior is more than just the average of the behavior of his cells. Were that the case, he would be a colony rather than an organism. “Functional specificity is the fundamental phenomenon
of animacy,” writes psychologist Edward Reed.48 To achieve function, dual description must give way to dual control.
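The contrast between the two kinds of hierarchy can be sketched in a few lines of Python. This is a toy illustration, not anything from the book: the numbers and cell names are invented.

```python
import random

random.seed(42)

# Lower-level detail: the speed of every molecule in the room.
speeds = [random.gauss(500.0, 50.0) for _ in range(10_000)]

# Upper-level *description*: temperature stands in for the average;
# the molecular details are averaged out.
mean_speed = sum(speeds) / len(speeds)

# Statistical control (the furnace): shift the average without caring
# which particular molecules end up moving faster.
warmed = [s * 1.05 for s in speeds]
warmed_mean = sum(warmed) / len(warmed)

# Dual *control* (Peter passing the pepper): specific, designated
# elements must act coherently; their average tells us nothing.
required = {"motor_neuron_7", "biceps_fiber_12"}  # hypothetical cells
fired = {"motor_neuron_7", "biceps_fiber_12", "incidental_cell_3"}
success = required <= fired  # every designated element acted
```

The furnace succeeds by moving one statistic; Peter succeeds only if the right individual elements, many levels down, do the right things.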
2.8 Loosely Coupled and Nearly Decomposable

A central figure in the scientific study of hierarchies is Herbert Simon. In his book The Sciences of the Artificial, Simon calls the hierarchies of atoms and galaxies nearly decomposable systems, meaning there is a greater density of interaction among elements within a given level than there is between elements at adjacent levels.49 Molecules interact with other molecules, galaxies with other galaxies, and people with other people. In a nearly decomposable system we ignore the details of the upper and lower levels and concentrate on the level at which we find ourselves. You and Peter are a nearly decomposable system. You don’t care about the microscopic details of how his cells coordinate among themselves, or about the gravitational attraction between Peter and Jupiter; you just want the pepper.

When you ask Peter to pass the pepper, the microscopic details of how you produce your vocal sequence and the microscopic details of how Peter responds are not relevant to either of you. But neither are most other details at a macro level, like the color of Peter’s shirt or whether his meal is vegetarian or whether he finds his dinner companions boring. At the level of personal interaction, there are two relevant questions: (1) whether your sequence specifies enough detail to guide Peter’s behavior; and (2) whether Peter’s trajectory is constrained to your satisfaction. The interaction contains a myriad of details, but it boils down to whether Peter interprets your sequence and fulfills it. Since the system is under dual control the microscopic facts matter; the entire transaction depends on the cells of your body and those of Peter’s being suitably constrained. Simon’s point, extending Rosen’s dual levels of description, is that to explain function at one level, we ignore the details one level down. Physiological complexity gives rise to functional simplicity.
We can describe the system either way, as detailed physiology or as organismal behavior, but neither description provides a complete picture.

In addition to the minimal coupling between the levels of nearly decomposable hierarchies, Simon also describes what he calls loose horizontal coupling between elements at the same level.50 At the dinner table you and Peter are loosely coupled elements; Simon would call you subassemblies. What does this mean? “Loose horizontal coupling permits each subassembly to operate dynamically in independence of the detail of the others,” says Simon. “Only the inputs it requires and the outputs it produces are relevant for the larger aspects of system behavior.”51

Many of our everyday encounters with other people take the form of loose horizontal coupling. When you transact with the barista at the coffee shop,
you each operate dynamically in independence of the detail of the other. The encounter is all about coffee and money and nothing else is relevant, including how the money was earned or what kind of truck delivered the coffee to the shop. From the perspective of the transaction, these and many others are don’t-care conditions.

In the cell it is common for a series of enzyme molecules to govern what is called a metabolic pathway. In a metabolic pathway, an initial enzyme selects a molecule of raw material and converts it into an intermediate product, which is then passed along to another enzyme, which converts it to a second intermediate product. This can iterate through many steps, but eventually the ultimate enzyme in the series takes the last intermediate product and converts it into some molecule the cell needs. When we eat potatoes or rice, a series of enzymes in the mouth, stomach, and intestine are needed to convert the starch molecules in the food into the useful sugar glucose. Loose horizontal coupling means that no enzyme in the series cares about the steps that led to its input molecule, only that the input is available when needed.52 Molecularly speaking, all it wants is the pepper.

Loose horizontal coupling is closely tied to the idea of modularity.53 Simon uses the example of two watchmakers who are often interrupted in their work. The first builds each watch one piece at a time until the watch is finished. If she is interrupted the assembly falls apart and she is forced to start over. The second watchmaker builds independent subassemblies that are later fashioned into a completed watch. If she is interrupted, the work she loses is the current subassembly; any modules she has already completed are unaffected. It is easy to see, as Simon does, that the watchmaker building subassemblies will produce many more watches than the one who builds entire watches one piece at a time.54 So it is with evolution.
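Simon's watchmaker comparison can be made quantitative. Here is a sketch in Python; the interruption rate and part counts are illustrative choices in the spirit of his argument, not figures from the book.

```python
# Probability of surviving n part-additions without interruption,
# given probability p of being interrupted at each step.
p = 0.01           # interruption probability per part added
watch_parts = 1000
module_parts = 10  # size of each stable subassembly

p_whole_watch = (1 - p) ** watch_parts  # finish all 1000 parts in one run
p_one_module = (1 - p) ** module_parts  # finish a 10-part subassembly

# Expected attempts per completed unit is the reciprocal of success.
attempts_whole = 1 / p_whole_watch   # tens of thousands of restarts
attempts_module = 1 / p_one_module   # barely more than one try
```

On these assumptions the holistic builder needs on the order of twenty thousand attempts per finished watch, while the modular builder rarely loses more than one ten-part subassembly. Evolution exploits the same arithmetic: stable subassemblies mean an interruption, or a mutation, costs only one module.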
By starting with a collection of stable parts like enzymes or cells or organs that function independently, natural selection does not need to begin anew in order to try out a novel configuration. This allows “mutation and natural selection to go on in particular subsystems,” says Simon, “without requiring synchronous changes in all the other systems that make up the total organism.”55

Learning and behavior also rely on loose horizontal coupling. “Hierarchically structured behavioral sequences,” write cultural evolution researcher Alex Mesoudi and anthropologist Michael O’Brien, “should be less vulnerable to error and more easily executed and learned than holistically structured behavioral sequences.”56 The same principle applies to the development of a multicellular organism from a fertilized egg. Like the watchmaker building a watch from subassemblies, constructing an organism from a single cell is made reliable through the use of what are called gene regulatory networks (GRNs). Gene regulatory networks have a modular, hierarchical structure and each subassembly performs a distinct regulatory function in the process of development.57 These networks
have been stable across hundreds of millions of years of evolutionary time, allowing natural selection to reuse existing modules to build new organisms.
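The loose horizontal coupling running through this section, whether between enzymes in a pathway or subroutines in a program, can be sketched as a chain of functions, each indifferent to how its input was produced. The enzyme names below are real; the string transformation is purely illustrative.

```python
def amylase(substrate):
    # Mouth and gut: starch is broken down toward maltose.
    return substrate.replace("starch", "maltose")

def maltase(substrate):
    # Intestine: maltose is converted to glucose.
    return substrate.replace("maltose", "glucose")

def pathway(substrate, enzymes):
    # Loose coupling: each enzyme sees only its current input,
    # never the history of steps that produced it.
    for enzyme in enzymes:
        substrate = enzyme(substrate)
    return substrate

product = pathway("starch", [amylase, maltase])  # "glucose"
```

Feed the same chain an intermediate instead of the raw material and the downstream steps neither know nor care: all each enzyme wants is its input, available when needed.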
2.9 Hierarchical Patterns in Sequences

A room filled with air is a hierarchical system which we can describe in two complementary ways, either as detailed molecular motion or as the average molecular motion we call temperature. Peter is a more complex and functional control hierarchy, in which a collection of elements constrains the detailed behavior of its members, which in turn make possible the coherent behavior of the collection. But there is another kind of hierarchy that is specific to sequences. Linguists have long known that the rules for combining sequences of letters into words and sequences of words into sentences constitute a hierarchy known as a syntax or grammar. Logic and mathematics also have hierarchical rules for constructing well-formed sequences. So now we turn our attention to the hierarchies that exist within sequences and govern their organization. As discussed in Chapter 1, we are interested here in spatial arrangement within sequences rather than the space outside.

Sequences of DNA are built from nucleotide bases which have no function as individual elements. These combine into the three-base codons of the genetic code that map to individual amino acids drawn from their own alphabet. Sequences of codons gather into the segments of genes that form the template for the production of functional proteins. The sequences of English comprising this book are built from individual letters, alphabetic elements with no inherent meaning or function. These combine into morphemes and words and sentences; meaning is absent at the lowest level of the hierarchy but is accrued as we move to higher and higher levels. These systems exhibit Charles Hockett’s duality of patterning, which complements physical systems exhibiting dual description and dual control.58

What is the relationship between the grammatical hierarchies within systems of sequences and the hierarchical behavior they constrain in the external world?
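The codon level of this hierarchy can be made concrete with a few real entries from the standard genetic code; the helper functions are illustrative, not from the book.

```python
# Individual bases, like individual letters, are functionless; meaning
# enters only at the codon level. Four real entries of the standard code:
CODON_TABLE = {
    "ATG": "Met",   # methionine; also the start signal
    "TGG": "Trp",   # tryptophan
    "AAA": "Lys",   # lysine
    "TAA": "Stop",  # a stop codon
}

def codons(dna):
    # Group the one-dimensional base sequence into three-base codons.
    return [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]

def translate(dna):
    out = []
    for codon in codons(dna):
        aa = CODON_TABLE.get(codon, "?")
        if aa == "Stop":
            break
        out.append(aa)
    return out

peptide = translate("ATGTGGAAATAA")  # ["Met", "Trp", "Lys"]
```

The same twelve bases, read one level up, become three amino acids and a stop: function appears only in the higher-level grouping.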
In linguistics, this question is called the distinction between syntax and semantics. Following the lead of mathematician Claude Shannon, a pioneer of communication theory, many linguists argue that syntax and semantics are separate domains, each amenable to independent analysis. In his classic paper “A Mathematical Theory of Communication,” Shannon writes: “The semantic aspects of communication are irrelevant to the engineering aspects.”59 In other words, a communication channel does not need to preserve the semantics of the sequence being communicated, only its one-dimensional pattern. The meaning of the sequence is not its problem. Starting with Noam Chomsky, many linguists have tried to create models of natural language using tools developed to parse the formal languages of
mathematics, logic, and computer code.60 We saw in Chapter 1 that formal languages have no built-in semantics; meaning is assigned by the mathematician, logician, or programmer, through what psychologist L.S. Vygotsky calls deliberate semantics.61 But although mathematical formulas, logical statements, and computer programs lack semantic content, they do possess well-defined grammars. Syntactic failure in mathematics means that the calculation is wrong, in logic that the proof has been unsuccessful, and in computer programming that the application does not run.

This is another way in which the formal sequences of logic, mathematics, and software differ from our natural human sequences of speech and writing. All possess grammars or internal hierarchies, but only our language is embedded with our bodies in the complex and ever-evolving control hierarchies of human biological and social behavior. Speech and writing have the potential for on-the-fly self-reference and reclassification of constraints in the physical world. Formal systems of sequences reclassify only when instructed to. A strong relationship exists between the hierarchical grammar of human language and the control hierarchies it deploys to orchestrate the material world. This will be taken up in the next three chapters.

But how do sequences interact with the physical world in the first place? How do they actually get control of physical systems? “Please pass the pepper” interacts with the world by constraining Peter’s behavior. Peter is too complicated, however, and to get to the heart of the question, we need to follow Pattee’s lead and look at a simpler system. Fortunately, science has given us many details of how sequences of DNA constrain the behavior of molecules in the cell. These molecular control hierarchies yield clues to the general functional requirements for systems of sequential constraints and the evolution of complexity.
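The independence of grammar from meaning can be seen directly in a programming language, whose parser enforces well-formedness while knowing nothing of semantics. A small Python sketch:

```python
def is_well_formed(source):
    # The parser checks only the one-dimensional pattern of the sequence;
    # it has no idea what the sequence means.
    try:
        compile(source, "<sketch>", "exec")
        return True
    except SyntaxError:
        return False

# Grammatical but meaning-free (it would fail only if actually run):
ok = is_well_formed("colorless_green_ideas = sleep * furiously")

# Ungrammatical: a syntactic failure, so the program cannot run at all.
bad = is_well_formed("colorless green ideas sleep furiously")
```

The parser happily accepts a Pythonic rendering of Chomsky's famous nonsense sentence and rejects the raw English words: grammar without semantics, exactly as Shannon's channel requires.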
Notes
1. Catania 1990. Note that although everyone at the table can hear your request, your use of “Peter” makes your target explicit. “Peter” is used as an address in order to achieve random access.
2. Scott-Phillips 2014
3. B. F. Skinner describes the problem this way: “Instead of going to a drinking fountain, a thirsty man may simply ‘ask for a glass of water’—that is, may engage in behavior which produces a certain pattern of sounds which in turn induces someone to bring him a glass of water. The sounds themselves are easy to describe in physical terms; but the glass of water reaches the speaker only as the result of a complex series of events including the behavior of a listener. The ultimate consequence, the receipt of water, bears no useful geometrical or mechanical relation to the form of the behavior of ‘asking for water’” (1957).
4. Pagel 2016
5. Dawkins 1983
6. Loemker 1989
7. Goldstein 1980; emphasis his.
8. As constraints become more complex, so do their corresponding effects on the dynamics. Constraints can become so complex that their effects cannot readily be calculated to predict the trajectory of the affected system. One term for such nonintegrable constraints is nonholonomic. A discussion of nonholonomic constraints is well beyond the scope of this book; suffice it to say that the sequences on which the living world and human civilization depend are of this form. For more details, see Pattee 1966, 1968.
9. Pattee 1972a
10. Pattee 2009. Pattee’s focus on the physics of sequence processing in the cell parallels physicist Rolf Landauer’s work in establishing “the physical requirements for logical devices” in computing (1961).
11. As physicist and Nobel laureate Richard Feynman writes, “biology is not simply writing information; it is doing something about it” (1960; emphasis his).
12. Pattee 2008, citing Wigner 1964.
13. Choice enters physics only when scientists make measurements: they choose what, when, where, and how to measure. But the choices physicists make already assume a distinction between the level of the scientist-observer and the level of the system being observed, between the subject and the object.
14. Notice how the constraints can shift the alternatives. If you just tell Peter where and when to meet you for supper then the locus of decision-making shifts to him. He has to decide how to get there. However, you could give him bus fare and tell him which bus to take. Then you would retain the decision. But why bother? He is quite capable of getting to the restaurant without your micromanagement.
15. Brenner 2009. Purists might call these “unconstrained degrees of freedom,” but don’t-care conditions works for me.
16. Taylor 1911
17. Pattee 1982a
18. Maynard Smith 2000
19. Harnad 1990
20. Consider the stop sign. Unlike a traffic signal, the stop sign delegates all of the decision-making to you. Traffic engineers assume that once you have stopped, your sensorimotor system will decide when you should go. The system is underdetermined in cases like this where the rate-independent sequence defaults to the rate-dependent dynamics of the immediate context.
21. You did not have to reformulate the full original sequence in its detail. You did not have to say: “Peter, do not pass me the pepper. Please pass me the mustard.” Instead, you employed self-reference to reclassify the elements of the original sequence into elements to be retained (“Peter,” “pass,” “me”) and elements to be ignored (“pepper”). By doing so, you established a new hierarchical level in which the sequence “I meant the mustard” reclassified the constraints established by your original sequence.
22. Pattee 1972b
23. Margulis 1967
24. Pattee 1973
25. Polanyi wrote the paper against the backdrop of rapid progress in 1960s molecular genetics to caution his fellow scientists not to assume that working out the molecular details of life was tantamount to explaining life. Another way to think about boundary conditions is as filters that separate the actual from the possible. “The concept of boundary conditions is taken from physics,” writes physicist Bernd-Olaf Küppers, “where it refers to the conditions that select, from among the huge number of possible processes allowed by natural law, those processes that actually do take place in a system” (2018). In other words, decisions among alternatives.
26. Polanyi 1967
27. Polanyi 1968
28. Polanyi 1968. For a discussion of how cells are and are not machine-like, see Nicholson 2019.
29. Polanyi 1968
30. Polanyi 1968
31. Küppers 2018
32. Pattee 1973
33. Monod 1959, quoted in Gayon 2015
34. As Pattee writes, “cells do not have feet or ears, but they have motility and irritability, which are basic functions of feet and ears” (1982b).
35. Campbell 1974
36. Pattee 1972b
37. Plotkin 1993
38. Plotkin 2003
39. Pattee 1973
40. Pattee 2007
41. Pattee 1972b. Says Herbert Simon: “One can usually obtain good approximations to the short-run and middle-run behavior at any given level without considering the details of the higher-frequency movements at the level below, and while taking the situation at the levels above as constant over the interval of interest” (2005). See also Novikoff 1945.
42. Rosen 1969
43. Rosen 1969
44. If we did have access to a device to constrain the motions on a molecule-by-molecule basis, we would have what philosophers call a Maxwell Demon (e.g., Leff & Rex 2002).
45. Monod 1959, quoted in Gayon 2015
46. Pattee 1972b
47. “Complexity arises,” says Simon, “when we have a large system and when the system divides into a number of components that interact with each other in ways that amount to something more than uniform, frequent elastic collisions” (2005; emphasis his).
48. Reed 1996
49. Simon 1969
50. Simon 1973
51. Simon 1973
52. Software development takes advantage of this approach. In computer programs it is common for one subroutine to perform a specific calculation and then hand off the result to another subroutine. Loose horizontal coupling means that one subroutine is not concerned with how another subroutine arrives at its result, only that the result is correct and delivered on time.
53. Callebaut & Rasskin-Gutman 2005
54. Simon 1969
55. Simon 1973
56. Mesoudi & O’Brien 2008
57. Davidson & Erwin 2006
58. Hockett 1966
59. Shannon 1948
60. Searls 2002
61. Vygotsky 1962
3 THE GRAMMAR OF INTERACTION
3.1 Interactions, Ordinary and Specific

Imagine a boulder rolling down a mountainside and crashing into another boulder. To explain how two big rocks interact with one another, we turn to the laws of physics. The outcome of a collision between two rocks depends on their initial positions and velocities, their masses, their compositions, and the forces impinging on them. If a chunk of hard rock like granite collides with a chunk of soft rock like sandstone, we expect the sandstone to be damaged more than the granite. Why? Because the internal forces holding the sandstone together are not as strong.

Now imagine a wolf charging down a mountainside to attack a moose. To explain how two mammals interact with one another, we might also try the laws of physics, but the results will be unsatisfactory. These interactions take on specific stylized forms that cannot be predicted from the usual laws and initial conditions. We expect the moose to be damaged more than the wolf, but this is not because the internal forces holding the moose together are not as strong. Furthermore, when a wolf interacts with a moose the outcome is nothing like what happens when a wolf interacts with other wolves, or a moose with other moose.

Wolves and moose are intelligent compared to rocks, but their behavioral repertoire pales in comparison to the many ways in which mammals of the human species can interact with one another. Moose do not get together to build airplanes or microprocessors, and wolves do not file lawsuits against their neighbors, or expect other wolves to bring them food in exchange for bits of paper.

For fundamental questions like how life differs from the ordinary physical world or how human civilization differs from ordinary biology, much of the answer has to do with situational specificity. Living things react to the world with a specificity not predicted by the underlying physical law, and people react to
the world, and to each other, with a specificity not predicted by the underlying biology. “Biological specificity is fundamentally the same,” says Carl Woese, “no matter where it is encountered” (emphasis his).1 The laws of physics cannot tell us how a wolf will react to a moose, and our knowledge of biology cannot tell us how to open a bank account. Biology and civilization depend on this remarkable situational precision, which is guided by complex constraints, sequences of DNA in the case of biology and sequences of text in the case of our literate technological civilization. If you want a generic interaction like a collision between rocks, the laws of physics are fine; if you want special interactions, you need special constraints or, as Richard Dawkins says, “those blind physical forces are going to have to be deployed in a very peculiar way.”2

We have seen that sequences have no direct, meaningful effect on their environments; in essence they are job descriptions without anyone to do the job. Absent Peter, “Please pass the pepper” accomplishes nothing. The pepper will not move on its own, no matter how stridently you demand. Nonetheless, sequences do constrain and coordinate vast amounts of matter and energy on our planet. Symbolic information actually does get control of physical systems. It seems that sequences have not only posted the job description, but filled the vacant position, and trained someone or something to do the job for them.

To constrain the behavior of objects in the world, sequences rely on a workforce of third-party mechanisms. They engage the services of physical agents, of interactors, to function on their behalf.
The duties of an interactor, says Kim Sterelny, are “to link the registration of a salient feature of the world to an appropriate response,” in other words to couple perception to behavior.3 Sequences can describe alternatives, but they need real-world interactors to perform the classification and execute the decision.4 What are these interactors, where do they come from, and how do they fulfill the job descriptions of their governing sequences?

Interactor as a biological idea was coined by David Hull, who considers the individual organism, the biological phenotype, to be the quintessential example.5 But he also realizes that his idea can be applied much more broadly.6 “Interactor is defined with sufficient generality that it is not necessarily limited to one common-sense level of organization,” Hull writes. “Certainly organisms are paradigm interactors, but entities at other levels of the traditional organizational hierarchy can also function as interactors” (emphasis his).7 In this book we will follow Hull’s lead and treat interactor as a fundamental concept with wide applicability.

Interactors are constraints, but far more specialized and discriminating than tabletops or inclined planes. They are constructed or configured to respond in improbable ways to patterns in their environment and, if they are good at what they do, they increase the likelihood that their corresponding sequences will be replicated to construct or configure future interactors. An interactor can be anything from a single cell to a brain surgeon to a criminal justice system. Each links salient features to appropriate responses.
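Sterelny's job description, linking the registration of a salient feature to an appropriate response, can be caricatured as a tiny class. The names and the dispatch scheme are invented for illustration, not drawn from the book.

```python
class Interactor:
    """A physical agent configured to couple perception to behavior."""

    def __init__(self):
        self.repertoire = {}  # salient feature -> appropriate response

    def configure(self, feature, response):
        # A sequence like "please pass the pepper" configures, rather
        # than constructs, a general-purpose interactor.
        self.repertoire[feature] = response

    def register(self, feature):
        # Classification and decision: respond only to salient features.
        action = self.repertoire.get(feature)
        return action() if action else None

peter = Interactor()
peter.configure("please pass the pepper", lambda: "pepper passed")
outcome = peter.register("please pass the pepper")  # "pepper passed"
ignored = peter.register("boring dinner anecdote")  # None: a don't-care
```

Everything outside the repertoire is a don't-care condition; only the registered feature is coupled to a response.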
Setting whole organisms aside, what is the simplest possible example of an interactor? Within the cell, the fundamental unit of interaction is the protein molecule whose construction is constrained by the DNA sequence of its gene. “Proteins generate most of the selectable traits in contemporary organisms,” write biochemist Steven Benner and colleagues, “from structure to motion to catalysis.”8 The catalytic protein known as an enzyme is the most primitive mechanism that couples perception to behavior.

In his book The Selfish Gene, Dawkins promulgates a gene-centric view of natural selection, looking at the evolution in the living world from the perspective of the individual gene rather than the animal.9 I am proposing a complementary enzyme-centric view of interaction, looking at perception and behavior from the perspective of the individual enzyme molecule. “We could call the enzymes the ‘verbs’ of the molecular language,” say Nobel laureate chemist Manfred Eigen and colleague Ruthild Winkler, “because they convey what is to be done.”10

Before we get started on the micro scale, however, remember that our macro-scale dining companion Peter is also an interactor. Interactors are physical entities that perform the jobs described by sequences, that link the registration of a salient feature to an appropriate response. Passing the pepper is such a job. Unlike a gene, however, “please pass the pepper” does not so much construct a pepper-passing interactor as configure a general-purpose interactor who is already present. We will revisit the distinction between construction and configuration in the chapters ahead.
3.2 The Folding Transformation

How do enzyme interactors get themselves constructed in the first place? Many wonders of our literate technological civilization, from snowmobiles to DNA sequencers to smartphones, are manufactured on industrial assembly lines. Workers and robots start with a basic chassis and add parts and subassemblies until the finished product is ready to ship. The assembly line revolutionized manufacturing and gave rise to the world we know today. We take all of this for granted; it is hard to imagine any other way to build things on a large scale. But how else might you do it?

Well, let’s ask a cell, which does something distinctively different. Rather than follow a step-by-step procedure like an assembly line, the cell employs a dynamic and nearly simultaneous procedure. Amino acids are lined up in a one-dimensional sequence that spontaneously folds itself into a functioning protein without further instruction. “It is as if we could design any machine,” says Howard Pattee, “so that it could be assembled simply by hooking the parts together into a chain, and then have the chain spontaneously form itself into a functioning mechanism.”11

A Boeing 747 contains six million parts. Imagine that building a 747 entailed hooking the parts together in a precise one-dimensional chain that
The Grammar of Interaction
automatically folded itself into a functioning jetliner in a fraction of a second. It sounds absurd, but this is how cells build their interactors—and not just one, but thousands of different enzymes are built like this. Manufacturing a 747 this way would require a battalion of aerospace engineers to make certain the sequence of parts was exactly right because any out-of-sequence part would most likely produce a misfolded structure that could not fly. (Or, rarely, such a mutation might produce a better airplane.)12

Proteins are the products of genetic transcription and translation in the cell. Each protein is initially laid out as a one-dimensional sequence of hundreds of amino acids called the primary structure. Once this sequence has been assembled, the primary structure spontaneously folds into a three-dimensional formation. In a flash, the system pivots from rate independence to rate dependence, transforming the protein, Pattee writes, “from the described object to the functioning object.”13 Because of its real-time scale and complexity, the folding transformation is one of the most mysterious processes in biology and one of the most difficult modeling and simulation problems in computer science.14

Proteins are so big that when first studied in the late 19th and early 20th centuries, many scientists refused to believe any molecule could be that large. A typical protein comprises hundreds of amino acids and thousands of individual atoms, and each amino acid in the protein structure has its own electrochemical properties. Of the 20 kinds of amino acids found in proteins, some are hydrophilic, meaning they have an affinity for water; others are hydrophobic, meaning they avoid it. The interplay of these properties is largely responsible for the protein’s 3-D structure.
When the one-dimensional amino acid sequence has been laid out, it is released into the cell’s watery cytoplasm and folding begins automatically, with each amino acid simultaneously exerting forces on all the others. “Amino acids in widely separated positions along the linear protein chain form oily inclusions to avoid interacting with water,” writes biochemist Charles Carter. “These movements eventually lead to more specific packing arrangements that, in turn, order the remaining chain.”15 As the hydrophilic amino acids migrate toward the water at the surface and hydrophobic amino acids try to escape the water and move toward the center, each incremental movement redefines the complex dynamic relationship among all of the hundreds of amino acids. After a brief time, the molecule stabilizes into its final three-dimensional, or tertiary, structure.16 The folding transformation is fundamental to the living world; it is how genotype becomes phenotype. It entails a pivot from a rate-independent, one-dimensional pattern to a rate-dependent, three-dimensional molecular machine. The process is completely lawful, yet it somehow seems magical when we witness an abstract sequential pattern translating itself into real-world function.
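The hydrophobic/hydrophilic interplay described above can be sketched in a few lines of code. This is a toy illustration, not a folding simulation: the one-letter residue codes are standard biochemistry, but the membership of the hydrophobic set is only approximate and the example sequence is invented.

```python
# Toy sketch of the first ingredient of folding: classifying each
# amino acid in a primary structure as hydrophobic (H) or hydrophilic (P).
# The one-letter residue codes are standard; the hydrophobic set is
# approximate and the example sequence is invented.

HYDROPHOBIC = set("AVLIMFWPG")   # residues that tend to avoid water

def hydropathy_profile(sequence):
    """Mark each residue H (water-avoiding) or P (water-loving)."""
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in sequence)

primary_structure = "MKTAYIAKQR"              # invented ten-residue chain
print(hydropathy_profile(primary_structure))  # HPPHPHHPPP
```

In a real protein, it is the migration of the H residues toward the oily core and the P residues toward the watery surface that drives the collapse into tertiary structure.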
Complex as it may be, folding is governed by physical law. As a result, each unique pattern of amino acid sequence folds into the same shape every time. It is this reliability of the folding transformation that makes life possible. To be useful a protein must always fold the same way. Biochemist Christian Anfinsen won the Nobel Prize for demonstrating the lawful “connection between the amino acid sequence and the biologically active conformation.”17 Michael Polanyi would say that the one-dimensional pattern of DNA in the gene imposes a boundary condition on the three-dimensional shape of the protein. The DNA sequence is transcribed into a messenger RNA sequence, which is then translated into a chain of amino acids, which then fold lawfully into the functioning protein. Each possible gene sequence gives rise to one and only one three-dimensional structure. If the sequence of DNA in the gene is a job description, then the folded protein is just the interactor to get the job done. Compare this with scrunching up a one-dimensional length of cellophane tape into a three-dimensional structure. A cellophane “protein” retains its 3-D shape because sticky parts at different points on the tape adhere to one another; they form bonds of glue. However, a given length of cellophane tape does not fold into the same structure every time, nor does it fold itself spontaneously (although sometimes it may seem to). Proteins, of course, are not held together by bonds of glue. Rather, they are formed by two different kinds of molecular bonds. The one-dimensional sequence of amino acids, the primary structure, is held together by what are called peptide bonds between adjacent amino acids. These are strong bonds, like the forces holding the length of cellophane tape together. The bonds that form to maintain the folded protein in its three-dimensional shape are called hydrogen bonds. These are relatively weak, like the points where adjacent bits of tape stick to one another. 
“To describe folding in any structure requires two types of bonds,” writes Pattee, “strong bonds that preserve the passive topological structure of what is folded, and weaker bonds that acting together hold the active folded structure in place.”18 Leibniz would recognize the distinction between strong and weak bonds. The integrity of the strong peptide bond means folding has its limits; the weak hydrogen bonds can incline but cannot necessitate. They give the protein its three-dimensional shape, but only up to a point. Even the strongest combination of hydrogen bonds lacks the strength to rupture a peptide bond of the primary structure. “The only way to distinguish the ‘folding’ from what is ‘being folded,’ is by distinguishing the strong bonds from the weak bonds,” Pattee says. “Folding would have no meaning if all bonds were equivalent types.”19 Although it suffices for the cell, a gene contains very little information compared with the complexity of the finished protein; this is why modeling the folding transformation is such a challenge. Imagine if the gene had to contain enough detail to pinpoint the three-dimensional coordinates of every atom in
the folded protein. The resulting volume of data would render the system nonfunctional. As Sydney Brenner says, you pay for specificity with more sequence information.20 But in the cell, information storage is efficient. The gene determines the primary amino acid sequence and delegates the folding to the laws of physics; no further instruction is needed. “Some hundreds of bits of genetic information,” writes Pattee, specify “a highly coordinated structure, with over 20,000 atomic degrees of freedom.”21 Folding is the cell’s distinctive approach to giving symbolic information control of a physical system. But how universal is the folding transformation? Is it a principle that can be generalized, or just an idiosyncratic behavior of cells? What about other kinds of sequences, like spoken utterances or written texts? Is folding a good model for how these sequences take control?22 Clearly, when you ask Peter to pass the pepper, your sequences guide the folding of his body into a new shape. Whether folding is a good model for how the brain responds to one-dimensional patterns remains an open question.
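Brenner's point that specificity is paid for in sequence information can be made concrete with back-of-envelope arithmetic. Pattee's figure of 20,000 atomic degrees of freedom is from the text; the 300-residue protein length and 32-bit coordinate precision are illustrative assumptions of mine, not figures from the book.

```python
import math

# Back-of-envelope comparison of how much information the gene supplies
# versus how much an explicit atom-by-atom description would need.
# Pattee's 20,000 degrees of freedom is from the text; the 300-residue
# length and 32-bit precision are illustrative assumptions.

residues = 300
bits_per_residue = math.log2(20)            # 20 amino acids, ~4.3 bits each
sequence_bits = residues * bits_per_residue

degrees_of_freedom = 20_000
coordinate_bits = degrees_of_freedom * 32   # if every coordinate were stored

print(f"sequence description: {sequence_bits:,.0f} bits")
print(f"explicit coordinates: {coordinate_bits:,} bits")
print(f"delegating folding to physics saves ~{coordinate_bits / sequence_bits:,.0f}x")
```

Under these assumptions the gene gets away with roughly a thousandth of the data an explicit blueprint would require, which is the sense in which the cell "delegates the folding to the laws of physics."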
3.3 How the Improbable Becomes Probable

Our discussion has taken us from the one-dimensional pattern of a gene sequence to the complex three-dimensional shape of a protein molecule. Now the fun really begins. This is because the function of a protein is determined by its shape. The surface of a protein contains many intricate nooks and crannies, and these microscopic architectural details determine how it interacts with other molecules, including other proteins. Says Daniel Dennett: “Shape is destiny in the world of macromolecules.”23 Some proteins provide structure by building muscle fibers or controlling the flow of molecules through cell membranes. Some, like the silk spiders weave into webs, provide external structure. Other proteins like hormones and neurotransmitters allow cells to constrain the activity of other cells, what is called cell signaling. Still other proteins are the antibodies that battle parasites and infectious agents by binding to invading antigens. At the center of our discussion, however, are the catalytic proteins known as enzymes, which coordinate the metabolism of the cell.24

Science journalist Charles Mann compares a catalyst to a jaywalker who causes two automobiles to collide but walks away unscathed, ready to cause more mayhem.25 In other words, a catalyst is any substance that speeds up a chemical reaction without being consumed in the process. After the catalyzed reaction has run its course, the catalyst remains available for further reactions. As a result, only a small amount of catalyst is needed to catalyze large amounts of reactants. Catalysts share Hockett’s property of specialization: the energy involved in the reaction is much greater than the energy supplied by the catalyst. About 90 percent of the reactions in modern industrial chemistry are constrained by some kind of catalyst.26
In the cell, a single enzyme molecule can catalyze a reaction many times, which means the cell does not have to waste energy manufacturing a brand-new enzyme each time it needs the reaction. An enzyme is like an assembly line worker whose job is to tighten the same bolt on the same chassis over and over all day long. The iterative steps of catalyze-reset-repeat are common throughout the living world and human culture. Catalytic enzymes are remarkable molecules, but they are not supernatural miracle workers; catalysts cannot force reactions that are contrary to physical law. “No selective pressure, however strong, could build an enzyme able to activate a chemically impossible reaction,” say Jacques Monod and colleagues.27 All enzymes can do is speed up reactions that are possible but otherwise take place much more slowly. They incline, but they cannot necessitate. If a reaction can happen, a catalyst can arrange for it to happen more quickly. In the case of enzymes in the cell, this change in rate can be spectacular. A catalyst thus resides in that strange middle ground between the impossible and the certain as discussed in Chapter 2. It cannot compel an impossible reaction, but it can make a possible reaction far more likely, though not quite certain.

If iron is exposed to oxygen in the air it will slowly rust. This is not an instantaneous process, but it is a reliable one, and it proceeds at a known rate. Eventually all of the iron will oxidize. Chemists describe this rate in terms of the half-life of a reaction. This is the length of time after which, on average, half of a substance has undergone the reaction, with the remaining half still to go. Many biochemical reactions are agonizingly slow, with half-lives measured in geological time. For example, a reaction called decarboxylation of a molecule known as OMP has a half-life of 78 million years.
Had you dodged the dinosaurs during the Cretaceous Period 78 million years ago and mixed two milligrams of OMP in water, today there would be one milligram left, the other milligram having “spontaneously” decarboxylated. However, if you add a tiny bit of the catalytic enzyme OMP decarboxylase to the solution, the half-life of the reaction shrinks to less than 20 milliseconds!28 The presence of the enzyme cuts the reaction half-life from a geological time scale to a tiny fraction of a second. If you are keeping score at home, this is a speed-up by a factor of 10^17, or 100 quadrillion.

The thousands of chemical reactions in a living cell have a wide range of half-lives, from less than one minute on the short end to more than two billion years on the long end.29 By deploying enzymes, the cell can expand or compress all of these half-lives until they are more or less the same. The army of enzymes in the cell has evolved so that, however long or short their half-lives, all reactions converge on a narrow range, around 10 milliseconds.30 By squeezing every reaction into a tight window of time, the cell can synchronize its metabolic business. It is the cellular version of just-in-time delivery.

If a reaction is possible, a catalyst can make it happen that much sooner. Chemists speak of catalytic action as a change in reaction rate, but we can also
think of it as a change in probability. To accelerate a reaction is to make it more likely, to nudge it in the direction of certainty. Catalytic enzymes start with less probable reactions and make them more probable. Take OMP decarboxylase. With its half-life of 78 million years, the probability is minuscule that a molecule of OMP will react at any given second. But with the enzyme present, the probability approaches certainty. For all practical purposes, the fast reaction appears lawful, even though it is just a trick of natural selection. Life is nothing if not improbable. Blind physical forces are deployed in a very peculiar way. The same is true of culture. If you think back to the dinner table, it is always possible, if unlikely, that Peter might spontaneously pass the pepper to you without being asked. If you wait long enough, perhaps he will. Your request is a catalyst; it accelerates the improbable into the probable. In fact, this is one definition of communication. “At any one moment, the behavior of an individual is not a determined process,” writes ethologist Stuart Altmann. Rather, each of the responses of which the individual is capable has a certain likelihood of occurring, depending on numerous conditions, past and present. If the behavior of another individual produces any change in these likelihoods, then communication has taken place. Thus, when we say that, in a communication process, the behavior of one individual affects the behavior of another, we mean that it changes the probability distribution of the behavior of the other.31 As always, we must take care in distinguishing the improbable from the impossible. Violating the laws of physics is impossible, but anything consistent with those laws is still possible, no matter how improbable. 
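Both framings of catalysis, as a change in rate and as a change in probability, can be checked with a few lines of arithmetic. This sketch assumes only the two half-lives given in the text; the one-second observation window is an arbitrary choice for illustration.

```python
import math

# Checking the OMP decarboxylase numbers. The two half-lives
# (78 million years uncatalyzed, ~20 ms catalyzed) are from the text;
# everything else is exponential-decay arithmetic.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
uncatalyzed = 78e6 * SECONDS_PER_YEAR    # half-life in seconds
catalyzed = 0.020                        # 20 milliseconds

def remaining(initial_mg, elapsed_s, half_life_s):
    """Mass left after elapsed_s, halving once per half-life."""
    return initial_mg * 0.5 ** (elapsed_s / half_life_s)

def reaction_probability(window_s, half_life_s):
    """Chance one molecule reacts within window_s: 1 - 0.5**(t/T)."""
    # math.expm1 keeps the result accurate when the probability is tiny
    return -math.expm1((window_s / half_life_s) * math.log(0.5))

# Two milligrams mixed during the Cretaceous, measured today:
print(remaining(2.0, 78e6 * SECONDS_PER_YEAR, uncatalyzed))        # 1.0 (mg)

# The speed-up is the ratio of the two half-lives:
print(f"speed-up ~ 10^{math.log10(uncatalyzed / catalyzed):.0f}")  # 10^17

# The same change, restated as a per-second probability:
print(reaction_probability(1.0, uncatalyzed))   # minuscule, ~2.8e-16
print(reaction_probability(1.0, catalyzed))     # ~1.0, near certainty
```

The last two lines are the probabilistic restatement: the enzyme moves a given molecule's chance of reacting in any one second from effectively zero to effectively one.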
“A complex thing is a statistically improbable thing,” says Dawkins, “something with a very low a priori likelihood of coming into being.”32 Could a microwave oven have spontaneously appeared without human intervention? Unlikely. But can we say the unplanned emergence of a microwave oven is impossible? No, we cannot. “It is clear to everybody,” writes Max Planck, “that there must be an unfathomable gulf between a probability, however small, and an absolute impossibility.”33 Inclining is one thing; necessitating is another. Our catalytic human technologies increase the probability of a microwave oven but cannot guarantee it. Like enzymes, we speed up the necessary reactions. “In humans a restructuring of the brain has acted like a catalyst,” writes Terrence Deacon, “making the immensely improbable nearly inevitable.”34
3.4 Molecular Pattern Recognition

Like the distinction between wolves and rocks, the difference between biocatalytic enzymes in the cell and the ordinary catalysts used in industrial chemistry
is specificity. Enzymes are special-purpose interactors, most of which evolved to facilitate a single reaction unique to each enzyme. Since thousands of reactions are needed to make life go, it follows that a cell needs many different kinds of enzymes to manage its metabolic affairs. In a human cell about 10,000 genes are being transcribed and translated into enzymes and other proteins at any given time. How does an enzyme do its work? How does it perform with such specificity? It all comes down to the details of its shape, both geometrical and electrical. The shape of the enzyme fits the shape of the molecule on which it acts, which is called its substrate. (The long-lived molecule OMP is the substrate of the enzyme OMP decarboxylase.) The nooks and crannies of the enzyme’s surface, including the distribution of electric charge, are complementary to the physical and electrical features of the substrate. How enzymes and substrates fit together so precisely was first described as a lock-and-key mechanism by biochemist Emil Fischer in 1894.35 Fischer’s metaphor was updated by biochemist Daniel Koshland in 1958, when it was discovered that the substrate-key can cause a slight change in the shape of the enzyme-lock.36 This he called induced-fit, and it holds not only for enzymes and their substrates but also for cell signaling molecules and their receptors and for antibodies and their antigens. A cell is a busy, crowded place, with molecules constantly dancing around and colliding with one another—huge molecules like enzymes and the many small molecules that enzymes catalyze. The majority of these collisions are exactly that, ordinary physical encounters between molecules that are of no more interest than encounters between rocks. But occasionally an enzyme collides with its substrate and the result is less like granite meets sandstone and more like wolf meets moose. How does the enzyme make its substrate or not-substrate decision? 
Somehow the enzyme must separate signal from noise and ignore all of the other shapes in its molecular environment. Enzymes “are seen as successful,” writes biochemist Michael Page, “if they can recognize and select their faithful partner when confronted with a host of enticing infidels.”37 This is the simplest case of pattern recognition or perception or classification in the living world. Like a lock and key, some feature of the substrate’s shape precisely fits a corresponding feature on the enzyme, which is called the binding site. “Even though most of the molecules colliding with the enzyme are the wrong ones and even though most of the collisions of the right one are in the wrong place,” says Brenner, “the number of random collisions is so large that a given substrate will find its binding site on the enzyme with reasonable frequency.”38 An enzyme recognizing its substrate in a population of other molecules is not like you or me picking a suspect out of a police lineup. Binding is as intimate as it sounds, a dance in which enzyme and substrate fit together with precision, each uniquely matched to the other.39 Their fit is complementary; the shape of the enzyme fitting the substrate and the shape of the substrate
fitting the enzyme. But binding alone accomplishes nothing useful; molecular recognition of a salient feature must lead to some appropriate response, some catalysis. This catalytic behavior is performed at the enzyme’s active site. This is the business end of the enzyme, where molecular manipulation takes place. An enzyme’s biocatalytic function is to create or break a chemical bond. Perhaps it clips off an atom or a group of atoms or attaches an atom or group of atoms. Whatever its mechanism, the enzyme transforms the substrate X into a different molecule Y. Once this transformation is complete, the enzyme releases the modified substrate from its grip; it is then ready to embrace another substrate molecule and perform the same manipulation.

This is an oversimplification of both biocatalytic manipulation and the enzyme-substrate relationship. It is also the simplest case, in which a single enzyme recognizes a single substrate and executes a single catalysis. Two-thirds of known enzymes are of this kind; they do one thing, they do it well, and they do it every time; catalyze-reset-repeat. They exhibit no variation either in affinity for their substrate or in manipulative power. As for the remaining one-third, they are called allosteric enzymes, and they are far more flexible because they can change their shape. These are molecular constraints that are not only constructed but can also be configured. We will meet them in Chapter 5.
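The bind-catalyze-release cycle of these simplest, single-substrate enzymes can be caricatured in code. Molecules here are strings, and "recognition" is an exact match standing in for shape complementarity, so this is a cartoon rather than biochemistry, though OMP decarboxylase really does convert OMP into UMP.

```python
import random

# A cartoon of catalyze-reset-repeat. Molecules are strings and
# "recognition" is an exact match standing in for shape complementarity.
# (OMP decarboxylase really does convert OMP to UMP; the random "soup"
# of other molecules is purely illustrative.)

class Enzyme:
    def __init__(self, substrate, product):
        self.substrate = substrate   # the one shape its binding site fits
        self.product = product       # what its active site turns it into

    def collide(self, molecule):
        if molecule != self.substrate:
            return molecule          # wrong shape: an ordinary collision
        return self.product          # bind, catalyze, release, then reset

random.seed(1)
omp_decarboxylase = Enzyme("OMP", "UMP")
soup = [random.choice(["OMP", "ATP", "H2O", "glucose"]) for _ in range(8)]
print([omp_decarboxylase.collide(m) for m in soup])
```

Every OMP in the soup comes out as UMP while the "enticing infidels" bounce off unchanged, and the enzyme object itself is untouched, ready for the next collision.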
3.5 Enzymes, Robots, and Perception

Perhaps we should think of enzymes as little robots. Protein biochemists Charles Tanford and Jacqueline Reynolds seem to think so, as they titled their book on the history of proteins Nature’s Robots.40 “For every imaginable task in a living organism, for every little step in every imaginable task, there is a protein designed to carry it out,” they write. “The common feature is that proteins are in control and know what to do without being told by the conscious mind.” Molecular biologist Craig Venter concurs: “All living cells run on DNA software, which directs hundreds to thousands of protein robots.”41

FIGURE 3.1 Enzyme molecules are typically much larger than their substrates. Here the substrate is recognized by the enzyme’s active site and binds to form the enzyme-substrate complex as the first step in catalysis. Like a gem cutter precisely cleaving a stone, the enzyme breaks a chemical bond and releases two independent molecules as products.
Source: National Human Genome Research Institute (www.genome.gov/genetics-glossary/Enzyme?id=58)

But how appropriate is this robotic metaphor? On the one hand, it is appropriate because, like enzymes, robots are interactors constructed to perform a task over and over again without fatigue. We use robotic to describe repetitive physical jobs like assembly line work, kitchen prep, or setting fence posts. This is what enzymes do: they bind, catalyze, reset, and repeat for their entire molecular lifetimes. The robot description is also inappropriate, however, because the mechanisms by which enzymes and robots perform their tasks are distinctly different.

Robots constructed by humans have three connected elements. First is a sensor to monitor some salient feature of the world: light, shape, texture, sound, temperature, or a combination. Second is an effector to perform an appropriate action: grasping, swiveling, rolling, welding, cutting, or a combination. Between a robot’s sensors and effectors is a third element, a computer to convert the rate-dependent analog signals from the sensors into rate-independent digital sequences, process those sequences, and issue instructions to the effectors, which translate the one-dimensional instructions into three-dimensional motion.
Traditional information processing models of human thought use this input-processing-output paradigm.42 It is sometimes called the sandwich model of cognition because of the layer of processing sandwiched between input and output.43 A robot is a digital computer with rate-dependent inputs and outputs, and lots of rate-independent processing in between.44 Like robots, enzymes have input sensors (binding sites) and output effectors (active sites). What they lack is a digital computer in between. Enzymatic function is the rate-dependent result of the three-dimensional structure of the molecule itself; no sequence processing between input and output is required. Although the structure of the enzyme is governed by sequential patterns in the gene, once the enzyme has folded it leaves the world of sequences behind and becomes a dynamic machine, devoid of computation. This leads to a very big question. Are brains and nervous systems more like robotic interactors or more like enzymatic interactors? In other words, does intelligent behavior require sequential information processing to mediate between perception and behavior, like a robot, or is it unmediated and dynamic, like an enzyme?45 This question is beyond the scope of the book, but it does serve to introduce perceptual psychologist James Gibson, whose idea of an affordance will be crucial to explaining how sequences function. Historically, most perception researchers have followed the robotic approach; information processing is a necessary step between perception and behavior.
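The contrast between the two architectures can be put schematically. Both functions below are invented illustrations: the threshold, signal values, and action names stand in for real sensors and effectors, and no claim is made about how actual robots or enzymes compute.

```python
# Schematic contrast between the "sandwich" architecture and direct
# coupling. Everything here is an invented illustration: the threshold,
# the signal values, and the action names stand in for real hardware.

def robot_interactor(analog_signal):
    digital = [round(sample) for sample in analog_signal]  # sensor: digitize
    decision = sum(digital) > 2        # computer: rate-independent processing
    return "grasp" if decision else "idle"  # effector: back to 3-D motion

def enzyme_interactor(shape):
    # No processing layer in between: recognition is the response.
    return "catalyze" if shape == "substrate" else "ignore"

print(robot_interactor([0.9, 1.2, 1.1]))   # grasp
print(enzyme_interactor("substrate"))      # catalyze
```

The robot routes everything through a sequence-processing middle layer; the enzyme's "decision" is simply the dynamics of its folded shape, with no sequences involved at run time.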
According to this account, when a wolf spies a moose, photons coming from the direction of the moose fall on the wolf’s retina and induce neurons to fire. The wolf’s brain then processes and transforms these millions of inputs to compute a mental representation of the moose. Only then can we say the wolf “saw” the moose. This is the way most robotic vision systems work.46

Coming down on the side of the enzyme, however, is Gibson, who reasons that light reflecting off the surfaces of the environment and falling on the eye—the optic array, as he calls it—already contains information specific to the structure of that environment. Light is more than just a torrent of photons; it conveys information about its sources, the surfaces from which it has been reflected. Given the laws of optics, there is no other alternative. The eye evolved to allow animals to extract the salient features of their environments from the light itself. The neural and motor pathways that connect perception and action within an animal also loop through the world. For Gibson, Edward Reed writes, “the problem of vision became one of explaining what kind of biological system can sample an optic array in order to use the information it contains.”47

By Gibson’s account, light reflecting off the moose has a structure specific to the moose; the texture, color, and angles of the surface of the moose shape the light. Just as the binding site of an enzyme evolved to recognize its substrate, the perceptual system of the wolf evolved to recognize structure in the light that specifies the moose.
“It is easy to imagine that evolutionary processes would typically prefer careful perception to complex cognition,” write psychologists Rob Withagen and Anthony Chemero, “especially when it comes to vital and time-sensitive activities such as food finding, mate selection, and predator avoidance.”48 The wolf’s task, argues Gibson, is to pick up salient patterns that are already there; it perceives the moose directly, without the need for intermediate layers of computation.
3.6 All That the Environment Affords

In order to include the environment itself as part of the perceptual system, Gibson develops an idea that turns out to be central to explaining how systems of sequences work, and that is the affordance.49 Interactors cannot live by perception alone; an enzyme that binds without catalyzing is of no use, and a wolf that spies a moose but does nothing about it will not live long. Perception may let interactors identify salient features of their world, but the interactors also need to effect appropriate responses, to catalyze the substrate or subdue and eat the moose. For Gibson, perception is all about its adaptive ecological value and the field he founded became known as ecological psychology. The organism’s environment is filled with things to be perceived: things to eat, things to climb, things to run away from, things to hide under, things to mate with, etc. It is the functional relevance of these objects of perception that answers the question of why
perception evolved to begin with. We perceive things and events in order to eat them, climb them, run away from them, etc. “Perception and action are intimately linked,” write psychologist Michael Tucker and neuroscientist Rob Ellis. “Our decisions to act are not made in a vacuum but are informed by the possibilities inherent in any visual scene. In this sense, vision is important for providing information about what actions are possible.”50 Whether enzyme or wolf or human, an interactor’s perception of the environment is linked to its adaptive behavior in that environment. What is perceived is not just the object or event itself but also its meaning or utility with respect to the perceiver, what it affords. To make his point clear, Gibson adopts the term affordance to describe the perceivable features of the environment in functional terms. “The affordances of the environment are what it offers the animal,” he writes, “what it provides or furnishes, either for good or ill. . . . It implies the complementarity of the animal and the environment” (emphasis his).51 When you and I look at the world we see not just objects and events, but the specific potentials for action afforded by those objects and events. “Affordances write perception in the language of action,” say psychologists Claire Michaels and Claudia Carello.52 To exploit what the world offers, we classify the features of the environment that have value for us, either positive or negative, and respond appropriately. The affordance is the environmental bridge linking perception and behavior in an interactor. This is why perception evolved. “This is the innovation of affordances,” write Michaels and Carello. That chairs afford sitting and cliffs afford avoiding is news to no one; but for Gibson, it is the affordance that is perceived. In other words, an animal perceives what behaviors can be entered into with respect to the environment. 
When perception is interpreted in this way, we would say that humans do not perceive chairs, pencils, and doughnuts; they perceive places to sit, objects with which to write, and things to eat (emphasis theirs).53 Perception is ecologically relevant because interactors perceive not just objects and events but the functional value of those objects and events. “Perception and motor action do not form two discrete categories,” psychologist Louise Barrett writes, “but instead they work together as a single tightly coordinated, fully integrated unit to detect and exploit affordances, and so produce a highly specific adaptive behavior.”54 Affordances act as boundary conditions on behavior and, as we shall see, they are strongly connected to the functionality of sequences. Affordances are not the same for everyone at every time. For humans, a cobra affords deadly danger; for a mongoose it affords a meal. A truck affords mobility only to those animals who know how to drive (or who can stow away
in the chassis). The nectar guide of a flower affords food only to insects with ultraviolet vision. A rabbit affords eating only to those animals quick enough to capture it. An apple tree affords fruit only if you are tall enough to reach it or patient enough to wait for it to fall. “Affordances are resources for an animal at the scale of behavior,” writes Reed. “They are embodied in the objects, events, and places surrounding each animal. Affordances are the aspects of a habitat that can serve to regulate an animal’s behavior.”55 A wolf and a moose reside in the same habitat, but they occupy different niches based on what they eat, how they move, and what sort of shelter they prefer. What a moose affords a wolf and what a wolf affords a moose are two very different things. “A species of animal is said to utilize or occupy a certain niche in the environment,” writes Gibson. “This is not quite the same as the habitat of the species; a niche refers more to how an animal lives than to where it lives. I suggest that a niche is a set of affordances” (emphasis his).56 “A niche is more than a location for the animals,” Michaels and Carello say, it reflects and supports their way of life. An animal requires a particular kind of environment and a particular environment will support only certain kinds of animals. Each implies the other. Support for a particular way of life means that the niche complements the variety of actions a species must perform; it provides the trees to climb and the bugs to eat. And that is what is meant by seeing affordances: seeing the trees to climb and the bugs to eat.57

The bottom line is that an interactor and its environment are complementary. “Just as there is no organism without an environment,” says Richard Lewontin, “there is no environment without an organism.”58 Gibson places perception in the context of the environment where it evolved and connects it to behaviors that enhance fitness.
“The niche implies a kind of animal,” he writes, “and the animal implies a kind of niche.”59
3.7 Affordances Great and Small

Gibson’s affordance concept arose in the context of visual perception. But as we have seen, perception itself is a much broader concept that includes, at the micro level, the molecular pattern recognition of enzyme binding to substrate. Enzymes are the simplest interactors in the living world, and they have their own corresponding affordances. For an enzyme, what affords binding and catalysis is its substrate; the enzyme perceives its affordance by binding to it and, once bound, the substrate affords catalytic manipulation. The complementarity of enzyme and substrate parallels the complementarity of animal and environment. To paraphrase Gibson, the enzyme implies the substrate, and the substrate implies the enzyme.
This brings us to a central theme of this book, the relationship between sequence and environment. Recall that an enzyme’s structure is the product of a series of steps that begins with a gene sequence. The sequence of DNA governs the sequence of amino acids in the enzyme. The folding of the primary structure yields the sites responsible for binding and catalytic manipulation, sites that are specific to a substrate affordance. It is this chain of description and construction that maps sequences to affordances. The one-dimensional pattern of DNA describes not only the three-dimensional enzyme but also its complementary affordance. Simply put, sequences describe the affordances of the environment, how to recognize them and what to do with them. They describe not only the salient features to be perceived by interactors but also the appropriate behaviors that result. “Please pass the pepper” describes an affordance, an environmental feature to be perceived (pepper) and a resulting behavior (passing). Your vocal sequence inclines Peter to attend to and act on the affordance. To accomplish this, the control hierarchy that is Peter depends on countless enzymes attending to and acting on their substrate affordances. The sequence-interactor-affordance relationship scales all the way from enzymes to Peter, and every level in between. But what about higher levels, like natural selection? Do affordances evolve? “There is an infinity of ways in which parts of the world can be assembled to make an environment,” says Lewontin.60 This implies an infinity of potential affordances, and an infinity of potential organisms to exploit them. Many of these potential affordances remain unperceived and unexploited. The environment surely contains latent affordances with stable features that no perceptual system has yet evolved to recognize; these features await discovery.
“For all we know,” Gibson notes, “there may be many offerings of the environment that have not been taken advantage of” (emphasis his).61 In other words, we can think of natural selection as a process of affordance discovery, exploitation, and memorialization.62 “Clearly there have been in the past ways of making a living that were unexploited and were then ‘discovered’ or ‘created’ by existing organisms,” says Lewontin.63 At the dawn of life four billion years ago, most of the world was invisible to living things, neither perceived nor exploited. Over the generations, natural selection revealed many stable attributes of the world and enabled the useful behaviors they afford. Once discovered, affordances are remembered, memorialized in one-dimensional patterns of DNA. The affordances animals take for granted today were discovered through the slow unfolding of evolution. “Some affordances of the environment are in fact very persistent, even with respect to phylogenetic time,” says Reed. “If information about affordances exists in the environment, the ability to detect and utilize that information in the regulation of behavior would tend to confer significant evolutionary advantages on animals that use those abilities relative to those that did not.”64
Humans can’t perceive everything. No interactor can. In fact, other species can discern stable features of the environment that we cannot. A good example is Earth’s magnetic field which, write zoologists Wolfgang and Roswitha Wiltschko, “represents a reliable, omnipresent source of navigational information.”65 Absent suitable instruments, it remains invisible to you and me, but the magnetic field is out there, ready for use by any organism with an appropriate perceptual system. The capacity to perceive Earth’s magnetic field is called magnetoreception. Like every perceptual system, it exploits a stable feature of the environment and reveals a novel set of affordances. “Dozens of experiments have now shown that diverse animal species, ranging from bees to salamanders to sea turtles to birds, have internal compasses,” biologists Sönke Johnsen and Kenneth Lohmann write. “Some species use their compasses to navigate entire oceans, others to find better mud just a few inches away. Certain migratory species even appear to use the geographic variations in the strength and inclination of Earth’s field to determine their position.”66 The ability to perceive and exploit Earth’s magnetic field appears to have arisen multiple times in the course of evolution. We don’t know exactly how, but we assume it evolved like vision, hearing, or any other perceptual system. The value of environmental awareness is a given, provided the organism leverages that awareness into an improvement in fitness. Earth’s magnetic field is a feature worth perceiving only if it affords adaptive behavior. Some birds today perceive the magnetic field, but the ancestors of those birds did not. Earth’s magnetic field was just as invisible to their avian forebears as it is to modern humans. Through succeeding generations, magnetoreception evolved as a perceptual system coupled to an adaptive behavior, the ability to navigate in the absence of normal visual cues.
Although we humans do not possess biological magnetoreception, we have figured out how to build compasses and other instruments to measure the magnetic field and compose similar affordances. Sequences of DNA and their protein interactors generate the variation that makes discovery of new affordances like magnetoreception possible. They then memorialize the pattern that makes perception of the new affordance available to subsequent generations. “In a very important sense,” Lewontin writes, “the physical forces of the world, insofar as they are relevant to living beings, are encoded in those beings’ genes.”67 Genes may describe enzymes, but it is equally valid to say they describe the world and its affordances. Four billion years of evolution separate the molecular affordances of genes and enzymes from the physical and social affordances of our literate technological civilization. When a moose is hit by a truck, we do not shake our heads
in dismay and say, “Stupid moose, it should not cross when the light is red.” Moose behavior is not governed by our sequences; human social affordances are invisible to the moose. However, when a human is hit by a truck, we assume either the driver or the pedestrian failed to perceive or obey the sequences that define the affordances available to walkers and vehicles. Gibson recognizes these complex, higher-level affordances. “The other animal and the other person provide mutual and reciprocal affordances at extremely high levels of behavioral complexity,” he says.68 What wolves and moose afford each other is different from what wolves afford other wolves and moose other moose, and certainly from what humans afford other humans. “At the highest level,” Gibson says, “when vocalization becomes speech and manufactured displays become images, pictures, and writing, the affordances of human behavior are staggering.”69 But no matter how abstract our literate technological civilization becomes, we must remember that it remains rooted in physical reality through our evolutionarily ancient perceptual systems. Language is grounded in perception and action.70 Human civilization is invisible in the absence of sequences; revealing its affordances requires language to describe them. Think of great literate institutions like religion, law, science, and commerce. They are impossible without language. “You can imagine a society that has a language but has no government, property, marriage, or money,” says John Searle. “But you cannot imagine a society that has a government, property, marriage, and money but no language.” 71 In particular, our civilization is impossible without the sequences of written language, which will be discussed more fully in Chapter 7. Affordances are what make salient features salient. What interactors recognize and how they respond are guided by sequences, but we could just as correctly say that the sequences are describing affordances. 
Objects and events in the world need to be perceived and acted upon, and sequences are a way to remember these affordances for future generations of interactors. As systems of sequences evolve, they explore the space of possible affordances and memorialize the useful ones.
3.8 The Eye of the Enzyme

Without good eye-hand coordination, Peter cannot pass you the pepper. Like all normally sighted humans, he uses his eyes to distinguish the pepper from other objects on the table and uses his hand to reach for it and pass it to you. Whether chopping vegetables, riding a bicycle, or picking a book from the shelf, you and I depend on eye-hand coordination. Our eyes recognize some pattern in the environment, and we link that salient feature to some appropriate behavior of our hands. True, there can be situations in which we feel our way, using our hands for both perception and movement; during a blackout Peter could still get you
the pepper, albeit less efficiently. But the opposite is never the case. No matter how good our vision, we cannot manipulate the world with our eyes; they are strictly organs of perception. For normally sighted people in everyday situations, what hands do is guided by what eyes see. We could also say that an enzyme has good eye-hand coordination. Each enzymatic interactor in the cell is constructed to recognize the shape of its substrate, and then to couple that to a specific and functional catalytic manipulation. The marvel of an enzyme is the gratuitous relationship between the pattern recognition of binding and the subsequent catalytic behavior. Such a precise and useful relationship does not have to exist—no law of nature requires it— but it exists nonetheless. Absent its substrate, an enzyme is just another protein molecule floating around in the cell; the presence of the substrate affords it the opportunity to be a catalyst, a very specific interactive constraint. But how did it get that way in the first place? How did all of these precise shapes get themselves coupled to the catalysis of specific chemical reactions? Underlying these questions is a critical yet subtle requirement for DNA sequences and their enzyme interactors. First, the DNA sequence must be able to describe the three-dimensional shape of the substrate, the salient pattern to be recognized. But it also must be able to describe a catalytic behavior, a chemical reaction. Both descriptions, eye and hand, must exist simultaneously within the one-dimensional DNA sequence. If the gene described only binding sites or only active sites, then enzymes would not be enzymes; the gene must describe both. In Chapter 1 we discussed how sequences collapse three dimensions into one. Now we see that the three dimensions are of two kinds, three dimensions of pattern recognition (binding) and three dimensions of manipulation (catalysis).
If DNA is a language, it is a language of binding and catalysis, of perception and behavior, of description and instruction. It is also a language of affordances, describing salient environmental features in terms of the appropriate behaviors that can result. In the cell, thousands of different genes guide the construction of thousands of different enzymes that recognize and respond to thousands of different affordances. Each molecular affordance is defined through a unique coupling of binding with catalysis. Over billions of years, natural selection has sorted through many of the available pairings, retaining the useful ones and discarding the rest. But the search space is very large. Natural selection may just be getting started. “It does not strain our imagination,” Pattee says, “to consider the possibility that an enzyme could be designed to recognize almost any substrate and catalyze almost any bond with almost any arbitrary correlation between the recognition and catalytic steps.” 72 This leads to an important question. Where exactly in the gene sequence do the patterns that govern shape recognition and catalytic manipulation reside? If we went looking for such patterns would we find them? Is the binding sequence
pattern over here and the catalytic pattern over there? Or are they mixed up, smeared together? Our own human language may provide a clue. Among the sequences of language, we have discrete lexical items called nouns and verbs. At their most fundamental, nouns describe objects of perception and verbs describe behaviors. If we examine a sequence of text, we can pick out the patterns that constrain perception (“pepper”) and those that constrain behavior (“pass”). According to linguists Paul Hopper and Sandra Thompson, a prototypical verb describes “a concrete, kinetic, visible, effective action” and a prototypical noun “a visible (tangible, etc.) object.” 73 Well, what about genes? Do genes also have discrete subsequences corresponding to pattern recognition and others corresponding to catalytic manipulation, the molecular equivalents of nouns and verbs? The answer is yes. Gene subsequences for binding and catalysis do exist, and they exist because enzymes themselves have a modular construction. Herbert Simon’s watchmaker would be proud. Each genetic subsequence describes a distinct three-dimensional subunit of an enzyme molecule. These are called domains or motifs. A domain can be a binding site, an active site, or some other functional region. These plug-and-play modules can be used and reused in different configurations, resulting in the great diversity of enzyme molecules. “Proteins exhibit a modular architecture with domains serving as autonomous folding units,” write computational biologist Joseph Szustakowski and colleagues. “These motifs can be used to describe a substantial portion of protein structure space and may be thought of as protein legos—basic protein parts that can be assembled in a variety of ways to construct different protein domains.” 74 Humans combine and recombine nouns and verbs in different ways to compose an open-ended variety of behavioral constraints. Most of the objects on the dinner table afford passing. 
The sequence “pepper” allows Peter to select the appropriate affordance from the set. By the same token, noun-like and verb-like subsegments of genes are mixed and matched to compose an open-ended variety of binding and catalytic couplings. “Extensive combination, mixing and modulation of existing [protein] folds has occurred during evolution to generate the multitude of functions necessary for life,” write protein biochemist Annabel Todd and colleagues.75 The DNA subsequences that correspond to these independent protein domains were dubbed genes in pieces by Nobel laureate molecular biologist Walter Gilbert.76 They code, he writes, “for useful portions of protein structure: functional regions, folding elements, domains, or subdomains—any segment that can be sorted independently during evolution.” 77 As the living world has evolved, these sequences have been shuffled, spliced, fused, duplicated, and otherwise reclassified to build new enzymes with new functions. “The huge spectrum of proteins observable today has its roots in genetic events operating on a set of ancestral genes,” write protein biochemist Andrei Lupas and
colleagues, “similar to the near endless complexity of language resulting from the operation of a set of grammatical rules on a limited vocabulary.” 78 Once natural selection has identified a useful three-dimensional domain for binding or catalysis, it can reuse that domain in a new enzyme by reusing the sequence for that domain in a new gene to compose a new molecular affordance. “Generating new chemistry does not necessarily require large leaps, such as the evolution of novel protein structures or large structural rearrangements,” write molecular biologist Nicholas Furnham and colleagues, “but can be made by small local changes e.g. [amino acid] substitutions or small insertions or deletions.” 79 Think of the rules governing this recombination as a grammar of interaction. You combine the sequences “pepper” and “pass” to construct an affordance for Peter, but you could just as easily use other sequences to construct other affordances, “salt” and “pass” or “pepper” and “refill.” In the same fashion, natural selection has combined and recombined DNA sequences describing diverse protein Legos to construct the enormous variety of enzymes found in the living world. But note that, at this point, the grammar of interaction describes only affordances in the here and now, time bound and space bound.
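The mix-and-match logic of this grammar can be caricatured in a few lines of code. This is a toy illustration only, not anything from molecular biology or from the book's sources: the module lists and the `compose_affordance` helper are invented names, standing in for the pairing of noun-like (perceptual) modules with verb-like (behavioral) modules.

```python
# Toy sketch of a "grammar of interaction" (illustrative only; the module
# names and the helper function are invented for this example). Noun-like
# modules name what to perceive; verb-like modules name what to do.
# Recombining them composes new affordances, much as exons recombine
# binding and catalytic domains into new enzymes.

PERCEPTS = ["pepper", "salt"]   # noun-like modules: salient features
ACTIONS = ["pass", "refill"]    # verb-like modules: behaviors

def compose_affordance(percept: str, action: str) -> str:
    """Couple one recognition module with one behavior module."""
    if percept not in PERCEPTS or action not in ACTIONS:
        raise ValueError("unknown module")
    return f"{action} the {percept}"

# A small vocabulary of modules yields a combinatorial set of affordances:
requests = [compose_affordance(p, a) for p in PERCEPTS for a in ACTIONS]
# requests holds "pass the pepper", "refill the pepper",
# "pass the salt", and "refill the salt".
```

The point of the sketch is only that the combinatorics live in the pairings, not in the modules themselves: adding one new module to either list multiplies the available affordances.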
3.9 Genes Go to Pieces

FIGURE 3.2 Many genes exist as disconnected sequence segments (exons) separated by noncoding segments (introns). After the gene is transcribed to messenger RNA, the spliceosome precisely snips out the introns and splices the ends of the exons together to produce the final mRNA transcript. (Source: National Human Genome Research Institute, www.genome.gov/genetics-glossary/Intron)

Literate humans spend a lot of time completing forms. A form is a pattern of sequences that constrains us to insert other sequences at specific locations. Perhaps we insert our name into an application, an answer into a test, or a
measurement into a table. We fill in the blanks. Linguists call these phrasal templates, constructions containing empty slots into which we are obliged to insert words or numbers to produce specific phrases. They allow the generation of novelty, but only within well-defined limits. In the parlor game Mad Libs, for example, players are constrained to insert a “body part” or a “color” or a “kind of spice” into a blank space. Whenever a website or app prompts us to select an item from a drop-down menu, we are using a phrasal template.80 In the cell, sequences of messenger RNA lack the convenience of drop-down menus, but in 1977 molecular biologists Richard Roberts and Phillip Sharp independently discovered that genes engage in their own form of phrasal templating. Roberts and Sharp showed that, contrary to common belief, genes do not have to be continuous; they often exist as a series of separate, disconnected sequence segments, like nouns and verbs with a bunch of other stuff in between. These split genes contain not only segments that describe protein Legos—motifs and domains—but also segments that do not describe anything, that are meaningless. The latter, called introns, are laboriously spliced out of the messenger RNA as a final step before it is translated into protein. Once the introns are gone, the remaining segments, called exons, are spliced together end-to-end to create the functioning gene transcript. This is one way the grammar of interaction is instantiated at the molecular level. In 1993, Roberts and Sharp won the Nobel Prize for their discovery. In his Nobel lecture, Sharp marvels at how much irrelevant sequence has to be thrown away: “The average cellular gene contains approximately eight introns,” he says, “and the primary transcription unit is typically four times larger than the final mRNA.”81 After all of the introns are snipped out, in other words, the useful sequence left over is only about 25 percent of the original.
That seems to be a lot of wasted sequence, but it is waste in pursuit of evolutionary, developmental, and metabolic flexibility. As editors tell us, we have a lot of wasted sequence in human language, too. Even genes need to get to the point. Split genes are the chief means by which enzyme domains and motifs are recombined during evolution. This is how Gilbert’s genes in pieces are shuffled around to build new constraints. Each exon sequence describing a particular enzyme Lego is bordered at each end by an irrelevant intron sequence. This makes it easier for natural selection to recombine useful domains into new enzymes. “Clearly the intron-exon structure of genes has been very important in the generation of new genes during evolution,” writes Sharp.82 It is fundamental to the grammar of interaction. Building new interactors to address new affordances across long spans of evolutionary time is one thing but assembling different combinations of binding and catalysis in metabolic time, during the lifetime of the cell, is quite another. The discovery that gene sequences were divided into functional exons and non-functional introns was astonishing, but the kicker came when it was further discovered that not every exon has to be used each time it is transcribed.83 If an mRNA sequence comprises exons A, B, and C and their
intervening introns, the cell is under no obligation to produce only gene ABC. It can sometimes produce gene AB, sometimes BC, or sometimes AC, ignoring B altogether. And since each exon corresponds to a different enzyme domain, AB, BC, and AC each describe enzymes with different binding and catalytic properties and different molecular affordances. “The presence of exon units which correspond to functional protein domains has been critical for the evolution of complex organisms,” says Sharp. “The ability to alternatively select different combinations of exons at the stage of RNA splicing to generate proteins with different functions is clearly critical for the viability of many vertebrate organisms.”84 The name given this is alternative splicing. In real time, the cell transcribes a sequence of mRNA, decides which exons to keep and which to throw away, cuts out and discards all unwanted sequence (introns and unused exons), splices the remaining exons together and hands it off to the ribosome for translation into protein. That is a lot of sequence processing. For example, consider the following nonsense sequence: The allium Peter ramps wolves chives attacked scallion passed shallot the onion moose garlic pepper. First, the “introns” that must be spliced out and discarded are the sequences denoting onions and their relatives: allium, ramps, chives, scallion, shallot, onion, garlic. If we remove these intron sequences we are left with: The Peter wolves attacked passed the moose pepper. By selecting certain of these “exons” and ignoring others we can assemble at least two functional alternative splices. One sequence (“the”) is used in both, but the rest are unique: Peter passed the pepper. The wolves attacked the moose. The grammar of interaction gets much of its power from alternative splicing, the cell’s capacity to describe many different affordances within a single sequence. 
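The sentence example above can be mimicked with a short sketch. This is a toy model, not real spliceosome chemistry: the `splice` function and word lists are invented for illustration, and word matching is case-sensitive.

```python
# Toy alternative-splicing sketch (illustrative only; the function and word
# lists are invented for this example). The "introns" are the onion words;
# different "exon" selections yield different functional sentences from the
# same transcript, as in alternative splicing.

TRANSCRIPT = ("The allium Peter ramps wolves chives attacked scallion "
              "passed shallot the onion moose garlic pepper").split()

INTRONS = {"allium", "ramps", "chives", "scallion", "shallot", "onion", "garlic"}

def splice(transcript, keep):
    """Remove introns, then keep only the selected exons, preserving order."""
    exons = [w for w in transcript if w not in INTRONS]
    return " ".join(w for w in exons if w in keep) + "."

# Two alternative splices of one transcript (matching is case-sensitive,
# so "the" and "The" select different copies of the article):
print(splice(TRANSCRIPT, {"Peter", "passed", "the", "pepper"}))
print(splice(TRANSCRIPT, {"The", "wolves", "attacked", "the", "moose"}))
# → Peter passed the pepper.
# → The wolves attacked the moose.
```

Note that both splices preserve the original order of the exons; the cell, likewise, selects and discards exons but never reorders them.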
The tricky question of how the cell decides which exons to combine and which to discard is a matter of regulation, which will be taken up in Chapter 5. A remarkable example of alternative splicing is found in the Drosophila (fruit fly) gene called “fruitless” (fru), which affects courtship behavior. “Male courtship in Drosophila is an elaborate ritual that involves multiple sensory inputs and complex motor outputs,” write molecular geneticists Ebru Demir and Barry Dickson, who discovered the effect. “An obvious but nonetheless remarkable aspect of this behavior is that mature males court only females, never other males, whereas females do not court at all.”85
The fru gene is present in both males and females, but it undergoes alternative splicing, with males and females having distinct splicing patterns. Male fruit flies and female fruit flies start with the same mRNA transcript, but each is assigned a different combination of exons and their corresponding Legos. Flipping the splice pattern changes the courtship behavior of both males and females. “Forcing female splicing in the male results in a loss of male courtship behavior and orientation,” report Demir and Dickson, “confirming that male specific splicing of fru is indeed essential for male behavior. More dramatically, females in which fru is spliced in the male mode behave as if they were males: they court other females. Thus, male-specific splicing of fru is both necessary and sufficient to specify male courtship behavior and sexual orientation.”86 If you splice the fru gene one way, you get courting behavior, regardless of whether the fly is male or female. Selecting a different set of fru exons from the same transcript eliminates courting behavior, again regardless of the sex of the fly. “Male splicing is indeed necessary for male courtship behavior and is also sufficient to generate male behavior by an otherwise normal female.”87 Changing the one-dimensional pattern of exon sequences yields enormous changes in the sexual affordances and three-dimensional behavior of the fly.88 Alternative splicing is not some weird outlier; it is a common process. “Most genes encode pre-mRNAs that are alternatively spliced,” write molecular biologists Timothy Nilsen and Brenton Graveley. “95–100% of human pre-mRNAs that contain sequence corresponding to more than one exon are processed to yield multiple mRNAs.”89 This is the cell’s version of Mad Libs, and it needs a lot of molecular machinery to pull it off.
The central actor is an enormous molecular complex called a spliceosome, which comprises as many as 300 distinct proteins and five RNAs.90 The spliceosome is considerably larger than even the ribosome.91 The spliceosome locates a short marker sequence at each intron-exon boundary, clips out the introns, and splices together the remaining exons. The short sequence snippets at exon boundaries are ubiquitous and have been assiduously conserved over evolutionary time. “Despite the large potential for errors, the splicing process appears to occur with very high fidelity,” write computational biologists Zefeng Wang and Christopher Burge.92 A complex set of regulatory enzymes guides selection of the exons needed to assemble various splicing configurations. It takes a village to splice a transcript. We will discuss regulatory enzymes further in Chapter 5. As an example, changes in alternative splicing account for significant differences between humans and chimpanzees. At least four percent of genes in the two species “display pronounced splicing level differences,” write molecular geneticist John Calarco and colleagues. “Alternative splicing has evolved rapidly to substantially increase the number of protein coding genes with altered patterns of regulation between humans and chimpanzees.”93 Mutations in splice sites and other splicing dysfunction also play a role in human disease.94
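The marker-finding step can be caricatured in a few lines. In real pre-mRNA, introns canonically begin with GU and end with AG (the so-called GT–AG rule); the sketch below is an invented toy that ignores branch points, regulation, and the spliceosome's fidelity checks, and simply removes every such stretch.

```python
# Toy boundary-marker splicing (illustrative only). Real splicing involves
# the spliceosome's hundreds of proteins plus extensive regulation; here an
# "intron" is just the shortest GU...AG stretch in an invented transcript.
import re

def splice_out_introns(pre_mrna: str) -> str:
    """Delete each shortest GU...AG run, keeping the flanking exons."""
    return re.sub(r"GU[ACGU]*?AG", "", pre_mrna)

# One invented pre-mRNA containing a single GU...AG intron:
mature = splice_out_introns("AUGGCCGUAAUCAGUUCGAAGCCUAA")
# mature == "AUGGCCUUCGAAGCCUAA"
```

The non-greedy `*?` quantifier matters: a greedy match would swallow everything from the first GU to the last AG, deleting exons along with introns, which is one way to see why the real machinery needs such high fidelity.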
The key point is that certain one-dimensional sequences of DNA (exons) describe certain functional three-dimensional modules of proteins (domains, motifs). Just as humans combine nouns and verbs to describe affordances, the cell constructs a variety of different proteins by rearranging the sequences describing modular domains. The flexible syntax of gene sequences, their grammar of interaction, allows cells to perceive and respond to a wider variety of internal and external affordances.
Notes
1. Woese 1973
2. Dawkins 1983
3. Sterelny 2003
4. Sequences (replicators) “provide interactors with a limiting set of potential behaviours,” writes organizational theorist Thorbjørn Knudsen (2002). In other words, they are constraints.
5. Hull introduces interactor against a slightly different background (1980), as a counterweight to Dawkins’s idea of a replicator, a generalized reproducing entity like a gene (Dawkins 1978). If a sequence is a replicator, then the mechanism the replicator uses to manipulate elements of its environment is its interactor. Hull recognizes that, like all sequences, genes are energetically inert and require a mechanism to interact with their environment. C. H. Waddington calls the equivalent entity an operator in his paper “Paradigm for an Evolutionary Process” in Volume 2 of Towards a Theoretical Biology (1969a).
6. See also Plotkin 2003.
7. Hull 1988. Hull originally calls his interactor “an entity that directly interacts as a cohesive whole with its environment in such a way that replication is differential” (1980). But he does not give his interactor a one-to-one correspondence to an individual gene, a specific DNA sequence. Hull’s original interactor is the complete phenotype, the “cohesive whole” assembled and operated by all genes of the organism. Later, Hull extends his definition to include other levels, including the individual gene.
8. Benner et al. 1999
9. Dawkins 2006
10. Eigen & Winkler 1981
11. Pattee 1980
12. Folding would provide a third option to Herbert Simon’s watchmaker. She could attach the springs, gears, etc. to each other in a sequence and they would fold into a functioning watch.
13. Pattee 1977
14. Using computer simulation to predict the three-dimensional structure of proteins from their amino acid sequences has been the focus of an ongoing computer science competition organized by the Protein Structure Prediction Center (predictioncenter.org) since 1994.
15. Carter 2016
16. In practice, folding is often helped along by special molecules called chaperones. Given plenty of space and no distractions in the environment, proteins always fold properly. But the cell is a crowded and noisy place and nearby molecules can interfere with a protein that is beginning to fold. Chaperones help the folding amino acid sequence to ignore distractions and stay on the proper trajectory (Balchin et al. 2016).
17. Anfinsen 1973
18. Pattee 2007. “In biology, weak interactions always play an important part in situations where large molecular structures are continually built up and broken down,” says Bernd-Olaf Küppers. “The stability of such structures, like that of a zip, is provided by the mutual, co-operative stabilisation between the individual bonding interactions; however the individual bonds are weak enough to be disrupted rapidly if this is needed” (2018).
19. Pattee 1977. Constraints are always weak bonds compared to the strong bonds of the systems they constrain. Culture is a weak bond compared to its underlying biology or, as biologists Charles Lumsden and Edward O. Wilson write, “genetic natural selection operates in such a way as to keep culture on a leash” (Lumsden & Wilson 1981).
20. Brenner 2009
21. Pattee 1982a
22. Pattee 1980
23. Dennett 1995
24. “Enzyme” is derived from the Greek meaning “in yeast,” a reference to its origin in studies of fermentation in the 19th century.
25. Mann 2018
26. Neither modern agriculture nor modern warfare would be possible without the large-scale production of ammonia, which uses a catalyst to combine hydrogen and nitrogen into ammonia gas. In the 1930s, catalytic cracking of petroleum made possible an enormous increase in the supply of gasoline. Today, catalytic converters clean automobile exhaust and selective catalytic reduction scrubs the gaseous output of power plants.
27. Monod et al. 1963
28. Radzicka & Wolfenden 1995
29. Stockbridge et al. 2010
30. Wolfenden 2011
31. Altmann 1967
32. Dawkins 1983
33. Planck 1925
34. Deacon 1997
35. Behr 2008
36. Koshland 1958, 1994
37. Page 1986
38. Brenner 1998
39. The functional precision of the complementary fit between enzyme and substrate is remarkable. Key molecules of the enzyme’s active site have been shown to come within less than three ångströms of their substrate targets (Benkovic & Hammes-Schiffer 2003). Three ångströms is about the diameter of a water molecule.
40. Tanford & Reynolds 2001
41. Venter 2014
42. Pylyshyn 1984
43. Hurley 1998
44. To call it a robot, does it have to contain a computer? I would argue yes. There are plenty of complex machines that do not incorporate computers, but we call them machines, or perhaps automata. The mechanism of an old-fashioned flush toilet does its work without a computer, but would we call it a robot?
45. Pattee 1982b; Cisek 1999
46. Marr 1982
47. Reed 1986. This approach has come to be known as direct perception, and its followers are known as Gibsonians. See also Wagman & Blau 2020.
48. Withagen & Chemero 2009. For a phylogenetic discussion of this point, see Cisek 2019.
49. Writes psychologist Jeffrey Wagman: “Although James J. Gibson contributed in myriad ways to experimental psychology (and beyond) over a 50-year career, the concept of ‘affordance’ is arguably his most enduring legacy and his most influential contribution” (2020). Dennett nominated affordance as his choice for a “scientific term or concept [which] ought to be more widely known” (2017).
50. Tucker & Ellis 1998
51. Gibson 1979. Gibson’s approach is reminiscent of Jakob von Uexküll, the biologist responsible for popularizing the term umwelt, meaning the totality of environmental features specifically relevant to any given animal (1957). Writes von Uexküll: “In all the objects that we have learned to use, we see the function which we perform with them as surely as we see their shape or color” (1957). For a detailed discussion of Gibson and von Uexküll, see Fultot & Turvey 2019.
52. Michaels & Carello 1981
53. Michaels & Carello 1981
54. Barrett 2011
55. Reed 1991
56. Gibson 1979. Again, Gibson’s definition of a niche echoes von Uexküll’s umwelt: “All that a subject perceives becomes his perceptual world and all that he does, his effector world. Perceptual and effector worlds together form a closed unit, the Umwelt” (1957).
57. Michaels & Carello 1981; emphasis theirs.
58. Lewontin 1991
59. Gibson 1979
60. Lewontin 1991
61. Gibson 1979
62. The universality of the affordance concept across the living world is discussed in Turvey 2019.
63. Lewontin 1978
64. Reed 1996
65. Wiltschko & Wiltschko 2005
66. Johnsen & Lohmann 2008
67. Lewontin 1991
68. Gibson 1979
69. Gibson 1979
70. Borghi et al. 2013
71. Searle 2010
72. Pattee 1972b
73. Hopper & Thompson 1984
74. Szustakowski et al. 2005
75. Todd et al. 2001
76. Gilbert 1978, 1985
77. Gilbert 1985
78. Lupas et al. 2001
79. Furnham et al. 2012
80. One of the first uses of Gutenberg’s printing press was the mass publication of fill-in-the-blank indulgences that the Roman Catholic Church could sell to raise money. Martin Luther’s indignation at these early phrasal templates helped spark the Protestant Reformation (McCloskey 2016).
81. Sharp 1994
82. Sharp 1994
83. Early et al. 1980
84. Sharp 1994
85. Demir & Dickson 2005
86. Demir & Dickson 2005
87. Demir & Dickson 2005
88. Although ancient, alternative splicing does not date to the origin of life. It is universal in multicellular organisms, but absent from bacteria and ancient archaea. “Some form of [alternative splicing] appeared relatively early in eukaryotic evolution,” write molecular biologist Manuel Irimia and colleagues, “at least in the unicellular common ancestor of plants, animals and fungi (around 1300 million years ago, quite early in the evolution of extant eukaryotes). This implies [alternative splicing] appeared before the rise of multicellular organisms, and could therefore have an important role in the biology of ancient unicellular organisms” (2007, references omitted).
89. Nilsen & Graveley 2010. Alternative splicing answers a question that has vexed scientists in this post-genomic age: Why do an estimated 24,000 protein-coding genes reside in the human genome while around 100,000 proteins are constructed from those genes? The answer is simple: because many gene transcripts describe multiple proteins (Keren et al. 2010; see also Matera & Wang 2014).
90. Nilsen 2003
91. Collins & Penny 2005
92. Wang & Burge 2008
93. Calarco et al. 2007
94. Barash et al. 2010; Padgett 2012
4 THE GRAMMAR OF EXTENSION1
4.1 Where Does the Body End (and the World Begin)?

Humans have deployed technology to become more discriminating, powerful, and flexible interactors. We extend our perceptual systems with measuring instruments, thereby increasing the range of environmental patterns we can recognize and respond to. We also extend the strength and flexibility of our hands and bodies with tools and machines that bring to bear enormous energy with great precision. And we extend the reach of our sequences with technologies like printing and the Internet. The affordances of civilization are vast, complex, and getting more so all the time. Informational and mechanical extensions of human perception and action are the foundation of civilization, but in the context of the natural world they are not so unusual. In recent decades the properties of extension have come under increased scientific scrutiny. It started in 1982, with Richard Dawkins and his influential book The Extended Phenotype.2 He calls it “probably the finest thing I shall ever write”3 and “the idea of which I am most proud.”4 Dawkins’s insight is that the interactive phenotype of an organism can be much more than just the physical structure of its body. Gene sequences guide the construction of the bodily phenotype, true, but the effects of those sequences can also reach farther out in space and time; they can be displaced, to use Charles Hockett’s term. For Dawkins, the skin does not necessarily form a fixed boundary between phenotype and environment. The phenotype can extend well beyond the skin, taking the form of environmental artifacts, or even constraints on the behavior of other organisms.
“Replicators [gene sequences] are not, of course, selected directly, but by proxy; they are judged by their phenotypic effects,” writes Dawkins.

Although for some purposes it is convenient to think of these phenotypic effects as being packaged together in discrete ‘vehicles’ [interactors] such as individual organisms, this is not fundamentally necessary. Rather, the replicator should be thought of as having extended phenotypic effects, consisting of all its effects on the world at large, not just its effects on the individual body in which it happens to be sitting.5

Consider a spider web. It extends a spider’s perceptual system by allowing her to sense the impact when potential prey flies into the web, even if the impact is some distance from the spider herself. It also extends her motor behavioral system by ensnaring the prey until she clambers over to kill and devour it. A web lets a spider capture flying insects without having to fly herself.6 The web is made of protein fibers whose construction is governed by sequences in the spider’s genome. The patterned behaviors she employs to build her web are also under genetic constraint. As a result, says Dawkins, the web is as much a part of her phenotype as her legs or eyes, “a temporary functional extension of her body, a huge extension of the effective catchment area of her predatory organs.”7 The web is a straightforward example of an extended phenotype, but what of situations where the design may be genetically governed, but the materials are not? Many animals construct extended phenotypes from stuff they find in the environment: paper wasps build nests, caddisfly larvae build protective cases made from pebbles, beavers build dams, termites build mounds, etc.
Nobel laureate ethologist Karl von Frisch calls such animal architecture “frozen behavior.”8 The civil engineering work of a beaver is orchestrated by gene sequences, and since a dam contributes to the beaver’s fitness, it meets the definition of extended phenotype. But a beaver dam differs from a spider web in four important ways. First, the dam and the pond it creates are made from materials found in the environment rather than materials produced metabolically by a beaver. Second, the dam is often the work product of several beavers, sometimes spread over more than one generation, while webs are built by individual spiders. Third, the dam can persist, while the web is transient. Subsequent generations of beavers can inherit the dam and its pond from their ancestors. Finally, the dam does not directly enhance a beaver’s perceptual system or its motor behavior; rather, it provides a more congenial set of affordances, a new and improved niche. A beaver’s perceptual and motor systems are optimized for the affordances of a watery environment, so building a dam allows those systems to flourish. Beavers perceive and escape predators more easily in a pond than on the forest floor, so they remodel their environment to play to their strengths.
Through artifacts and environmental remodeling, animals can extend their phenotypes not just beyond their bodies, but beyond their individual lifetimes. Parents can leave behind improved affordances that are then available to their progeny. When the extended phenotype of one generation becomes part of the environment of the next, this is called niche construction.9 Earthworms modify the soil; in doing so they help to improve the future habitat of other earthworms. “Contemporary earthworms are adapting to a soil environment largely constructed by their ancestors,” write evolutionary biologist Kevin Laland and colleagues.10 In this view, ecological inheritance resembles more closely “the inheritance of territory or property than it does the inheritance of genes.”11 Humans are the ultimate niche constructors. “Since technology has become cumulative, each generation inherits an increasingly customized environment from the previous generation,” says anthropologist Robert Aunger. “Indeed, contemporary Western humans live within environments that are largely the product of prior human activity.”12 Environments, I should add, which are facilitated by sequences. Animals can extend their phenotypes through construction, by building artifacts and niches, but they can also extend through configuration, by manipulating the behavior of other creatures. “An animal’s behaviour tends to maximize the survival of the genes ‘for’ that behaviour,” says Dawkins, “whether or not those genes happen to be in the body of the particular animal performing it” (emphasis his).13 Let’s look at some cases in which gene sequences in one organism impose boundary conditions on the behavior of another.
4.2 Puppet Masters and Their Puppets

Perhaps you have seen one of the gruesome and widely available videos that begins with a cricket jumping into a swimming pool.14 The hapless cricket begins to writhe, and then it becomes obvious why: a long, thin hairworm emerges from within the cricket’s body and swims away to reproduce. Like its relatives the nematodes, the hairworm is a parasite, and it spends part of its life cycle growing within the cricket. Once it matures, it has no further use for its unhappy host, so it discards the cricket and squirms away. But crickets are not fond of water, so why would one jump into the pool? It is because the parasite has evolved techniques to hack the cricket, to override its instincts and increase the probability that it will jump into water.15 How such “mind control” happens is not known, but it is a neat trick, especially since the parasite lives in the cricket’s abdomen, distant from the neurons governing the cricket’s perception and behavior. Presumably there are neuroactive chemical signals that let the worm manipulate cricket behavior, although we do not know whether they are produced by the worm itself or by the cricket at the direction of the worm.16 “If the mind is a machine, then anyone can control it,”
says ecoimmunologist Shelley Adamo, “anyone, that is, who knows the code and has access to the machinery.”17 There are trematodes that infect snails and manipulate their behavior and appearance so they are more likely to be eaten by birds, which are intermediate hosts in the life cycle of the parasite.18 Ants can be infected by a trematode that causes them to climb blades of grass and linger there, positioning them to be eaten by grazing sheep, who normally do not include ants in their diet.19 The literature of parasitology and infectious disease is replete with comparable examples, all of them, as Dawkins says, “thrilling in their dark ingenuity.”20 The gene sequences of parasites can achieve remarkable accuracy in hacking the perception and behavior of their hosts. There is a class of fungal parasites that infect worker ants in tropical and temperate forests around the world.21 Infected zombie ants attach themselves to the underside of leaves by clamping their jaws on a leaf vein in what researchers call a death grip. This holds the ant in place while the fungus kills it with chemicals. The fungus then sprouts from the ant’s feet to stitch the ant to the plant, and a fruiting body pops out of the ant’s head to disperse spores to the ground below, where they are picked up by foraging ants to complete the fungal life cycle. Accuracy comes into play because “the height, orientation and timing of the death grip can all be precisely controlled by the fungus,” says entomologist David Hughes.22 Zombie ants clamp onto leaves with a mean height of 25 ± 2 centimeters above the ground, facing north-northwest, and they all bite synchronously at solar noon. These behaviors are beyond the capacity of the fungal phenotype; they require extraordinary engagement between sequences in the fungal genome and the perceptual and motor behavioral systems of the ant. 
Adamo calls such parasites “evolution’s neurobiologists.”23 Sequences of the ant’s own genome are reclassified and superseded by the genome of the parasite. These examples of extended phenotypes are not artifacts like spider webs or beaver dams that are constructed; rather, they are behaviors that are configured. Genes in the parasite exploit the host and cause it to behave in a non-standard way, as Dawkins says, “subverting it to the benefit of the parasite in ways that arouse admiration of the subtlety, and horror at the ruthlessness, in equal measure.”24 This sets up what Hughes calls intra-organismal conflict.25 Gene sequences of host and parasite battle for control, resulting in a chimera, both genetically and behaviorally. The worm in the cricket needs water in which to reproduce, but it has no perceptual system to locate water nor any legs to get it there. No problem! All it needs is to occupy the body and configure the behavior of some unsuspecting cricket. Why go to the trouble to evolve these complex capabilities when you can borrow them from a neighbor? The same is true of the fungus in the ant. How does a fungus, which has no visual perception, pinpoint the moment of solar noon? How does it measure 25 centimeters above the ground? No problem! It can configure an ant to perform
these measurements on its behalf. The fungus has figured out how to hack the ant’s perceptual and motor systems to respond to affordances the ant would never recognize left to its own devices. The fungus knows the code and has access to the machinery.26 Dawkins classifies extended phenotypes into three broad groups. We have seen the first two, animal architecture—a construction process—and parasite manipulation of host behavior—a configuration process. The third, another configuration process, he calls action at a distance.27 At first it may not be clear how action at a distance differs from the other two. After all, doesn’t the edge of a spider’s web exist at a distance from the spider and doesn’t the worm live in the cricket’s abdomen, at a distance from the limbs it manipulates? True, but Dawkins’s distinction is that spider webs are directly connected to spiders and worms are physically present inside crickets. Action at a distance implies that extended phenotypes can also exist without an overt, continuous physical connection. The paradigm case of action at a distance is avian brood parasitism, exemplified by birds like cuckoos and cowbirds who lay their eggs in the nests of other species, such as reed warblers. They treat the host nest as a foundling hospital, expecting strangers to raise their young for them.28 Unlike the worm in the cricket or the fungus in the ant, the cuckoo chick does not live inside the body of its host; it has no direct molecular access to the host’s nervous system. Nor does it kill off its host (although it may do away with its step-sibling nest mates). But external entry points to a bird’s nervous system do exist and so, as Dawkins writes, the cuckoo chick can “rely on other media for its manipulation, for instance sound waves and light waves.” The chick “uses a supernormally bright gape to inject its control into the reed warbler’s nervous system via the eyes. 
It uses an especially loud begging cry to control the reed warbler’s nervous system via the ears. Cuckoo genes, in exerting their developmental power over host phenotypes, have to rely on action at a distance.”29 The parasite’s bright mouth and loud cry configure the host parent to divert a large fraction of foraged food to the cuckoo chick. Gapes and cries are potent and reliable substitutes for neuroactive molecules; a cuckoo chick also knows the code and has access to the machinery. Why go to the time and trouble of providing for your own offspring when you can hoodwink someone else into doing it for you?
4.3 The Affordances of Extension

If organisms can extend their perceptual systems and motor behavior beyond their bodies and into the environment, how does this affect our idea of an affordance? James Gibson’s canonical affordance is built atop the conventional conception of a phenotype bounded by the animal’s skin. Flat, solid ground of sufficient area affords standing or walking or running. A predator affords injury
or death. A potato chip affords eating. The animal perceives what the environment affords, and the affordance complements the animal at a particular place and time. “The niche implies a kind of animal, and the animal implies a kind of niche,” as Gibson says.30 In this pre-Dawkins view, inside the skin is phenotype and outside the skin is environment. But from what parasites have shown us, the environment affords behavioral opportunities to the extended phenotype that are invisible and inaccessible to the somatic phenotype. If there exist extended phenotypes, there should also be extended affordances, perceivable and actionable features of the environment specific to an animal’s extended phenotype, but not to its somatic phenotype. Gibson does not address this question in the context of affordances, but he does recognize that the boundary between an organism and its environment is fluid. “When in use, a tool is a sort of extension of the hand, almost an attachment to it or a part of the user’s own body, and thus no longer part of the environment of the user,” he says.

But when not in use, the tool is simply a detached object of the environment, graspable and portable, to be sure, but nevertheless external to the observer. This capacity to attach something to the body suggests that the boundary between the animal and the environment is not fixed at the surface of the skin but can shift.31

Philosopher Maurice Merleau-Ponty asks us to consider a blind person walking with a cane. If the blind person’s phenotype ends at his hand, then the cane is part of his environment. But Merleau-Ponty, Dawkins, Gibson, and certainly the blind person would agree that the environment begins at the tip of the cane. “The blind man’s stick has ceased to be an object for him and is no longer perceived for itself,” writes Merleau-Ponty.
“Its point has become an area of sensitivity, extending the scope and active radius of touch and providing a parallel to sight.”32 The tactile perception provided by the cane is incorporated into the blind person’s phenotype, providing information about the affordances of the environment. To a spider, a flying insect affords a meal only if the spider perceives and captures it. Most spiders lack the strength and body-eye coordination to do this directly. A flying insect is not a conventional affordance for a spider but rather an extended affordance, contingent upon construction of a web. To the worm in the cricket, a swimming pool affords reproduction only if located and entered. The worm can do neither by itself, but the cricket can perceive the water and hop in. Once sequences in the worm’s genome configure the cricket’s perception and behavior, the water becomes an affordance for the worm, an extended affordance contingent upon the availability of a cricket. To the fungus in the ant, a leafy plant affords reproduction only if the plant is perceived and climbed. Since the fungus can do neither by itself, the plant
affords nothing to its conventional phenotype. But the ant can perceive the plant, climb it, and clamp itself to the underside of a leaf. Once sequences in the fungal genome configure the ant’s perception and behavior, the plant becomes an affordance for the fungus, an extended affordance, contingent upon the availability of an ant. To the cuckoo chick, distant food affords a meal only if a reed warbler is manipulated into providing it. Since the chick cannot forage by itself, it needs to divert the food from its nestmates. Once sequences in the chick’s genome configure the warbler’s perception and behavior, distant food becomes an affordance for the chick, an extended affordance contingent upon the availability of a reed warbler nest.33 These examples are remarkable, but each has its own limitations. After all, the respective parasites and hosts are members of different species, in some cases even different phyla and different kingdoms. A parasite can configure only one host at a time and, immune systems notwithstanding, the constraint is in one direction: from parasite to host. The parasite also does not use a third organism as an intermediary to guide host behavior. However, if we relax these limitations, we will see more complex systems of extended perception, action, and affordance. What happens if the actors are members of the same species, or one actor can configure many others, or the behavioral influence is reciprocal instead of one-way, or intermediaries are used? In these cases, fixing the boundary between phenotype and affordance is much less straightforward. In particular, when members of the same species can mutually configure each other’s perception and action, the result is a social behavior, communication.
4.4 Social Constraints and Affordances

Reproductive success depends on an animal’s ability to reliably recognize members of its own species. Most young birds learn to distinguish between the equivalence classes of my-species and not-my-species through early association with parents and siblings, a form of social learning called imprinting. But what’s a young cowbird to do? Cowbirds and other brood parasites are raised by adults and share living quarters with nest mates who are not of their species. This poses two problems for the cowbird chicks. First, they must avoid imprinting on their host species. They certainly do not want to grow up singing the wrong song or trying to mate with the wrong bird. Second, even if they have avoided imprinting on their host species, once they leave the nest they still must find a way to accurately identify members of their own species in the wild. Cowbird chicks solve these twin problems through the use of what researchers have come to call a password.34 This is an innate call that automatically triggers recognition of members of their own kind. Until the chick hears the password, it is incapable of imprinting at all. But once it encounters its own species in the wild and hears the password, imprinting can begin.35 A single
sound emitted by other cowbirds triggers a cascade of permanent perceptual and behavioral changes. The password configures the young cowbird’s nervous system to enable the requisite social and reproductive affordances. It is the most important sound the young cowbird will ever hear. The animal communication literature offers many examples of behavioral manipulation among members of the same species, not to mention many theories about how best to describe and explain what is going on in these systems.36 One widely used example is the alarm-response calling system of vervet monkeys, discussed by primatologists Dorothy Cheney and Robert Seyfarth in their book How Monkeys See the World.37 Vervets use three distinctive alarm calls, each associated with one of three equivalence classes of predator, and each yielding a predator-specific evasive behavior. The three are an eagle call, a leopard call, and a snake call. Upon hearing the eagle alarm, monkeys run and hide in the bushes. Upon hearing the leopard alarm, they climb into trees. Upon hearing the snake alarm, they look around for the snake to monitor its movements. And upon hearing one of these alarms, any monkey can repeat it and thus relay it far beyond its point of origin. These calls for action have no syntax; they encapsulate perception and behavior into a single sound: “See a leopard, climb a tree.” Let’s make one monkey the lookout. When the lookout perceives a predator, it broadcasts the characteristic call and monkeys within earshot engage in the appropriate evasive behavior and perhaps also repeat the alarm. In the context of extended phenotypes, what is going on? It depends on which monkey you ask. The lookout that perceives the predator and emits the alarm is clearly manipulating the behavioral response of the other monkeys, Dawkinsian action at a distance. The lookout knows their code and has access to their machinery. 
However, at the same time the other monkeys are also exploiting the perceptual system of the lookout, using its eyes to extend their own vision. Gibson calls this second-hand perception, which directs “the sense organs of the hearer toward parts of the environment he would not otherwise perceive” and toward “parts of the larger environment that the speaker has perceived but the hearer has not.”38 Here extension is a two-way street. Even as the lookout’s eyes enhance the extended phenotypes of the other monkeys, their evasive behavior is part of the extended phenotype of the lookout. Further, the role of lookout is interchangeable; any monkey may assume the function at any time.39 For any given pair of monkeys, each can extend the perceptual system and manipulate the behavior of the other; they have reciprocal extended phenotypes. And remember that a monkey does not need to hear the alarm call directly from the lookout. Each monkey can repeat the call it hears, so the alarm can travel from monkey to monkey and configure the evasive behavior of monkeys well beyond earshot of the original lookout. In this context, Herbert Simon would say the monkeys exhibit loose horizontal coupling.
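The relay dynamics described here — one lookout’s percept broadcast as a call that configures every hearer, each of whom repeats it onward — can be sketched as a toy simulation. Everything in this sketch (the monkey names, the distances, the one-dimensional troop layout) is invented for illustration; it is not drawn from the ethological literature:

```python
from collections import deque

# Each alarm call is a holophrase: one indivisible sound that couples a
# percept (the predator class) to a fixed evasive behavior.
ALARM_BEHAVIOR = {
    "eagle": "hide in bushes",
    "leopard": "climb tree",
    "snake": "watch the snake",
}

EARSHOT = 50  # meters; an illustrative number


def relay_alarm(positions, lookout, predator):
    """Broadcast an alarm from the lookout and relay it monkey to monkey.

    positions: dict of monkey name -> position along a line (meters).
    Returns dict of monkey name -> evasive behavior performed.
    """
    behavior = ALARM_BEHAVIOR[predator]
    responded = {lookout: behavior}
    queue = deque([lookout])
    while queue:
        caller = queue.popleft()
        for monkey, pos in positions.items():
            if monkey in responded:
                continue
            if abs(pos - positions[caller]) <= EARSHOT:
                responded[monkey] = behavior  # hears the call, takes cover...
                queue.append(monkey)          # ...and repeats it onward
    return responded


troop = {"vera": 0, "vic": 40, "val": 80, "vin": 120}
result = relay_alarm(troop, lookout="vera", predator="leopard")
```

Here "vin" sits 120 meters from the lookout, beyond direct earshot, yet still climbs a tree because the call is relayed through "vic" and "val" — each monkey a node passing the constraint along, which is what makes the coupling loose and horizontal rather than routed through any central controller.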
How can we explain the extended affordances of vervet monkeys? The lookout provides every monkey with an extra set of eyes. But the other monkeys do not perceive what the lookout sees; rather, they perceive a sound that couples the perceptual system of the lookout with the evasive behavior of the monkeys on the receiving end. It is second-hand perception. Further, if an alarm call is repeated from monkey to monkey, where do we place the threat affordance? Is it the original threat in the environment as perceived by the lookout? Or is it the social affordance offered by the monkey repeating the call? “An understanding of life with one’s fellow creatures,” writes Gibson, “depends on an adequate description of what these creatures offer and then on an analysis of how these offerings are perceived.”40 Within this narrow but important context of responding to common predators, vervets create and share a distributed phenotype, a network in which each monkey is a node in the common phenotype of all. When the environment affords a threat from an eagle, leopard, or snake, perception of that affordance by any monkey allows others to perceive it vicariously and respond appropriately. At the same time, each monkey is also a node in the common distributed affordance of the others, defined with respect to a dynamic, constantly moving population of monkeys, each with a body, a set of eyes, a set of ears, and a set of vocal cords. Here the utility of extended as a concept begins to run out of steam. It works well for phenotypes and affordances in the situations originally described by Dawkins, but it brings little clarity to phenotypes that are not only extended but also reciprocal and multidimensional. For systems at this level of complexity distributed is a better description than extended. The canonical Dawkins extended phenotype, exemplified by unidirectional, one-to-one, direct parasite manipulation of host behavior, does not scale. 
However, if the unidirectional becomes reciprocal, one-to-one becomes one-to-many, and the direct becomes second-hand, then we witness the emergence of a distributed phenotype and its complementary distributed affordance, perceptual and behavioral constraint in a social network.
4.5 Situating the Grammar of Interaction

Why do vervet monkeys have three alarm calls rather than one or two or four or five? Perhaps their inventory of threat affordances is limited; there may not be much more in their environment for them to worry about. Or the number might be arbitrary, a side effect of other evolutionary processes. Or perhaps we have not waited long enough; in a few tens of thousands of years they might evolve a fourth call, maybe to take evasive action when the lookout spots a biologist with a microphone. Vervets are not free to improvise new calls on the fly, nor do they even have the ability to recombine the simple perceptions and behaviors that are coupled through their existing calls. The calling system is genetically governed. The
snake call constrains the monkeys to look around for the snake. The leopard call constrains them to take shelter in a tree. But unlike genes in pieces, monkeys cannot compose a new affordance by combining the percept of the snake with a different evasive behavior (“See a snake, climb a tree”). The vervet calling system consists of indivisible auditory constraints called holophrases.41 While they guide both perception and behavior, holophrases are not sequences and cannot be decomposed into nouns and verbs. Unlike “pass the pepper,” their configuration of perception and behavior is not separable into discrete elements like “pass” and “pepper.”42 Although vervet calls can be relayed to travel some distance from the original predatory percept, they remain rooted in the here and now, time bound and space bound. In the words of linguist Wallace Chafe, they exhibit situatedness,43 subject to what linguist Talmy Givón calls the tyranny of context.44 Vervets cannot signal to one another about the predator they saw last week or the predator they may see tomorrow or the predator lurking on the other side of the hill. Nor can they signal to each other about which monkey looks like it is putting on a little weight.

Three grammatical mechanisms are available to expand the range of affordances available to a system of sequences. The first we met in the context of the cell. This is the grammar of interaction, shuffling sequences to reorganize the corresponding perceptions and behaviors of interactors. This is the basic function of nouns and verbs, of genes in pieces and alternative splicing. However, achieving displacement, escaping the tyranny of context, requires a different grammar, the grammar of extension, which will be discussed later. Third is the grammar of abstraction, which makes possible self-reference and the evolution of control hierarchies, and which will be discussed in Chapter 5.
We learned about the grammar of interaction in the genetic context—genes in pieces and alternative splicing—in the previous chapter. Now let’s look at it in a more familiar context, that of human speech. Humans today speak about 6,500 different languages, grouped into about 250 language families.45 The quest for universals, for shared characteristics applying to all, remains a preoccupation of many linguists.46 One characteristic widely agreed to be universal is the lexical distinction between nouns and verbs. “On almost everyone’s account verbs and nouns are the most fundamental types of content words in a language,” writes Michael Tomasello, “as they are the only two classes that are plausibly universal, and most of the other types of words in a particular language can be shown to be historically derived from nouns and verbs.”47 The noun-verb distinction is also fundamental to how young children learn language. “The tendency to associate a grammatical class conventionally called ‘nouns’ with physical ‘things’—and a class called ‘verbs’ with visible, concrete ‘actions’—is, as might be expected, very pronounced in young children acquiring speech,” write Paul Hopper and Sandra Thompson.48 It is also central to the evolution of grammar from earlier, simpler forms. “All evidence on grammatical
evolution suggests that there are no more than two types of linguistic entities,” write linguists Bernd Heine and Tania Kuteva, “one type denoting thing-like, time-stable entities, that is, nouns, and another one denoting non-time-stable concepts such as actions, activities, and events, i.e., verbs.”49 The simplest combination of concrete noun and verb describes an affordance that couples a salient feature of the environment with an appropriate behavioral response: “this noun affords verbing.” Sequential combination of noun and verb yields a clause, what William Croft calls “the minimal complete information unit.”50 It is minimal and complete because it combines in one sequence both of the essential elements of an affordance: the object or event to be perceived and the behavior the percept affords. Vervet alarm calls also describe affordances (“See a leopard, climb a tree”) but the seeing and climbing are entangled in one grammar-free holophrase that cannot be parsed into its perceptual and behavioral elements. In the situatedness of the here and now, the lexical categories of noun and verb map to the perception and behavior of human interactors: nouns (“pepper”) are what we perceive and verbs describe the behaviors (“passing”) afforded by those percepts. Nouns configure our perceptual systems and verbs our behavior. Chairs afford sitting, trees afford climbing, and potato chips afford eating. This is how language composes the immediate affordances of the environment using its grammar of interaction. As psychologist Arthur Glenberg and colleagues point out, “language understanding requires the integration of affordances, not simply the alignment of words.”51 The deep relationship between speech and affordances is far more explicit in certain languages. For example, some North American Indian languages, such as the group that includes Navajo, have an atypically small number of nouns. 
“‘Door’ is ch’é’étiin, meaning ‘something has a path horizontally out,’” writes computational linguist Mark Steedman. ‘Towel’ is bee ‘ádít’ oodí, glossed as ‘one wipes oneself with it,’ or perhaps ‘wherewith you wipe yourself,’ and ‘towelrack’ is bee ‘ádít’ oodí bąąh dah náhidiiltsos—roughly, ‘one wipes oneself with it is repeatedly hung on it’ or ‘whereon you hang wherewith you wipe yourself.’52 These languages “appear to lexicalize nouns as a default affordance . . . and to compose such affordances,” says Steedman (emphasis his). Describing an object in terms of what can be done with it is the essence of affordance. “The close relation between the combinatory syntactic primitives and those involved in planned action should not come as a surprise,” says Steedman. If we turn to those aspects of language that presumably reflect its origin most directly, namely, its use to manipulate the actions of others to our
own advantage, then it is clear that this is, quintessentially, a planning problem, rather than a distinctively linguistic one.53

Parasites, we have seen, are very good at manipulating the actions of others to their own advantage, which makes this a good working definition of a behavioral extended phenotype. Usually we deploy sequences of speech to guide the listener’s perception toward one affordance out of many, like Peter and the pepper. An object “may have a tremendously large number of affordances, so which are considered?,” asks Glenberg. “Derivation of the affordances is controlled in part by syntax.”54 Sometimes, however, an affordance is not obvious until a suitable context presents itself. “Although a sandal can make a useful doorjamb, this property is unlikely to be noted when browsing in a shoe store,” write psychologist Craig Chambers and colleagues. “What makes particular affordances salient are specific actions or events whose identity or relevance will often be communicated by linguistic means.”55

There is a subtlety to the way in which the sequences of language guide our understanding of affordances. Linguist Leonard Talmy offers the example of two sentences using a common preposition, near:56

(1) The bike is near the house.
(2) The house is near the bike.

At first blush, one might think that nearness should be a symmetrical relationship: if A is near B then B must be near A. “One could have expected these sentences to be synonymous on the grounds that they simply represent the two inverse forms of a symmetric spatial relation,” Talmy writes, “but the obvious fact is that they do not have the same meaning.”57 The divergence in meaning arises from physical differences between the affordances they describe. The bike is movable and affords mobility. The house is a permanent structure affording shelter.
The relevant way to guide a listener’s perception toward the affordances of a stable object is in relation to other stable objects, not in relation to a movable object. However, it is reasonable to relate the location of a movable object to a stable one. Thus (1) sounds normal while (2) strikes us as odd. Meaning follows affordance. What the environment affords human primates is contingent on our perceptual and motor behavioral systems—what we can see and what we can do—as well as the nature of the environment itself. Our species has succeeded in colonizing almost every habitat on Earth, and these habitats are a diverse bunch. When a language evolves its grammar of interaction, it must adapt to the particulars of a given environment and its affordances. “Certain non-linguistic contexts favour the persistence of certain items,” says Nettle. “This is clearly true at a lexical level; words for sand, ice, and fish species, for example, are
unevenly distributed amongst languages in proportion to the importance those domains have for different peoples.”58 The earliest Paleolithic speech must have been simple, concrete, and local, like the speech of young children.59 Today the sequences of human speech are more complex than simple combinations of concrete nouns and verbs.60 The natural, social, and cultural affordances available to us far surpass unitary affordances like “water affords drinking.” Complex affordances require enormous lexicons and complex grammars to organize the self-referential capacity of sequences and deeper control hierarchies through which the constraints are further constrained.
4.6 How Does Language Acquire Children?

The alarm calls of vervet monkeys impose boundary conditions on the perception and behavior of other monkeys, just as your request for the pepper is a constraint on Peter’s perception and behavior. Monkey calls are innate features of the monkey’s phenotype, each shaped by natural selection due to the persistence of certain threat affordances in the monkeys’ environment. You, on the other hand, were not born with a knowledge of spoken English. You were born with the ability to learn English, but you could have learned any of the thousands of other languages spoken by our species, and perhaps you did. English and other languages, as we shall see, have evolved so that you might learn them. Your native language was a feature of the environmental niche into which you were born. Although not a physical artifact, it was a tool you discovered and learned to deploy in order to constrain the behavior of others, to direct their attention to affordances that benefited you. In your early years, these others were caregivers, and speech was the password that let you manipulate their behavior to your own advantage. You had to learn the code to gain access to their machinery. As linguist Ljiljana Progovac says, “the imperative in general is among the first productive verbal forms used by children.”61 At the same time, you were allowing your own behavior to be constrained by others, using the same system of spoken sequences. Your caregivers knew the code and you were granting them access to your machinery, albeit unwittingly. The process was reciprocal.

Niche construction theory says organisms modify their environments and leave those changes in place for their progeny and their progeny’s progeny. An earthworm inherits soil processed by many generations of earthworms and a beaver may inherit a dam and pond manufactured in whole or in part by its ancestors.
Our literate technological civilization carries this to an extreme; our everyday perceptions and behaviors take place in an environment of complex affordances manufactured by those who came before and by contemporaries elsewhere in the world. Although it is less tangible than soil or a beaver dam, humans inherit a system of spoken sequences passed down and modified by preceding generations.
“Like genomes, the languages we observe today are the survivors of a long process of being tried out and tested by their speakers,” says evolutionary biologist Mark Pagel. “Like genomes, we can speculate that we have retained those languages that adapted best to our minds, and this may be the most obvious reason why we find them easy to learn and use.”62 In other words, our bodies and brains are a niche to which language must adapt in order to survive. They are boundary conditions on the utility of the patterns that sequences of speech can form. Utterances are ephemeral and affect the material world only through their human interactors. Were our species to be wiped out, our speech would die with us. “Languages need children more than children need languages,” says Terrence Deacon.63 Thus the relevant question in child development should not be “how do children acquire language?,” but rather “how does language acquire children?”64 Fortunately we keep on producing children, so for the moment speech has no worries about its survival. Sequences of speech have adapted well to the physical, biological, and social characteristics of the interactors on whom they depend. “Natural languages exist only because humans can produce, learn, and process them,” write linguists Morten Christiansen and Nick Chater in their book Creating Language. “In order for languages to be passed on from generation to generation, they must adapt to the properties of the human learning and processing mechanisms.”65 Systems of spoken sequences have adjusted and readjusted themselves over thousands of generations to the idiosyncrasies of the human phenotype, not just to our brains, ears, and vocal cords, but to our entire bodies, what we can perceive and how we can behave. “Language speaks us as much as we speak language,” says economist Deirdre McCloskey.66 Systems of spoken sequences are subject to selective forces. 
Any change causing a language to be more difficult for a child to produce, learn, or process is a change unlikely to persist, but a change making it easier to produce, learn, or process is likely to be retained and passed to subsequent generations of speakers. “Language has been adapted through cultural transmission over generations of language users to fit the cognitive biases inherent in the mechanisms used for processing and acquisition,” write Christiansen and Chater.67 Human bodies, brains, and environments afford only so much.68 For comparison, consider the sport of Formula One racing. Top drivers operate right at the edge of what their perceptual and motor systems can manage. This is more than just a matter of cognitive processing; it goes to the inherent limitations of the human visual and motor behavioral systems, not just the brain but the whole body and the environment in which that body finds itself. A racecourse is a highly constrained affordance. Engineers can build machines to travel much faster than existing F1 autos, but no human can drive them (at least without computer assistance). Our bodies impose a boundary
condition on the design of potential automobiles, as they do on the design of potential languages. Like speech, autos have evolved to match our capabilities. “The structure of a language is under intense selection,” explains Deacon, because in its reproduction from generation to generation it must pass through a narrow bottleneck: children’s minds. Language operations that can be learned quickly and easily by children will tend to get passed to the next generation more effectively and more intact than those that are difficult to learn. So, languages should change through history in ways that tend to conform to children’s expectations; those that employ a more kid-friendly logic should come to outnumber and replace those that don’t.69 Children pick up language from their environment, a task made simpler because language has evolved over many generations of speakers to be easy to pick up, what linguist Henry Brighton and colleagues call cultural selection for learnability.70 This is true not just of language, but of other elements of culture. “When the child aims to learn an aspect of human culture (rather than an aspect of the natural world), the learning problem is dramatically simplified,” write Christiansen and Chater, because culture (including language) is the product of past learning from previous generations. Thus, in learning about the cultural world, we are learning to ‘follow in each other’s footsteps’—so that our wild guesses are likely to be right—because the right guess is the most popular guess by previous generations of learners.71 From the perspective of language, humans are interchangeable, with comparable brains and bodies. We are universal interactors. “Children don’t have to be particularly smart, and parents don’t have to be particularly gifted teachers,” says Deacon. 
“The limitations of the learners and the teachers are an unavoidable part of the ecology of language transmission.”72 They are also an unavoidable part of the ecology of language structure. Commonalities among spoken languages result from the adaptive demands of human sequence production, learning, and processing, all grounded in our perceptual and behavioral capabilities.73 “There are universals of language because people all over the world have similar communicative jobs to get done and similar cognitive and social tools with which to do them,” says Tomasello.74 A population of humans amounts to a collection of universal interactors that not only can be configured by language to perceive and exploit novel affordances across a variety of habitats but also—and most importantly—can reliably replicate the sequences of language across the generations. As Givón writes, “at the syntactic level, languages tend to diverge enormously. At the
pragmatic level, they tend to be amazingly similar.”75 Sequences of speech are not produced by the brain so much as adapted to the brain and the rest of the human phenotype. The locus of functionality is in the sequences of the linguistic system itself.76 The human hardware mechanisms that replicate and interpret the sequences of speech are the perceptual and behavioral equivalents of ribosomes, spliceosomes, and polymerases. Some go further, comparing speech with infection by viruses or manipulation of host behavior by parasites. “It is helpful to imagine language as an independent life form,” writes Deacon, “that colonizes and parasitizes human brains, using them to reproduce.”77 The code has access to the machinery. The sequences of language are the ultimate phenotype extension, so extended, in fact, that they have broken free and become disembodied. “There is a sense,” says linguist Nikolaus Ritt, “in which languages are insensitive to the existence of their users as conscious beings.”78 As Dawkins says of genes, “the fact that in any one generation they inhabit individual bodies can almost be forgotten.”79 Does this mean speech is like the worm in the cricket or the fungus in the ant? No, because speech does not routinely kill its hosts.80 “Languages appear to have a mode of existence in which humans figure not as their owners, designers, users, or controllers at all,” says Ritt, “but rather as their ‘hosts’, ‘survival machines’, or even as elements in their environment, by which their existence and replication are constrained but not fully determined.”81 Say Christiansen and Chater: “Language may be viewed as a ‘beneficial parasite,’ engaged in a symbiotic relationship with its human hosts, without whom it cannot survive.”82 Whether viewed as parasite or symbiont, speech is bounded by what the human body can perceive and do, what it affords. Like a Formula One driver, we see only so far ahead, and we respond only so quickly.
Our grammar of interaction is adapted to the limits of our perceptual and motor systems, to features of the environment that can be talked about, and to behaviors in the environment that can be specified. The human phenotype and its affordances throw a blanket of constraint over speech.
4.7 The Grammar of Extension

In 2014, after a 10-year voyage, the spacecraft Rosetta began orbiting a comet and launched a landing module called Philae to touch down on the comet, which it did. Translating this achievement into the language of affordances, the grammar of interaction, we could say that “for a spacecraft, a comet affords landing.” But of course, there is a whole lot more going on here than in an everyday affordance like “for Peter, the pepper affords passing.” When Rosetta was launched, the comet was far distant in time and space, and the success of the mission depended on a series of contingencies; every calculation had to be correct and every piece of equipment had to perform as designed. The affordance “for a spacecraft, a comet affords landing” was not tangibly present.
How does the grammar of interaction compose affordances that are not tangibly present? The answer is that it can’t by itself; nouns and verbs need some help if they are to be liberated from the tyranny of context. It is one thing to compose new affordances using materials at hand; it is quite another to extend those affordances into the past or future, into the distance, or to assign them probabilities based upon the presence of other affordances. Comets do not afford landing in a world bounded by the here-and-now-and-must, only in a world extended into the there-and-then-and-might. To supplement the grammar of interaction, we need a grammar of extension. Some linguists agree that the sequences of speech can be placed within the action-at-a-distance context of the extended phenotype. Simon Kirby, for example, says “language is . . . part of our extended phenotype” (emphasis his),83 and Nicholas Evans and Stephen Levinson write, “we are just a species with a very highly developed ‘extended phenotype.’”84 These observations are metaphorical; they would not pass muster with the strict definition of extended phenotype used by Dawkins. Nonetheless, they get at the larger truth that, through the sequences of language, humans can extend our perceptual and behavioral capacities, our social and cultural affordances. To compose these affordances, the average adult speaker of English has access to a vocabulary of around 10,000 names for things,85 and new things are being created and named all the time. Nouns, verbs, adjectives, and adverbs form what linguists call open classes. New members of these lexical groups are coined freely, easily, and apparently without limit. Today’s drivers can compose affordances involving “backup camera” or “satellite radio,” but drivers who lived in the 1940s could not. Open classes are an important source of the evolutionary creativity made possible by systems of sequences. But not all kinds of words can be coined freely. 
Other common lexical classes are closed; their stock of sequences is ancient, fixed, and not expandable in any practical sense. In English, these classes include prepositions such as over, under, between, beyond, and so forth, deictic (pointing) words like this, that, here, there, etc., and modals that express probabilities, like will, could, and might. For example, the set of English prepositions is small (about 150 words) and highly conserved as the language evolves. It is impossible to imagine coining a new preposition as we might a new noun or a new verb. If you doubt this, give it a try. For every member of a closed class, there are many more members of open classes. “There is a difference of approximately two orders of magnitude,” write linguists Barbara Landau and Ray Jackendoff. “For every spatial relation expressible in English, there are perhaps a hundred object names. This qualitative difference is reproduced in every language we know of.”86 These common grammatical markers describe the spatial, temporal, and probabilistic relationships among nouns and verbs. They allow simple affordances described by the grammar of interaction to escape the tyranny of
context, to describe, as Progovac says, “situations that are distant, non-existent, or that even challenge common sense.”87 Affordances can be extended in time and space as the sequences that describe them deftly maneuver within that vast probabilistic zone between the certain and the impossible. Even though it is far into the future, far, far away in space, and subject to a host of contingencies, “for a spacecraft, a comet affords landing” describes a real affordance. “What grammar consists in most immediately,” writes Tomasello, “is a set of conventional devices and constructions—conventionalized differently in particular languages—for facilitating communication when complex situations outside the here and now need to be referred to.”88 Spatial markers like this and that, over and under, are most explicitly tied to the environment; they allow us to locate affordances with respect to other individuals and other objects in the world. “Being able to describe where objects are, and being able to find objects based on simple locative descriptions, can be regarded as basic skills for any competent speaker of a language,” write psychologists Kenny Coventry and Simon Garrod.89 There may be a leopard over the hill, but vervets cannot describe what it affords unless they can see it. “Clearly, language and spatial understanding map onto each other,” write Landau and Jackendoff,90 or, as Talmy says, grammar has a “topology-like character.”91 With this element of the grammar of extension, affordances can be extended in space. “A common set of vision and action components may underpin spatial language generally,” says Coventry.92 Language and temporal understanding also map onto each other. After all, rate independence—the collapsing of temporal patterns into spatial patterns—is a foundational property of sequences.
Temporal markers like before and after, now and then, and tense markers of verbs, allow us to locate affordances with respect to the present moment and to events past and future. A leopard may be arriving after sunset, but vervets cannot describe what it affords unless they can see it. As Progovac says, more complex syntax allows our sequences “to breakaway from the here-and-now, as well as from the prison of pragmatics more generally, enabling the famous displacement property of human language.”93 Tense markers allow language to engage in what Tomasello calls “arcane temporal bookkeeping.”94

Besides situating them elsewhere in time and space, the grammar of extension can also assign probabilities to affordances. This should not be surprising, as the sequences of language are constraints and constraints are probability-changers. Between the certainty of must and the impossibility of cannot lie the whole range of contingencies implied by would, should, and could, will, may, and might. We can discuss whether there could be a leopard over the hill after sunset, but vervets cannot.

Linguists who study the evolution of grammar have differing views on how these systems of grammatical markers came to be.95 Linguist Karl Bühler writes of two distinct fields in language, the deictic field, comprising grammatical
markers, and the lexical field, comprising naming words.96 He argues that grammatical markers represent an independent, closed-class subsystem that interleaves with the open-class lexical subsystem to yield the chimeric hybrid we have today. “These subsystems basically perform two different functions,” says Talmy. “Open-class forms largely contribute conceptual content, while closed-class forms determine conceptual structure.”97 Earlier we saw that language both describes and instructs; using the same alphabet and grammar, some sequences describe salient features of the environment and others describe appropriate responses. We also saw that the metalanguage of any language is stated in sentences of that same language. Using the identical alphabet and grammar, some sequences can describe aspects of the world while others can describe other sequences in the system. Now we also see that the same universal mechanisms of interpretation and replication can handle both the open-ended lexical field and the closed-class deictic field. Using the same alphabet and grammar, some sequences describe perceptions and behaviors while others describe the spatial, temporal, and probabilistic relationships among them. Complexity in one dimension has evolved to reflect complexity in three dimensions.
4.8 The Extended Genome

We first met the coupling of perception and behavior to describe affordances—the grammar of interaction—at the molecular level, made possible by genes in pieces and alternative splicing. However, we have not explored the question of whether a grammar of extension also exists in the cell. If the grammar of extension is to be considered a universal property, then sequences of DNA should have their own lexical and deictic fields, their open and closed classes. Do they?

The answer appears to be yes. We know that tens of thousands of exon sequences are mixed and matched to create the proteins in our cells. Like nouns and verbs, these sequences constitute an open class, a lexical field; new ones arise through mutation and recombination and are expressed just like any other gene sequence. There appear to be no limits on the proliferation of novel exons. But there also exists a closed class of universal, short, and evolutionarily stable DNA sequences that behave more like prepositions; they constitute the genome’s deictic field. Some of the obvious examples are the start and stop codons of the genetic code, the attachment points for RNA and DNA polymerases and ribosomes, and the sequences at mRNA exon boundaries that allow introns to be snipped out. There are also many DNA binding sites, short snippets that are recognized by enzymes, like the transcription factors that we will meet in Chapter 5. Molecular biologists call these common short snippets consensus sequences.98 Not only do they allow the cell to manage its metabolism temporally and spatially, but they also help to orchestrate the temporal and spatial expression of genes during development. “How these pieces of DNA give rise to complex
patterns of temporal and spatial activity,” write developmental biologists François Spitz and Eileen Furlong, has been “the subject of intensive investigation since the discovery of enhancers some 30 years ago.”99 This finite group of short, common DNA markers enables the genome’s grammar of extension, providing the cell with the same kind of spatial, temporal, and probabilistic scaffolding that prepositions, deictics, and modals provide in the sequences of human speech.

To recap, in both the genome and human speech a small number of short, ubiquitous, and evolutionarily stable marker sequences allow their respective systems to choreograph perception and behavior across time and space. Thanks to the grammar of extension, affordances do not have to be tethered to the here-and-now-and-must, to the tyranny of context, but can be extended into the there-and-then-and-might.

Dawkins shows it is to the advantage of animals to extend their perceptual and behavioral phenotypes beyond the physical boundaries of their bodies. They have many paths to that goal, including construction of artifacts and manipulation of the perception and behavior of others. Manipulating others requires access to the machinery via the nervous system and a means of specifying affordances, what to perceive and how to respond. The sequences of human speech have evolved into a powerful, precise, and flexible tool for both. So powerful have these sequences become, in fact, that they have managed to assume an independent existence, one that, while grounded in the human body, has acquired many of the characteristics of a parasite, perhaps beneficial, perhaps not.

“Please pass the pepper” describes a concrete affordance. The pepper grinder is a salient feature of the environment and passing it is an appropriate response. Whether it was passed yesterday, will be passed tomorrow, or might be passed if everyone would just shut up so Peter can hear you, it is still a real object in the real world.
Nonetheless, we know that sequences of language are not necessarily shackled to the physical environment. Not every noun describes something tangible like pepper or leopard. Sequences of language can float away into the abstract, where we encounter nouns like dignity, loyalty, and dark matter. What, if anything, do these afford? How has our system of sequences become unmoored from grounded reality, slipping away into metaphysical, theoretical notions that cannot be directly perceived? The answer is to be found in the self-referential and regulatory characteristics of control hierarchies, which comprise a grammar of abstraction.
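To make the genomic parallel concrete, one can sketch how a tiny closed class of markers (the start codon ATG and the stop codons TAA, TAG, TGA) delimits open-class content in a DNA string. This is an illustrative toy, not a bioinformatics tool; the sequence below is invented, and real cells rely on additional context to select a true start site:

```python
# Sketch: closed-class markers (start/stop codons) bracketing
# open-class content (whatever codons lie between them).

START = "ATG"                   # the universal start codon
STOPS = {"TAA", "TAG", "TGA"}   # the three universal stop codons

def open_reading_frame(dna: str) -> str:
    """Return the stretch from the first start codon through the next
    in-frame stop codon, or an empty string if either marker is absent."""
    begin = dna.find(START)
    if begin == -1:
        return ""
    for i in range(begin + 3, len(dna) - 2, 3):
        if dna[i:i + 3] in STOPS:
            return dna[begin:i + 3]
    return ""

print(open_reading_frame("GGATGAAACCCGGGTAGTT"))  # ATGAAACCCGGGTAG
```

The markers themselves are few, short, and fixed, while the codons between them can vary without limit: the open-class/closed-class division in miniature.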
Notes 1. Parts of this chapter are based on Waters 2012b. 2. Dawkins initially presented the concept in his 1978 paper: “Replicator Selection and the Extended Phenotype.” 3. Dawkins 2016
The Grammar of Extension
107
4. twitter.com/richarddawkins/status/334060845011173379 (2013). It did not stop with Dawkins. In 1996, Kim Sterelny and colleagues continued with their paper, “The Extended Replicator,” and in 1998 Andy Clark and David Chalmers published “The Extended Mind,” one of the founding manifestos of the situated cognition movement in cognitive science. In 2002 Scott Turner published The Extended Organism, which discusses the physiology of animal-built structures like termite mounds.
5. Dawkins 1982. Taking this concept of effects beyond the body to its logical extreme, one can envision replicators that are completely disembodied, for which all phenotypic effects are extended. They have no conventional phenotype and are known only through their extended effects on and in the environment. Replicating sequences that fit this model include RNA and DNA viruses and, as we shall see, human speech and writing.
6. Recent research suggests that the web is also part of the spider’s cognitive system (Japyassú & Laland 2017).
7. Dawkins 1982
8. Von Frisch 1974
9. Odling-Smee et al. 2003
10. Laland et al. 2004. What counts as extended phenotype and what as niche construction is not always clear at the margin. Dawkins takes a dim view of niche construction as an evolutionary theory because it does not establish a causal relationship between a gene sequence in an organism and that organism’s fitness. He writes: “I am asked by lay people . . . whether buildings count as extended phenotypes, I answer no, on the grounds that the success or failure of buildings does not affect the frequency of architects’ genes in the gene pool. Extended phenotypes are worthy of the name only if they are candidate adaptations for the benefit of alleles responsible for variations in them” (2004). Buildings are, however, part of the niche constructed by humans for the use of other humans.
11. Laland 2004
12. Aunger 2010a
13. Dawkins 1982
14. For example, www.youtube.com/watch?v=D7r1S6-op8E
15. Adamo 2012; Thomas et al. 2002
16. Biron & Loxdale 2013
17. Adamo 2012
18. Wesenberg-Lund 1931
19. Krull & Mapes 1953
20. Dawkins 2012. For a review, see Hughes et al. 2012
21. Andersen et al. 2009; Araújo et al. 2018
22. Hughes 2013
23. Adamo 2013. Entomologist Maridel Fredericksen and colleagues report that the fungus does not attack the ant’s brain, but rather guides its behavior directly from the muscles. Embedded fungal cells communicate with each other to create what amounts to an invasive brain (2017).
24. Dawkins 2012. In all cases of host manipulation by parasites, Hockett’s (1966) principle of specialization applies. The energy the parasite expends to constrain the host’s behavior is much less than the energy the host expends in executing the behavior. Just as you do not need a prosthetic harness to get Peter to pass the pepper, the worm does not need to mechanically constrain the cricket nor the fungus the ant. If they did, we would think of them more as predators and less as parasites.
25. Hughes 2008
26. Hosts whose perception and behavior are manipulated by internal parasites are like allosteric enzymes that change shape and function when bound to a regulatory molecule. Just as the regulator configures the enzyme, the parasite configures
its host using what amounts to a fancy form of binding using multiple molecules. Allosteric enzymes will be discussed in Chapter 5.
27. Dawkins 1982
28. For a review, see Davies 2000
29. Dawkins 1982
30. Gibson 1979
31. Gibson 1979; emphasis his.
32. Merleau-Ponty 2002
33. But what about the beaver? What affordances are made available by the dam? The dam does not appear to add affordances to the beaver’s somatic phenotype; rather, it seems to change the probabilities of certain affordances by homogenizing the beaver’s environment. The beaver’s locomotion, its foraging behavior, and its ability to perceive and evade predators are optimized for an aquatic environment. The terrestrial environment frequently spells trouble for the beaver, but the dam provides a uniform and far more manageable set of affordances. The dam does not extend the capabilities of the beaver’s perceptual and motor systems; rather, it constrains the environment to more closely fit the beaver’s phenotype. This is the point at which niche construction and extended phenotype make their closest approach.
34. Hauber et al. 2001
35. Researchers have begun to identify the auditory forebrain regions that make recognition of this non-learned password possible, that afford access to the machinery (Lynch et al. 2017).
36. For a review, see Stegmann 2013
37. Cheney & Seyfarth 1990
38. Gibson 1966. Philosopher Daniel Dennett, in a paper discussing Cheney and Seyfarth’s work, uses information gradient to describe the social environment of the monkeys (1988). Of this, Sterelny writes: “Different animals in the group will have overlapping information about their local environment. Information gradients set up an opportunity to use others as instruments for finding out about the world” (2003).
39. Interchangeability is also one of Hockett’s design features: “adult members of any speech community are interchangeably transmitters and receivers of linguistic signals” (1966).
40. Gibson 1979
41. Tomasello calls holophrases “one-unit communicative act[s]” (2008). Animal communication researchers sometimes use the term functional reference (Byrne 2016). See also Wray (1998).
42. Stuart Altmann notes further entanglement: “The predator calls of vervets and of baboons are basically arbitrary: there is no obvious resemblance between the contours of the call and the contours of the predators. However, to the extent that calls become louder and more frequent as the danger becomes more imminent, they are iconic, in that a one-to-one mapping is possible between certain properties of the message and certain properties of what the message denotes” (1967).
43. Chafe 1994
44. Givón 2002
45. Nettle 1999
46. Evans & Levinson 2009
47. Tomasello 2008. One of the founders of modern linguistics agrees. “There must be something to talk about and something must be said about this subject of discourse once it is selected,” writes Edward Sapir. “This distinction is of such fundamental importance that the vast majority of languages have emphasized it by creating some sort of formal barrier between the two terms of the proposition. The subject of discourse is a noun. As the most common subject of discourse is either a person
or a thing, the noun clusters about concrete concepts of that order. As the thing predicated of a subject is generally an activity in the widest sense of the word, a passage from one moment of existence to another, the form which has been set aside for the business of predicating, in other words, the verb, clusters about concepts of activity” (1921).
48. Hopper & Thompson 1984
49. Heine & Kuteva 2002. Their distinction between time-stable and non-time-stable echoes Pattee’s rate dependence and rate independence.
50. Croft 1991
51. Glenberg et al. 2009
52. Steedman 2009
53. Steedman 2009
54. Glenberg 2009
55. Chambers et al. 2004
56. Talmy 2000
57. Talmy 2000
58. Nettle 1999
59. Heine & Kuteva 2007
60. Anthropologist Daniel Everett, in his 2008 book Don’t Sleep, There Are Snakes, describes the Pirahã language spoken by the Amazonian tribe of the same name in Brazil. By his account, the Pirahã never escape the tyranny of context: “Pirahãs only make statements that are anchored to the moment when they are speaking, rather than to any other point in time.” They use simple present, past, and future tenses, but no perfect tenses. An English sentence like “when you arrived, I had already eaten” could not be spoken by Pirahã because the verb “had eaten” is “not defined relative to the moment of speech—it precedes it.” Linguist William Labov also notes the absence of tense markers in most pidgins he studied (1990). However, the further evolution of pidgins into more sophisticated creoles generally is accompanied by more diverse and explicit temporal marking.
61. Progovac 2015. What I call the grammar of interaction, she calls a “binary, two-slot grammar.”
62. Pagel 2009
63. Deacon 1997
64. Waters 2014
65. Christiansen & Chater 2016
66. McCloskey 2016
67. Christiansen & Chater 2016. See also Nettle 1999.
68. Lupyan & Dale 2016
69. Deacon 1997
70. Brighton et al. 2005
71. Christiansen & Chater 2016
72. Deacon 1997. See also Mufwene 2001.
73. Carl Woese argues that in the early history of life the genetic code was unified and stabilized by comparable selection pressures, becoming a “genetic lingua franca” for transferring nucleic acid sequences among primitive organisms (2002). “The code was unified,” writes molecular geneticist Michael Syvanen, “because of continuous selective pressure to allow the exchange of genetic material across species boundaries” (2012).
74. Tomasello 2008
75. Givón 1979
76. Port 2010; Bybee 2010; Ellis & Larsen-Freeman 2009
77. Deacon 1997
78. Ritt 2004
79. Dawkins 1990
80. War and genocide are made possible by sequences. Should we wipe ourselves out for some reason, then speech would have killed off its host.
81. Ritt 2004. Noam Chomsky argues humans possess what he calls a “language acquisition device” (1965), but the host-parasite view of speech turns this on its head. “Successful languages,” cognitive scientists Andy Clark and Jennifer Misyak write, “would then be Human Acquisition Devices” (2009).
82. Christiansen & Chater 2016
83. Kirby 1998
84. Evans & Levinson 2009
85. Landau & Jackendoff 1993
86. Landau & Jackendoff 1993
87. Progovac 2015
88. Tomasello 2008
89. Coventry & Garrod 2004
90. Landau & Jackendoff 1993
91. Talmy 2000
92. Coventry 2013
93. Progovac 2015
94. Tomasello 2008
95. Heine & Kuteva hypothesize that all markers, plus all other lexical categories, ultimately are derived from nouns (2007).
96. Bühler 1934
97. Talmy 2003
98. Schneider & Stephens 1990; Schneider 2002. Consensus sequences comprise an arcane menagerie, including TATA-, GC-, and CCAAT-boxes, Shine-Dalgarno sequences, BRE- and INR-boxes, Kozak consensuses, and many others. See also Gagniuc & Ionescu-Tirgoviste 2012.
99. Spitz & Furlong 2012
5 THE GRAMMAR OF ABSTRACTION
5.1 Freedom’s Just Another Word

You can ask Peter to pass the pepper or the mustard, but everyone at the table would be confused if you asked him to pass the freedom or the imagination. We recognize pepper and mustard as concrete nouns, but freedom and imagination are abstract; they do not describe any object of perception. Yet at the same time, when we speak or write we treat the words freedom and imagination just as we would any other noun. Our sequence processing equipment is universal; it does not distinguish between the abstract and the concrete. We are concerned in this book with the development of three distinct grammars, each adding its own element of complexity to the affordances that can be described by one-dimensional patterns. The grammar of interaction allows us to combine nouns and verbs to describe affordances in the here-and-now, time bound and space bound. The grammar of extension adds closed-class grammatical markers which allow us to project those descriptions into different times and places. But what are we to make of abstractions like freedom? What do they describe? What do they afford? How can they even be? For them we need a grammar of abstraction. Human language is too complicated—and our intuitions about it are too suspect—for us to make much progress on these questions, so once again we will turn to the cell which, although complicated, is more open to mechanical inspection. We will see that the sequences of DNA in cells have evolved their own levels of abstraction that are bound up with self-reference and regulation. Regulation takes many forms, and they constitute the foundation for building a grammar of abstraction. Cells spend much of their time and energy processing raw materials from the environment into finished products, procedures which often take many steps.
This work is done by enzymes and, as we have seen, its precision and timing are astounding. These molecular assembly lines are called metabolic pathways, each pathway a series of reactions in which the output of one reaction becomes input to the next. Every reaction in the pathway is catalyzed by a different enzyme, so a complete set of enzymes is needed for the pathway to function, and each needs to show up at the right place and time. “We seem to watch battalions of catalysts lined up,” writes neurophysiologist and Nobel laureate Charles Sherrington, “each waiting, stopwatch in hand, for its moment to play the part assigned to it, yet each step is understandable chemistry. In this great company, along with the stopwatches run dials telling how confreres and their substrates are getting on, so that at zero time each takes its turn.”1 How these battalions of catalysts get themselves organized is a matter of regulation. The control hierarchy of regulation yields improvements in the cell’s responsiveness to its external and internal affordances. Evolution has provided the cell with many paths to that outcome. “Metabolic regulation is so important,” biochemist Thomas Traut says, “that every possible strategy has evolved and is actively used.”2 Taken together, these strategies provide the context in which abstraction can emerge.

There can be many individual enzymes in a pathway, but they are organized to behave as if they were one single enzyme. Say the cell needs to convert a raw material into a finished product, perhaps a sugar like lactose into a more useful sugar like glucose. It matters not to the cell whether the job is done in a single step by one enzyme or by an assembly line comprising a series of enzymes. Metabolic pathways are another example of Herbert Simon’s loose horizontal coupling. Provided that the important work is accomplished, the cell doesn’t worry about how many steps it takes or what the intermediate products might be. 
This amounts to a transitivity property. In elementary algebra, students learn that transitivity is expressed like this: If a = b and b = c, then a = c. In mathematics, if the first thing is equivalent to a second thing and the second equivalent to a third, then the first is equivalent to the third. In metabolism, if enzyme A converts substrate X into product Y, and enzyme B then converts substrate Y into product Z then, by the transitive property, the pathway (A + B) can be considered an extended enzyme that converts substrate X into product Z.3 Chemically, there may be many possible pathways to get from X to Z; they constitute an equivalence class.
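The transitivity argument can be put in computational terms. In this toy sketch (the function names are illustrative, not drawn from the book), each enzyme is modeled as a function from substrate to product, and a pathway is simply their composition; to any caller, the composed pathway is indistinguishable from a single enzyme that converts X directly to Z:

```python
# Toy model: an enzyme as a function from substrate to product.
# Composing enzymes yields an "extended enzyme" whose intermediate
# products are invisible to the cell, illustrating transitivity.

def compose(*enzymes):
    """Chain enzymes so the output of each becomes the input of the next."""
    def pathway(substrate):
        for enzyme in enzymes:
            substrate = enzyme(substrate)
        return substrate
    return pathway

def enzyme_a(s):          # converts substrate X into intermediate Y
    return {"X": "Y"}[s]

def enzyme_b(s):          # converts intermediate Y into product Z
    return {"Y": "Z"}[s]

# The pathway (A + B) behaves as one enzyme converting X into Z.
extended_enzyme = compose(enzyme_a, enzyme_b)

print(extended_enzyme("X"))   # Z — the intermediate Y never surfaces
```

Any other composition of functions mapping "X" to "Z" would belong to the same equivalence class, which is the sense in which the cell is indifferent to the route taken.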
Transitivity is a real property. In some cases, evolution has combined all of the enzymes in a pathway into a single composite enzyme that executes every step by itself. In bacteria, for example, a pathway governed by seven discrete enzymes is needed to produce certain fatty acids. However, in mammals a single larger and more complex enzyme molecule performs all seven catalytic manipulations in the correct order.4 To the cell it doesn’t matter who does the work as long as the work gets done. It just wants the pepper. Taken to its extreme, transitivity allows us to think of the cell itself, or even the organism, as an enormously complex, scaled-up enzyme. “We can adopt a uniform conceptual architecture for all levels,” writes Sydney Brenner, “viewing the organism as a network of interacting cells in the same way we view the cell as a network of interacting molecules.”5 And civilization, then, is a network of interacting organisms. Discussing wolves and moose as collections of enzymes sounds reductionist, but the point is not to reduce animals to nothing more than interacting molecules. Rather, it is to emphasize that the fundamental function of a single enzyme is also the fundamental function of every coherent collection of enzymes, no matter how large or elaborate. This is what living things do at every level of organization; they recognize the relevant affordances, salient features of their environment, and respond with appropriate behaviors. Binding is a simple form of perception, catalysis of behavior. If the behavior improves fitness, organisms leave more of the corresponding sequences in future gene pools. Throughout evolutionary history, living things developed the capacity to perceive and exploit many new affordances. Their repertoire of patterns to recognize and behaviors to effect grew explosively. 
Eventually what we call memory and learning came into play, giving them the capacity to recognize and respond not only to simple affordances in the physical here-and-now but also to complex affordances unfolding over individual lifetimes and, in the case of humans, extending over many generations around the world and beyond. There is an almost four-billion-year gap between sequences of DNA and sequences of human language. During this long interval, DNA and its enzymatic interactors elaborated on the basic infrastructure of the simple bacterial cell. This gave rise to the symbiosis of chloroplasts and mitochondria, to multicellular organisms, to nervous systems, to sexual reproduction, to social organization, and to the colonization of every habitat on the planet.6 However, the fundamental job description has remained unchanged, “to link the registration of a salient feature of the world to an appropriate response,” as Kim Sterelny says. The property of transitivity holds whether the organism comprises one cell containing millions of enzyme molecules or a billion cells containing trillions of enzyme molecules. Networks of enzymes scale without limit, as do the systems of sequences governing them. For this activity to cohere, to keep the assembly line humming, genomic sequences must be
organized to mirror the architecture of metabolic pathways. This requires a grammar of abstraction.
5.2 Shape Shifting Constraints

A jackhammer that is always hammering and cannot be turned off is dangerous and annoying, so it is better to have one with a switch. The same is true of lawn mowers, drill presses, airplanes, virtually any appliance or machine you can think of. Switches are convenient and ubiquitous, yet most of us never give them a second thought. They give us a substantial measure of control over the mechanical and electronic affordances of our literate technological civilization. In the cell there are battalions of machines that cannot be turned off; they are always hammering. These uninterruptible machines are the simple biocatalytic enzymes that we met in Chapter 3. Like a jackhammer, an enzymatic interactor performs a useful function under the right circumstances, but enzymes, too, are dangerous if left switched on in the wrong environment.7 A jackhammer without a switch is hazardous because it is both powerful and indiscriminate. While typically used to break up the pavement of a street or sidewalk, a jackhammer will just as easily break up anything else it comes in contact with. Were it an enzyme, it would pack enough catalytic power to indiscriminately destroy many kinds of molecules. Fortunately, about one-third of the enzymes in the cell actually do have switches; some are simple on/off switches, but others are more like volume controls that increase or decrease the enzyme’s capacity to bind and catalyze. Taken together, they give the cell one important way to control the complex affordances of its metabolism. These special enzymes are called allosteric, a property first described by Jacques Monod and colleagues in 1963. 
“This particular class of interactions plays a special, uniquely significant role in the control of living systems,” they write.8 The Greek roots of allosteric are allos, meaning “other,” and stereos, meaning “shape.” Unlike a simple enzyme, an allosteric enzyme can adjust its shape in response to an external signal and, since function follows shape, the external signal also triggers a change in function. “That biological macromolecules are allosteric is one of the most important discoveries in all of biology,” says Carl Woese.9 Furthermore, this shape shifting is always reversible; allosteric enzymes cannot be permanently stuck in one formation.10 Like a switch, they can be turned on and off and on again. Shape shifting is possible because allosteric enzymes do not bind only to their substrates; they also have extra sites that let them bind to additional, nonsubstrate molecules. These extra binding sites are called allosteric sites and the additional molecules are called regulators. An allosteric site can be anywhere on the molecule; it does not need to be near the enzyme’s business end, its active site. As molecular biologist and Nobel laureate John Kendrew explains,
allosterism entails “the influence of events in one part of a large molecule on the function of another part.”11 Think of a jackhammer: the on/off switch (allosteric site) is located at some distance from its chisel (active site). Its regulator is the operator’s hand.12 Regulators govern the ability of enzymes to perform their functions. In the intimate dance between enzyme and substrate a third molecule can cut in. In some cases, the regulator binds to an enzyme that is usually on and turns it off; this is called inhibition. In other cases, it binds to an enzyme that is usually off and turns it on; this is called promotion. These regulatory constraints are called inhibitors and promoters, respectively. Just as the leopard call configures the behavior of a vervet monkey, a regulator configures the behavior of an allosteric enzyme. When the regulator binds to its allosteric site, this triggers a slight change in the enzyme’s shape, which leads to improvement or degradation of its binding or manipulative properties elsewhere. For example, upon changing shape, an enzyme that normally binds to a particular stretch of DNA might release its grip. Or an enzyme that usually does not bind might shift its shape so that it does. Configuration is reversible; whether monkey or jackhammer or enzyme, whatever you turn off you can turn back on and vice versa. We have seen that the basic binding and catalytic behaviors of enzymes—the coupling of perception to action—can be found at all biological levels, including humans and our societies. It is equally true that allosterism scales to higher levels. “We have yet to comprehend the full extent to which allosterism is woven into the fabric of biology,” says Woese. “Molecular allosteric behavior is not confined to protein; in fact, evolution seems to bring forth allosteric elements on all levels of biological organization.”13 One well-studied allosteric enzyme called p21ras is implicated in cancer. 
p21 is normally inactive (turned off ) because it is bound to a molecule called GDP. But when a regulator shows up and also binds to the p21, this causes it to release its grip on the GDP. p21 can then bind to a related molecule called GTP and, once in this form, the enzyme becomes active and can interact with other cellular targets.14 If Peter is holding his fork in one hand and his knife in the other, he doesn’t have a free hand to pass the pepper. Like p21 dropping the GDP and picking up the GTP, he has to put down one or the other before he can pick up the pepper grinder. Your request regulates that behavior, triggering the change in his shape. A regulator is the simplest example of an allosteric constraint. Other allosteric constraints include traffic lights, trail pheromones of ants, holophrases like monkey warning calls, icons used in signage, and gestures like pointing. They are not sequences; they lack one-dimensional patterning. Nonetheless they operate as specific constraints.15 The function of an allosteric constraint is to configure an interactor, to get the ant to follow a different path, to direct the gaze to a particular affordance, or to inhibit or promote an enzyme’s function.
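The logic of an allosteric switch can be sketched in a few lines of code. This is a toy model, not biochemistry: the class and its states are illustrative inventions, meant only to show how a bound regulator reversibly reconfigures function without any construction or destruction:

```python
# Toy allosteric enzyme: a regulator bound at the allosteric site
# reversibly reconfigures the enzyme; only the active form catalyzes.

class AllostericEnzyme:
    def __init__(self, active_by_default):
        self.default = active_by_default
        self.bound = None            # regulator at the allosteric site, if any

    @property
    def active(self):
        if self.bound == "inhibitor":
            return False             # inhibition: switched off
        if self.bound == "promoter":
            return True              # promotion: switched on
        return self.default          # no regulator bound: default shape

    def bind(self, regulator):       # configuration, not construction
        self.bound = regulator

    def release(self):               # configuration is always reversible
        self.bound = None

    def catalyze(self, substrate):
        return f"product({substrate})" if self.active else None

enzyme = AllostericEnzyme(active_by_default=True)
print(enzyme.catalyze("S"))          # active: product(S)
enzyme.bind("inhibitor")
print(enzyme.catalyze("S"))          # switched off: None
enzyme.release()
print(enzyme.catalyze("S"))          # on again: product(S)
```

The point of the sketch is the asymmetry the chapter describes: disabling a simple enzyme requires dismantling it, whereas an allosteric enzyme is merely toggled and can rejoin the reserve fleet intact.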
Allosteric enzymes give the cell functional flexibility and responsiveness that it does not get with simple enzymes. For example, if a function is performed by a simple enzyme, a cell needing that function on short notice must scramble to transcribe and translate the relevant gene; it must construct the enzyme from scratch. But an allosteric enzyme can be kept in reserve, like a bench player, present but inactive, to be switched on temporarily by deploying a suitable regulator. Construction is not needed, only configuration. Like a jackhammer, such an enzyme has an on/off switch. The same holds true if an enzyme’s catalytic function is no longer needed. There is only one way to disable a simple enzyme, and that is to dismantle it physically by breaking it apart into its constituent amino acids. But if the unneeded function is performed by an allosteric enzyme, it can be disabled temporarily by deploying a suitable regulator. It then joins the reserve fleet until it is needed again. Construction and destruction are permanent, but configuration is reversible. What is the origin of the regulatory molecules that cause allosteric enzymes to reconfigure themselves? For now, let’s just say that some are generated by the cell itself and some are found in the environment. The important takeaway is that in almost all cases, regulators and enzymes bind to one another based on their complementary three-dimensional shapes, much like the lockand-key binding of a substrate. But not in all cases. Sometimes the allosteric shape change is induced not by three-dimensional pattern matching, but by onedimensional pattern matching, in other words, by a sequence. One-dimensional allosterism is found in three old and ubiquitous enzymes, the workhorses at the heart of sequence processing in the living world. 
The three—DNA polymerase, RNA polymerase, and the ribosome—are responsible for DNA replication and for constructing the cell’s protein interactors.16 They are allosteric, but their allosteric behavior is constrained not by the shape of a regulator but rather by the input of very short patterns within longer DNA or RNA sequences.17 Each of the three has its own equivalence class of several closely related substrates—nucleotide bases or amino acids—so the enzyme must distinguish slight differences among a group of molecules that are similar in shape and size. Any substrate can be called for at any time. The enzyme’s structure “determines only the class of substrates and external signals that are acceptable for use by the enzyme,” write geneticist David Galas and colleagues, “and not the specific substrate in this class that is to be accepted.”18 If it were a police lineup, all of the suspects would be similar in height and weight and have the same hair, eye, and skin color. For example, DNA polymerase leads the molecular team that replicates the sequence of the entire genome in cell division. As it ratchets itself along the DNA molecule, its job is to match the base it encounters with the complement of that base so that the sequence is replicated accurately. DNA polymerase is
really four enzymes in one, with four possible substrates, each roughly similar in size and shape. If it reads a C in the sequence, that C configures it to select a G and ignore the As, Ts, and other Cs. But if it reads an A, then it selects a T and ignores the rest. At one step it might select a G and ignore an A, but at the next step it might have to select an A and ignore a G.19 “In one situation, the active site . . . must discriminate sharply against a certain substrate,” say Galas and colleagues, “and in another situation must discriminate in favour of this same substrate with the same degree of accuracy” (emphasis theirs).20 The most complex example of allosterism is the ribosome, which executes the mapping of the genetic code from sequences of RNA to sequences of amino acids. The ribosome’s regulator is not a single base in a sequence but rather a codon, a three-base sequence of messenger RNA. When configured by this short pattern, the ribosome selects its substrate, a transfer RNA (tRNA), from an equivalence class of at least 31 similar molecules. It selects one and ignores the other 30, even though the next input codon will most likely specify one of the others. It has at least 31 allosteric states. Most allosteric enzymes, however, are regulated by individual molecules. Because they respond to the three-dimensional shape of their regulator, we can call them shape readout. But in sequence-based allosterism, like the ribosome, configuration is regulated by sequence segments, short patterns within a longer sequence. These enzymes must read bases before selecting their substrates, so we can call them base readout. They are the most primitive machines that can be configured by an input sequence. Allosterism is one of the many mechanisms by which the cell’s genome organizes the behavior of its enzymatic interactors. Enzymes are configured to modify their activity by regulatory molecules or by short sequences of DNA or RNA. 
But this does not tell us where the enzymes come from in the first place: when and where they are constructed. Unsurprisingly, the process of enzyme construction is also subject to complex regulation.
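The base-readout idea can be made concrete in code. In this toy sketch (which ignores real polymerase biochemistry), the template base read at each step configures the "enzyme" to select exactly one substrate from the equivalence class {A, T, G, C}, so that a base favored at one position is discriminated against at the next:

```python
# Toy base readout: at each step the template base configures the
# polymerase-like function to accept exactly one of four similar
# substrates and ignore the other three.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def replicate(template):
    """Select the complementary base at every position of the template."""
    return "".join(COMPLEMENT[base] for base in template)

print(replicate("CATTAG"))              # GTAATC
print(replicate(replicate("CATTAG")))   # CATTAG — replicating twice restores it
```

The one-dimensional pattern, not any fixed three-dimensional regulator, does the configuring: the same machinery selects a G at one step and rejects it at the next, exactly the behavior Galas and colleagues describe.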
5.3 Blue- and White-Collar Sequences

When he’s not eating, Peter likes to drive around in his truck. He also enjoys playing chess. As we saw in Chapter 1, these are two very different kinds of activities. Driving is a rate-dependent, dynamic behavior in which small changes in rate can have a big effect on the outcome. Chess, on the other hand, is rate-independent.21 The time it takes for a chess match to be played has no effect on the outcome; only the sequence of moves is important.22 John Searle explains the distinction like this. Some rules, like the rules of the road, govern what he calls “antecedently existing forms of behavior,” like driving. “For example, the rule ‘Drive on the right-hand side of the road’ regulates driving in the United States,” he says, “but driving can exist independently of this rule.”23 Allosterism is like this. Regulatory molecules configure
antecedently existing enzymes. Leopard calls configure antecedently existing monkeys. But the rules of chess are different. They “do not just regulate,” Searle says, “but they also create the possibility of the very behavior they regulate. So the rules of chess, for example, do not just regulate pushing pieces around on a board, but acting in accordance with a sufficient number of rules is a logically necessary condition for playing chess, because chess does not exist apart from the rules.”24 In that sense chess is like mathematics, a set of self-referential rules for manipulating one-dimensional patterns. Sequences that constrain rate-dependent activities, like driving, are profoundly different from those that constrain only other sequences, like the rules of chess.25 This distinction exists wherever we find sequences, and it is the first clue to how we might explain the emergence of abstraction. What is the job description for the general manager of a factory? Blue-collar workers on the factory floor have well-defined job descriptions, often repetitive physical tasks. But the general manager’s job is not of the same kind. Her job description can be explained only in reference to the boundary conditions she places on the behavior of the workers she manages. She may not tighten a bolt directly, but she can regulate how and when bolts are tightened. And she can impose these detailed boundary conditions on hundreds or even thousands of blue-collar workers. In the cell, molecular biologists group enzyme interactors and their governing genes into two classes: structural and regulatory.26 Structural enzymes, whether simple or allosteric, perform catalytic manipulation; they tighten the bolts. Regulatory enzymes, on the other hand, orchestrate the magnitude and timing of structural enzyme activity. 
If the cell is a factory, structural enzymes are the blue-collar workers and regulatory enzymes the white-collar workers, or labor and management.27 Blue-collar enzymes engage directly with the molecular world of the cell. But white-collar enzymes do not effect change themselves; instead, they coordinate the work of other enzymes. They are boundary conditions on boundary conditions.28 Some white-collar enzymes operate as general managers, master regulators capable of unleashing a cascade of other white-collar enzymes, which in turn can unleash further cascades. When we attempt to describe their function, we are carried far away from the straightforward one enzyme-one reaction model of the blue-collar interactor. Here the cascade triggered by a master regulator can lead to a dramatic change in the developmental fate of organs and body parts. The genes for these general manager enzymes, the master regulators, are called homeotic. More generally, a gene with multiple downstream effects is called pleiotropic.
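The control hierarchy of a master regulator can be sketched as a small graph traversal. Everything here is hypothetical, the gene names invented for illustration; the point is only that a white-collar regulator never does chemistry itself, yet activating one node unleashes a cascade with many downstream effects, which is what pleiotropy amounts to in this toy picture:

```python
# Hypothetical regulatory network: white-collar genes switch on other
# genes; blue-collar genes (the leaves) do the actual chemistry.
REGULATES = {
    "master": ["manager_1", "manager_2"],   # white-collar -> white-collar
    "manager_1": ["enzyme_a", "enzyme_b"],  # white-collar -> blue-collar
    "manager_2": ["enzyme_c"],
}

def cascade(start):
    """Collect everything ultimately switched on by activating one gene."""
    activated, frontier = set(), [start]
    while frontier:
        gene = frontier.pop()
        if gene not in activated:
            activated.add(gene)
            frontier.extend(REGULATES.get(gene, []))
    return activated

print(sorted(cascade("master")))
# one activation, six genes switched on: a boundary condition on
# boundary conditions
```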
Despite having identical genomes, during development cells differentiate into bone cells and nerve cells and pancreas cells. Given that all cell types in your body are built by a common set of blue-collar enzymes, how is this possible? The answer is regulation. Functional differences across tissues arise when the blue-collar enzymatic workforce is organized into a control hierarchy by white-collar sequences and their interactors. François Jacob and Jacques Monod won the Nobel Prize in 1965 for the discovery of gene regulation in the bacterium E. coli.29 They observed an intriguing fact about the three-enzyme metabolic pathway responsible for converting lactose into the more useful sugar glucose. What they found intriguing is that the pathway kicks into operation only when lactose is abundant. The pathway is on call to do its work only when conditions are right, when its function is needed by the cell. No lactose around? Why should the bacterium waste time and energy transcribing the genes and constructing the enzymes? The system they discovered is called the lac (for “lactose”) operon, and it remains the standard model for teaching students how gene regulation works in bacteria. (An operon is the genetic regulatory unit of bacteria.) In the lac operon, three blue-collar enzymes govern the lactose-to-glucose pathway, a three-step process with two intermediate products. Conveniently, their respective gene sequences are organized as next-door neighbors, laid out end-to-end along the chromosome.30 Because they are sequential neighbors, they are easy to transcribe and regulate as a group. Just down the block in the same neighborhood is a fourth gene, but this one describes a white-collar regulatory enzyme called a repressor, which governs the expression of the other three. The substrate of the repressor is not an individual molecule but rather a short, specific pattern of DNA. 
The repressor clamps itself to this snippet (called the operator), creating a roadblock that prevents the three blue-collar genes from being transcribed. This is the system’s normal state. With no lactose present, the roadblock is in place and transcription of the three blue-collar genes in the pathway is turned off. However, the repressor is allosteric. This means one particular regulatory molecule can come along and cause the repressor to change shape and release its grip on the DNA, thereby removing the roadblock to transcription. And exactly which molecule is that regulator that causes it to change shape? Conveniently, it just so happens to be a molecule of lactose. The lactose molecule switches on the pathway responsible for its own metabolism. It triggers the repressor to release its grip on the DNA so that the genes in the lac pathway can be transcribed. As Searle would say, it creates the possibility of the very behavior it regulates. The lac operon was the first evidence of a regulatory control hierarchy among sequences of DNA. The sole function of the white-collar repressor sequence is to describe an enzyme that regulates transcription of three other sequences. Enzymes like this, that clamp to DNA (and sometimes to each other), are called
transcription factors, and there are thousands of different ones in every cell. Since the discovery of the lac operon, researchers have found this kind of genetic self-reference and hierarchical control to be standard operating procedure throughout the living world. Most metabolic pathways are called to action only when needed.
In a canonical rumination on life called Chance and Necessity, Monod puzzles over the question of how an everyday sugar molecule like lactose came to control expression of the pathway responsible for its own metabolism. While “physiologically useful or ‘rational,’ this relationship is chemically arbitrary— ‘gratuitous,’” he says (emphasis mine). “This fundamental concept of gratuity [is] the independence, chemically speaking, between the function itself and the chemical signals controlling it.”31 As Richard Dawkins would say, the blind physical forces are being deployed in a very peculiar way.32
Monod’s gratuity principle permeates biological and cultural systems. There is no physical or chemical reason that expression of an enzyme should be controlled by the availability of its substrate. There is no physical or chemical reason that a wolf should react one way when it spies a moose and another way when it encounters another wolf. Or that “please pass the pepper” should result in an animal following an improbable series of trajectories. We are seeing, says Searle, “the imposition of a special kind of function on brute physical entities that have no natural relation to that function.”33 Searle says this of language, but it is true of all systems of sequences. Life does not require any new laws of physics or chemistry, just a set of peculiar constraints. We start with physics and chemistry and we get biology for free, gratuitously. By the same token, human culture does not require any new biological principles, just another set of peculiar constraints. We humans start with biology and we get civilization for free.
In both cases the peculiar constraints assume the form of sequences.
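The control logic of the lac operon can be caricatured as a single conditional. Below is a minimal sketch in Python, not a biochemical model: it ignores, among other things, the operon's additional sensitivity to glucose levels, and the function and variable names are my own.

```python
# Minimal sketch of the lac operon's control logic (not a biochemical model):
# the repressor blocks transcription unless lactose binds it allosterically.

def lac_operon_transcribes(lactose_present: bool) -> bool:
    """Return True if the three blue-collar pathway genes are transcribed."""
    repressor_clamped_to_operator = not lactose_present  # lactose releases the clamp
    return not repressor_clamped_to_operator

assert lac_operon_transcribes(False) is False  # no lactose: roadblock in place
assert lac_operon_transcribes(True) is True    # lactose present: pathway on
```

The gratuity is visible in the sketch: nothing in the logic requires that the triggering input be lactose rather than any other signal.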
5.4 When Is a Sequence Not a Sequence?
Consider the ubiquitous yellow highlighter, beloved tool of many scholars. If I am reading a paper in hard copy, I always have one nearby. Perhaps I am reading about self-reference and highlight the sequence self-reference each time the author uses it. When I reread the paper and wish to find the passages where self-reference is discussed, I don’t have to match the sequential pattern self-reference word-for-word, letter-for-letter. Why should I, when I can scan for the color yellow, a faster and simpler means of randomly accessing the sequence of interest? But if I am reading an electronic copy of the same paper, I use the control-F search function to locate each instance of self-reference. The computer scans the entire text for the pattern of zeros and ones that corresponds to self-reference. The computer cannot use a highlighter; the bits stored in memory are impossible to
locate based on color. Either there is a precise bit-for-bit match of the sequence or there is not. This is how computers do random access, strictly through sequential pattern-matching, like base readout in the cell. The ability to locate a short pattern within a longer sequence is crucial to the development and operation of cells and civilizations. We met some of these short sequences in the previous chapter when we discussed the grammar of extension. In the lac operon, for example, the repressor needs to locate and bind to a specific snippet (the operator) in order to disable the pathway. Finding a short, unique pattern among millions of bases in the genome is quite a trick, and it is made possible only by rate independence. Practically speaking, you can locate a short pattern only if the sequence is stable enough to search. In the cell, DNA is a stable molecule and, among humans, a written text is more stable than a spoken utterance, and therefore much more amenable to random access. When we use texts in everyday situations, sometimes we are concerned not only with the one-dimensional patterns of the sequences, their base readout, but also with their physical characteristics, their shape readout. Yellow highlighters are one example of shape readout, but underlining, boldface, italics, all-caps, and font size are all used to call our attention to certain sequences. Instead of saying “this is important” or “pay attention to this,” we emphasize a sequence by changing some physical property of the elements themselves. When a handwriting expert testifies in court, her testimony concerns the shape of the sequences, not their meaning or interpretation. Morse code operators can identify one another through subtle rate-dependent differences in timing, what they call an operator’s fist. When typewriters were ascendant, forensics experts could confirm that a particular document came from a particular typewriter by examining the characteristics of the letters themselves.
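The computer's side of this contrast, base readout as exact symbol-by-symbol matching, can be sketched in a few lines. This is a naive illustration (real search tools use faster algorithms, and the sample string is arbitrary), but the either/or character of the match is the point:

```python
# Base readout as exact pattern matching: either a precise symbol-by-symbol
# match of the sequence or nothing at all.

def find_all(text: str, pattern: str) -> list:
    """Return every index at which pattern occurs exactly in text."""
    hits = []
    for i in range(len(text) - len(pattern) + 1):
        if text[i:i + len(pattern)] == pattern:  # bit-for-bit match or no match
            hits.append(i)
    return hits

sample = "ATCGGAATTGTGAGCGGATAACAATTTCAC"  # an arbitrary string of bases
print(find_all(sample, "TGTGAG"))  # → [8]
```

Nothing about the search speeds up, slows down, or approximates; there is no analogue of glancing for a patch of yellow.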
All of these cases center on physical properties of sequences rather than their meaning or semantics. It is the difference between the sentences “That essayist writes beautifully” and “That calligrapher writes beautifully.”34 It would be a strange computer indeed that slowed down its processing for emphasis when it came to the important part of a computation. However, when we speak, you and I have access to a range of rate-dependent tools to achieve emphasis. We gesture with our hands and bodies, change our tone of voice, pause, or make a face. In most cases we have no choice but to gesture; rate-dependent gestures and rate-independent sequences of speech are fundamentally entangled. Changing the typography of a written sequence is our way of preserving this entanglement in our texts. Computers have no rate-dependent options for emphasis. They are stuck with sequential pattern-matching, with base readout, which they are very good at.
One in ten genes in the human genome encodes a transcription factor.35 The binding target of a transcription factor, like the lac repressor, is a specific short pattern of As, Cs, Ts, and Gs that must be recognized quickly and with precision. Many kinds of enzymes bind to specific snippets of DNA or RNA,
not only transcription factors like the lac repressor but also ligases and restriction enzymes and spliceosomes and others. For these and similar white-collar enzymes, write molecular biologists Mair Churchill and Andrew Travers, “the ability to discriminate between a DNA-binding site of a specific sequence and a random sequence is absolutely essential for correct function.”36 When a transcription factor searches through millions of base pairs to locate its target DNA sequence, is this more like a human recognizing a yellow highlight or is it more like a computer searching memory for a specific pattern of bit strings? Does the transcription factor recognize some physical character of the sequence, like its shape, or does it align to it base-by-base? Are we witnessing rate-independent base readout or rate-dependent shape readout? It may be both. So entangled are these modes that in many cases researchers cannot tell which is dominant. “The role of DNA shape may be biochemically inseparable from base readout,” write molecular biologist Namiko Abe and colleagues. “As DNA shape is a function of the nucleotide sequence, it is difficult to tease apart whether a DNA binding protein favors a particular binding site because it recognizes its nucleotide sequence or, alternatively, structural features of the DNA molecule.”37 Imagine that the sequences CGCG and ATAA were carved out of wood so that you could hold one in each hand. If you closed your eyes and I handed them to you, how would you know which is which? Would you use base readout or shape readout? Perhaps you would respond, “Well, I feel a C and here is a G and here is another C and here is another G, so this one must be CGCG.” That would correspond to base readout. Or you might respond, “Well, all of the letters are round and not pointy, so this one must be CGCG.” That would be shape readout.38 Transcription factors face the same quandary in recognizing short sequences. 
Two properties influence whether the transcription factor recognizes its target more by shape readout or more by base readout. One is length. The shorter the sequence the more amenable it is to recognition by shape alone. Most sequences targeted by transcription factors number only in the dozens of bases in length. A sequence 100 bases long would be almost impossible to recognize strictly by its shape. The longer the target pattern, the more likely it can be identified only by its sequence. The other is prevalence. A sequence commonly found within the cell and across species in evolutionary time is more likely to be recognized by shape. Most DNA snippets targeted by transcription factors are in widespread use in the cell and have been conserved throughout the history of life. These are the consensus sequences, the closed-class deictic field of the cell. “Evolutionary processes would typically prefer careful perception to complex cognition,” write Rob Withagen and Anthony Chemero.39 Shape readout requires careful perception, base readout complex cognition. As with the yellow highlighter, shape readout is faster and more reliable.
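The length effect has a simple statistical side: in a long random sequence, a k-base pattern is expected to occur by chance roughly N/4^k times, so short targets recur constantly while long ones are effectively unique. A back-of-envelope sketch, assuming uniform and independent bases (the genome length below is an order of magnitude, not a measurement):

```python
# Expected chance occurrences of a k-base pattern in N random bases:
# roughly N / 4**k, assuming uniform, independent bases.

def expected_random_hits(genome_length: int, k: int) -> float:
    return genome_length / 4 ** k

N = 3_000_000_000  # order of the human genome
for k in (6, 12, 24):
    print(k, expected_random_hits(N, k))
# a 6-base target recurs hundreds of thousands of times by chance;
# a 24-base target is effectively unique
```

A short consensus snippet is therefore never unique in the genome on statistical grounds alone, which is part of why its recognition can lean on shape rather than on an exhaustive base-by-base search.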
This is also true of natural language. The ten most common verbs in English are irregular (be, have, do, go, say, can, will, see, take, get). Irregularity is linguistic shape readout and regularity is base readout. There are no uncommon irregular verbs; the pressure to regularize, to evolve from shape readout to base readout, is too intense. In fact, this pressure appears lawful: “Irregular verbs regularize at a rate that is inversely proportional to the square root of their usage frequency,” write computational biologist Erez Lieberman-Aiden and colleagues.40 Successful regulation depends on allosteric enzymes like transcription factors recognizing specific short sequences within much longer sequences. It is not always clear whether recognition is constrained by the physical shape of the sequence or by the order of bases. But it is clear that this kind of random access is only reliable when the overall sequence structure is stable, like a written text or a DNA molecule.
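The quoted regularization law can be put in quantitative form: if a verb's usage frequency is f, its regularization rate scales as 1/sqrt(f), so its expected persistence as an irregular scales as sqrt(f). A small sketch of the relative version (the function name is mine, and no proportionality constant from the study is used):

```python
# Lieberman-Aiden et al.: regularization rate ∝ 1/sqrt(frequency),
# hence persistence as an irregular verb ∝ sqrt(frequency).

import math

def relative_persistence(freq_a: float, freq_b: float) -> float:
    """How much longer verb A is expected to stay irregular than verb B."""
    return math.sqrt(freq_a / freq_b)

# A verb used 100 times as often persists about 10 times as long.
print(relative_persistence(100.0, 1.0))  # → 10.0
```

This is why there are no uncommon irregular verbs: at low frequency the expected persistence collapses, and shape readout gives way to base readout.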
5.5 Who Constrains the Constraints?
Regulatory self-reference manifests itself in three processes of the living world: (1) in ongoing metabolism, such as the lac operon; (2) in development, the construction of multicellular organisms from single-celled zygotes; and (3) in the long process of evolution itself. Evolutionary and developmental biologists have shown that in most cases new species are created and constructed by reclassifying the control hierarchies governed by regulatory sequences.41 The diversity of life has more to do with white-collar genes than with blue-collar genes. In 1969 molecular biologists Roy Britten and Eric Davidson were the first to point out that evolution “might indeed be considered principally in terms of changes in the regulatory systems.” These, they continue, “make possible the origin of new functions and possibly even of new tissues and organs. The model supplies an avenue for the appearance of novelty in evolution by combining into new systems the already functioning parts of preexisting systems.”42 Just as alternative splicing allows the modular functional domains of proteins to be recombined to yield new enzyme functions, existing white-collar regulatory systems can be rearranged to yield new body plans and developmental pathways. This point was brought home in 1975, when Mary-Claire King and Allan Wilson showed that certain protein sequences in humans and chimpanzees are 97 percent identical. How can two species be so different despite sharing almost identical sequences? They concluded that most differences are due to white-collar regulation of the same underlying set of blue-collar enzymes.43 Subsequently it has been shown that alternative splicing accounts for some of the differences44 and regulation of transcription for the balance.45 Bacteria are simple compared with wolves and moose. They do not worry about building out multicellular body plans or differentiating into muscles,
nerves, and pancreases or coordinating communication among cells to yield coherent high-level behavior. The lac operon is amazing, but its function is confined to metabolism. Even more amazing are the regulatory networks and control hierarchies that govern development in multicellular organisms.46 In all cases, the basic model of transcriptional regulation remains the same. A gene or group of genes is transcribed (or not) depending on the presence or absence of allosteric transcription factors that bind to short snippets of DNA called promoters, upstream from the gene.47 “Only a relatively small, finite number of protein structural motifs have evolved which have the capacity to recognize and bind to a specific DNA sequence,” write Davidson and molecular biologist Isabelle Peter. “With few exceptions, all animals utilize the same families of transcription factors, representatives of which were evidently present in their common evolutionary ancestor.”48 They, too, form a closed class. Unlike bacterial operons, however, multicellular organisms often regulate transcription with not just one but multiple transcription factors. Usually it is not just a single white-collar enzyme clamping to the DNA, but a scrum of transcription factors, typically two to four but sometimes as many as 15. They clamp not only to their DNA target but also to each other, resulting in a unique shape for each combination.49 Transcription is fine-tuned depending on which factors are present, how they fit together, and how they achieve random access. In multicellular organisms it takes a village to transcribe a gene. One further twist on the regulation of transcription is worth a brief mention because it demonstrates another way in which the three-dimensional and one-dimensional properties of sequences can be entangled.
In the lac operon and in most gene regulation in multicellular organisms, the promoter sequence to be recognized by the relevant transcription factors is near the gene or genes being transcribed. How could it be otherwise? The promoter provides a randomly accessible physical attachment point to block or accelerate nearby transcription. But it is not always so neighborly. In some cases, the promoter is a great distance, in sequence terms, from the gene or genes it regulates. Researchers have discovered genes that are regulated at promoter sites one million or more bases from the gene itself. This regulation at a distance constitutes an “unexpected uncoupling of regulatory DNAs from their target promoters,” write molecular biologist Michael Levine and colleagues.50 How is this possible? How can a promoter regulate a gene that is hundreds of thousands or millions of base pairs away, perhaps even on another chromosome?51 Imagine a coiled garden hose lying on a patio. Take a red marker and mark two points spatially close to one another but on adjacent coils of the hose. If you uncoil the hose you will find the two marked points separated by a large linear distance. When the hose is coiled, their three-dimensional distance is negligible, but when uncoiled their one-dimensional distance is substantial. Regulation at a distance does not result from adjacency in a sequential sense but rather from adjacency due to the dynamic three-dimensional twisting of the
DNA molecule. Like the red marks on the garden hose, the promoter sequence regulates transcription only when the coiling of the chromosome brings the promoter spatially close to the gene it regulates. Successful gene transcription is hostage to a physical folding procedure which is out of its control. “Chromatin folding is a multi-scale problem and all scales need to be understood, as regulatory information resides at all levels,” say molecular biologists Boyan Bonev and Giacomo Cavalli.52 Here we meet another level of folding, of the entanglement of the rate-dependent and rate-independent. Just as catalytic function depends on the rate-dependent folding of an enzyme, the expression of a gene can depend on the rate-dependent folding of a chromosome. “Why organisms employ such a complex, large-scale mechanism to regulate gene expression remains unknown,” write molecular biologist Takanori Amano and colleagues.53 The answer, Traut would say, is because “every possible strategy has evolved and is actively used.”54
Simon’s successful watchmaker built her watch by constructing a series of stable, self-contained subassemblies which she then put together to produce the finished product. Simon coined this metaphor in 1962, and molecular biologists have since shown that this is how natural selection builds multicellular organisms. We have seen that enzymatic interactors are constructed from Lego building blocks through alternative splicing. In a similar fashion, organisms are constructed through a control hierarchy of what are called gene regulatory networks, or GRNs. Evolution proceeds by tweaking and rearranging these modular GRNs. Gene regulatory networks are organized and deployed during development by a finely tuned white-collar workforce of transcription factors. These modular, self-contained networks build the basic body plans of all multicellular organisms.
“The basic control task is to determine transcriptional activity throughout embryonic time and space,” write Peter and Davidson, “and here ultimately lies causality in the developmental process.”55 Most gene regulatory networks are extraordinarily stable; some have remained unchanged for hundreds of millions of years. “Because development of the body plan is caused by the operation of GRNs,” Davidson says, “evolutionary change in the body plan is change in GRN structure occurring over deep time. Evolution and development emerge as twin outputs of the same mechanistic domain of regulatory system genomics.”56 The basic idea is centuries-old. “It is intriguing to realize,” Peter and Davidson write, “that the intuitive hierarchical arrangement of animals generated by Linnaeus 200 years ago reflects patterns of evolutionary conservation of certain features of the underlying developmental GRNs.” As a result, “species-specific characters are essentially highly adaptive decorations superimposed on more widely utilized common body plans.”57 Much remains to be learned about how the cell’s white-collar bureaucracy of regulatory enzymes choreographs the activity of its blue-collar enzymatic
workforce across time and space. There are several mechanisms to remember. First is genes in pieces. An interactor is defined by its ability to recognize a salient feature of the environment and respond with an appropriate behavior. With genes in pieces and alternative splicing, binding and catalytic domains are combined and recombined to construct individual interactors from a set of molecular Legos. Linear DNA patterns evince a grammar of interaction that independently describes the perceptual and behavioral elements of affordances. A second mechanism is extension. A closed-class set of grammatical markers allows the cell to recognize and respond to affordances outside the context of here-and-now. A third mechanism is allosterism. A regulatory molecule, an allosteric constraint, configures the catalytic behavior of an antecedently existing interactor. No construction is necessary; the allosteric enzyme was built at some other place and time. Allosteric configuration may be orchestrated by transcription factors, but they are not active players in the transaction. Deployment and coordination of interactors are orchestrated by gene regulatory networks. Stable across evolutionary time, these networks depend on a conserved vocabulary of short-sequence binding sites for combinations of white-collar transcription factors. Gene regulatory networks guide the real-time regulation of metabolism and behavior, and the build-out of the body plan during development. And as the networks evolve higher and higher levels of hierarchical control, their apex sequences drift further and further from the nuts and bolts of chemistry and metabolism. They become more abstract.
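The layered control described in this section can be caricatured in a few lines of code: one master-regulator sequence switches other regulatory sequences, which in turn switch structural genes. The gene names below are invented for illustration; the point is the pleiotropy of the apex sequence.

```python
# A toy regulatory cascade (names are invented): a master regulator
# activates white-collar transcription factors, which activate batteries
# of blue-collar structural genes.

CASCADE = {
    "master": ["tf_a", "tf_b"],        # master regulator switches two TFs
    "tf_a": ["enzyme_1", "enzyme_2"],  # each TF switches structural genes
    "tf_b": ["enzyme_3"],
}

def downstream(gene: str) -> set:
    """All genes ultimately switched on by expressing `gene`."""
    result = set()
    for target in CASCADE.get(gene, []):
        result.add(target)
        result |= downstream(target)  # follow the cascade recursively
    return result

# The master regulator is pleiotropic: one sequence, many downstream effects.
print(sorted(downstream("master")))  # → ['enzyme_1', 'enzyme_2', 'enzyme_3', 'tf_a', 'tf_b']
```

Note that the master sequence never touches metabolism itself; its "meaning" is nothing but the set of sequences it reaches downstream.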
5.6 The Grounding of Abstraction
Potato chips afford eating, but what does freedom afford, and how do you perceive it? What about oppression? What about something?58 Although these sequences fit the lexical category of noun and are freely used as nouns in English texts, they do not refer to perceivable objects in the same way as potato chip or pepper grinder or violin. And whatever actions they afford are not clear-cut motor behaviors like eating, passing, or playing. Setting aside any ideal Platonic referents of freedom and oppression, we are left with the question of how abstractions come to constrain our behavior. How do sequences that do not describe tangible affordances gain access to the machinery of our perceptual and behavioral systems? In cells and civilizations, the mechanisms of sequence storage, copying, and expression are indifferent to what is being stored, copied, or expressed. Useful sequences, nonsense sequences, destructive sequences, abstract sequences, all are treated equally without fear or favor. “‘Copying’ can be relatively insensitive to the function of the copied element,” says geneticist Eva Jablonka. “DNA polymerase copies with the same fidelity both nonsense and sense sequences, and nonsense and sense symbolic messages can be copied through some forms of blind imitation (through devices like a photocopying machine, for example).”59
As Claude Shannon says, “the semantic aspects of communication are irrelevant to the engineering aspects.”60 The universality of the mechanisms of storage, replication, and expression gives systems of sequences the ability to grow in complexity. They allow for language to both describe and instruct, for statements in the metalanguage to use the same alphabet and grammar, and for the deictic and lexical fields to interleave. To this mechanism, description and instruction are indistinguishable, as are language and metalanguage, and deictic and lexical sequences.61
How do abstract sequences like freedom connect to the world? Or do they? Some cognitive scientists argue that thought can be modeled by computers using algorithms to transform sets of sequences into other sets of sequences, with no necessary connection to the material world.62 Stevan Harnad explores this question in a paper called “The Symbol Grounding Problem.” “Suppose,” he says,

you had to learn Chinese as a second language and the only source of information you had was a Chinese/Chinese dictionary. The trip through the dictionary would amount to a merry-go-round, passing endlessly from one meaningless symbol or symbol-string (the definiens) to another (the definiendum), never coming to a halt on what anything meant.63

A dictionary relates sequences to one another but not to the world. Harnad concludes that even the most abstract sequences of symbols, those farthest removed from material reference, “turn out to depend on behavioral considerations.”64 Relating sequences of language to the world requires the perceptual and motor behavioral interactors called human bodies. No matter how abstract, all sequences ultimately are grounded in the material environment in which our bodies perceive and act.
How can we have such sequences that are lexically and grammatically nouns, yet which have no material counterpart in the world and are not perceived by human sense organs?65 The answer requires another excursion to the cell. We have seen that systems of sequences evolve complex laminates of hierarchical self-reference and regulation, layers of constraints reclassifying other constraints. The genetic regulatory apparatus includes thousands of transcription factors which govern the timing and levels of gene expression, plus myriad options for alternative splicing that recombine subsequences of genes after they have been transcribed but before they are translated into protein. Although transcription factors are proteins like the blue-collar enzymes that manage metabolism, and although they are the result of the same transcription and translation procedures, they never engage in the metabolic chemistry of the cell. Transcription factors are enzymes whose transcription can be regulated by other transcription factors, whose transcription can in turn be regulated by yet other transcription factors. They operate at a higher level of abstraction,
sometimes many regulatory layers away from the nuts and bolts of metabolism and development. When the ancient Israelites demanded freedom, they could not point to freedom the way they could point to a brick or a whip. Nonetheless, at an abstract level, their demand for freedom cascaded through many hierarchical layers of constraint until it finally influenced the perception and behavior of thousands of individual interactors in the real world. As Harnad would say, freedom turned out to depend on behavioral considerations. Were it a gene, we would say freedom is pleiotropic. In the cell, certain high-level transcription factors can regulate expression of an entire cascade of other transcription factors that in turn regulate the expression of batteries of blue-collar structural genes.66 In this sense the genes for high-level transcription factors are like freedom or oppression; they do not refer to any material object in the world, or to any molecule in the cell. Like freedom or oppression, they are pleiotropic. They can have broad and profound effects on the real-world perceptions and behaviors of many enzymatic interactors simultaneously. This is the grammar of abstraction.
For example, one white-collar transcription factor is described by what is called the optix gene. This sequence of DNA regulates the color of butterfly wings. “Optix behaves like a paintbrush that can be applied anywhere on a butterfly to modulate color,” write evolutionary biologist Linlin Zhang and colleagues. “The ‘hands’ guiding optix can be any number of upstream patterning agents. In one species, these agents might allow optix to disperse orange across large portions of the wing, while in another they may decorate eyespots with fine chromatic filigree.”67 By disabling (“knocking out”) optix across many butterfly species, researchers found that it regulates melanin production and iridescence, among other colors and patterns.
“Optix operates as an ‘or’ switch to toggle between discrete pigmentary and structural color fates for scale cells,” say Zhang and colleagues. “We were surprised to find that a complex and multifaceted trait like color identity, which is interwoven with so many different genetic, biochemical, and morphological features, has such a simple and discrete regulatory underpinning.”68 Asked to describe the function of optix, what could you say but it depends? The meaning of optix is only known by its effects on downstream sequences, some of which are grounded in real metabolism. It is an abstract, pleiotropic sequence like freedom, constraining a spectrum of patterns across a variety of circumstances. “Optix demonstrates how a single master regulator can be selectively redeployed to radically alter a range of independent traits,” say Zhang and colleagues.
Closer to home for me personally is a pleiotropic gene called FMR1, which encodes an RNA binding protein. I am the father of four. The youngest is Jonah, who does not express the FMR1 sequence. Clinically, this means he is
affected by Fragile X Syndrome.69 An X-linked disorder, it affects males most seriously. The Fragile X phenotype is far from uniform, but it includes cognitive impairment, language disorders, autism spectrum behaviors, flat feet, protruding ears, flexible joints, a high palate, and large testicles. The FMR1 gene has been shown to increase or decrease the expression of 435 other genes.70 How does one gene have such different effects in so many body parts and cognitive functions? Like optix and other abstract regulatory sequences, FMR1 choreographs many other genes in a variety of developmental and metabolic pathways. We explain its meaning or function only by reference to a specific context, maybe speech, maybe ears, maybe feet.
In the sequences of human language, abstract nouns like freedom govern our ability to recognize and respond to a variety of affordances in a variety of contexts. Grammatically they behave like concrete nouns, but functionally they are not. They, too, are pleiotropic. We think we understand the meaning of abstract nouns and verbs, and perhaps we can come up with dictionary definitions or even offer examples. However, their functionality ultimately depends on how they constrain human behavior in response to real-world affordances. “Human language must have been produced by evolution,” says C. H. Waddington. “Evolutionary forces— and that means natural selection, the only evolutionary force we know of— could have produced it only if it produced effects.”71 Like all sequences of symbols, even the most abstract nouns and verbs must be grounded. Symbolic information must actually get control of physical systems.
“The most plausible source for substantive, functional, and formal universals of language is a universal semantics, determined in turn by the specific nature of our interactions with the world,” says Mark Steedman.72 Through the grammar of interaction, sequences can construct and configure interactors to perceive affordances—salient features of the environment— and to respond appropriately. The grammar of extension allows sequences to describe affordances that are distant in time and space and which have varying degrees of probability. But it is the self-referential power of sequences, the ability of sequences to be affordances for other sequences, that lets them grow without limit in scale and complexity. This is the grammar of abstraction.
Notes
1. Sherrington 1953
2. Traut 2008
3. Linguists reading this discussion of transitivity may recognize its resemblance to the operation called merge in Chomsky’s Universal Grammar. Merge is defined as “a primitive operation that takes n objects already constructed, and constructs from them a new object” (Chomsky 2005). Unlike the merge operation, transitivity as discussed here applies to real interactors in the physical world rather than to purported mental objects related to sequences.
4. Tsukamoto et al. 1983; Smith 1994
5. Brenner 2010
130 The Grammar of Abstraction
6. Maynard Smith & Szathmáry 1995
7. For example, an enzyme called methane monooxygenase can hydroxylate 150 substrates besides methane (Copley 2003).
8. Monod et al. 1963
9. Woese 1973
10. Traut 2008
11. Kendrew 1967
12. The allosteric enzyme meets the three criteria used by Pattee to define a switch (1974). We usually envision switches physically as simple mechanical or electrical devices, but a switch is better defined on the basis of its function. First, a switch has “at least one time-independent stable configuration,” which for the enzyme is its tertiary structure. Second, a switch has “a selective, sensitive, but reliable trigger which causes a sudden change of state or action,” which for the enzyme is the act of binding and catalysis. Finally, a switch must “return to the initial state—reset, repeatability,” which is a characteristic of any catalyst.
13. Woese 1973
14. Traut 2008
15. Allosteric constraints function only within the context of a larger system of sequential constraints. We may discuss them as discrete signals, but their meaning or function results from the robust sequential system in which they reside.
16. For our purposes, the ribosome can be considered an enzyme, although it is an enormous complex made up of both protein and RNA.
17. There is a fourth class of enzymes called reverse transcriptases that are also configured by input sequences. These enzymes transcribe sequences of RNA into DNA, the reverse of normal DNA-to-RNA transcription. They are found in retroviruses such as HIV and also perform certain specialized tasks in multicellular organisms. However, their functions are secondary to the central roles played by DNA polymerase, RNA polymerase, and ribosomes throughout the living world.
18. Galas et al. 1986
19. The RNA polymerase that transcribes from DNA to messenger RNA also has the ability to recognize one molecule as substrate and then immediately ignore that molecule and choose another in the next round.
20. Galas et al. 1986
21. Computers are the ultimate rate-independent mechanisms, which is why they can be programmed to play chess very well, but so far have not been programmed to drive very well.
22. Moves in tournament chess are timed, but that is an external constraint.
23. Searle 2010
24. Searle 2010
25. But wait a minute, you might say. Don’t the rules of chess constrain our trajectories as we push pieces around on the chessboard? Aren’t we the interactors governed by the rules? Yes, but the board itself is a mnemonic and heuristic device for human players. Two computers can play chess without a board, as strictly a sequence vs. sequence competition.
26. Structural genes and enzymes are sometimes called effectors (Peter & Davidson 2015).
27. Charles Bennett describes this distinction as the difference between “molecular manufacturing” and “molecular politics” (1979). Richard Lewontin calls it “the transfer onto biology of the belief in the superiority of mental labor over the merely physical, of the planner and designer over the unskilled operative on the assembly line” (1992). David Baltimore says it is “the business school approach rather than the engineering approach” (1984). Economist Philip Auerswald points out that “those who author, store, and enforce [sequences of] code by definition have more power than those who execute code” (2017).
28. Molecular biologist Maria Rivera and colleagues have shown, in bacteria at least, that two classes of genes comparable to blue-collar and white-collar—information processing and cell operation—have different evolutionary histories (1998).
29. Jacob & Monod 1961
30. This is the general definition of a bacterial operon: a group of functionally related genes expressed together.
31. Monod 1971
32. Dawkins 1983
33. Searle 1995
34. Daniels 1996
35. Hauser et al. 2016
36. Churchill & Travers 1991; see also Schleif 1988
37. Abe et al. 2015
38. Linguists may recognize this thought experiment as a tactile version of the “kiki-bouba” distinction (Ramachandran & Hubbard 2001; Aryani et al. 2020).
39. Withagen & Chemero 2009
40. Lieberman-Aiden et al. 2009
41. Carroll 2008
42. Britten & Davidson 1969
43. King & Wilson 1975
44. Calarco et al. 2007
45. Khaitovich et al. 2006
46. Alternative splicing, for example, does not exist in bacteria but is universal among multicellular organisms.
47. The promoter in eukaryotes is roughly equivalent to the operator in a bacterial operon.
48. Peter & Davidson 2015
49. Peter & Davidson 2015
50. Levine et al. 2014
51. Furlong & Levine 2018
52. Bonev & Cavalli 2016
53. Amano et al. 2009
54. Traut 2008
55. Peter & Davidson 2011
56. Davidson 2010. When biologist Ernst Haeckel first proposed his famous if flawed maxim “ontogeny recapitulates phylogeny,” he was observing that the same set of stable subassemblies that emerged in evolutionary time also emerge in the course of development (Haeckel 1903).
57. Peter & Davidson 2015
58. Of the most general level of abstraction, linguist Leonard Talmy observes: “The five English words someone, something, do, happen, and be, plus a few grammatical morphemes for tense, modality, and the like, can in construction encompass virtually all conceptual phenomena with sentences like Someone did something, Something happened, and Something is” (2000, emphasis his).
59. Jablonka 2004. This is a fundamental property of communication systems. As mathematician Warren Weaver writes: “from the point of view of engineering, a communication system must face the problem of handling any message that the source can produce” (1949).
60. Shannon 1948
61. This is what it means for sequences to be energy-degenerate. “For information-carrier macromolecules of the same class (e.g., all DNA molecules) it is crucial that they be indistinguishable from each other except for the spatial order in which the molecular units that embody the information are arranged,” says physicist Juan Roederer. “This means that chemistry has to be indifferent to the particular sequence involved and that information-carriers of the same class all have to be energetically equivalent” (2005).
62. Newell 1980; Pylyshyn 1984
63. Harnad 1990
64. Harnad 1990. In his paper “Minds, Brains, and Programs,” John Searle puts the matter more bluntly: “No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding would understand anything?” (1980).
65. “Providing a convincing account of [abstract concepts] has become particularly urgent” in the field of embodied cognition, write psychologist Anna Borghi and colleagues, “given that they are more complex and less directly related to sensorial experience” (2019). See also Borghi et al. 2017.
66. Krumlauf 2018
67. Zhang et al. 2017. See also Lewis et al. 2016.
68. Zhang et al. 2017
69. Hagerman et al. 2017
70. Greenblatt & Spradling 2018
71. Waddington 1972a. As William Croft writes, “Success in communication can only be evaluated by the subsequent behavior of the addressee” (2002).
72. Steedman 2009
6 THE CONUNDRUM OF REPLICATION1
6.1 What Is Self-Replication?

For the last three chapters we have examined how sequences function through interactors, how they actually get control of physical systems. In the language of biology, we have been studying the metabolism, the behavior, the phenotype. We have paid scant attention to reproduction, how the one-dimensional patterns in genetic sequences survive intact from generation to generation, even as their interactors perish, or how the sequences of language persist despite the mortality of their speakers. From antiquity through the medieval period, scribes were among the most learned and respected scholars in any community. Our literate technological civilization has little use for scribes, and much of their daily work sounds to the modern ear like pure drudgery: painstakingly accurate manual reproduction of the sequences of texts. The technologies of printing, movable type, photocopying, and electronic storage have made scribes redundant; the replication of texts is inexpensive and effortless. These days if you want to witness the tedious letter-by-letter copying of sequences, the best spot to visit is a kindergarten classroom. This is where young human primates are introduced to the sequential mysteries of the “three Rs,” reading, ‘riting, and ‘rithmetic. One-dimensional patterns are easy to copy. It is reasonable to assume that this is why the living world has settled on sequences as the scaffolding of choice for natural selection. To replicate the linear arrangement of a parent sequence to a daughter sequence, all that is needed is a rote physical mechanism, like a DNA polymerase or a photocopier or a five-year-old. Sequences of DNA have been replicating for almost four billion years using a biochemical process common to all living things. Sequences of human speech have been around for much less time, perhaps a few hundred thousand years.
They, too, are replicated using a common mechanism, the human being. But only for the last 5,000 years have technologies of literacy enabled accurate replication and archival storage of linguistic sequences. With speech your ability to replicate a sequence you have heard depends upon your memory. But with written texts, scribes can copy sequences of texts they have never seen, even of languages they do not understand, or we can deploy machines like printing presses and computers. But we are getting ahead of ourselves. Sequences are one simple solution for replicating patterns in the natural world, but can we rule out other possibilities? It is reasonable to ask why living things don’t just replicate themselves directly, without any use of sequences at all. Replication is simple in principle; it is a kind of constraint. To reproduce an object means that the structure of one object constrains the structure of another; the original imposes boundary conditions on the duplicate. Three-dimensional copying is easy to imagine for a stable solid object of uniform composition, such as a horseshoe. To reproduce a horseshoe, you need one machine to measure the shape and analyze the composition of the parent object and a second machine, something like a 3-D printer, to use those measurements to craft an identical daughter object. But it gets complicated quickly when we move from direct replication of solid, homogeneous objects like horseshoes to objects with varying compositions and multiple moving parts, like horseshoe crabs. Problem number one is choosing when to make the initial measurement. If the parent is a dynamic system with constantly changing states, how does the reproductive apparatus decide which state is appropriate for measurement and replication? Time must freeze for the state to be copied. This is easy for a rate-independent sequence that does not move, where time is already frozen, but not so easy for a rate-dependent mechanism where everything is in motion.
Further, how can you make all of the required measurements without demolishing the function?2 To freeze the motion of a functioning machine, take it apart, measure and analyze all of its components, and reassemble it to its original dynamic state is a formidable challenge. Even a simple organism like a bacterium could not survive being rudely disassembled and reassembled like that. Reverse engineering is a well-understood technical discipline, but it is rarely accomplished without destroying multiple instances of the target object. Imagine reproducing a horseshoe crab through direct replication. There are billions of individual parts, like enzymes and cells. Inspecting and reproducing each part without killing the organism would require an apparatus many times more complicated than the organism itself. As we shall see in the next section, it seems that to reproduce the somewhat complex, you need the even more complex. Francis Crick points out that constructing an enzyme using only other enzymes leads to an infinite regress because those enzymes would need to be synthesized by even more enzymes.3
Some simpler mechanism is required for reproduction of a complex system, especially if random changes are to be introduced and preserved through natural selection. Today, in the words of psychologist Susan Blackmore, we know this as copying-the-instructions rather than copying-the-product.4 Instead of trying to replicate a horseshoe crab directly, nature copies the instructions for building a horseshoe crab and then uses the copy of the instructions to construct the replica crab.
6.2 Von Neumann and the Problem of Complexity

Hungarian polymath John von Neumann is known for his contributions to computer science, game theory, economics, and quantum mechanics. Less well known is his work in biology, in formalizing the logical requirements for self-replication. This work is summarized in The Theory of Self-Reproducing Automata (“TSRA”), which was compiled and published posthumously by von Neumann’s colleague Arthur Burks in 1966.5 Although publication of TSRA was delayed until the field of molecular genetics was well underway, von Neumann’s theory was largely complete by 1950, a few years before Watson and Crick described the structure of DNA. Nonetheless, von Neumann correctly predicted much of the process of how construction and replication work in the cell.6 “So far as we know, the basic design of every microorganism larger than a virus is precisely as von Neumann said it should be,” writes physicist Freeman Dyson. “Everything we have learned about evolution since 1948 tends to confirm that von Neumann was right.”7 Do not be put off by the term automaton, which was a favorite of von Neumann’s but sounds vaguely archaic to modern ears. Outside the field of discrete mathematics, computer scientists rarely use it anymore. Science fiction writers and filmmakers sometimes use the word interchangeably with robot and social scientists sometimes invoke it when describing a cog-in-the-machine vision of the contemporary industrial workplace. Von Neumann’s meaning, though, is less loaded. Historically, automata were elaborate mechanical devices designed to behave in a lifelike way, like cuckoo clocks or Victorian mechanical toys.
To von Neumann and other computer scientists of his generation, an automaton was, by Merriam-Webster’s definition, a “machine or control mechanism designed to follow automatically a predetermined sequence of operations or respond to encoded instructions.” Thus, an automaton is any device, including a computer, which can be programmed or made to operate algorithmically via encoded instructions, otherwise known as sequences. In von Neumann’s view, living things are special kinds of automata, able to reproduce, self-construct, and, especially, evolve. During the last decade of his life, von Neumann became intrigued with the study of complexity or, as he often calls it, complication. His view of
complication refers not to structure, how a mechanism is built, but rather to its function, how it behaves: “I am not thinking about how involved the object is, but how involved its purposive operations are,” he says. “In this sense, an object is of the highest degree of complexity if it can do very difficult and involved things.”8 A microprocessor is a complicated device, but its complexity pales in comparison to the factory full of equipment needed for its fabrication. An automobile is a complicated device, but not as complicated as the automated lines responsible for its components and final assembly. It is clear that the more complex can readily build the less complex. We can imagine a factory designed to build an automobile; it is less clear how an automobile might evolve into such a factory. By this logic, building a horseshoe crab would seem to require a mechanism far more complicated than a horseshoe crab. “If an automaton has the ability to construct another one, there must be a decrease in complication as we go from the parent to the construct,” says von Neumann.

If A can produce B, then A in some way must have contained a complete description of B. In order to make it effective, there must be, furthermore, various arrangements in A that see to it that this description is interpreted and that the constructive operations that it calls for are carried out. In this sense, it would therefore seem that a certain degenerating tendency must be expected, some decrease in complexity as one automaton makes another automaton.9

But living systems, of course, have figured out not only how to reproduce but also how to build descendants of greater complexity. Decay of complexity from original to duplicate “is in clear contradiction with the most obvious things that go on in nature,” admits von Neumann. “Organisms reproduce themselves, that is, they produce new organisms with no decrease in complexity.
In addition, there are long periods of evolution during which the complexity is even increasing. Organisms are indirectly derived from others which had lower complexity.”10 This problem inspires von Neumann to model how a system might increase in complexity over time. He identifies a tipping point,

a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.11

This transition from the degenerative to the explosive Pattee calls the threshold of complication.12
But before an automaton can build something more complicated than itself, it should at minimum be able to build something exactly as complicated as itself, in other words, to reproduce without degeneration. So, von Neumann sets about trying to formalize self-reproduction, to identify and explain this explosive threshold of complication. In so doing, he logically demonstrates the dual role of sequences as constraints in a self-reproducing system. One role is construction, the other replication.
6.3 Turing’s Pristine Sequences

One source of von Neumann’s interest in the puzzles of complexity was the Turing Machine.13 This breakthrough in the theory of computation was studied by many of the great mathematical minds of his generation.14 Von Neumann saw the Turing Machine as a useful model for explaining complexity, and a high-level understanding of the Turing Machine will set us up to join von Neumann as he probes the problem of self-reproduction. Although it is called a machine, and although mathematicians often use mechanical diagrams as visual props to communicate the idea, the Turing Machine is not a machine at all. It is a formalized, self-referential system of sequences, and it has no necessary connection to the physical world of matter and energy. And, popular movies notwithstanding, the Turing Machine has nothing to do with the Enigma encryption device the Nazis used during World War II, which mathematician Alan Turing and his colleagues famously managed to decipher. A Turing Machine comprises three elements:

• A tape divided into cells into which one-dimensional sequences of symbols are written (typically, 0 and 1). The length of this tape is unlimited in both directions. (That the tape is infinite is our first clue that the Turing Machine is not a real device.)
• A read/write head to read the tape sequence one symbol at a time, either leaving the symbol unchanged or replacing it with its binary complement, and then moving the tape one cell to the left or to the right.
• A table of state transitions to define the specific calculation to be performed. The table determines the behavior of the Turing Machine at each step: read the symbol, change the symbol or leave it unchanged, move the tape left or right, and switch to some specific next state. There is also no upper limit on the size of the table of state transitions.
The Turing Machine is deterministic. The unique pairing of the symbol on the tape with the internal state specifies what happens next: whether the symbol is overwritten or left alone, whether the tape moves right or left, and the transition to a new state. To add two numbers, for example, they would be placed on the input tape and a table of state transitions would be defined for addition. The
calculation would proceed and once completed the answer would appear on the tape. The procedure repeats until either it completes its calculation by reaching some final state (halting) or, for some computations like calculating the value of π, it continues indefinitely. As tedious and repetitive as it sounds, the power of the concept is its universality. With a suitably defined table of states, a Turing Machine can compute any mathematical function that can be computed, period. “Turing’s machines may be clumsy and slow,” writes computer scientist John Kemeny, “but they present the clearest picture of what a machine can do.”15 If you can express a problem in mathematical symbols, then a Turing Machine can be designed to compute it. This is called the Church-Turing Thesis; for computer scientists it is the definition of computability.16 As conceived earlier, Turing Machines do not scale well. Each kind of problem needs its own machine designed for that problem, just like the cell with its army of special-purpose enzymes. To solve all mathematical problems would demand an infinitely large fleet of Turing Machines. The cell partly solves this challenge with allosterism; enzymes can be configured to do more than one thing.17 Turing solves his scalability problem in much the same way, through what he calls the Universal Turing Machine (“UTM”). It eliminates the need for a fleet of unique machines; one machine can do it all. The UTM can be configured to behave like any special-purpose Turing Machine. Universality is possible because of a clever trick of sequential self-reference: the internal table of state transitions for a Turing Machine can be written in linear arrangements of the same zeros and ones used for its inputs and outputs. In other words, any Turing Machine can be completely described by a sequence on the tape of another Turing Machine, a UTM.18 Take a simple computation like addition on a desktop calculator.
This calculation could be performed by a special-purpose Turing Machine, but it can also be performed by a UTM configured to behave like a desktop calculator. All that is needed is for the table of state transitions for the desktop calculator to be written as a sequence of zeros and ones on the input tape. This configures the UTM to behave as though it were a desktop calculator. It can then perform the desired computation, like addition or subtraction. If this reminds you of the computers we encounter every day, it should. The UTM is the basis for every general-purpose stored-program digital computer in the world, from smartphones to servers in the cloud. The table of state transitions is what we call a program. Every time we open a program or fire up an app, we are configuring the universal general-purpose computer to behave as though it had been designed for some specific task. We even have little programs that cause our computers to behave like desktop calculators. The Universal Turing Machine leads us back to von Neumann’s work on complexity. The UTM is an abstract, self-referential system of sequences.
Its inputs are one-dimensional patterns, its outputs are one-dimensional patterns, and it is itself written as a one-dimensional pattern. This means self-reproduction is not a conundrum for a UTM; all it needs to do is copy the zeros and ones of its own table of state transitions. Von Neumann sees the Turing Machine as a good starting point. “For the question which concerns me here, that of ‘self-reproduction’ of automata, Turing’s procedure is too narrow in one respect only,” he writes. “His automata are purely computing machines. Their output is a piece of tape with zeros and ones on it. What is needed for the construction to which I referred is an automaton whose output is other automata.”19 In other words, the computer, the physical UTM, is qualitatively different from the medium of its inputs and outputs. Reproducing the one-dimensional patterns of the software is not much of a challenge; the trick is reproducing the hardware. “I do not consider it essentially different whether the medium is a punched card, a magnetic wire, a magnetized metal tape with many channels on it, or a piece of film with points photographed on it,” von Neumann says. “In all these cases the medium which is fed to the automaton and which is produced by the automaton is completely different from the automaton.”20 Computers output patterns of zeros and ones; they do not output other computers. However, when von Neumann looks at the biological world, he sees living things with the ability to produce outputs like themselves, to copy their hardware. “A complete discussion of automata,” he concludes, “can be obtained only by taking a broader view of these things and considering automata which can output something like themselves.”21
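The table of state transitions described above is concrete enough to sketch in code. The following is a minimal illustrative simulator, not anything from Turing or von Neumann; the function and state names, the “_” blank symbol, and the unary-addition program are my own assumptions.

```python
# A minimal Turing Machine simulator (an illustrative sketch; all names and
# the example program are assumptions, not from the text). The table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state).

def run_turing_machine(table, tape, state="start", halt="halt", blank="_"):
    cells = dict(enumerate(tape))    # sparse tape, unbounded in both directions
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)              # read the symbol
        write, move, state = table[(state, symbol)]  # look up the transition
        cells[head] = write                          # write or rewrite the cell
        head += 1 if move == "R" else -1             # move the head
    out = "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))
    return out.strip(blank)

# A special-purpose "machine": add two unary numbers separated by a 0,
# e.g. 2 + 3 is written on the tape as "110111".
add = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "0"): ("1", "R", "seek"),   # merge the two blocks of 1s
    ("seek",  "1"): ("1", "R", "seek"),
    ("seek",  "_"): ("_", "L", "erase"),
    ("erase", "1"): ("_", "R", "halt"),   # erase one surplus 1
}

print(run_turing_machine(add, "110111"))  # unary 2 + 3 -> "11111"
```

The loop mirrors the text's description exactly: read the symbol, change it or leave it, move left or right, and switch state. A Universal Turing Machine would go one step further and accept the `add` table itself encoded as zeros and ones on the tape.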
6.4 Enter the Self-Reproducing Automaton

Von Neumann begins his quest for the logical requirements of self-reproduction by envisioning a fleet of all-purpose automata, little self-contained machine shops equipped with all of the necessary tools. He imagines them to be floating in a reservoir filled with elementary parts.22 As they move among the parts, these automata recognize, select, and assemble the pieces needed to build more automata. Think of the automata as interactors, recognizing the salient features of the parts and assembling them appropriately. Von Neumann’s automaton is a thought experiment—to be imagined, not actually built. He uses the universality of the Turing Machine as a foundation by envisioning his automaton as a universal constructor which, when properly programmed or configured, can construct any conceivable automaton, including itself. This automaton is the Universal Constructor A. In essence, von Neumann “replaced the [Turing] computational hardware with construction hardware,” writes Pattee (emphasis his).23 Just as a UTM can be configured to calculate any function, the automaton A can be configured to construct any automaton.
Like a Universal Turing Machine, von Neumann’s A is programmed by an input sequence, a rate-independent pattern that configures A to select parts from the reservoir and assemble them in a certain order. If there is some automaton called X, the recipe for building X is found in the one-dimensional pattern of the input sequence, which von Neumann calls Φ(X). With Φ(X) as its input, the Universal Constructor A is configured to construct X.
Universal Constructor A

Let X = An automaton
Let Φ(X) = Sequence of instructions for constructing X

Step 1: Input the sequence Φ(X) to configure A
Step 2: A constructs X

This is one way in which von Neumann anticipates how cells work. The Universal Constructor A is like a ribosome, which is constrained by an input sequence of messenger RNA (Φ(X)) to select and assemble the amino acids comprising the target protein X. So far so good. But how does the Universal Constructor A “output something like” itself? For this it obviously needs an input sequence that describes itself which, in von Neumann’s notation, would be Φ(A). Using Φ(A) as an input recipe constrains A to construct a copy of itself (A′). At first blush this seems to be a pretty good model of self-reproduction: just provide A with the input sequence describing itself and out pops a hardware replica. Nothing to it. But it turns out this is not quite enough. Von Neumann recognizes that self-reproduction logically requires not only a second copy of A but also a second copy of the sequence describing A, a copy of the software as well as the hardware.24 So he envisions a second automaton, the Universal Copier B. Like a scribe, this automaton makes an identical copy of any input sequence.
Universal Copier B

Let Φ(X) = Sequence describing the construction of automaton X

Step 1: Input the sequence Φ(X) into B
Step 2: B outputs a copy of the sequence Φ(X)

Adding B to the system solves the problem of how to reproduce not only A itself but also Φ(A), the software as well as the hardware.25 However, we now have a second automaton to deal with, which itself will need to be reproduced in order to reproduce the whole system. Besides A we now also have B, or A+B in combination. Thus, the sequence describing A+B would be Φ(A+B), and
to achieve self-reproduction, Φ(A+B) is input to (A+B). This creates both the replica A′+B′ of the automaton and the replica Φ(A′+B′) of the recipe. Here things start to get tricky. Von Neumann recognizes that the input sequence Φ(A+B) has two distinct and complementary roles to perform, one active and the other passive. In its active role it configures A to construct the automaton A′+B′. In its passive role it serves as a template so that B can copy the sequence. In this it is like the cell, where the mechanism of gene expression is responsible for constructing proteins while a completely separate mechanism is responsible for replicating the genome when the cell divides. The input sequence “is roughly effecting the functions of a gene,” von Neumann writes, and B “performs the fundamental act of reproduction, the duplication of the genetic material, which is clearly the fundamental operation in the multiplication of living cells.”26 The complicating factor is that logically the input sequence cannot perform both roles at the same time. It cannot simultaneously direct the construction of another automaton and serve as a passive template for its own copying. Thus A+B must be able to decide which task to perform. Which will it be: construction first or copying first? Does it reproduce the hardware or the software? Who decides? This conundrum yields a breakthrough. Von Neumann envisions a new level of hierarchical control, a white-collar Control Automaton C to regulate the activities of A+B. The job of automaton C is to tell the system whether it is to operate in construction mode or copying mode. It ensures that the input recipe Φ(A+B) assumes the correct role at the correct time. Of course, once von Neumann adds C to the mix there are now three automata A+B+C requiring description. The new descriptive sequence Φ(A+B+C) is input to A+B+C, and the result is a copy of the A+B+C hardware plus a copy of the Φ(A+B+C) software. 
This self-reproductive process can continue indefinitely. The only limit on population growth is the supply of parts, which is an issue for economics.27 With this formulation, von Neumann solves the first half of his problem.28 Nonetheless, the second half persists. While capable of producing other automata exactly like itself, A+B+C cannot produce “other automata which are more complex and of higher potentialities than itself.” It is one thing to replicate existing function and quite another to develop and retain novel function, to cross the threshold of complication. The real need, von Neumann says, is for something “actually one degree better than self-reproduction.”29 To achieve that one degree better, the completely reliable pattern of self-reproduction must be capable of change. Suppose some innovative automaton D can be described by the sequence Φ(D). If the new sequence is somehow spliced into the existing recipe, the result is a mutated sequence, namely Φ(A+B+C+D). When this new hybrid
142 The Conundrum of Replication
recipe is input to the original automaton A+B+C, the result is a new automaton A+B+C+D plus a second copy of Φ(A+B+C+D). This preserves the new function in both phenotype and genotype, which in turn solves von Neumann’s second problem of how to evolve automata that are “more complex and of higher potentialities.” (Perhaps A+B+C+D is carnivorous, able to disassemble other automata and use their parts for its own purposes.) Crucially, von Neumann assumes that the mutation arises through the genotype sequence Φ(D) and not the interactive phenotype D. Had D spontaneously appeared as a physical automaton, there would be no way to work backwards from the device itself to create its description. “Copying instructions is relatively easy,” writes David Hull. “Inferring the instructions from the product is extremely difficult.”30 In this von Neumann anticipates Francis Crick’s Central Dogma of molecular biology, which will be discussed later.31
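The splicing step can be sketched the same way. Again a toy illustration with invented names: descriptions are "+"-delimited strings, and the point is that the unchanged machinery processes the mutated recipe.

```python
# Mutation arrives through the genotype: splice Phi(D) into the recipe
# and let the ORIGINAL constructor and copier do the rest.

def construct(description):
    """Interpret a description and build the corresponding hardware."""
    return {"parts": description.split("+")}

def copy(description):
    """Duplicate the description without interpreting it."""
    return str(description)

phi = "A+B+C"                 # original recipe Phi(A+B+C)
phi_mutant = phi + "+D"       # splice in Phi(D), yielding Phi(A+B+C+D)

offspring = construct(phi_mutant)   # phenotype now includes the novelty D...
heirloom = copy(phi_mutant)         # ...and the genotype records it
assert "D" in offspring["parts"]
assert heirloom == "A+B+C+D"        # the new function is heritable
```

Note that nothing here works backwards from the built `offspring` to a description; the novelty had to enter as a sequence.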
6.5 Theory vs. Reality in Self-Reproduction

I have not described von Neumann’s TSRA just to show how a great mind tackles a hard problem. He goes a long way toward explaining how sequences can not only constrain the physical world but also make material copies of themselves, how reproduction must be explained in the context of interaction, and vice versa. Combining both into one mechanism, he says, yields “what is probably a—if not the—typical gene function, self-reproduction plus production—or stimulation of production—of certain specific enzymes.”32 Let’s compare von Neumann’s formalization with what actually happens in evolving systems of sequences. He gets many of the concepts right, but not all.33 Von Neumann successfully extends abstract Turing Machine concepts like universality and programmability but makes no attempt to build a self-reproducing automaton, thereby avoiding the material world of physics and chemistry. The results of natural selection are messy, contingent, and entangled, as opposed to the formal purity of the TSRA. The details of implementation in living things are not crucial to his formalization. He is content to stop there.34 Von Neumann acknowledges these issues, remarking that “by axiomatizing automata in this manner, one has thrown half of the problem out of the window, and it may be the more important half” (emphasis mine). He adopts a principled ignorance of “the most intriguing, exciting, and important questions of why the molecules or aggregates which in nature really occur in these parts are the sort of things they are.”35 Now that we know more about how things really work, it is up to us to fill in the blanks left by von Neumann. How does von Neumann’s TSRA compare with the “molecules or aggregates which in nature really occur?” What follows is not exhaustive, but offers a few examples to show the utility of the TSRA model:
• The ribosome as automaton A: The ribosome floats in a reservoir of tRNA-bound amino acids, which are the elementary parts. The ribosome accepts mRNA sequences Φ(X) as inputs and uses the descriptions to select the parts needed to construct X, which is a special-purpose automaton/interactor, an enzyme. The ribosome is also responsible for constructing its own protein elements from the description Φ(A).
• DNA polymerase as automaton B: Replication of the DNA sequence is accomplished by DNA polymerase and a suite of helper molecules. These comprise the automaton B. When provided with an input DNA sequence Φ(X), B makes a copy of the sequence. In cell division, B copies the entire genome Φ(X) all at once. In this it differs from the ribosome A, which accepts input sequences one gene at a time. In the cell, replication and expression operate at different scales. Replication happens once and all at once, while expression happens continuously in a piecemeal fashion. (As we shall see in Chapter 8, replication, too, can happen continuously in a piecemeal fashion.)
• The literate human as automaton A: Given a sequence of written instructions and a reservoir of suitable parts, humans can construct a large variety of physical objects. Common examples include building a birdhouse, cooking an omelet, or erecting a tent. In these and many other cases, the literate human reads the input recipe Φ(X) and selects and assembles the parts to construct X. IKEA has built a large business on this principle.
• The literate human as automaton B: Prior to the development of technologies like printing and photocopying, replication of textual sequences was carried out by scribes. Given an input sequence Φ(X), the scribe could produce a copy. (In some cultures, sacred texts are still hand-copied this way.) The scribe does not need to understand the meaning of Φ(X); replication is not interpretation. The scribe’s function is automaton B, not automaton A.
• The literate human as automaton A configured as automaton B: To create a scribe, one starts with an illiterate human and trains her to read and write. In other words, building automaton B entails configuring automaton A. Given a suitable set of input sequences, automaton A is configured to function as automaton B.36 Cultural institutions called schools are charged with this task.
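The A/B division of labor in the list above can be put in code. The codon assignments used below are real (AUG→Met, UUU→Phe, GGC→Gly, UAA→stop), but the functions are illustrative stand-ins of my own devising, not models of the actual molecular machinery.

```python
# The same sequence read two ways: interpreted (automaton A) or
# copied without interpretation (automaton B). Tiny codon table for
# illustration; the real genetic code has 64 entries.

CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def ribosome(mrna):
    """Automaton A: interpret Phi(X) codon by codon and construct X."""
    protein = []
    for i in range(0, len(mrna), 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

def scribe(sequence):
    """Automaton B: copy the physical pattern; no interpretation needed."""
    return str(sequence)

mrna = "AUGUUUGGCUAA"
assert ribosome(mrna) == ["Met", "Phe", "Gly"]   # construction mode
assert scribe(mrna) == mrna                      # copying mode
```

The scribe function works on any string in the alphabet, whether or not a codon table exists for it, which is exactly the point about scribes copying texts they cannot read.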
Crucial to all of these is the control automaton C, which regulates the activities of automata A and B. Is it time to construct or to copy? Even we humans often need our own C to regulate us. If handed the instructions for assembling a bookcase, what are you supposed to do, construct the bookcase or make a copy of the instructions? In the living world, this control is exercised through a process called the cell cycle, which lets the cell know whether to focus its efforts on metabolism or replication. This decision requires a new level of hierarchical control.
To construct is to fabricate a machine from parts, but implicit in the construction is the previous configuration of the constructor. To configure is to input a description of some behavior to an existing machine, but implicit in the configuration is the previous construction of the machine being configured. The hardware implies the software and the software the hardware. These processes are complementary. Universal Constructor A constructs automaton X from parts, but A itself has been configured by input sequence Φ(X) to perform the construction. Whether the behavior of a system is governed by construction or configuration depends on which element of the system you wish to examine. Allosteric enzymes are constructed by ribosomes based on mRNA input sequences but then are configured by regulatory molecules, which themselves may have been fabricated by ribosomes. The predator calls that configure vervet monkeys are innate, described by sequences in the same genome responsible for the vervets’ construction. Humans, however, are asymmetric.37 Our construction is guided by one set of sequences, the DNA in our genomes, and we are then configured by a completely different set, the speech and text we find in our environment.
6.6 Lessons Learned From von Neumann

What are the important takeaways from von Neumann’s argument, which Pattee acknowledges to be “entirely abstract and by no means logically complete?”38 He says the TSRA illuminates three points bearing on systems of sequences: (1) open-ended expressive power, (2) underdetermination, and (3) the decisive separation of interpretation and replication.

First, it is not enough for the domain of input sequences to describe only existing automata; they must be able to describe automata that have never existed. They need “the expressive power necessary to describe an endless number of new machines,” says Pattee. The system of sequences must be “rich enough in the context of a changing environment to allow open-ended evolution within that environment.”39 Or, in the context of James Gibson, sequences must be able to describe latent affordances, like magnetoreception. The self in self-reproduction is only one of many potential selves, most of which have never existed, but all of which can be replicated using the same universal mechanism. “Beyond a certain point, you don’t need to make your machine any bigger or more complicated to get more complicated jobs done,” says Dyson.

All you need is to give it longer and more elaborate instructions. . . . In evolving from simpler to more complex organisms you do not have to redesign the basic biochemical machinery as you go along. You only have to modify and extend the genetic instructions.40

Second, Pattee emphasizes von Neumann’s intuition that sequential descriptions in the living world often underdetermine. “The natural gene does probably not
contain a complete description of the objects whose construction its presence stimulates,” writes von Neumann. “It probably contains only general pointers, general cues.”41 As Sydney Brenner says, if you want specificity, you have to pay for it with sequence. We know this to be the case in the cell, where a gene guides construction of the primary sequence of amino acids in the protein, but function emerges through folding, which is a dynamic activity under control of physical law. Genetic descriptions cannot be axiomatized like Turing Machines, which do not fold. “Any analogy of a formal Turing Machine with a material organism would break down just because algorithmic computation requires complete, unambiguous instructions for every step,” says Pattee, “while genetic description is not complete and therefore not algorithmic. Genes provide only enough explicit information to survive by constructing proteins.”42 A pepper-passing robot needs its motion specified in advance, but asking Peter to perform the same function requires just a general pointer, a short vocal sequence. Finally, Pattee recognizes von Neumann’s crucial separation of the construction and copying functions. They are not separate as a matter of convenience; it is a matter of necessity. A description Φ(X), when input into A, results in automaton X, but the same description, input into B, results in a second copy of Φ(X). “In order to replicate the whole system, this description must be read twice: once as a state description, and once as a process description,” writes Pattee. “That is, in order to copy the description it must be read once as the state of a material structure without regard to its meaning. Then the description must be read again and interpreted (decoded) as a process description” (emphasis his).43 The literate human faces this quandary when handed a set of written instructions: is the job to obey the instructions or to copy the instructions? 
Suppose I hand you a recipe for baking a cake. Were you a pastry chef your response might be: “Ah, he wants me to make this cake.” A chef reads the recipe as a process description. But were you a clerk your response might be: “Ah, he wants me to make a copy of this recipe.” The clerk reads it as a state description, without regard to its meaning. A sequence, “whatever form it may take, has a physical structure that is independent of its interpretation,” says Pattee. “To read the description means to interpret the description. To copy the description means not to interpret the description but to copy only its physical structure” (emphasis his).44 Sequences lead double lives; they are interpreted to constrain the material world and they are copied so that the constrained system can reproduce and evolve. “The machine reads the instructions twice: first as commands to be obeyed, and then as data to be copied,” says nanotechnologist Erik Drexler and, crucially,

adding more data does not increase the complexity of the data-copying process, hence the set of instructions can be made as complex as is necessary to specify the rest of the system. By the same token, the instructions transmitted in a replication cycle can specify the construction of an indefinitely large number of other artifacts.45
This is the logical reason the cell uses separate polymerase molecules for transcription (RNA polymerase) and replication (DNA polymerase). If it used only a single universal polymerase then a separate controller C would be required to switch the polymerase into the correct mode at the correct time. This principle underlies the genotype-phenotype distinction in evolutionary biology and Crick’s Central Dogma.46 How this distinction is maintained in language expression and replication in humans is not known.
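The point about dedicated mechanisms can be made concrete. In the sketch below (toy functions of my own devising; real polymerases are processive enzymes working on double-stranded templates with primers and proofreading, all elided), transcription and replication are simply different functions, so no mode-switching controller is needed to arbitrate between them.

```python
# Two dedicated "polymerases" instead of one switchable machine.
# Base-pairing rules are the real ones; everything else is a cartoon.

DNA_PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}
RNA_PAIR = {"A": "U", "T": "A", "G": "C", "C": "G"}

def dna_polymerase(template):
    """Replication: read a DNA template, emit the complementary DNA strand."""
    return "".join(DNA_PAIR[base] for base in template)

def rna_polymerase(template):
    """Transcription: read a DNA template, emit the complementary RNA strand."""
    return "".join(RNA_PAIR[base] for base in template)

template = "TACAAAGCC"
assert dna_polymerase(template) == "ATGTTTCGG"   # replication output
assert rna_polymerase(template) == "AUGUUUCGG"   # transcription output
# Neither function carries a mode flag: because expression and replication
# never share a mechanism, no controller C is needed to switch between them.
```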
6.7 Description, Construction, and the Central Dogma

The Central Dogma circumscribes the flow of one-dimensional patterns in the cell, but it also has a strong affinity with the dual modes of interpretation and replication inherent in von Neumann’s model.47 What does it mean to circumscribe the flow of one-dimensional patterns in the cell? It means that “once ‘information’ has passed into protein it cannot get out again,” writes Crick. In more detail,

the transfer of information from nucleic acid [DNA] to nucleic acid [RNA] to protein may be possible, but transfer from protein to protein, or from protein to nucleic acid is impossible. Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.48

Later, Crick refines the problem as “the general rules for information transfer from one polymer with a defined alphabet to another,” and he introduces this explanatory diagram showing the possibilities:
FIGURE 6.1 (diagram of sequential information transfers among DNA, RNA, and PROTEIN)
Source: Crick, Francis H. C. 1970. Central dogma of molecular biology. Nature 227: 561–563.
The solid arrows show the three common, generally accepted linear pattern transfers, from DNA to DNA in replication, from DNA to RNA in transcription, and from RNA to protein in translation. The three dotted arrows show what Crick calls “special transfers” which “do not occur in most cells, but may occur in special circumstances.”49 Of the three, RNA-to-RNA and RNA-to-DNA are established; DNA directly to protein is not. Meanwhile, the diagram prohibits any transfers of sequence patterns directly from protein; they are presumed to be impossible. This is because, as Richard Dawkins says, “the information in a protein is inaccessibly buried inside the knot which the protein ties in itself.”50 Getting a linear pattern out of a folded protein would first require a mechanism to unfold it back to the one-dimensional amino acid sequence. Then it would require a second mechanism to map that sequence further back to a sequence of RNA. This entails reversing the genetic code of three nucleic acid bases for each amino acid. But the genetic code is degenerate. For example, six different triplet codons map to the amino acid leucine; were translation to be reversed, which of the six would be selected? Further, genes themselves are often not continuous, but split. How could the noncoding introns be reconstructed and reinserted into the coding exons? There is clearly no plausible mechanism for getting from a folded protein back to the gene sequence of DNA.51 Crick and von Neumann both argue for the strict separation of interpretation and replication.52 For Crick, the everyday replication of a linear pattern in DNA takes place under the auspices of DNA polymerase and its molecular entourage. There is no way for the pattern to be translated into protein and then back-translated to DNA. For von Neumann, the replication of Φ(X) is performed by automaton B, which is distinct from automaton A.
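The degeneracy argument is easy to quantify. The six leucine codons below are the real ones; the counting function is my own illustration.

```python
# Why back-translation is ill-posed: the genetic code maps many codons
# to one amino acid, so the inverse mapping is one-to-many.

LEUCINE_CODONS = ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"]  # 6 synonyms

def candidate_back_translations(peptide, codons_for):
    """Count the distinct RNA sequences consistent with a peptide."""
    count = 1
    for residue in peptide:
        count *= len(codons_for[residue])
    return count

codons_for = {"Leu": LEUCINE_CODONS}
# Even a ten-residue all-leucine peptide is consistent with 6**10
# (60,466,176) possible source sequences, only one of which was the gene:
assert candidate_back_translations(["Leu"] * 10, codons_for) == 6 ** 10
```

And this counts only coding ambiguity; as the text notes, the introns removed by splicing leave no trace in the protein at all.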
Furthermore, once automaton A has constructed X using the Φ(X) instructions, the one-dimensional pattern of Φ(X) cannot be retrieved through examination of X. This distinction re-emerges at the level of the organism. The genotype comprises the rate-independent sequential pattern of DNA that is replicated from generation to generation. The phenotype is the rate-dependent interactor constructed based on the linear arrangement of the genotype. The phenotype functions in its environment, where its success or failure determines whether its genotype is replicated in the next generation. The genotype has two roles, to guide the construction and operation of the phenotype and to replicate its one-dimensional pattern for the next generation.53 Crick and von Neumann both tell us that separate mechanisms are needed for each or, to reaffirm Pattee, “in order to replicate the whole system, this description must be read twice: once as a state description, and once as a process description” (emphasis his).54
6.8 The Central Dogma of Civilization?

Crick’s Central Dogma and von Neumann’s TSRA agree on the need to separate replication from interpretation. Sequences are either replicated as rate-independent patterns or expressed as rate-dependent interactors, but not both by the same mechanism.55 In the cell the distinction is clear because we know how sequences of DNA are expressed and replicated. Outside the cell, it is more difficult, given how little we know about the mechanisms of the interpretive and replicative processes in literate humans. Despite our ignorance of the mechanisms responsible, sequences of language do in fact get replicated and do in fact behave as dynamic constraints. When you ask Peter to pass the pepper, he might hear you and pass the pepper. But if it is a large and noisy table, some intermediary might need to replicate your request. How do the underlying perceptual, motor, and neural systems of the human phenotype operate so that replication and constraint are possible? Humans must somehow embody versions of automata A, B, and C, but we cannot yet explain how they work, and our self-conscious intuitions are not of much help. A systems-level approach is better. With respect to spoken and written language, the cognitively normal human being operates as a universal mechanism. When exposed to the vocalized input sequences of any language, typically developing human children learn to comprehend and speak that language. When tutored in the written form of a language, children learn to read and write the language. Like Universal Turing Machines, we have been configured by input sequences, but Turing’s device only reads a tape; it has no input that is comparable to human speech. This is where humans and UTMs diverge; learning to listen and to speak are dynamic, socially mediated activities.
A child receives vocalized input sequences amidst a cascade of gestures, facial expressions, and the exaggerated intonations known as motherese.56 Human children do not passively accept input sequences; they do so in a dynamic behavioral and social context in the presence of parents, teachers, and other children. “The most significant moment in the course of intellectual development,” says psychologist L. S. Vygotsky, “which gives birth to the purely human forms of practical and abstract intelligence, occurs when speech and practical activity, two previously completely independent lines of development, converge” (emphasis his).57 We cannot be axiomatized any more than the cell. For reasons that will become clear in Chapter 7, we will narrow the scope of the question by limiting the discussion to sequences of written language. Unlike speech, writing is not as entangled with the physiology of the brain and body; it is an external fact, often a public one. We can observe written sequences being produced. We see the original and we compare the copy. We can also observe written sequences constraining human behavior. We see the sequence and we witness the behavior. Furthermore, literacy is optional. Unlike speech, it is not built into our phenotypes and only persists in cultures where institutions are dedicated to its preservation.58
Reading and writing are skills taught and learned; they are not picked up from the environment the way speech is. Furthermore, literacy is dependent on speech, and not just any speech. We do not learn to read before we learn to speak, and we learn to read and write the spoken language we already know. We would be surprised if a child whose native tongue is Spanish were first taught to read and write Estonian, or if the child were taught to read in one language and write in another.59 We would also be surprised if the child were taught only to read or only to write, although the manual skills needed to copy a written sequence are independent of the comprehension skills needed to interpret one. A five-year-old learns to read and write in the presence of a teacher who trains the child in the dynamic motor behaviors of copying sequences (penmanship) and the semantic mappings between the written and spoken language. These provide written sequences with the functionality needed to constrain behavior. With the sequences of arithmetic, children learn the formal rules that map input sequences to the special output sequences known as the correct answers. Once these five-year-old universal mechanisms have been configured to read and write, they are able to:
• Accurately replicate any written sequence they are given. The input sequence does not have to be in a language they know as long as it uses the alphabet of a writing system in which they have been trained. Copying a sequence in an unknown language is a labored affair, but copying a sequence in an unknown alphabet is difficult, if not impossible.60
• Constrain their own dynamic behavior using input sequences. Examples include instructions for assembling a bookcase, directions to a destination, recipes for making soup, traffic signs, legal judgments, and the rules for playing basketball.
• Produce and subsequently re-input sequences to auto-constrain their own dynamic behavior. Auto-constraints often take the form of lists: to-do lists, shopping lists, etc.61
• Read input sequences requiring output sequences to be written in a specific form. These include mathematical computations, filling in forms and applications, and taking written tests and examinations. This includes the phrasal templating discussed in Chapter 3.
• Produce sequences to constrain the dynamic behavior of others. These take many forms, including lists, recipes, instructions, directions, commands, procedure manuals, etc.
• Perceive aspects of the environment and output sequences. This is the process of measurement and results in descriptions, quantities, metrics, etc.62
Good penmanship requires the same assemblage of fine motor skills, eye-hand coordination, and kinesthetics as do sewing or surgery. However, though
writing itself is a rate-dependent dynamic process, it is no more the point of the exercise than the formation of molecular bonds is the point of DNA replication.63 Reading or writing sequences can always be viewed in physical terms, but the meaning or function of a sequence cannot be interpreted at that level. The teacher of reading and writing configures a five-year-old to perform two independent activities, to replicate and to interpret. To paraphrase Pattee, a written text has a physical structure that is independent of its interpretation. To read the text means to interpret the text. To copy the text means not to interpret the text but to copy only its physical structure. The crucial separation of replication and interpretation demanded by von Neumann and Crick holds in the case of reading and writing, although we are far from explaining the details of the mechanisms responsible. Most literate humans encounter written sequences every day, and over an individual’s lifetime the cumulative number of encounters is vast. At one extreme of scale we have the case of an individual confronting some particular text at some particular time. How do the sequences of text function as constraints? How are they to be replicated? How can we tell the difference? At the other extreme we have the cumulative effects of texts across entire literate populations. How does a corpus of sequences, especially vast collections such as religious or legal texts, constrain the behavior of human groups? How does replication occur at a population level? And how do these processes tie into what we know about the operation of control hierarchies? We shall return to these questions in Chapter 8.
Notes

1. Parts of this chapter are based on Waters 2011.
2. As physicist and Nobel laureate Niels Bohr points out, “we should doubtless kill an animal if we tried to carry the investigation of its organs so far that we could describe the role played by single atoms in vital functions” (1933).
3. Crick 1958
4. Blackmore 1999
5. Von Neumann did not leave behind a finished manuscript but rather fragments of lecture notes and partially completed papers. To compile the finished product, Burks combined these with notes taken by others during several lectures von Neumann gave on the subject in the late 1940s and early 1950s. An alternate summary of the theory is found in the paper “The General and Logical Theory of Automata,” which was based on a 1948 lecture and published in 1961 in Volume V of von Neumann’s Collected Works.
6. Much of TSRA is taken up with von Neumann’s discussion of what are now known as cellular automata, like Conway’s Game of Life (Gardner 1970). While these have proven valuable in computer simulations of evolutionary processes, my concern here is with his formalization of automata which construct other automata, what Burks called von Neumann’s “kinematic model” of self-reproduction, most of which is found in the Fifth Lecture of Part One of TSRA, pp. 74–87. See Arbib (1969), Ransom (1981), and Freitas and Merkle (2004).
7. Dyson 1979
8. Von Neumann 1966. TSRA was published around the time that Howard Pattee was beginning to focus his research on the difference between the living and the nonliving (Pattee et al. 1966). Pattee considers TSRA a breakthrough work, his “most profound historical influence” (2001). “Von Neumann’s points made sense to me,” he writes, “first, because of their sound logic, second, because they explained why none of my computer models of self-organizing systems evolved, and third, because his argument described how all existing cells actually replicate and evolve” (2008).
9. Von Neumann 1961
10. Von Neumann 1961
11. Von Neumann 1966
12. Pattee 1977. In the context of modeling the evolution of the earliest cells, Carl Woese offers a comparable threshold, which he calls the Darwinian Threshold. “The cell is a complex dynamic system,” he writes. “As its connectedness increases such a system can reach a critical point, where a phase change occurs, where a new, higher level organization of the whole emerges. That, I suggest, is what the Darwinian Threshold represents, a hitherto unrecognized phase change in the organization of the evolving cell” (2002, references omitted).
13. Turing 1936
14. Von Neumann himself did much of the important work in applying Turing’s concept to the real world in the form of the programmable digital computer, still known today as the von Neumann architecture (Goldstine 1972).
15. Kemeny 1955
16. Herken 1994
17. The cell also does not have to catalyze every possible reaction, only those it needs to survive and reproduce.
18. This echoes Zellig Harris on sequences: “the grammar which describes the sentences of a language can be stated in sentences of the same language” (1968).
19. Von Neumann 1961
20. Von Neumann 1966
21. Von Neumann 1966
22. Manfred Eigen and Ruthild Winkler envision the reservoir as a warehouse (1981).
23. Pattee 2008
24. Von Neumann’s model assumes the input sequence Φ(A) is destroyed in the process of constructing A′, thus the need for a second copy of Φ(A). However, even if Φ(A) were not destroyed, the population of copies of A would grow slowly if there were only one Φ(A) to be shared among the growing population of automata. A system containing such a bottleneck lacks the explosive quality von Neumann is seeking.
25. The universality of B means it cannot favor certain sequences over others; it must be prepared to copy whatever sequence it receives. This is another way of looking at the property of energy-degeneracy.
26. Von Neumann 1961
27. Von Neumann also did not discuss how the automaton is powered. Energy is tacitly assumed.
28. In the logical build-out of his automaton, von Neumann gives no indication whether he views physical construction, replication, and control as discrete functional building blocks with discrete sequential descriptions. Were A, B, and C to be constructed independently and then expected to self-assemble into a functioning automaton? Or was the complete (A+B+C) constructed as a single unit? He provides no clue.
29. Von Neumann 1966
30. Hull 2000
31. Crick 1958, 1970
32. Von Neumann 1961
33. For instance, there is no way he could have known the input sequence to the ribosome is the end product of a long process of transcription, splicing, etc., or that the sequence to be copied (DNA) is a precursor to a second sequence (mRNA) that constrains construction.
34. One important detail he gets wrong is the one least open to explicit description. He assumes the elementary parts in the reservoir are functional primitives, little muscles and joints and connectors and so forth. In the cell the real elementary parts, the amino acids, are individually without function and gain biological meaning only when their primary sequences fold into functional proteins. Global function emerges through the dynamics of folding rather than the assembly of functional primitives. He does make one correct observation concerning the elementary parts: “one has to use some common sense criteria about choosing the parts neither too large or too small” (1966). He calls this the parts problem. If the parts are themselves large and complex, then “you obviously have killed the problem, because you would have to attribute to these parts just those functions of the living organism which you would like to describe or to understand” (1966). On the other hand, if the parts are too small and simple then the automaton requires a corresponding increase in complexity to manage them: “Any statement on the number of elementary parts required will therefore represent a common-sense compromise, in which nothing too complicated is expected from any one elementary part, and no elementary part is made to perform several, obviously separate functions” (1961). He conjectures “10 or 12 or 15 elementary parts” are needed (1966), which is not far from the 20 amino acids cells actually use.
35. Von Neumann 1966. We hear an echo of von Neumann’s lament about throwing “half of the problem out of the window” in B. F. Skinner’s complaint about the exclusive focus on text rather than context (“axiomatizing”) in linguistics: “Until fairly recently, linguistics and literary criticism confined themselves almost exclusively to the analyses of written records. . . . By dividing such records into words and sentences without regard to the conditions under which the behavior was emitted, we neglect the meaning for the speaker or writer, and almost half the field of verbal behavior therefore escapes attention” (1974).
36. Von Neumann considers A and B to be separate automata. He does not discuss the possibility that A could be configured to operate as B.
37. Richerson & Boyd 2005
38. Pattee 2005
39. Pattee 2009
40. Dyson 1979
41. Von Neumann 1961
42. Pattee 2009
43. Pattee 2008
44. Pattee 2007
45. Drexler 1992; “Replicators carry two sorts of information,” write Kim Sterelny and colleagues. “First, they carry information about the interactions in virtue of which a new generation of replicators are constructed. Secondly, they carry information about the next generation of replicators” (1996). In the context of cultural evolution, “if a replicator passes on its structure directly,” says anthropologist Mark Lake, “then replication must be a process in which symbolic structure is transmitted without decoding. For sure, the symbolic structure often is decoded, but it is part of the process of interaction, not replication” (1998).
46. Crick 1958, 1970
47. Crick christens the Central Dogma in his 1958 paper “On Protein Synthesis,” one of the foundational documents of molecular genetics. In response to criticisms engendered by subsequent research, he refines and restates his definition in a 1970
The Conundrum of Replication 153
paper, “Central Dogma of Molecular Biology.” His proposal remains controversial half a century after its initial formulation; some researchers argue it has been refuted (Shapiro 2009; Koonin 2012), others the opposite (Morange 2006, 2008). Its 50th anniversary was marked by a special issue of the journal History and Philosophy of the Life Sciences (Antonarakis & Fantini 2007). One reason the Central Dogma remains contentious is that it comes in both narrow and broad forms, with the result that scholars sometimes find themselves arguing past one another. In its broad sense, as interpreted by James Watson and Cyrus Levinthal in their textbook Molecular Biology of the Gene, the Central Dogma refers to the cellular flow of sequential information from DNA to RNA to protein (1965). The dogma part insists this is the only way sequential information flows. Crick's own formulation is more limited. While Watson's DNA-RNA-protein sequential flow holds overwhelmingly in the living world, the discoveries of phenomena like reverse transcriptase and prion reproduction have introduced exceptions to the rule at the margin. The reverse transcriptase enzyme reverses the conventional information flow by allowing RNA to be transcribed into DNA; this is done by retroviruses like HIV. Prions, meanwhile, are protein molecules that constrain the conformation of other protein molecules without the intervention of sequences, thus introducing a protein-protein flow. This is not the place for an extended discussion of these exceptions; suffice it to say that they pose a theoretical headache for the broad interpretation of the Central Dogma. Nonetheless, the Central Dogma remains a pillar of molecular biology. “There have been only two great theories in the history of biology that went more than a single step beyond the immediate interpretation of experimental results; these were organic evolution and the central dogma,” writes Gunther Stent (1968).
48. Crick 1958; emphasis his.
49. Crick 1970
50. Dawkins 2004
51. “It was realized that forward translation involved very complex machinery,” writes Crick. “Moreover, it seemed unlikely on general grounds that this machinery could easily work backwards. The only reasonable alternative was that the cell had evolved an entirely separate set of complicated machinery for back translation, and of this there was no trace, and no reason to believe that it might be needed” (1970).
52. Von Neumann provides a general case of the problem. If the Universal Constructor A uses the input sequence Φ(X) to construct automaton X, can the Φ(X) description be retrieved from X once constructed? The problem is more general in von Neumann's model because, unlike the enzyme, the automaton X does not necessarily contain sequence information.
53. Sexual reproduction puts even greater distance between the germline sequences to be replicated and the somatic sequences to be translated into phenotype. The vast majority of cells in a sexually reproducing organism are not in the germline, which frees them from direct reproductive responsibility and allows for far greater specialization. This additional degree of separation, historically known as the Weismann barrier (Sabour & Schöler 2012), further reinforces the distinction between the state description and the process description.
54. Pattee 2008. This description of the genotype-phenotype distinction does not admit the possibility of Lamarckian feedback from phenotype to genotype; somatic changes acquired during the organism's lifetime cannot modify the sequential information in the genotype. The corollary to this is that the two evolutionary steps of variation and selection cannot be collapsed into a single step. Germline mutations occur independently of their phenotypic consequences; organisms cannot pre-select their mutations. Variation is blind to the effects of selection (Campbell 1965).
55. In Wittgenstein's discussion of reading-machines in Philosophical Investigations, he touches on the interpretation-replication distinction: “I am not counting the understanding of what is read as part of ‘reading’ for purposes of this investigation: reading here is the activity of rendering out loud what is written or printed; and also of writing from dictation, writing out something printed, playing from a score, and so on” (1958). Here he treats sequences as state descriptions rather than process descriptions. Worshipers often accurately recite religious liturgies in Latin, Arabic, or Hebrew with minimal knowledge of the meaning. They, too, are treating the sequences as state descriptions.
56. Newport et al. 1977
57. Vygotsky 1978
58. Linguist Andrew Davidson says literacy “is a culturally constructed cognitive niche in which language evolves phylogenetically and ontogenetically” (2019).
59. There are and have been rare exceptions to this involving classical languages used in religious contexts, such as reading sacred texts. In these traditions, children are taught to read and write a language different from their vernacular, like West African children being taught to read and write Arabic, South Asian children Sanskrit, or European children Latin (Goody 2000). In addition, deaf children who learn American Sign Language are, in effect, learning a new language when they learn to read and write English. This has led to significant gaps in achievement, with half of US deaf children reading and writing at a fourth-grade level when they graduate from high school (Strong & Prinz 1997; Hoffmeister & Caldwell-Harris 2014).
60. Zellig Harris calls this the distinction between replication and imitation (1968).
61. These textual artifacts fall under the rubric of situated cognition or the extended mind (Clark & Chalmers 1998; Robbins & Aydede 2008).
62. Most of these processes take one of two forms: either a sequence is read and a sequence is written (sequence in-sequence out), or a sequence is read and a dynamic behavior is constrained (sequence in-behavior out). In some cases, these are concatenated, like when a sequence is written by one person and read by another as a constraint on some future behavior (transitivity applies here). Measurement takes the form of behavior in-sequence out.
63. Calligraphy and manuscript illumination elevate the replication of written sequences into an art form and, until recently, fine penmanship was widely admired. However, esthetic appreciation for the physical characteristics of sequences exists independently of their interpretation. As historian Michael Clanchy writes: “Gutenberg succeeded in automating the scribe but not the artist” (1983). An exquisitely gilded “R” is still just an “R.”
7 THE THRESHOLD OF COMPLICATION1
7.1 What Makes a Threshold?

Inherent in John von Neumann's view of self-reproduction is this threshold of complication, a tipping point, as he puts it, “above which the phenomenon of synthesis, if properly arranged, can become explosive.”2 Once across the threshold, systems can give rise in time to other systems far more complex than themselves. Below the threshold, self-reproduction presumably can continue to chug along, but without much growth in scale, complexity, or diversity. How, then, does a system become properly arranged? Von Neumann is content to theorize about this without diving into the details, although he acknowledges that the details matter. “By axiomatizing automata in this manner,” he says, “one has thrown half of the problem out of the window, and it may be the more important half.”3 In this chapter we will dig into this more important half in order to understand the requirements for crossing the threshold. These are “the most intriguing, exciting, and important questions,” says von Neumann, “of why the molecules or aggregates which in nature really occur in these parts are the sort of things they are.”4

Von Neumann is not alone in proposing the existence of evolutionary thresholds. In their book The Major Transitions in Evolution, John Maynard Smith and biologist Eörs Szathmáry identify no fewer than eight important transitions in the evolution of complexity, each resulting in a change “to the way in which information is transmitted between generations” (emphasis mine).5 Carl Woese concurs, writing that “in the evolutionary course there have been a few great junctures, times of major evolutionary advance. Their hallmark is the emergence of vast, qualitatively new fields of evolutionary potential, and symbolic representation tends to underlie such evolutionary eruptions” (emphasis mine).6
If, as Woese says, there have been “a few great junctures,” one obvious question is “well, how many?” How many times has the threshold of complication been crossed on Earth, revealing boundless new affordances to be perceived and acted upon? We know the answer is at least once; the explosive creativity and diversity of the living world exhibit exactly the open-ended potential von Neumann was trying to explain. But perhaps it could be as many as eight transitions, as Maynard Smith and Szathmáry maintain. Here I will make the case that the threshold has been crossed exactly twice. The first of these crossings took place about four billion years ago at the dawn of life, but the second only about 5,000 years ago with the invention of writing.

Maynard Smith and Szathmáry do not include literacy as one of their transitions. Their eighth and final transition is from primate societies to human societies through the evolution of language. Although spoken language was an extraordinary invention, this chapter will argue that it was insufficient to get us across the threshold. It was only the development of writing—and related technologies of measurement and manipulation—that truly pushed human culture across the threshold of complication. Literacy led culture to become civilization.
7.2 And Before DNA There Was?

Let's begin with the story of the very first threshold. In 2000, researchers in the lab of Thomas Steitz at Yale University published a pair of papers describing the structure of the ribosome at the finest degree of detail ever achieved.7 Their work revealed that the functional core of the ribosome is composed exclusively of RNA rather than protein. While the enormous ribosome molecule comprises both proteins and RNA, it is the RNA apparatus at its center that makes it tick. For this work, Steitz went on to win the Nobel Prize in 2009.

Why was this finding important? In previous chapters, whenever we have discussed catalytic function, we have done so in the context of protein enzymes. And when we have discussed RNA it has been in its role as a sequence. The one-dimensional pattern of messenger RNA, for example, provides the template for assembling amino acids into proteins. With the RNA apparatus of the ribosome, however, we are talking about (1) a catalyst that is not protein and (2) an RNA molecule that is not behaving like a passive template. This is because RNA has a split personality. Like DNA, it can store and replicate one-dimensional patterns but, like protein, it can also fold directly into a three-dimensional shape and behave as an interactor. An enzyme made of RNA rather than protein is called a ribozyme, and ribozymes can do almost anything protein enzymes can do, albeit not as well. “RNA has a breadth of catalytic potential similar to protein enzymes,” write Nobel laureate biochemists Jennifer Doudna and Thomas Cech. “Furthermore, like protein enzymes, ribozymes must fold into specific three-dimensional structures to function catalytically.”8
Researchers studying the origin of life had long speculated that today’s living world, built on its tripartite foundation of DNA, RNA and protein, had been preceded by a simpler biochemical system based on RNA alone.9 In this view, life started as an RNA world which only later incorporated DNA and protein to yield life as we know it. Steitz’s discovery that the ancient machinery of translation was itself made of RNA confirmed what many scientists had been thinking. “There is now strong evidence indicating that an RNA world did exist on the early Earth,” write origin of life researchers Gerald Joyce and Leslie Orgel. “The smoking gun is seen in the structure of the contemporary ribosome.”10 Subsequently, researchers have determined that RNA also forms the active mechanism of the spliceosome, which controls alternative splicing. “The spliceosome, like the ribosome, uses RNA to effect catalysis,” write biochemist Sebastian Fica and colleagues. “Our findings thus support the idea that modern ribonucleoprotein enzymes evolved from a primordial ‘RNA world’, in which catalysis was performed exclusively by RNA.”11 So we now know that some of the most ancient and conserved mechanisms in the living world are tiny RNA machines. RNAs are more than just sequences; as Joyce puts it, they are “the only molecules known to function as both genotype and phenotype.”12 This entangled nature of RNA—its capacity for both rate-independent pattern storage and rate-dependent molecular manipulation—makes it the likely platform for the earliest stages of life on Earth. The idea was first proposed a half-century ago,13 but it was Walter “genes in pieces” Gilbert who bestowed its name. “One can contemplate an RNA world,” he writes, “containing only RNA molecules that serve to catalyze the synthesis of themselves” (emphasis mine).14 Speculation about the RNA world centers on the split personality of RNA and the ribozyme. 
There are many approaches to the question, but most researchers agree on three principles: (1) Early in the evolution of life, genetic continuity was assured by the replication of RNA sequences rather than DNA sequences; (2) only ribozymes performed catalytic manipulation, not genetically encoded proteins; and (3) replication of the linear patterns of RNA occurred through complementary base pairing, just like today.15 One of the earliest and most important ribozyme functions, then, must have been replication of RNA sequences, the first self-catalyzed replication. However, this does not mean that some individual ribozyme literally would replicate its own molecular self. One limitation of RNA’s split personality is that a ribozyme must be unfolded into one dimension in order to be copied but can function as a catalyst only when it is folded. Therefore, it cannot do both at the same time. “A catalyst spends a part of its lifetime replicating other molecules,” write computational biologist Nobuto Takeuchi and colleagues, “and during these times, the catalyst itself cannot be replicated.”16 As with von Neumann’s automaton, expression and replication are decisively separated.
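Principle (3), replication by complementary base pairing, can be sketched as a toy computation. This is a deliberately simplified illustration, ignoring strand directionality and all chemistry; the pairing table (A–U, G–C) is the only fact borrowed from biology:

```python
# Toy sketch of template copying by complementary base pairing.
# Copying a strand yields its complement; copying the complement
# regenerates the original sequence.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-paired complement of an RNA sequence."""
    return "".join(PAIR[base] for base in strand)

template = "GGCUAAGU"          # an arbitrary illustrative sequence
copy = complement(complement(template))

print(complement(template))    # → CCGAUUCA
print(copy == template)        # → True: two rounds restore the template
```

The two-round round trip is the point: a copier that knows only the pairing rule can propagate any one-dimensional pattern without understanding what, if anything, the pattern does when folded.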
With the critical replication function in place, we can begin to envision the outline of an RNA world: a large and diverse set of interacting ribozymes orchestrating a primitive self-sustaining metabolism. “One could imagine that all of the reactions of central metabolism, now catalysed by protein enzymes, were once catalysed by ribozymes,” writes Joyce.17 This stable RNA-guided metabolism could then recruit companion molecules: DNA to provide more reliable storage, replication, and random access, and proteins for more precise and potent molecular manipulation. Then and only then could the system cross the threshold. Ribozymes persist to this day, and not just in the ribosome and spliceosome, but “thousands of ribozymes in thousands of organisms,” write Cech and molecular biologist Joan Steitz.18 In fact, an explosion of experimental results has demonstrated the great diversity of roles that RNA sequences continue to play in contemporary metabolism. For most of the history of molecular biology, RNA was understood exclusively in terms of genetic transcription and translation. But researchers have been surprised to discover many new classes of RNAs lurking within the genome which, say Cech and Steitz, “had completely escaped detection and then, suddenly, were found to be widespread and abundant.”19 These new varieties of RNA are changing our understanding of the regulatory landscape within the genome. 
“The emerging evidence suggests that there are more genes encoding regulatory RNAs than those encoding proteins in the human genome,” say molecular biologists Kevin Morris and John Mattick, “and that the amount and type of gene regulation in complex organisms have been substantially misunderstood for most of the past 50 years.”20 This adds yet another layer of sequential complexity to the control hierarchy of transcription factors, alternative splicing, and gene regulatory networks we reviewed in earlier chapters.21 As Morris and Mattick see it, RNA in modern organisms oversees “an ancient, widespread, and multilaterally adapted system that controls many cellular processes, the dimensions of which are still being explored.”22 In the RNAs and metabolic pathways of modern organisms researchers can trace the ghostly features of the long-gone RNA world.23 “It is as if a primitive civilization had existed prior to the start of recorded history,” says Joyce, “leaving its mark in the foundation of a modern civilization that followed.” 24 Of course, primitive civilization did exist before the start of recorded history, but we are getting ahead of ourselves.
7.3 Living by RNA Alone There may be ribozymes and regulatory RNAs in modern organisms, but why are there no longer any free-living descendants of the RNA world?25 Why do even the simplest organisms rely on DNA and protein for sequence storage and
catalytic function? The answer seems to be that systems built exclusively of RNA sequences and ribozymes are subject to limitations which compromise their ability to cross von Neumann’s threshold of complication. In particular, the RNA world suffered from three handicaps. First, RNA is not a stable molecule. Second, RNA replication is subject to a high error rate. And third, as an enzyme, the ribozyme is not terribly potent. These disadvantages do not mean the RNA world never existed—on the contrary, the evidence is strong that it did—but they did place limits on its scalability and evolutionary potential.26 The first handicap is instability. In The Wizard of Oz, the Wicked Witch of the West is unstable in water. So is RNA. Unlike DNA, which is stored as two complementary strands that reinforce each other, RNA is stored as a single strand. The single-stranded RNA molecule is susceptible to hydrolysis; in water the backbone of the sequence can cleave spontaneously.27 Given that water is ubiquitous and essential to life, this is not a good thing. It is especially problematic in replication, which must be fast enough to outrun decay. “In order for a seed copy of replicating RNA to germinate,” says Joyce, “it must produce additional copies of itself faster than the existing copies become degraded.”28 This leads to the second handicap, a high mutation rate in replication. In the living world there is an ever-present tradeoff between speed and reliability; the faster the replication, the lower the fidelity. Compared with DNA, the copying of RNA sequences is intrinsically error-prone.29 The evolutionary impact of this is to limit genome size and complexity. 
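How sharply copying errors cap genome size can be seen in a back-of-the-envelope calculation (an illustration with assumed, not measured, error rates): if each base is miscopied with probability q, an error-free copy of an L-base genome occurs with probability (1 − q)^L, which collapses once L greatly exceeds 1/q.

```python
# Per-base copying errors compound multiplicatively across a genome:
# P(error-free copy) = (1 - q) ** L. Genomes much longer than ~1/q
# are almost never copied intact, capping heritable complexity.

def p_error_free(q: float, length: int) -> float:
    """Probability that a genome of `length` bases is copied without error."""
    return (1.0 - q) ** length

# Assumed, illustrative rates: sloppy template copying vs. proofread copying.
for q in (1e-3, 1e-9):
    for length in (100, 10_000):
        print(f"q={q:g}  L={length:>6}  P(intact)={p_error_free(q, length):.3g}")
```

On these assumptions a sloppy copier keeps hundred-base genomes largely intact but almost never copies a ten-thousand-base genome without error, while a high-fidelity copier handles both with ease.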
“If copying is very error ridden, and selection is weak,” say Kim Sterelny and colleagues, “then noise can swamp selection, and cumulative selection will be unable to build complexly adapted interactors.”30 In the RNA world, sequences were under selection pressure to develop error-correcting mechanisms to improve replication fidelity.31 We don't know how well they succeeded. “When self-replication is first established, fidelity is likely to be poor and there is strong selection pressure favoring improvement of the fidelity,” write Joyce and Orgel. “As fidelity improves, a larger genome can be maintained. This allows exploration of a larger number of possible sequences, some of which may lead to further improvement in fidelity, which in turn allows a still larger genome size, and so on.”32 Here we begin to get a glimpse of the runaway explosive quality von Neumann is after. RNA may be unstable and may suffer from poor replication fidelity, but it gets worse. The third handicap is that, compared to protein enzymes, ribozymes are just not very good catalysts. Modern proteins are “intrinsically a millionfold fitter as catalysts than RNA,” write Steven Benner and colleagues.33 This
is partly because ribozymes are so much smaller than proteins but also because the chemical bonds that give a folded ribozyme its shape are not that strong. Remember that ribozymes need to fold for function but also must unfold for replication. There is a “tradeoff between favorable folding stability and ease of unfolding,” write computational biologist Abe Pressman and colleagues. “More effective ribozymes, with likely higher folding stability, are expected to be less prone to unfold for use as a replication template, and would therefore have lower fitness.”34 These built-in handicaps kept a lid on the evolutionary potential of the RNA world. “The replicating RNA enzyme is the only known molecule that can undergo self-sustained Darwinian evolution,” write Joyce and biochemist Michael Robertson, “but it has limited genetic complexity, and therefore limited capacity for the invention of novel function.”35 The diversity and complexity we see in the living world today only arose when these problems were solved. DNA was recruited to improve storage stability and replication fidelity, and proteins were recruited to improve catalytic function. Division of labor allowed the RNA world to cross the threshold of complication.36,37 “In the beginning, the same entities had to perform both functions necessary for selection,” writes David Hull. “However, because replication and interaction are fundamentally different processes, the properties which facilitate them tend also to be different. None too surprisingly, these distinct functions eventually were apportioned to different entities.”38 As a storage medium for one-dimensional patterns, DNA has advantages over RNA.
First, the DNA backbone is more stable and less prone to hydrolysis.39 Second, DNA is stored in a double-stranded form in which complementary base pairing adds another layer of stability and error correction.40 DNA also replicates with higher fidelity than RNA, which allows for a substantial increase in genome size and complexity. “The primary advantage of DNA over RNA as a genetic material is the greater chemical stability of DNA,” says Joyce, “allowing much larger genomes based on DNA. Protein synthesis may require more genetic information than can be maintained by RNA.”41 And crucially for regulatory function, a stable sequence like DNA enables random access; if you are searching for a specific one-dimensional pattern, it is there to be found. DNA sequences were, in Woese's words, “something unique in the (RNA) world, namely, nucleic acids whose primary value lay in their coding capacity.”42 Replacing RNA with DNA for storage and replication of linear patterns was one key step in the division of labor. The other was the replacement of weakly catalytic ribozymes with strongly catalytic protein enzymes, which improve upon ribozymes in three ways. First, their size and binding specificity enable them to differentiate among a much larger class of potential substrates. Second, their size and complexity enable a larger range of allosteric conformational changes. Finally, their stronger bonds and consequent ability to remain folded allow their catalytic power to increase by orders of magnitude.
Proteins are so much more versatile and powerful that once nature figured out how to use protein enzymes, there was no way any legacy ribozyme-based system could remain dominant. “A life form that did not exploit proteins as catalysts could not have competed with life that did,” write Benner and colleagues.43 This is why there are no free-living RNA-only descendants of the RNA world. Concludes Joyce: “The invention of protein synthesis, instructed and catalysed by RNA, was the crowning achievement of the RNA world, but also began its demise.”44 What are we to make of the RNA world? While remarkable, it was limited in its ability to achieve scale and explosive evolutionary creativity. It “should be viewed as a milestone,” say Robertson and Joyce, “a plateau in the early history of life on earth.”45 We do not know how long it took for life to upgrade from the ancient RNA-only system to the more robust and versatile DNA-RNA-protein system. Nor do we know the steps involved, whether DNA or protein came first or if they were incorporated simultaneously. But we do know this transition allowed the system to cross von Neumann's threshold of complication, making possible the diversity of living forms on Earth today.
7.4 Entangling the Central Dogma

Today's division of labor between DNA and protein was absent from the RNA world; one molecule had to perform both functions. “RNA,” as Cech puts it, “is both an informational molecule and a biocatalyst—both genotype and phenotype.”46 How does this mesh with the work of von Neumann and Crick discussed in Chapter 6? Did the RNA world obey the Central Dogma? Did it fit von Neumann's model of self-reproduction? Von Neumann's TSRA cannot construct and replicate at the same time; it needs the control automaton C to toggle between the two. Similarly, a ribozyme cannot catalyze and replicate at the same time. It must fold up in order to manipulate other molecules and unfold to be replicated as a one-dimensional sequence. But the dynamic self-folding of a ribozyme is not the same as the piece-by-piece construction of an automaton from an input sequence. In some sense, the unfolded ribozyme itself is the input sequence. What von Neumann sees as independent steps—the division between genotype and phenotype—are entangled in a single step in the ribozyme. A ribozyme must routinely fold and unfold: folding in order to catalyze and unfolding to one dimension for its sequence to be copied. How is this decision regulated? What determines whether it should fold for function or unfold for replication? In other words, what fulfills the mission of the control automaton C? We do not know. Presumably natural selection discovered some balance between time spent catalyzing and time spent replicating, a balance which varied for different ribozymes at different times.47 The ribozymes responsible for replication also faced the question of how much time to spend replicating
each other, increasing their own population, versus time spent replicating other kinds of ribozymes.48 Similar issues must have arisen in the regulatory systems of the RNA world. Regulation in modern organisms includes mechanisms like allosterism, where the interactor is regulated directly, and transcription factors that govern whether a gene sequence is expressed. How could regulation be managed when the gene sequence and the interactor are the same molecule? Some ribozymes have been shown to exhibit allosterism; they are called riboswitches.49 But how could something like a transcription factor exist when there is no transcription? Could a regulatory molecule bind to an unfolded ribozyme to prevent it from folding, or would all regulation take place in the folded state? All of this folding and unfolding would have presented an obstacle to the development of complex, precisely timed regulatory networks. The larger the network, the lower the likelihood that all its constituent ribozymes were in the correct state, folded or unfolded, at the same time. The entanglement of rate-independent genotype and rate-dependent phenotype also provided an opportunity for the RNA world to violate the Central Dogma, although we do not know whether this actually happened. In the DNA-RNA-protein world the Central Dogma states that one-dimensional patterns do not flow backwards from the folded protein to genomic DNA. However, when the folded molecule and the unfolded molecule are the same molecule, any changes the ribozyme acquires while in its folded state would be preserved in its sequence. It's downright Lamarckian. If a ribozyme had a bit of itself snipped off during some catalytic encounter, that acquired characteristic would be inherited when the time came to unfold and replicate. Although there are some resemblances, it should be clear that the RNA world violated Crick's Central Dogma and is not a good fit with von Neumann's TSRA.
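The entanglement can be caricatured in code. This is a toy state machine, not a biochemical model; the `snip` mishap and the sequences are invented for illustration. The same string serves as heritable template when unfolded and as catalyst when folded, so damage acquired in the folded state is passed on at replication:

```python
# Toy model of the RNA world's genotype-phenotype entanglement.
# One string is both the heritable sequence (unfolded) and the
# catalyst (folded), so an acquired change is inherited -- the
# "downright Lamarckian" loophole.

from dataclasses import dataclass

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

@dataclass
class Ribozyme:
    sequence: str
    folded: bool = False

    def fold(self):
        self.folded = True       # catalytic, but not copyable

    def unfold(self):
        self.folded = False      # copyable, but not catalytic

    def snip(self, n: int):
        """An invented catalytic mishap removes the last n bases."""
        assert self.folded, "only the folded interactor suffers the damage"
        self.sequence = self.sequence[:-n]

    def replicate(self) -> "Ribozyme":
        assert not self.folded, "must unfold to serve as a template"
        comp = "".join(PAIR[b] for b in self.sequence)
        return Ribozyme("".join(PAIR[b] for b in comp))  # copy of the copy

r = Ribozyme("GGCUAAGU")
r.fold(); r.snip(2); r.unfold()
child = r.replicate()
print(child.sequence)  # → GGCUAA: the acquired change is inherited
```

In the DNA-protein world the equivalent snip would damage only a protein, leaving the genomic sequence untouched; here there is no separate genome to fall back on.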
For Crick and von Neumann to take hold, the system had to journey across the threshold of complication. This happened only when the functions were decisively separated into molecules optimized for each, DNA for genetic storage and replication, and protein for catalytic power and precision.50 The RNA world was a steppingstone to today's living world and, despite its limitations, must be given its due. Guided by replicable sequences and operated by catalytic interactors, it was already a far cry from the inexorable physical activity elsewhere on the early Earth. But it reached a plateau, unable to evolve past the threshold of complication. Crossing the threshold required the separation of genotype from phenotype with the addition of stable sequence storage, greater capacity, random access, more reliable replication, and more powerful, flexible, and selective interactors. The upgrade was provided by DNA and protein. Was the threshold destined to be crossed? Was the explosive complexity afforded by DNA and protein somehow inevitable once the RNA world was in place? We don't know. Absent crossing the threshold, it certainly seems possible
that the Earth might yet be home to the RNA world and nothing else. Complex as it might be, such a world would be impoverished compared to the diversity of the living world as we know it. And we would not be here to observe it.
7.5 A Civilized Threshold

We live in a complex civilization, as diverse and innovative in its own way as the living world, explosively so, von Neumann might say. Somewhere in the many generations between our primate ancestors and ourselves, human culture crossed its own threshold of complication. The balance of this chapter will try to pinpoint the location of our threshold, using what we have learned about the RNA world as a guide. Many would claim that speech is our secret sauce, but speech by itself was insufficient; we did not cross our threshold until we became literate. What is it like to live in a human culture that lacks writing? Between five and ten thousand years ago every member of our species, creatures biologically identical to you and me, lived in this kind of culture. And prior to the colonial era of recent centuries, there existed thousands of human groups whose contact with the civilized world was so limited they lacked not only writing but even the concept of writing. “There have been whole communities, whole cultures,” says linguist Roy Harris, “which did not know—and scarcely could have imagined—what it is to be able to write.”51 Despite the pervasiveness of written texts in the world we inhabit, literacy has been a comparative rarity in the cultural trajectory of our species, an outlier in terms of numbers but hegemonic in terms of power and influence. “Of the thousands of languages spoken at different periods in different parts of the globe, fewer than one in ten have ever developed an indigenous written form,” says Roy Harris.
“Of these, the number to have produced a significant body of literature barely exceeds one hundred.”52 Imagine a world in which everything you and your family and friends know about history, nature, and practical skills is learned through personal interaction, through what Jack Goody and literary historian Ian Watt call “a long chain of interlocking conversations between members of the group.”53 In this world all knowledge is internalized by individuals and lost through death and forgetting. If you want to know about some event that happened last year, or about the utility of a particular plant, or about the nature of the lands beyond your immediate territory, there is nowhere to turn except to the fallible memories of others.54 And if you desire exposure to the sequences of speech, you strike up a conversation. “It is often hard for the literate world to remember that the core ecology for language use is in face-to-face interaction,” write linguists Stephen Levinson and Judith Holler. “This is the niche in which languages are learnt and where the great bulk of language use occurs.”55 At its root, human speech is time bound and space bound.56
164 The Threshold of Complication
In this world, you never ask how to pronounce a word because you never encounter a word except as it is pronounced. You never know how to spell a word because there is no alphabet. You never have your grammar corrected because standardization is unknown. There is no confusion between “your” and “you’re.” You don’t even possess the concepts of word or sentence because constructing a grammar, parsing the sequences of the speech stream, is a profoundly literate activity. “Pre-writers had no such concepts,” insists psychologist David Olson.57 The affordances available to you in this world are limited. Your perception is restricted to what your naked eye can see, and your power to manipulate the physical environment is limited to what your hand can hold or your back can carry, or what you might convince or coerce another person or group to do on your behalf. In this world, our ancestors had the same capacity for spoken language that we do. Their bodies were as strong as ours and their brains just as big. Their thumbs were opposable and their grips just as versatile. Their vision and hearing were as acute as ours and their taste buds just as sensitive. So why did they use flaked rocks for tools while we transplant hearts and land space probes on comets? Throughout the living world, more complex behavior always seems to require more complex animals or animal societies. How, then, did human culture become more complex and sophisticated without a parallel need for increasingly intelligent and creative individuals?58 The answer, as Freeman Dyson writes, is that “beyond a certain point, you don’t need to make your machine any bigger or more complicated to get more complicated jobs done. All you need is to give it longer and more elaborate instructions.”59 In this we see parallels to von Neumann’s self-reproducing automaton. For the universal constructor to build a more complex automaton, all that was needed was a more complex set of instructions.
Human culture, though, is less like construction and more like configuration. We the people are constructed, of course, by our biology. Once born, however, we are configured by our culture, our unique set of input sequences. For human culture to cross the threshold, major improvements were needed both in our sequences and in us, their interactors. As with the RNA world, crossing the threshold required the addition of sequence storage with greater capacity and stability and more reliable replication, plus more powerful, flexible, and selective interactors, our equivalents of DNA and protein. Writing gave us a stable, versatile, and cumulative storage and replication medium for the sequences of speech, a medium that made random access possible. Equally important were measuring instruments that extended the range of our perceptual systems, and sophisticated tools and machines to amplify and extend the power of the human body, especially the hand. Few archaeologists or anthropologists would quibble with the importance of these factors in the transition to civilization,60 but this is less about Neolithic history than about how improvements in sequences and interactors can
explain the evolution of complexity more generally. As we witness human culture crossing its threshold of complication, we sense echoes of the way living systems crossed their own threshold billions of years ago. That is because the limitations which restricted the scale and complexity of preliterate human culture are the same limitations that kept a lid on the evolutionary potential of the ancient RNA world. The creative benefits conferred upon humans by writing, measuring devices, and technologies of manipulation are the same benefits conferred on the RNA world by adding DNA as a storage/replication medium and proteins as enzymes.
7.6 Get It in Writing

Charles Hockett includes among his design features of language rapid fading: “All linguistic signals are evanescent,” he says. Once something is spoken, if not remembered it is gone forever; we cannot go back and review what was said.61 “Without writing there is virtually no storage of information outside the human brain,” says Goody.62 Not only is speech evanescent, but our ability to replicate the sequences of speech with fidelity is not very good, as demonstrated in the parlor game of “telephone.” Like RNA, sequences of speech are unstable and don’t replicate faithfully. In preliterate human groups, the evanescence of speech produced cultures in which knowledge could not accumulate beyond the aggregate memories of the group’s members. “Before the development of text,” write information scientist Andrew Madden and colleagues, “knowledge was inseparable from the knower.”63 Sequences were dependent on their own interactors for their preservation. Writing provided what archaeologist Denise Schmandt-Besserat calls “an extrasomatic brain not liable to human memory failure.”64 In the absence of external record-keeping, there was no way to compare past and present descriptions of the world. Had Odysseus’s sailors argued about his instructions, there would have been no definitive way to settle the dispute.
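The fidelity loss of chained oral transmission, the “telephone” effect, can be sketched as a small simulation. The message, alphabet, and per-character error rate below are illustrative assumptions, not measurements of real speech:

```python
import random

def noisy_copy(message, error_rate, rng):
    """Copy a message character by character, corrupting each
    character with probability error_rate (a crude stand-in for
    mishearing and misremembering)."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(
        rng.choice(alphabet) if rng.random() < error_rate else ch
        for ch in message
    )

def telephone_chain(message, speakers, error_rate=0.02, seed=0):
    """Pass the message along a chain of speakers, each copying from
    the previous speaker's version; return every version in order."""
    rng = random.Random(seed)
    versions = [message]
    for _ in range(speakers):
        versions.append(noisy_copy(versions[-1], error_rate, rng))
    return versions

versions = telephone_chain("follow these instructions and ignore the rest",
                           speakers=25)
# Unlike a written copy, each retelling drifts a little further from
# the original; errors accumulate and are never corrected.
drift = sum(a != b for a, b in zip(versions[0], versions[-1]))
```

A written text, by contrast, behaves like the trivial chain in which every copy equals the original: the error rate is effectively zero, so drift stays at zero no matter how many hands the sequence passes through.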
“The very conditions of [speech] transmission operate to favor consistency between past and present,” say Goody and Watt, “and to make criticism—the articulation of inconsistency—less likely to occur; and if it does, the inconsistency makes a less permanent impact, and is more easily adjusted or forgotten.”65 With knowledge and knower entangled, what we call objectivity was not valued, probably not even understood.66 The instability of speech kept preliterate cultures local; the obstacles imposed by face-to-face communication and imperfect memory made complex, large-scale social organization impossible.67 “Social institutions are much affected by the limitations of the oral channel,” says Goody. Oral communication in the political field obviously restricts the buildup of bureaucratic government. While it does not prevent the rise of states,
the relationship between the center and the periphery is likely to remain a weak link in the chain of messages.68 Writing changed all of this. Knowledge was decisively separated from knower; sequences could be preserved without the brains and bodies of interactors. Equally important, scribes could guarantee fidelity of replication. It was the first step in the requisite division of labor. “Complex information transmitted through written instructions has lower rates of error than if it were transmitted verbally or if the recipient is able to only visibly watch someone perform an activity,” write anthropologist Jelmer Eerkens and archaeologist Carl Lipo. “Relative to the magnitude of the incoming signal, the human visual system is much more accurate than the auditory system.”69 That said, some important properties of speech are imperfectly captured in written texts. Two of these are prosody and gesture, what literacy researcher Anne Dyson calls the “sensual qualities of speech.”70 Face-to-face conversation is characterized by dynamic variation in the speech stream. Coupled with bodily and facial motion of speakers and listeners, these are the rate-dependent components of speech. The speaking human is like a ribozyme, sequential patterns and dynamic behavior entangled in an indivisible package.71 Say Levinson and Holler, “language production always occurs with the involvement of not only the vocal tract and lungs, but also the trunk, the head, the face, the eyes and, normally, the hands.”72 The link between speech and bodily motion is strong. “Speakers gesture when no one is watching,” writes gesture researcher Susan Goldin-Meadow. “Indeed, blind speakers gesture routinely, even though they themselves have never seen gesture—and they do so even when they are knowingly speaking with listeners who are themselves blind.”73 Gesture researcher Adam Kendon calls this semantic coherence.
He continues: “The gestural component and the spoken component of an utterance are produced under the guidance of a single plan.” 74 So strong is the link between speech and gesture that some researchers contend that early speech evolved out of a primitive gestural system.75 In Chapter 5 we discussed the difference between shape readout and base readout. When a transcription factor recognizes a short snippet of DNA, is it responding to its physical shape or to its explicit sequence of bases? Often we can’t tell; the two modes seem to be entangled. We also encounter a shape readout vs. base readout question when we attend to someone speaking. If the mouth is speaking yes but the head is shaking no, which mode do we believe? Should we focus on the sequence being spoken, the base readout, or on the speaker’s body language and tone of voice, the shape readout? Here, too, the modes seem to be entangled. Sometimes the body language reinforces the message of the sequence, but sometimes it contradicts it. The dynamic properties of speech vanish in written texts; writing is strictly rate-independent. When reading a transcript of speech, we don’t know if the
speaker was yelling or whispering, smiling or frowning, intoning quizzically or emphatically, grimacing or raising an eyebrow, shrugging the shoulders or waving the arms, stamping the feet, or pointing. “Writing readily preserves the lexical and syntactic properties of speech,” says Olson, but loses the voice-qualities of the speaker including stress and intonation, the ‘silent language’ revealed in bodily clues manifest in the eyes, hands, and stance as well as the cognitively shared context, all of which in oral contexts indicate how an utterance is to be taken.76 This is easily demonstrated by asking Siri to tell you a joke. When the missing rate-dependent context is crucial to interpreting the written message, it must be reconstructed using punctuation, typography, and even more text, what L. S. Vygotsky calls deliberate semantics.77 This drive to make the dynamics explicit helps account for the creative potential of texts. The “bias of written language toward providing definitions, making all assumptions and premises explicit, and observing the formal rules of logic produces an instrument of considerable power for building an abstract and coherent theory of reality,” says Olson.78 Once again, we witness Sydney Brenner’s mantra: if you want specificity, you pay for it with more sequence.79 “There is a transition from utterance to text both culturally and developmentally,” says Olson. “This transition can be described as one of increasing explicitness, with language increasingly able to stand as an unambiguous or autonomous representation of meaning.”80 Exceptions exist, notes literacy researcher James Gee: “Lectures, which though spoken, are writing-like in some ways, or family letters, which though written, are spoken-like in some respects.”81 The iconography of emojis in electronic communication is also an attempt to add a bit of shape readout to the base readout of texts. Their popularity speaks to its importance.
7.7 Stability, Random Access, and Self-Reference

In preliterate cultures, speech was a dynamic, context-dependent activity rarely involving more than a few people. Unlike vervet monkeys, humans can talk about things distant in time and space, but the act of talking itself is still anchored to the here and now. For our ancestors, utterances were evanescent; they never stayed around to be examined or analyzed. With literacy, the human world became less time bound and space bound. “The time world was extended beyond the range of remembered things and the space world beyond the range of known places,” says political economist Harold Innis.82 Thus, linguist Peter Daniels concludes, “humankind is defined by language; but civilization is defined by writing.”83
Part of the difficulty we have imagining life without written language is rooted in the feedback loop of grammatical standardization which exists between writing and speech. When literate humans began to scrutinize written sequences and came to understand more about the structure of language, this knowledge filtered back into speech itself in the form of standardized grammars. “Once writing came to represent language, it caused reflection upon language,” write linguist Malcolm Hyman and historian of science Jürgen Renn, “and this reflection in turn altered patterns of use in language, thus restructuring language.”84 Writing became a tool for the exploration of language, fostering new hierarchical levels of self-reference. “Rather than viewing writing as the attempt to capture the existing knowledge of syntax,” says Olson, “writing provided a model for speech, thereby making the language available for analysis into syntactic constituents, the primary ones being words which then became subjects of philosophical reflection as well as objects of definitions. Words became things.”85 This was an important step on the road to abstraction, what Michael Tomasello calls the drift to the arbitrary.86 In other words, written language became a paradigm for how spoken language should be used. “Once a script is taken as a model for speech,” writes Olson, “it becomes possible to increase the mapping between the two, to allow relatively close transcription of speech and conversely, to speak like a book.”87 This is why we have the tools to correct each other’s grammar, usage, and pronunciation, even in everyday discussions. “Writing is the specialization of one of the resources of speech, its capacity for self-reference,” says Olson.88 In other words, writing allows sequences to become affordances for other sequences. Human memories are fallible. It is not unusual for two different individuals to have quite different recollections of past events in which they both participated.
Were you in a preliterate culture, you might wish to discuss some plans you made last week with friends. But random access to bits of last week’s conversation is impossible because there is no physical trace. Your reference has meaning only if your memory of what was said is the same, or at least comparable, to your friends’ memories. If your recollections differ then the reference fails. Odysseus’s command to “follow these instructions and ignore certain future instructions” is about as complex as the oral channel allows. Had he added more contingencies, his crew would have become confused, with the potential for disagreements about what was said and what was meant. “The complexity of a long, articulated chain of rational argument,” Roy Harris writes, “cannot simply be ‘carried in the head.’”89 Writing frees the individuals involved from holding interlocking sets of instructional sequences in their personal memories. Today’s instructions can
be compared with last week’s to search for contradictions, and then a decision made as to which are obeyed and which are ignored. “When an utterance is put in writing it can be inspected in much greater detail,” says Goody, in its parts as well as in its whole, backwards as well as forwards, out of context as well as in its setting; in other words, it can be subjected to a quite different type of scrutiny and critique than is possible with purely verbal communication. Speech is no longer tied to an ‘occasion’; it becomes timeless. Nor is it attached to a person; on paper, it becomes more abstract, more depersonalized.90 If you are at the dinner table and you want the pepper, you address your request to a specific individual, Peter. This is a social form of random access. Street addresses are another form, allowing us to pick a certain dwelling or office out of a large population. Among sequences, random access is made possible by rate independence. A stable pattern is a readily located pattern. Like DNA, the rate-independent stability of text allows sequences to reference other sequences explicitly, regardless of age or location, and thereby regulate their interpretation and replication. “Literate thought is premised on a self-consciousness about language,” writes Olson.91 Writing expanded the capacity of the sequences of human language to modify, regulate, and reclassify one another, to evolve new hierarchical levels of self-reference without limit. Not only did random access become possible, but tools like indexes, footnotes, and marginalia were developed to speed the process along.
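The way an index affords random access to a stable text can be sketched in a few lines of code; the miniature “text” and the word-level index here are an invented illustration, not a model of any historical indexing practice:

```python
from collections import defaultdict

def build_index(text):
    """Map each word to every position at which it occurs, so any
    occurrence can be located directly instead of rereading the
    whole text from the beginning."""
    index = defaultdict(list)
    for position, word in enumerate(text.split()):
        index[word].append(position)
    return index

text = "in the beginning was the word and the word was written"
index = build_index(text)
word_positions = index["word"]  # jump straight to every occurrence
```

The point is that such an index is only possible because the text sits still: a rate-independent sequence can be addressed by position, whereas an utterance, gone as soon as it is spoken, offers nothing stable to point at.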
This is evident in the great document-based regulatory systems of law and religion, what Olson calls literate artifacts.92 Texts are scrupulously compared with one another in search of inconsistencies which, when found, are resolved through the hierarchical intervention of judges, legislatures, or ecclesiastical authorities.93 Written interpretations in turn become the raw material for further hermeneutical reclassification, a process that exhibits open-ended evolutionary creativity. Such sprawling text-based institutions could not exist in a preliterate culture or, as Goody puts it, “it would be impossible to imagine a purely oral society as a member of the United Nations.”94 The invention of writing was a reclassification, adding a new layer of constraint to the social behavior of preliterate human groups. “Humans evolved in small hunter-gatherer bands facing a wide range of collective action problems,” write anthropologist Daniel Mullins and colleagues. “Many such problems were overcome through the evolution of psychological mechanisms designed to promote prosociality or detect and punish defectors within the group.”95 These psychological mechanisms originally evolved in face-to-face societies in which everyone knew everyone else. As a result, they did not scale well, a limitation overcome by literacy.
Mullins and colleagues write: Writing and recordkeeping helps to solve the problem of cooperation in large groups by transcending the severe limitations of human evolutionary psychology through the elaboration of four cooperative tools— reciprocal behaviours, reputation formation and maintenance, social norms and norm enforcement, group identity and empathy. . . . Writing and recordkeeping systems function to efficiently and accurately identify interactants, decreasing the anonymity of individuals in large groups and providing a means of detecting and tracking transgressors.96 The invention of writing served to reclassify the existing time bound and space bound social constraints, adding a new level of hierarchical control and making possible the modern state. The emergence of writing was one of three inventions that allowed human culture to cross the threshold of complication. Just as DNA allowed genomes to grow in size by orders of magnitude, writing made possible the accumulation of cultural knowledge on a global scale. The RNA world and human culture both began with locally organized systems of unstable sequences, RNA and speech, that were subject to high mutation rates in replication. Rate-dependent and rate-independent elements were entangled, limiting their potential for complex self-reference and regulation. The upgrade to DNA and writing provided greater storage stability and fidelity of replication, abstracted away most rate-dependent properties, and enabled random access, allowing sequences to construct complex control hierarchies. DNA and texts are better for storage, self-reference, and replication than RNA and speech, but we know they are only half of the story. They are mere sequences, and sequences themselves are inert. To actually get control of physical systems, they need interactors. 
In crossing the threshold from the RNA world to life as we know it, protein-based enzymes emerged as interactors, coupling precise pattern recognition and powerful catalysis. Sequences of human language, whether speech or text, interact with the world using human bodies, which already possess powerful perceptual systems, particularly vision, and remarkable hands that afford precise manipulation of the environment. But our preliterate ancestors also had these abilities. What changed? Simply put, humans became more powerful and capable interactors. We invented better tools and technologies of measurement and manipulation. These tools enabled us to become protein enzymes rather than ribozymes. “There are tools that help our senses, spectacles, telescopes, microphones, which we may call perceptual tools,” writes biologist Jakob von Uexküll. “There are also tools used to effect our purposes, the machines of our factories and of transportation, lathes and motor cars. These we may call effector tools.”97 We became more powerful and capable, first, by building instruments
to extend human perception well beyond its biological capacities and, second, by building machines to increase the precision of the human hand and the power of the human back.98 This division of labor gave rise to our literate technological civilization.
7.8 What Does the Thermostat Say?

Perception is how living things ascertain the salient features (affordances) of their environments so they can respond appropriately. This is true from the simplest to the most complex evolutionary levels, from the binding sites of enzymes that decide if a given molecule is a suitable substrate to the visual and tactile systems of preliterate humans that decide if a rock is suitable for flaking. Perception allows organisms to classify behavioral alternatives; if there are no alternatives, there is no decision to be made and consequently nothing meaningful to perceive. The perceptual abilities of humans are enabled by our five biological senses of vision, hearing, smell, taste, and touch. These evolved in a Goldilocks world that was about our size, not too big and not too small.99 Extending our phenotype to perceive and manipulate the very small and very large required the invention of technology to supplement the five senses which, in turn, required enhancements to our grammar of extension. In some cases, we can augment our senses directly. Lenses placed in front of the eye make the small bigger and the distant closer, and a stethoscope makes louder the faint sound of a heartbeat or a breath. In these cases, our eyes and ears are still doing most of the work. We can also build instruments like microphones, cameras, and audio-visual recorders to extend our biological senses of vision and hearing across space and time.100 But again, we are seeing what we see and hearing what we hear. Our sense of touch can also be extended through remote-controlled robotic arms. However, our molecular senses of smell and taste have not been successfully augmented. We can only smell what is over the hill if the wind is right.
While we can directly perceive environmental features like temperature, size, and color, we invented thermometers, micrometers, and spectrometers to let us measure these features with greater precision than our corresponding senses can muster. Instruments afford greater sensitivity and selectivity and do not need a direct connection to our nervous systems. Thermometers, micrometers, and spectrometers deliver output sequences of numbers so that measurements can be memorialized. But we can also build instruments to measure attributes of the environment well out of range of our biological endowments. We may not have the sense of magnetoreception, but we can build compasses and magnetometers to measure magnetic fields. Similarly, voltmeters quantify electrical properties, Geiger counters measure radioactivity, and litmus paper determines acidity and
alkalinity. In these cases, the instruments measure features of the world that we cannot perceive, and convert the measurements into something we can, like a change in color, the movement of a pointer, or a series of audible clicks.101 “Everyone can feel which is the heavier of two objects held in the two hands,” says archaeologist Anna Michailidou, but outsourcing that task to an instrument like a balance enables uniformity and repeatability.102 “Absolute measurement,” she continues, started from the moment when a stone was placed on one of the pans, to balance the commodity placed on the other pan. This provided a visual and more permanent witness to the mass measured, since the stone could be kept and the weighing repeated or the initial result be checked.103 The earliest measurements were taken in relation to the human body. The Egyptian cubit was the length of the forearm, subdivided into palms and digits. As for weight, the most widely used unit was the load that a person can carry on the shoulders, which is about 30 kilograms.104 And for distance the only practical metric is the time taken to travel it.105 Consider also “within earshot,” a “handful,” and a “stone’s throw.”106 An innovation like metallurgy required humans to perceive new affordances in the environment; Gibson calls this the education of attention.107 Rocks were assessed not only as raw material for stone tools but also for their potential as ores for smelting copper or tin. The development of ceramics and agriculture changed how the affordances of soils were perceived. The ground afforded more than just support for walking; it became fertile soil for planting crops or a potential source of clay for creation of storage and cooking implements. What is the difference between perception and measurement? 
In our context, perception is a dynamic biological process arising in the relationship between an interactor and its environment and, much like speech, specific to that interactor at that moment, time bound and space bound. Measurement entails the intervention of sequences, the conversion of a percept into a linear pattern. Measurements are also social; they can be shared, even if only with oneself at a future time.108 You cannot perceive what I perceive, but I can describe what I perceive in the form of a measurement, and you can respond appropriately. Today, outputs of measuring instruments are typically transferred to digital storage systems and computer programs which in turn present them as sequences of numbers. Automated systems measure aspects of the environment and use those measurements to control machines to respond without human intervention. A common example is the thermostat, which measures temperature and activates HVAC equipment when the temperature drifts away from its set point. Technologies of measurement made us more precise and versatile interactors; no other creature has such a complex extended phenotype or access to such
complex affordances. “New materials, material cultures and practices enabled and encouraged people to focus attention on, and engage with, aspects of the surrounding world that had previously gone unnoticed,” write archaeologist Vesa-Pekka Herva and colleagues. Neolithic materials, things and practices associated with them can thus be understood to introduce new dimensions of reality in a broadly similar manner as telescopes, microscopes and scientific practices did much later, affording new points of connection between people and the world.109 The technologies of measurement enabled discovery and exploitation of latent affordances.110 Writing itself emerged in the context of measurement. During its first centuries, writing was used exclusively for accounting, for the recording of simple measurements like quantity.111 Five goats would be expressed as five instances of the token for goat. Only later were there separate tokens for five and goat so that the quantity symbol could be used independently of the symbol for the thing being counted.112 This crucial separation of the measurement from the thing being measured allowed written sequences to branch into two distinct systems. One evolved into the abstract symbol manipulations of mathematics and computer code with their need for an explicit, deliberate semantics. A numeral connects to the material world only when you specify what is being counted or measured, and it connects to other numerals only through a system of explicit rules. The other, informal, branch evolved into the system of written language, which remains grounded in the physical world through the tacit semantics of human perception, action, speech, and social constraints. One branch led to texts destined to be read by people, the other to sequences destined to be read by computers. “In the light of the evolution of counting, the origin of writing can be viewed as a by-product of the abstraction of numbers,” says Schmandt-Besserat. 
“Namely, the split between the notion of numerosity and that of the item being counted created the necessity for two systems of recording. From this point on, numerals and counted items were recorded by different types of signs.”113 Self-reference in written sequences originated with what she calls this “extraordinary event when the concept of number was abstracted from that of the item counted.” This split persists to this day. All around the world, mathematicians use the same numeric alphabet (1, 2, 3, 4 . . .) and notation (+, =, ≥, ∫ . . .), even though the numerals and symbols themselves have different names in local vernaculars.114 Mathematicians who cannot understand each other’s speech or writing can communicate effectively through their specialized notation. This ultimately enabled the universality of computation. Written language, on the other hand, remains grounded in the idiosyncrasies of speech, which has led to
a diversity of alphabetic and ideographic systems. There is no universality in written language, except for the universality of the world being written about and the universality of the creatures doing the writing.115 Writing enabled the recording and sharing of measurements, so a sequence preserving a measurement made at one place and time could be referenced to guide behavior at some other place and time. Converting individual perceptions into permanent sequences enabled the development of record-keeping systems to orchestrate societal activities over large areas and long time periods. Standardization provided for replicability of artifacts and architecture.116 Adding large protein enzymes to the RNA world vastly increased the number of potential substrates that could be recognized in the molecular environment and allowed the system to evolve past the limitations imposed by weak ribozymes. In the same way, supplementing our ancient perceptual systems with technologies of measurement revealed a vast array of latent affordances for our species to exploit. Measurement “is the basis for any complex economic system,” write archaeologists Colin Renfrew and Iain Morley. “It is one of the foundations of all urban civilisations.”117
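The thermostat mentioned earlier, an instrument whose output directly drives machinery without human intervention, can be sketched as a minimal control loop. The set point, deadband, and readings below are illustrative values, not the design of any particular device:

```python
SET_POINT = 20.0   # target temperature in degrees C (illustrative)
DEADBAND = 0.5     # drift tolerated before the system acts

def thermostat_step(measured_temp):
    """Turn a measurement into an action: the instrument's numeric
    output, not a human percept, decides what the machinery does."""
    if measured_temp < SET_POINT - DEADBAND:
        return "heat_on"
    if measured_temp > SET_POINT + DEADBAND:
        return "cool_on"
    return "idle"

# A short run over drifting readings, handled with no human in the loop
readings = [18.2, 19.8, 20.1, 21.0, 20.3]
actions = [thermostat_step(t) for t in readings]
```

The design point is the deadband: by tolerating a small zone around the set point, the controller avoids switching the equipment on and off with every tiny fluctuation of the measurement.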
7.9 Hand Me the Pliers

Detailed measurements are salient only if they can guide appropriate behavioral responses. An enzyme molecule may bind to a certain substrate, but unless a functional catalytic action follows, the binding is of no biological significance. Likewise, a carpenter does not get too far with only a pencil and a tape measure; she also needs a saw. New technologies of measurement depend for their utility upon the simultaneous emergence of new technologies of manipulation to extend human behavior beyond its somatic limitations. This was the final enabling step for humans to become more powerful and versatile interactors, better enzymes. The primary somatic tool of our biological lineage is the hand, which gives the human phenotype more manipulative degrees of freedom than any other mammal.118 Not only is the hand flexible, but it also has the strength to maintain a precision grip “while at the same time withstanding large external forces” such as hammering.119 Unlike our primate ancestors we do not need our hands for locomotion; they are available for us to deploy at any time. Humans are not the only tool-using species; there is plenty of evidence of tool use among species of sea urchins, insects, spiders, crabs, snails, octopuses, fish, birds, and mammals (including us).120 But there is a sharp difference between tool use and tool manufacture. Tool use entails employing objects found in the environment, which are often discarded after they are used. Tool manufacture entails collecting suitable materials from the environment, shaping them into a more functional form, and keeping them around for as long as they
remain useful. Our hominin ancestors were manufacturing tools in this fashion as long as 2.5 million years ago.121 For most of these millennia of tool making, however, human tools were simple, one-piece affairs, exemplified by the palm-sized flaked stone implement known as the Acheulean hand axe, which remained the state of the art in technology for a million years.122

Simple as it was, the Acheulean hand axe was a powerful extension of the human phenotype. "The huge advantage that a stone tool gives to its user must be tried to be appreciated," writes anthropologist Sherwood Washburn. "Held in the hand, it can be used for pounding, digging or scraping. Flesh and bone can be cut with a flaked chip, and what would be a mild blow with the fist becomes lethal with a rock in the hand."123

Only in the last 100,000 years did complex tools made of multiple parts emerge, perhaps in conjunction with the evolution of speech. "What is special about making a bow-and-arrow?" asks Robert Aunger. "It requires an individual to invest work in the production of an artefact, say the arrow fletchings, which is finished but not used immediately because, by itself, it does not have a function; it is only in combination with the other parts of an arrow that it achieves utility."124

Fabricating a multi-part tool requires not only making artefacts that have no intrinsic value, that are useful only as a means to an end, but also planning a sequential process, a series of steps.125 Like domains and motifs being mixed and matched to build different proteins, assembling elements "in different configurations produces functionally different tools," says anthropologist Stanley Ambrose. "This is formally analogous to grammatical language."126

Tools and technology were slow to develop in the transition from the Middle to the Upper Paleolithic 50,000 years ago. Cultural evolution took a long time to get traction.
Innovations build upon pre-existing culture, so when very little culture is present, there is little opportunity for rapid, cumulative innovation.127 Culture had the potential to grow exponentially, but the initial stages took their time. Our species, though biologically modern, was still far from its threshold of complication.

About 10,000 years ago, the technology of agriculture was invented. It came along several millennia before writing, but it required new concepts of measurement, like determining how much land was needed to produce a crop of a given size or how much of a crop was to be given over to a taxing authority.128 And it set the stage for what followed. "Most food-producing societies are sedentary and can thus accumulate stored food surpluses," write biogeographer
Jared Diamond and anthropologist Peter Bellwood, “which were a prerequisite for the development of complex technology, social stratification, centralized states, and professional armies.”129 Agriculture allowed humans to supplement their direct manpower with new energy sources like draft and pack animals and irrigation systems for water power. The millennia after the dawn of agriculture saw not only the invention of writing and subsequent discovery of new energy sources and mechanical principles but also the invention of tools to exploit them. The lever, the pulley, the inclined plane, the screw, and the wheel and axle were developed to amplify human and animal muscle power. The construction of rafts and boats from reeds, skins, or logs was followed by the development of sails and oars to power them and rudders to steer them.130 Today the human hand has been extended to manipulate things as small as atoms and as large as 747s and aircraft carriers. Just as speech extended itself through writing, and perception through measurement, the human hand built machines to extend its own strength and versatility. These developments were interdependent. Constructing complex machines and measuring instruments is not possible without written instructions and diagrams, and precise measuring instruments have little utility without any means to record the measurements. All were necessary for human culture to cross its threshold of complication.
7.10 Complication Great and Small

The living world and human civilization are two great systems orchestrated by sequences and capable of open-ended evolutionary creativity. Each in its own way crossed von Neumann's threshold of complication and, for each, crossing the threshold required a division of labor. The transition from preliterate human culture to our literate technological civilization paralleled the transition from the RNA world to modern DNA-RNA-protein biology.

While both precursor systems were governed by sequences, those sequences were unstable and impermanent, with poor replication fidelity, and they did not allow much accumulation of functional novelty. Impermanence limited the random access needed for self-referential regulation of sequences by other sequences. The precursor sequences were also entangled with dynamic processes, ribozyme folding and paralanguage, limiting their ability to function as rate-independent abstractions. Finally, their ability to interact with their environments was limited by the poor binding and catalytic power of ribozymes and by the time-bound and space-bound perceptual and motor behavioral capacities of the human phenotype.

DNA replaced RNA and writing replaced speech for sequence storage, enabling the systems to improve their physical stability and replicative fidelity. This made possible random access and open-ended, scalable self-reference, the hierarchical white-collar regulation of sequences by other sequences. And
it made the sequences abstract and rate-independent by purging the entangled dynamics. By developing protein biocatalysts to supplement ribozymes and technologies of measurement and manipulation to supplement the human phenotype, the systems were able to increase the power, accuracy, and versatility of their interactors. Both established a division of labor between sequence storage/replication and biocatalysis. They could recognize a larger set of environmental affordances and respond with the precise application of unprecedented power. Having crossed the threshold of complication, they outcompeted and established hegemony over the remnants of the precursor systems. RNA and speech retain fundamental roles in their respective systems, but they are overshadowed by the elaborate, cumulative effects of DNA and writing.

To be sure, locating von Neumann's threshold solves neither the underlying origin problem of life nor that of human culture, but it does break each problem into two parts: pre-threshold and post-threshold. We do not know how the first sequences of RNA and the first ribozymes emerged from the rate-dependent physical world, nor do we know how human speech and toolmaking emerged from the social milieu of our primate ancestors. These remain profound problems.

If stable, abstract sequences and more precise and powerful interactors had not come along, who knows what might have happened? Suppose the RNA world had not discovered DNA sequences and protein. Could that world have persisted for four billion years in its primitive state? Assuming it achieved sufficient scale to survive geologic upheavals, why not? If an RNA world were found on Mars, astrobiologists would triumphantly announce that life exists elsewhere in the universe. Yet it would be life that stops short of crossing von Neumann's threshold: complex, potentially, but not explosively so. And suppose humans had discovered neither written sequences nor the technologies of measurement and machines.
The preliterate cultures of Africa and the Americas had urban centers, agriculture, ceramics, weaving, trade, art, and political quasi-states encompassing large territories. Could those cultures have persisted indefinitely? Absent major environmental shocks, like European conquest, there is no reason to think they couldn’t. Yet they, too, would have stopped short of von Neumann’s threshold.131 “The evolutionary process of social differentiation, cultural complexification, and political stratification appears to reach a dead end unless writing is developed or adopted,” says linguist Paul Kay.132 Gerald Joyce is deploying a metaphor when he writes of the RNA world: “It is as if a primitive civilization had existed prior to the start of recorded history, leaving its mark in the foundation of a modern civilization that followed.”133 I think he nails it, and more than metaphorically. Recorded history was enabled by the invention of writing, which was necessary for preliterate human culture to cross the threshold of complication.
Notes

1. Parts of this chapter are based on Waters 2012a.
2. Von Neumann 1966
3. Von Neumann 1966
4. Von Neumann 1966
5. Maynard Smith & Szathmáry 1995; see also Szathmáry 2015.
6. Woese 2002
7. Ban et al. 2000; Nissen et al. 2000
8. Doudna & Cech 2002
9. Atkins et al. 2011; Joyce 2002; Pearce et al. 2017
10. Joyce & Orgel 2006
11. Fica et al. 2013
12. Joyce 1989
13. Rich 1962; Crick 1968; Orgel 1968
14. Gilbert 1986
15. Robertson & Joyce 2011
16. Takeuchi et al. 2011. The primeval replication of RNA sequences by ribozymes did not entail reproducing a complete genome all at once; it was a piecemeal process: "the earliest RNA polymerases need not have been responsible for replicating entire RNA genomes, but merely for generating RNAs that enhanced the fitness of pre-RNA-based life" (Joyce 2002). The difference between piecemeal and all-at-once replication will be discussed further in Chapter 8.
17. Joyce 2002
18. Cech & Steitz 2014
19. Cech & Steitz 2014. The menagerie of recently discovered RNAs is large. For reviews see Breaker and Joyce (2014), Huang and Zhang (2014), and Morris and Mattick (2014). The most prominent new classes of RNAs include microRNAs (miRNAs), short, non-coding sequences which play an important role in gene expression in eukaryotes, and riboswitches, segments of mRNA that can bind allosterically to metabolites to self-regulate their own expression. They also include long non-coding RNAs (lncRNAs), longer sequences which play a role in differentiation and development, and in mammals are chiefly expressed in the brain.
20. Morris & Mattick 2014
21. In another variation on Dawkins's extended phenotype, RNA sequences have even been weaponized to neutralize genes in other organisms (LaMonte et al. 2012; Cai et al. 2018).
22. Morris & Mattick 2014
23. Steven Benner and colleagues view "modern macromolecular catalysis as a 'palimpsest' of an earlier metabolic state, with features that arose recently ('derived traits') superimposed upon features that are remnants of ancient life ('primitive traits')" (1989).
24. Joyce 2002
25. The genetic material of many viruses contains only RNA, but viruses are not free-living. They survive and replicate only by borrowing the translation and replication equipment of cells. Viruses will be taken up in Chapter 8.
26. C. H. Waddington anticipated this difficulty in principle in 1969: "In practice—and perhaps because of a profound law of action-reaction—it is difficult (impossible?) to find a [material structure] which is stable enough to be an efficient store [of information] and at the same time reactive enough to be an effective operator [catalyst]" (Waddington 1969a).
27. Benner et al. 2011; Eigner et al. 1961
28. Joyce 2012
29. Leu et al. 2011
30. Sterelny et al. 1996
31. The point at which the error rate in replication is high enough for mutated copies to overwhelm functional copies is called the error threshold, and it imposes "an upper limit to the frequency of copying errors that can be tolerated by a replicating macromolecule" (Robertson & Joyce 2011). Today, for example, viruses made solely of RNA operate close to the error threshold of "roughly one mutation per genome replication" (Leu et al. 2011).
32. Joyce & Orgel 1999
33. Benner et al. 1999
34. Pressman et al. 2015
35. Robertson & Joyce 2014
36. Maynard Smith and Szathmáry did recognize the shift from "RNA as gene and enzyme" to "DNA + protein (genetic code)" as one of their eight major transitions (1995).
37. Like the first sequences, the first modern cells emerged from a world in which they did not exist. Woese calls this transition the Darwinian Threshold. "There would come a stage in the evolution of cellular organization where the organismal genealogical trace (recorded in common histories of the genes of an organism) goes from being completely ephemeral to being increasingly permanent," he writes. "On the far side of that Threshold 'species' as we know them cannot exist. Once it is crossed, however, speciation becomes possible" (2002). What is the relationship between Woese's Darwinian Threshold and von Neumann's Threshold of Complication? Perhaps they are the same threshold approached from different angles. Woese emphasizes heredity and the origins of the three kingdoms of life, Archaea, Bacteria, and Eukarya. Before the modern cell, there was no cell division, no organismal reproduction as we know it. The earliest cell, says Woese, "was more or less a bag of semi-autonomous genetic elements" (1998). Individual sequences replicated, but evolution was communal, driven, in the words of virologist Patrick Forterre, "by exchange of experiences between individuals, via lateral gene transfers" (2012).
38. Hull 1988
39. Leu et al. 2011
40. RNA can also exist in a double-stranded form, but the double-stranded version does not fold into catalytic ribozymes; only the single-stranded version folds. It is unlikely the double-stranded version played a meaningful role in the RNA world.
41. Joyce 2002. One of the critical events in the transition from the ancient RNA world to the contemporary DNA-RNA-protein world was what Joyce calls "The Great Reverse Transcription" (2012). Sequence information stored in RNA was transcribed into DNA, and there it has remained ever since (Leu et al. 2011).
42. Woese 2002
43. Benner et al. 1999
44. Joyce 2002
45. Robertson & Joyce 2011. Some origin-of-life researchers argue the RNA world probably had a simpler and even earlier precursor with sequences composed of something other than nucleic acids, perhaps something like a mineral chemistry. These researchers are seeking "a glimpse of the widely accepted, though hypothetical, initial replicator, whose biomolecular activity initiated Darwinian evolution on Earth" (Yarus 2011; see also Robertson & Joyce 2011).
46. Cech 2011. Interestingly, Pattee anticipates the ribozyme from first principles in 1966: "The memory and transcription functions must occur at different times; that is, the sequence of copolymers cannot act as a degenerate memory and a nondegenerate selective structure at the same time. Therefore there must be two states of this copolymer; perhaps a straight chain which serves as a memory, and a folded conformation which can selectively interact with the environment."
47. Boza et al. 2014
48. Replicating a ribozyme requires several steps. First, the ribozyme has to unfold. However, the unfolded sequence cannot be copied directly to create a second ribozyme. The ribozyme is made of single-stranded RNA. As a result, the ribozyme sequence is first copied into its complementary sequence, which in turn is copied back to the original ribozyme sequence. The existence of this intermediate template adds a layer of complexity that will not be dealt with here (Takeuchi et al. 2011; Boza et al. 2014).
49. Jose et al. 2001; Breaker 2002; Breaker 2018
50. Another argument for dividing the labor by splitting replication and interaction into separate molecules is made by Benner: "Genetics versus catalysis place very different demands on the behavior of a biopolymer. According to theory, catalytic biopolymers should fold; genetic biopolymers should not fold. Catalytic biopolymers should contain many building blocks; genetic biopolymers should contain few. Perhaps most importantly, catalytic biopolymers must be able to catalyze reactions, while genetic biopolymers should not be able to catalyze reactions and, in particular, reactions that destroy the genetic biopolymer" (2014, emphasis his). The assignment of genetic functions to DNA and catalytic functions to protein gave the system "the potential to catalyze hard-but-desired reactions without the potential to catalyze easy-but-undesired reactions" (Benner 2014).
51. Harris 1986
52. Harris 1986
53. Goody & Watt 1963
54. As historian Patricia Crone writes, "We all take the world in which we were born for granted and think of the human condition as ours. This is a mistake. The vast mass of human experience has been made under quite different conditions" (1989).
55. Levinson & Holler 2004
56. Technologies of vocal communication overcome these limitations. Telephone conversations are not space-bound and voicemail is not time-bound.
57. Olson 1994. Simply by using language, preliterates were evincing some implicit "knowledge" of phonology and grammar, but this is not the same as the explicit, conscious awareness of language as understood by literates. As Stevan Harnad points out: "Wittgenstein emphasized the difference between explicit and implicit rules: It is not the same thing to 'follow' a rule (explicitly) and merely to behave 'in accordance with' a rule (implicitly)" (1990).
58. Enquist et al. 2008
59. Dyson 1979
60. How large-scale social organization emerged remains a significant topic of research in anthropology and archaeology. See, for example, Kosse 2000, Powers et al. 2016, Turchin et al. 2018, and Shin et al. 2020.
61. Another Hockett design feature is displacement. Speech allows discussion of "things remote in time or space, or both, from the site of the communication" (1966). Writing adds another hierarchical level, what we might call second-order displacement; not only are the things or events under discussion remote in time and space, but so are the writer and reader.
62. Goody 2000. For some this is a feature, not a bug. "The wonderful thing about language is that it promotes its own self-oblivion," writes Maurice Merleau-Ponty (2002). Corporate malfeasance is often accompanied by an injunction to leave no paper trail.
63. Madden et al. 2006
64. Schmandt-Besserat 1987
65. Goody & Watt 1963
66. See also Olson 2016, 2020; Davidson 2019.
67. Scalability was also limited by the conversational demands of speech production and reception. There is a practical upper limit on the number of people who can participate simultaneously in one conversation. Evolutionary psychologist Robin Dunbar and colleagues conclude "a maximum clique size of around four is an inherent property of human speech mechanisms" (1995). Larger speech-based events, such as lectures, plays, or town meetings, require that turn-taking and other pragmatic norms be agreed to in advance.
68. Goody 2000
69. Eerkens & Lipo 2007
70. Dyson 2013
71. Entanglement is widespread and eclectic. Systems of sequences in the natural world do not depend on universal hardware, like digital computers. They depend on a menagerie of entangled, idiosyncratic mechanisms.
72. Levinson & Holler 2004
73. Goldin-Meadow 1999
74. Kendon 2004
75. See Hewes 1973, Corballis 2003, Armstrong & Wilcox 2007, Tomasello 2008, and McNeill 2012. There is also evidence the relationship between sound and meaning in speech is not entirely arbitrary, which adds another level of entanglement. "Despite the immense flexibility of the world's languages," say linguist Damián Blasi and colleagues, "some sound-meaning associations are preferred by culturally, historically, and geographically diverse human groups" (2016). They continue: "A substantial proportion of words in the basic vocabulary are biased to carry or to avoid specific sound segments, both across continents and linguistic lineages. Given that our analyses suggest that phylogenetic persistence or areal dispersal are unlikely to explain the widespread presence of these signals, we are left with the alternative that the signals are due to factors common to our species, such as sound symbolism, iconicity, communicative pressures, or synesthesia" (2016).
76. Olson 1994. Note that the goal of speech recognition software is to ignore the dynamics and extract only the rate-independent sequential information from the speech stream.
77. Vygotsky 1962
78. Olson 1977
79. Brenner 2009
80. Olson 1977
81. Gee 2006. In his book Class, Codes and Control (1971), sociologist Basil Bernstein uses the terms elaborated code and restricted code to distinguish speech which is more writing-like from that less so. Speakers using informal restricted code assume a shared context and implicit understandings between speaker and listener, whereas speakers using formal elaborated code make no such assumptions and as a result need to make such context and understandings explicit. See also Kay 1977.
82. Innis 1950
83. Daniels 1996. See also Olson 1977.
84. Hyman & Renn 2012. It is the rare preliterate, tribal language that even has a word for "word" (Dixon & Aikhenvald 2002).
85. Olson 1994
86. Tomasello 2008
87. Olson 1994. In his 2005 book The Written Language Bias in Linguistics, linguist Per Linell argues that its emphasis on text-based analysis has caused modern linguistics to view language as "inventories of abstract forms, rather than as aspects of meaningful action, interaction, and practices in the world." This point is echoed by Talmy Givón: "there is something decidedly bizarre about a theory of language (or grammar) that draws the bulk of its data from well-edited written language" (2002).
88. Olson 2006
89. Harris 1986
90. Goody 1977
91. Olson 1994
92. Olson 1994
93. Bazerman 2006
94. Goody 2000. One aspect of "the power of the written word" that impressed Goody was "the power it gives to cultures that possess writing over purely oral ones, a power that enables the former to dominate the latter" (2000). This echoes Benner and colleagues when they write that "a life form that did not exploit proteins as catalysts could not have competed with life that did" (1999).
95. Mullins et al. 2013
96. Mullins et al. 2013
97. Von Uexküll 1957
98. As philosopher Karl Popper writes, "man, instead of growing better eyes and ears, grows spectacles, microscopes, telescopes, telephones, and hearing aids. Instead of growing swifter and swifter legs, he grows swifter and swifter motor cars" (1972).
99. "We can cope with thinking about the world when it is of comparable size to ourselves and our raw unaided senses," says mathematician Richard Hamming, but "when we go to the very small or the very large then our thinking has great trouble. We seem not to be able to think appropriately about the extremes beyond normal size" (1980).
100. James Gibson calls this the "more or less direct perception of the very distant and the very small by means of simple instruments" (Gibson 1982). For Gibson's take on various kinds of what he calls "indirect apprehension," see Gibson (1982) in Reasons for Realism, a collection of his essays and miscellaneous writings.
101. Michailidou 2010. Similarly, with another important measure used by merchants, "the beveled-rim bowls common from Syria to Iran during the Uruk period, 3500–3000 BC, may provide the first evidence for the standardization of units of capacity" (Schmandt-Besserat 2010).
102. Michailidou 2010
103. Michailidou 2010
104. Morley 2010
105. Scott 1998
106. Gibson 1979
107. A modern molecular biology laboratory is a case in point. Its inputs are sequences (the existing scientific literature) and biological samples. The samples are processed and measured, with the measurements output as sequences—in many cases sequences of text that represent sequences of nucleotides in the DNA of the sample. These sequences are then processed further and the lab outputs even more sequences in the form of scientific papers (see Latour & Woolgar 1986).
108. Herva et al. 2014
109. Measurement and numeracy also brought new precision to the description of affordances by language. "Language does not typically represent the dimensions of objects in analog fashion, but rather digitizes them," say Barbara Landau and Roy Jackendoff. "Thus, dimensional adjectives such as big/small, thick/thin, and tall/short refer to continuous dimensions of size, but the linguistic terms bifurcate these dimensions into pairs of relative contrasting terms." They continue: "Precise metric information is simply not encoded in the language's stock of spatial terms. . . . It is possible to be precise in expressing distances and orientations, but to do so, one must invoke a culturally stipulated system of measurement that operates by counting units such as meters or degrees (go 30 meters, turn 30 degrees)" (1993, emphasis theirs).
110. Schmandt-Besserat 1996
111. Just as writing emerged for purposes of accounting, evolutionary biologist Stephen Jay Gould has argued DNA's chief function in the cell is "bookkeeping" (2002), as have philosopher William Wimsatt (1980) and evolutionary biologist George Williams (1985). See also Basu and Waymire (2006).
112. The abstract split between tokens for quantity and tokens for things being counted was not always complete. Linguist Igor Diakonoff reports on a language with different numeral systems for different classes of things being counted: "The most curious numeral system which I have ever encountered is that of Gilyak, or Nivkhi, a language spoken on the river Amur. Here the forms of the numerals are subdivided into no less than twenty-four classes, thus the numeral '2' is mex (for spears, oars), mik (for arrows, bullets, berries, teeth, fists), meqr (for islands, mountains, houses, pillows), merax (for eyes, hands, buckets, footprints), min (for boots), met' (for boards, planks), mir (for sledges) etc., etc." (1983, emphasis his).
113. Schmandt-Besserat 2010
114. Goody 1977. "Throughout the world, local developments of writing and arithmetic have interacted with each other in various ways," says science historian Peter Damerow. "In the case of arithmetic, the final outcome is a relatively unified system of arithmetical notation and calculation methods" (2012).
115. "Today, as a result of globalization processes, writing is used all over the world," says Damerow, "but neither the languages nor the writing systems have been unified by these processes" (2012).
116. "One of King Darius's greatest achievements," writes Schmandt-Besserat, "was to give some uniformity to the weights and measures within the Persian Empire" (2010).
117. Renfrew & Morley 2010
118. Schieber & Santello 2004
119. Marzke 2013
120. Shumaker et al. 2011
121. Ambrose 2001. Humans not only fabricate tools. We also exchange tools and parts of tools with each other, something no other animal does. In Robert Aunger's words, "animals actively share food and other resources, but not made things" (2010a, emphasis his).
122. Ambrose 2001
123. Washburn 1960
124. Aunger 2010b
125. Aunger 2010b
126. Ambrose 2001. Some researchers have argued human speech evolved hand-in-hand with tool manufacturing (Gibson & Ingold 1993). "The very intimate way in which gesture is integrated with speech," says Kendon, "could suggest that speech itself is intimately linked to manipulatory activity" (2004).
127. Lind et al. 2013
128. Scott 2017
129. Diamond & Bellwood 2003
130. See Cotterell & Kamminga 1990.
131. The tension between the accomplishments of pre-threshold and post-threshold systems is captured by anthropologist Claude Lévi-Strauss. Writing, he says, was the "essential acquisition of culture" and the "source of our civilization." Nonetheless, he continues, "we must never lose sight of the fact that certain essential forms of progress, perhaps the most essential ever achieved by humanity, were accomplished without writing" (1969).
132. Kay 1977
133. Joyce 2002
8 THE INSTITUTION OF SEQUENCES
8.1 Of Constraints and Chimeras

Imagine you are driving home late at night. You are in the countryside and there is little traffic. You come to a red light at a crossroads and stop. Asked why you stopped, you might answer, "because the light was red." Perhaps, after reading this far in this book, you might even say, "because the traffic signal placed a boundary condition on my trajectory."

In terms of physics or basic biology, you know this makes no sense. No flying frisbee, no rolling ball, no speeding bullet would stop for the red light. And no animal is going to stop either, no wolf, no moose, no horseshoe crab, nothing except perhaps a trained service animal. In fact, most humans who have ever lived would not stop. Our preliterate ancestors might have been curious about the signal as a physical object, but absent the institutional constraints inscribed in the one-dimensional patterns of the traffic code, there would have been no specific effect on their behavior.

The reality is that a traffic light is a flimsy constraint; it can incline but not necessitate. If traffic engineers wanted to stop you cold using nothing but light, they would need something blindingly bright that would render you unable to proceed. They could also install a physical constraint, perhaps a gate like those found at toll booths and railroad crossings, but even those can be breached with a little momentum. Stopping you decisively would require bollards like those used to protect important government buildings. But why go to all that trouble and expense when a simple red-yellow-green signal will do just fine? Traffic engineers know your code and have access to your machinery.

Of course, the traffic light is not the ultimate constraint in this scenario. To be sure, it is the proximate constraint, the local switch that cycles the
intersection through its affordances—stop, go, stop, go. But lurking in the background is the ultimate constraint. Of course, it's a sequence, a small part of the large corpus of traffic regulations that govern your behavior when you drive. For example, in the statutes of the U.S. state of North Carolina, the sequence randomly accessible by the marker §20–158(b)(2)(a) reads:

"When a traffic signal is emitting a steady red circular light controlling traffic approaching an intersection, an approaching vehicle facing the red light shall come to a stop and shall not enter the intersection. After coming to a complete stop and unless prohibited by an appropriate sign, that approaching vehicle may make a right turn."

And that is why you stop, because of this sequence, which is itself embedded in an enormous system of interlocking sequential boundary conditions called the traffic code. Other sections of the code lay out the penalties you might suffer should you fail to be constrained by the red light and authorize certain government employees to impose more expansive boundary conditions upon you.1 But you get the idea. Behind the simple allosteric constraint of the traffic light is a complex set of sequences whose linear patterns guide your three-dimensional behavior at distant times and places. Think of it as an institutional constraint.

But tell the truth. Aren't you just a little bit tempted to drive straight through the red light? It's late and you want to get home. The affordances all say go. There is no cross traffic, so there is no danger of collision. There are no police officers in sight, so there is no danger of institutional sanctions. Nothing about the physical or social dynamics of the situation offers any motivation for you to stop. Why are you ignoring what you perceive? Why are you just sitting there passively, wasting time, waiting for the light to change? There are many possible reasons.
Perhaps your teenage daughter is sitting in the passenger seat next to you. As a parent you are concerned for her safety. You want to set a good example. You want her to develop prudent habits. You are motivated to make sure she reaches sexual maturity and is able to reproduce. Were you alone you might be inclined to ignore the light, but with her there next to you, you stay put. What about that box attached to the signal pole? It’s hard to make it out in the dark. Your visual perception is being tested to its limit. Is it a red-light camera? Will it capture your transgression and mail you a citation? Maybe it’s nothing, just part of the signal equipment, but why take a chance? You are risk-averse; better safe than sorry. Or maybe your friends are in the car with you and are egging you on, teasing you for being so docile and deferential in a situation where following the letter of the law yields no benefit. Why are you such a stick in the mud? Can you resist their mocking, or will it overcome your judgment?
Perhaps you are almost ready to give in and run the light when a car pulls up alongside. You think you recognize the driver. Is that your gossipy neighbor? If you ignore the signal now the whole neighborhood will hear about it. You want to protect your reputation. You do not want to be known as a scofflaw. Should you stay or should you go? In this simple scenario, there are many potential boundary conditions on your trajectory. Parental care and risk aversion are constraints rooted in your biology, ultimately guided by the sequences of your genome. Peer pressure and reputation management are chiefly social, shaped through the verbal and written sequences of culture. But the relevant boundary conditions in this and many other situations actually reside in extensive bodies of text. These institutional constraints—whether legal, religious, commercial, or academic—govern many of the day-to-day details of our behavior, although we rarely stop to think about them. To John Searle they provide “the logical structure of complex societies.”2 In the Iliad, the hero Bellerophon tamed the winged horse Pegasus and rode off to slay a fire-breathing she-monster with the body of a goat, the head of a lion, and the tail of a snake. This creature was called the Chimera. Early in the 20th century the term chimera was adopted in biology and later in medicine to refer to individual organisms possessing a surplus of gene sequences, more than one genotype within a single phenotype. Like the genes of parasites which can override the genes of their hosts, the surplus sequences of the chimera compete for expression within their common phenotype.3 As we wait for the traffic signal to change to green, we are chimeras of a sort. We may possess only one biological genotype, but our trajectory is subject to boundary conditions arising from many sources. The written institutional constraints of the traffic code say one thing. The affordances of the physical environment say another.
Our biological instincts also have their say, as do the constraints of our complex primate social relationships. “A person is not an originating agent,” says B.F. Skinner. “He is a locus, a point at which many genetic and environmental conditions come together in a joint effect.”4 Sprawling, self-referential institutions like law, religion, and academia are the most complex expression of textual constraint in human culture. They have become one-dimensional worlds unto themselves. “Animals running in a pack,” Searle writes, can have hierarchies and a dominant male; they can cooperate in the hunt, share their food, and even have pair bonding. But they cannot have marriages, money, or property. Why not? Because all of these create institutional forms of powers, rights, obligations, duties, etc. and it is characteristic of such phenomena that they create reasons for action that are independent of what you or I or anyone else is otherwise inclined to do.5
What Searle calls reasons for action I am calling constraints, and in many cases, they are independent of what you or I or anyone else is otherwise inclined to do. Our personal trajectories are governed by the interplay of myriad boundary conditions, some physical, some biological, some social, and some institutional. Except for the physical, almost all are rooted in some sort of sequence. We are at once physical, biological, social, and institutional creatures, and our behavior is the result of an ongoing free-for-all6 among the relevant constraints.7 Which affordance we act upon depends on the tyranny of context as well as our developmental and cultural histories, all time bound and space bound. “Our symbolically mediated actions can often be in conflict with motivations to act which arise from more concrete and immediate biological sources,” says Terrence Deacon. Calling some actions ‘free’ and others not oversimplifies what is really only a matter of the degree of the strengths of competing compulsions to act, some compulsions arising from autonomic and hormonal sources and others from our imagined satisfaction at reaching a symbolized goal.8 Biological chimeras result when dueling genotypes compete to construct and configure their common phenotype. But in humans the competition for control is not between sequences of the same type, like two different sets of genes, but rather between two different kinds of sequences, genes and language, sequences which originate and function within different internal and external systems. Biological chimeras are first-order, but we are second-order.9 We have seen that sequences constrain interactors either by construction or by configuration, and we second-order chimeras are governed by one of each. Our biological phenotypes are the result of a developmental construction process orchestrated by DNA sequences in our genome. These biological phenotypes are then configured by the spoken and written sequences of human language.
The longstanding question of nature vs. nurture, then, is mostly a question of sequence vs. sequence. Human behavior arises from the complex classifications and reclassifications of internal and external systems of constraints. “Because of inherent differences between the two channels of information transmission across generations,” says Robert Paul, “the relations between them are characterized by a certain degree of opposition, conflict, or tension: their agendas are, in fact, to a significant degree at cross-purposes.”10 In their 1981 book Genes, Mind, and Culture, biologists Charles Lumsden and Edward O. Wilson write that “genetic natural selection operates in such a way as to keep culture on a leash.”11 In their view, variation among human cultures is subject to boundary conditions imposed by our underlying biology. The constraints of biology tend to prevail over those of culture, especially in areas like survival and reproduction, but where the biological boundary conditions are weak—where there are don’t-care conditions—culture can range off-leash.
“Different cultures are different forms that an underlying biological substructure can be manifested in,” says Searle. An institutional constraint “can only make use of what is already there in the organism, provided by its self-construction under the aegis of the DNA,” says Paul, but in order to do so it must redirect the actions of the organism from those provided by the imperatives of the genetic program and orient them toward the performance of its own priorities which are in many key areas at odds with or even opposed to those of the genetic program.12 Donald Campbell puts it pithily: “The genes say ‘Thou shalt covet;’ ultrasocial human culture says (or used to say) ‘Thou shalt not covet’” (emphasis his).13 To return briefly to the cell, the folded enzyme combines strong bonds and weak bonds. The weak hydrogen bonds that account for the three-dimensional shape of the molecule bend but cannot break the strong peptide bonds that make up the backbone of the primary structure. Culture is a weak bond that can bend but cannot break the strong bond of biology. “There could not be an opposition between culture and biology,” says Searle, “because if there were, biology would always win.”14 Like the tangle of multiple transcription factors competing to regulate expression of a gene, internal and external constraints in our chimeric mix compete to orchestrate our trajectories. “My phenotype is some sort of quantitative polygenic compromise,” says Richard Dawkins,15 a compromise that includes not only sequences of genes but also boundary conditions expressed in the sequences of speech and text.
8.2 The Fallibility of Constraints

In 2015, Leonard Wong and Stephen Gerras, two professors at the U.S. Army War College, published a disturbing paper called “Lying to Ourselves: Dishonesty in the Army Profession.”16 It shows that, contrary to traditional military values of honor and integrity, most Army officers routinely lie by sending false reports up the chain of command. This they blame not on the officers themselves, but on an out-of-control, overbearing Army bureaucracy. “Many Army officers, after repeated exposure to the overwhelming demands and the associated need to put their honor on the line to verify compliance, have become ethically numb,” they write. As a result, an officer’s signature and word have become tools to maneuver through the Army bureaucracy rather than being symbols of integrity and honesty. Sadly, much of the deception that occurs in the profession of arms is encouraged and sanctioned by the military institution as subordinates are forced to prioritize which requirements will
actually be done to standard and which will only be reported as done to standard.17 It has become routine for officers to report compliance with the Army’s ever-growing body of institutional rules and requirements even when compliance has not been achieved. The excuse is lack of time to comply with all requirements or, as we might say, to have the relevant trajectories constrained by all the sequences. “If units and individuals are literally unable to complete the tasks placed upon them,” write Wong and Gerras, “then reports submitted upward by leaders must be either admitting noncompliance, or they must be intentionally inaccurate.”18 If officers are able to lie about compliance so readily, the underlying bureaucratic constraints must be weak bonds. The officers offer rationalizations which demonstrate that stronger, more fundamental social constraints are winning the day. “Officers convince themselves that instead of being unethical, they are really restoring a sense of balance and sanity to the Army,” say Wong and Gerras. Two other rationalizations are often used as justifications for dishonesty—mission accomplishment and supporting the troops. With these rationalizations, the use of deceit or submitting inaccurate information is viewed as an altruistic gesture carried out to benefit a unit or its soldiers.19 Loyalty to the institution, loyalty to the mission, and loyalty to one’s unit are strong bonds, ancient biosocial constraints that have governed military culture as long as there have been armies. The heroes of the Iliad would have understood and honored them. In competition with these robust constraints rooted in our hominin lineage, the latest psychosocial training requirement dreamed up at the Pentagon doesn’t stand a chance. Anyone who has labored in a large institution, military, academic, industrial, governmental, or ecclesiastical, can sympathize with the officers. While they write of the U.S. 
Army, Wong and Gerras could be writing about any rule-heavy institutional bureaucracy. “The Army resembles a compulsive hoarder,” they say. “It is excessively permissive in allowing the creation of new requirements, but it is also amazingly reluctant to discard old demands. The result is a rapid accumulation of directives passed down, data calls sent out, and new requirements generated.”20 This phenomenon has evolved its own derisive vocabulary. Who has not complained about the deluge of administrivia? Who has not been tempted to pencil-whip some tedious report, just checking the boxes without gathering the data or performing the requirement? Who has not known fastidious co-workers—paper-pushers—who earn their peers’ scorn when they insist on doing everything by the book? Who hasn’t ignored the letter of the law while
proclaiming compliance with the spirit of the law? Charles Dickens gave voice to this through his protagonist David Copperfield: “Britannia, that unfortunate female, is always before me, like a trussed fowl: skewered through and through with office-pens, and bound hand and foot with red tape.”21 Why do the sequential constraints governing modern institutions enjoy such a terrible reputation? Why can they often be ignored with impunity and publicly disdained, along with the individuals who choose to obey them? We know from Leibniz that constraints incline but do not necessitate. But some constraints, especially the written rulebooks of large institutions, can barely manage the most modest inclination. These weak bonds can be rendered impotent in the face of strong bonds like tribal loyalties, personal ambitions, or biological imperatives. Speed limits are flouted, dogs are walked off-leash, red lights are run, and swimmers pee in the pool.22 The sequential constraints that nominally govern institutional behavior are often neither good instructions nor good descriptions. As instructions they can be ignored, and as descriptions they imperfectly capture the underlying organizational behavior. In the U.S. Army, at least, they appear to describe an alternate reality, a one-dimensional world in which the institution functions properly on paper. In daily practice, however, it actually functions under a more social and less formal set of constraints.23 At some point almost everyone has tried to get away with a behavior that defies a legal constraint, or has sought out some loophole.24 “Clearly, the mere codification, legislation, or proclamation of a rule is insufficient to make that rule affect social behavior,” says economist Geoffrey Hodgson.25 The provisions of written contracts and the tax code can be so difficult to enforce that the parties involved often turn for relief to the great temple of sequences known as the legal system.
“I do not think it follows as a matter of logical necessity that everybody who recognizes an obligation is inclined to fulfill it,” says Searle. “There are lots of cases in which people recognize valid obligations but just do not do the thing that they recognize they are under an obligation to do.” 26 Procedure manuals, parking regulations, and inspection checklists are not the only textual sequences to have a poor reputation. Written constraints often fail to function as intended for planning purposes, as Robert Burns points out in his famous sequence, “The best laid schemes o’ mice an’ men gang aft agley.” Policies yield unintended consequences. Projects fail to meet deadlines and budgets. Quality control breaks down. Applications crash. Sequences best-laid to constrain one function end up constraining something quite unexpected. “Although scientists can anticipate the future,” says David Hull, “they are no more clairvoyant than are gametes.”27 We can try to place complex boundary conditions on the future behavior of groups of people, but we cannot prepare for every contingency. Every field has its folk wisdom about the disconnect between the goals of written plans and the behaviors they manage to constrain in practice. From
Murphy’s Law on down, if something can go wrong, it will, especially if the planning constraints are a large, complex set of written sequences. As Mark Pagel reminds us, we should not “conflate intentionality with omniscience or omnipotence.”28 Regardless of how explicit and best-laid we make our constraints, every plan always includes don’t-care conditions, and therein much mischief lies, the mischief of unintended consequences.
8.3 A Recipe for Learning

How human culture evolves—either in tandem with or independently of biological evolution—is a field of study filled with knowledgeable and erudite practitioners, many of whom do not fully agree with one another.29 Its chief concerns can be seen in the examples earlier. Where do the constraints of biology leave off and those of culture begin—how much nature and how much nurture? How do biological and cultural evolution differ? How do they mutually influence each other’s evolution? Can the principles of Darwinism be extended to human societies? Note the parallels to our question of sequences. The claim of this book is that sequences of DNA in the cell and sequences of language in human culture are not two different things but, rather, two different examples of one sort of thing. Researchers in cultural evolution are claiming that biological evolution and cultural evolution are not two different processes but, rather, two different examples of the same kind of process, Darwinian natural selection. “Instead of genetics forming the fundamental analog to which all other selection processes must be compared,” says David Hull, “all examples of selection processes are treated on a par.”30 In other words, any theory that tries to link biological and cultural evolution should tell us what is important, as distinguished from what may be true but unimportant. “Generalization in science starts from a deliberately copious array of different phenomena and processes,” say sociologist Howard Aldrich and colleagues. “Where possible, scientists adduce shared principles.” Social and biological evolution are completely different at the level of detail, they conclude, but this is “ultimately irrelevant to the project of generalizing Darwinism,” true but unimportant.31 In the present context, how can our understanding of sequences and boundary conditions inform the study of cultural evolution?
In particular, what can we learn when we tease apart the rate-dependent and rate-independent aspects of culture? Some scholars of cultural evolution do recognize this distinction. Robert Boyd and Peter Richerson identify culture itself with its rate-independent element: “Culture is information capable of affecting individuals’ phenotypes.”32 They continue, “the relationship between culture and behavior is similar to the relationship between genotype and phenotype in noncultural organisms.”33 In the same vein, “culture is most usefully regarded as semantic
information,” say Alex Mesoudi and colleagues, “that is transmitted from individual to individual via social learning and expressed in behaviour and artifacts.”34 In short, rate-independent culture (information) constrains rate-dependent behavior. A central problem of cultural evolution involves the spread and adoption of cultural traits—things like backgammon, Christmas trees, eggs Benedict, and archery—within a human population. How does it come to pass that certain behaviors, technologies, and texts succeed for many generations over large geographical areas, whereas others do not? Research shows there are many reasons, including the utility of the behavior, the ease of learning it, how well it builds upon and harmonizes with existing behaviors, and the social status of those who adopt it.35 Dawkins, who brought us the extended phenotype, is also responsible for the meme, originally proposed as a replicating unit of culture analogous to the gene as the replicating unit of biology. “Examples of memes are tunes, ideas, catchphrases, clothes fashions, ways of making pots or of building arches,” he says. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.36 Those who study cultural evolution disagree over whether the meme is a useful concept, whether memes even exist and, if they do, what form they take.37 None of this is made easier by the fact that meme itself has acquired a distinctive folk usage on social media. For users of these services, memes are funny or provocative images, bits of text, or snippets of video that spread virally, often with modification. Thus, meme has achieved its own great reproductive success, although perhaps not as Dawkins intended.38 Nonetheless, no more successful term has been invented to describe a cultural trait, so meme it is.
Dawkins invites us “to consider a meme as like a parasite which commandeers an organism for its own replicative benefit,” says Daniel Dennett. Some memes are like domesticated animals; they are prized for their benefits, and their replication is closely fostered and relatively well understood by their human owners. Some memes are more like rats; they thrive in the human environment in spite of being positively selected against—ineffectually—by their unwilling hosts. And some are more like bacteria or other viruses, commandeering aspects of human behavior (provoking sneezing, for instance) in their ‘efforts’ to propagate from host to host.39 As offered by Dawkins, however, meme fails to make one crucial distinction; it does not acknowledge the important difference between the rate-independent
and rate-dependent elements of human culture. A sonnet, changing a tire, and a microwave oven might all be memes under Dawkins’s definition yet, as we have seen, they are not the same. A sonnet is a rate-independent sequence. Changing a tire is a rate-dependent behavior. A microwave oven is an artifact constructed through rate-dependent behavior constrained by rate-independent sequences.40 If genes are sequences and memes are to be the cultural analogs of genes, how can they also be anything other than sequences? Replicating a gene always means replicating a linear pattern, but by Dawkins’s definition, copying a meme can be copying anything—a behavior, an artifact, or a sequence. To make clear this distinction, I will define meme strictly in terms of rate independence and use it in the sense of recipe.41 This usage “captures the essence of most standard anthropological definitions of ‘cultural trait,’” write Michael O’Brien and colleagues, “behavioural information that can be transmitted between people about how (and when, where, and why, to lesser extents) to produce something (that may or may not leave a material trace).”42 A meme or recipe is a set of instructions for doing something. Recipes behaviorally link “two general structures—ingredients and rules—that can be reconfigured to form different recipes and thus different products,” say O’Brien and colleagues. “The same is true in biology with respect to protein interaction networks, in which the same ‘ingredients’ can be activated in different orders by different rules to form different cellular products” (references omitted).43 Their “two general structures—ingredients and rules” specify the affordances of the recipe. The ingredient list describes salient features of the environment, the objects of perception to be recognized. The rules or instructions guide the relevant behaviors. When you think of a recipe, you probably imagine a set of written instructions for preparing a meal.
The rate-independent sequences of the recipe orchestrate your rate-dependent behavior as you assemble the dish. The recipe constrains your perceptual system to identify and select ingredients and equipment, and your motor behavioral system to slice, dice, crack, peel, mix, sift, etc. This intuitive sense of recipe as a behavioral boundary condition is exemplified by anthropologist F.T. Cloak, Jr.: “Why don’t I use a recipe to make a cake? The answer is, Because the recipe uses me to make the cake.” He continues, “I, a living organism, am the instrument of a set of instructions. If you are doubtful of that, ask yourself, Do the instructions do what I say, or do I do what the instructions say?”44 Furthermore, the sequences of a recipe can be replicated, and whether they are replicated often depends on a selective process.45 In other words, if you like the cake, you can ask the baker to copy the recipe for you. If you don’t like the cake, you won’t. This is a stylized example in which the rate-independent and rate-dependent elements are clear, replication is explicit, and the selection criteria are straightforward. Most of culture is not like this; in fact, the reality of cake baking is not like this. It is one thing to hand a cake recipe to an experienced baker and
quite another to give it to someone who has never even cracked an egg. The former can be configured successfully by the rate-independent sequences without further help; the latter would also need a lot of spoken instruction, ostensive demonstration, and real-time monitoring. Written recipes may be necessary but are often insufficient.
8.4 Configured by Memes

How do we learn and make habits of the memes that specify our cultural affordances? How do memes configure us? Some behaviors we can learn by watching the actions and overhearing the words of other people, what is called observational learning.46 Other behaviors we acquire through deliberate instruction, rate-dependent demonstrations by a teacher accompanied by verbal prompts, pointing, and pantomime. Some recipes—like assembling bookcases and setting up tents—we can learn to follow by reading instruction booklets. Sequences are necessary for learning most behaviors, but they are by no means sufficient for all. This is why instruction manuals have pictures and diagrams as well as text. Many behaviors also require rate-dependent ostensive demonstration. If you doubt this, try explaining how to tie a necktie over the telephone. “Anyone who has attempted to learn a technology,” write archaeologists Michael Schiffer and James Skibo, “be it glassblowing or haute cuisine—by studying recipes can readily appreciate why most technologies are passed down from generation to generation by means other than explicit rules.”47 Recipes have too many don’t-care conditions; as constraints, they underdetermine. At the same time, it is the rare behavior that we can learn without any intervention from sequences. “Almost all teaching in human societies occurs through the use of language,” says Kevin Laland.48 Think about it. Are there any everyday skills you could learn strictly through observation and imitation of someone else’s behavior, with no hints, no prompting, no commentary? Could you learn to dribble a basketball just by watching? Yes, but probably not that well. Could you learn to knit a sweater? Most likely, but it would demand concentrated attention for a long time. Could you learn to make an Acheulean hand-axe? You might, but probably not a good one.49 Could you learn to ride a bicycle? Possibly, but it would require a great deal of trial and error.
Could you learn to drive an automobile? Maybe, but it would require a great deal of dangerous and expensive trial and error.50 “There is strong evidence that verbal teaching increases performance relative to gestural teaching,” says Laland.51 Many human implements and behaviors are anything but transparent to the untrained eye. Our preliterate ancestors would not have known what to make of a traffic signal. Confronted with an unfamiliar gadget or procedure, rarely can we figure out how to operate or perform it well, and often we cannot figure out how to do it at all. “Human artefacts and instrumental actions tend to be opaque,” write psychologists Gergely Csibra and György Gergely, “both
in terms of their adaptive function (teleological opacity) and in terms of their modus operandi (causal opacity).”52 Could you learn to use a typewriter by imitation? Ah, a trick question. The physical steps involved in operating the machine could be learned through observation and imitation but, unlike dribbling a basketball or riding a bicycle, a typewriter’s purpose is not the rate-dependent activity in itself but rather the production of rate-independent sequences. So, like the proverbial chimpanzees composing Shakespeare, you could learn how to use a typewriter without knowing how to use a typewriter. Rudimentary hunt-and-peck skills could be learned by imitation more easily than could touch typing, but neither is of much use without the additional knowledge that the sequence of pecks matters. Effective learning of complex behavior almost always demands a combination of rate-dependent imitation and rate-independent sequences, pointing and exaggerated demonstration paired with sequential hints, prompts, and commentary. Sequences and dynamic behavior are both necessary, but neither is sufficient. As Andy Clark says, human learning is “language-infected.”53 The entanglement of speech and gesture guarantees that ostensive constraints will be accompanied by sequences and vice versa. Memes sometimes go viral, meaning they spread rapidly from person to person via social media. Within hours, a successful meme can configure, ever so slightly, the behavior of millions of people. More generally, virus is a metaphor also used to describe how sequences of language spread from individual to individual. “Ideas are like viruses which spread autonomously from host to host,”54 says William Wimsatt, and Terrence Deacon writes, “like a ‘mind virus,’ the symbolic adaptation has infected us.”55 This metaphor is implicit in Clark’s description of learning as language-infected.56 But what does going viral have to do with actual viruses?
Viruses are sequences, some made of DNA and some of RNA, the closest things we have to life forms which are pure, standalone sequences. “A virus can be entirely defined by its coding capacity,” write virologists Didier Raoult and Patrick Forterre.57 Viruses are also the most abundant living things on Earth. Viral particles outnumber cells by a factor of ten to 100, and in most environments viral genes greatly outnumber cellular genes.58 Although there is great diversity among viruses, one shared feature is a stripped-down genome lacking the genes for ribosomes. In order to express themselves, the viral sequences must hijack the translation machinery of the host cell. The same is true of replication. Viruses have evolved a distinctive way of life that computational biologists Eugene Koonin and Yuri Wolf call informational parasitism.59 They know the cell’s code and have access to its machinery. Viruses are recipes, molecular memes; they comprise a list of ingredients and a set of rules. Being an excellent host, the host cell provides the virus with the affordances it needs, a pantry full of ingredients and the gadgets necessary for executing the rules. A virus invading a cell prompts a wholesale
reconfiguration of the cellular machinery. “During the infection process,” says Forterre, “viral genetic information progressively transforms the cell . . . into a new type of cellular entity.”60 Like the fungus in the ant, the virus reclassifies the internal affordances of the cell. A chimera is born. “It is not just a modification of the cell’s metabolism,” says Forterre, but the emergence of a completely different metabolism with a different function that provides autonomy for the virus. . . . During this stage, it is impossible to determine to which organism some of the molecular organs present in the cell (such as the membrane or the ribosomes) belong.61 The constraints of virus and cell become entangled, creating what evolutionary biologist Jean-Michel Claverie calls a virus factory.62 Like viruses, the sequences of spoken and written language are informational parasites that hijack the equipment of their human hosts in order to be expressed and replicated. The recipe uses you to make the cake. If I like the way it tastes, you can replicate the recipe so that it can use me to make the cake, too. If the recipe is von Neumann’s (X), you can be his Universal Constructor A and his Universal Copier B, but not both at the same time. We know little about the neurological machinery that makes all of this possible, but on the evidence of our daily inputs and outputs we know it exists.
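Von Neumann’s separation of roles can be sketched in a few lines of code. This is only a toy illustration of the distinction, not his actual cellular-automaton model; the function names, the recipe string, and its semicolon format are all hypothetical. The constructor interprets the sequence to build something; the copier duplicates the sequence without interpreting it at all.

```python
# Toy sketch of von Neumann's split between interpreting a sequence
# (Universal Constructor A) and duplicating it unread (Universal Copier B).
# All names and the recipe format are hypothetical, for illustration only.

def construct(description):
    """Constructor A: interpret the sequence, carrying out each named step."""
    steps = [step.strip() for step in description.split(";")]
    return "cake made by: " + ", ".join(steps)

def copy_sequence(description):
    """Copier B: duplicate the sequence symbol by symbol, uninterpreted."""
    return "".join(ch for ch in description)

recipe_X = "crack eggs; sift flour; mix; bake"

artifact = construct(recipe_X)       # the recipe uses the machinery to make the cake
duplicate = copy_sequence(recipe_X)  # the same sequence, replicated for a new host

print(artifact)
print(duplicate == recipe_X)         # copying preserves the linear pattern exactly
```

The point of the sketch is that the same one-dimensional pattern plays two roles: read as instructions by one piece of machinery, treated as inert data by another, but not both at once.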
8.5 The Microbial Sharing Economy

Viruses are not the only standalone sequences at the microbial level. Many bacteria regularly exchange DNA sequences with one another in a process called horizontal gene transfer (HGT). Like a meme, each horizontally transferred snippet of DNA carries its corresponding function or recipe.63 This is how medically important traits like antibiotic resistance spread through bacterial populations.64 Although HGT has been known to science since the 1950s, researchers began to appreciate its scale and scope only after genome sequencing became cheap and widespread in recent decades.65 The bacterium Escherichia coli is the laboratory workhorse of molecular biology and has long been considered a species. When we imagine a species, we usually expect the genomes of individual members of the species to be more or less the same, like the genomes of wolves or moose or horseshoe crabs. Although there is always some variation at the margin, the vast majority of genes in a species should be essentially the same. But individual E. coli genomes are far from identical. In fact, only a small fraction of the gene families in a typical E. coli genome are found in all E. coli genomes.66 If you count up all of the genes that have ever been found in E. coli, the number is large, about 20,000. However, the number of genes that all E. coli cells have in common is only one-tenth of the total, about 2,000 genes.67
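The core/pan-genome arithmetic above can be sketched with ordinary set operations: the core genome is the intersection of gene sets across individual genomes, and the pan-genome is their union. The strain and gene names below are invented for illustration, not real E. coli data.

```python
# Toy genomes: each strain carries a few genes (hypothetical names).
genomes = {
    "strain_A": {"dnaA", "rpoB", "gyrA", "lacZ", "toxB"},
    "strain_B": {"dnaA", "rpoB", "gyrA", "araC", "blaTEM"},
    "strain_C": {"dnaA", "rpoB", "gyrA", "lacZ", "mcr1"},
}

core = set.intersection(*genomes.values())  # genes shared by every strain
pan = set.union(*genomes.values())          # every gene seen in any strain
accessory = pan - core                      # the "optional" mobile pool

print(sorted(core))        # the core genome: ['dnaA', 'gyrA', 'rpoB']
print(len(pan))            # size of the pan-genome: 8
print(sorted(accessory))   # genes found only in some strains
```

As in the real E. coli case, the core here is a minority of the pan-genome; most of the species' genetic diversity lives in the accessory pool.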
These 2,000 are known as the core genome of the species. They describe the affordances common to all E. coli environments. What about the other 18,000? If the 2,000 sequences in the core genome constitute only a tiny sliver of all genes in the species, how do we describe the rest? The word seems to be “optional.” These are sequences that are shared across populations via horizontal gene transfer, yet are found in only a minority of individuals.68 “While the core genes perform vital routine functions,” says biologist Peter Young, “the diversity of bacterial phenotypes and adaptations is largely provided by the mobile pool of accessory genes that are readily gained and lost.”69 They comprise what is called the pan-genome. The distinction between the core genome and the pan-genome is like the operating system of a smartphone versus its constellation of apps. “Phones are shipped from the factories in a limited range of models (species) with standard operating systems (core genes),” writes Young, “but users download apps (accessory gene modules) from the internet (community gene pool) so there are soon millions of distinct combinations of software (genotypes). Rather than accessing a central app store, bacteria adapt through peer-to-peer networking (horizontal gene transfer).” 70 If everyday English were our core genome, our pan-genome would be the technical languages spoken by engineers, lawyers, clerics, or molecular biologists. These are specialized sequences that describe specialized affordances. An excellent example of how pan-genomes operate is found in a cyanobacterium known as Prochlorococcus, modestly described by microbiologist Steven Biller and colleagues as the “most abundant photosynthetic organism on the planet.” 71 Prochlorococcus accounts for about half of the chlorophyll in large parts of the surface oceans; its estimated global population is 10²⁷ cells. (For those of you counting at home, that’s an octillion.)
Collectively it “produces an estimated 4 gigatons of fixed carbon each year,” say Biller and colleagues, “which is approximately the same net primary productivity as global croplands” (references omitted).72 In other words, Prochlorococcus is not some weird outlier organism; it is a mainstream element of the global ecosystem.73 Prochlorococcus has a core genome of only about 1,000 genes, and these constitute about half the genome of an average Prochlorococcus cell. The other half is drawn from its pan-genome, which appears to be vast, perhaps more than 80,000 genes shared across the global population via HGT.74 “Prochlorococcus can be viewed as a federation of coexisting cells,” write Biller and colleagues, “a large collection of many groups, each of which exhibits different adaptations to specific environmental variables.” 75 By tapping into this external library of sequence diversity, populations of Prochlorococcus can respond to the affordances of their local habitats; they can be configured by input sequences as specific ecotypes.76 Were they human we might call these cultures. Each gene sequence in the Prochlorococcus pan-genome is like a recipe; it governs a function needed sometimes but not always, depending on the presence of
the relevant affordance. How often these sequences are shared, replicated, and expressed depends on the chemistry of the water column, the amount of sunlight available, the prevalence of predators, and the metabolic outputs of other bacteria. “Microorganisms do not have a brain,” says philosopher Nathalie Gontier, “but they nonetheless display differential phenotypic behavior that is relevant from an ecological point of view. Communication need not involve spoken or signed language, it can also be of a biochemical kind.” 77 As Pattee says, molecules can become messages.78 Prochlorococcus swims in an information soup rich with meme-like sequences, niche adaptation genes,79 which can be exploited in response to environmental requirements. By importing sequences from its distributed pan-genome, a Prochlorococcus cell can configure itself to exploit many novel affordances. The ocean is filled with the chatter of their DNA.80 The Prochlorococcus system architecture evokes von Neumann’s self-reproducing automaton, swimming in its own soup of parts and input sequences. Each sequence configures the Universal Constructor A to build a novel, specialized automaton. Similarly, each DNA snippet imported by Prochlorococcus configures it with the functional capability to deal with some novel local affordance. This allows Prochlorococcus to evolve more rapidly than would be possible through ordinary parent-to-offspring transmission. “The main process of genome innovation for many of the evolving entities on this planet is not vertical descent,” say James McInerney and colleagues, “rather it is recombination and gene acquisition.”81 Or, in the words of Carl Woese, “horizontally derived variation is the major, if not the sole, evolutionary source of true innovation.”82 We humans float in our own information soup of memes, inputting them, expressing them, and replicating them. Or, as Cloak would say, the memes use us to make our “cakes” and then use us again to copy them.
Our culture, too, can evolve more rapidly than allowed by ordinary vertical descent. We share a cultural core genome—our native language is part of it—as well as a distributed pan-genome library which allows us to exploit specialized affordances as needed. If we need to make something novel, we can look up the recipe or call on an expert.
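The idea of configuration from a distributed pan-genome can be sketched as a lookup against a shared community pool: a cell (or a culture) carries a small fixed core and imports accessory "recipes" only when a local affordance demands them. All gene, pool, and affordance names below are invented for illustration.

```python
# Hypothetical shared pool: each local affordance maps to the accessory
# gene module that exploits it (names are made up, not real gene names).
community_pool = {
    "low_light": "divinyl_chlorophyll_genes",
    "low_phosphate": "phosphonate_uptake_genes",
    "high_uv": "photolyase_genes",
}

def configure(core_genes, local_affordances):
    """Return a genome: the shared core plus accessory genes matching the habitat."""
    accessory = {community_pool[a] for a in local_affordances if a in community_pool}
    return set(core_genes) | accessory

core = {"photosystem_genes", "ribosome_genes"}

# Two ecotypes constructed from the same core, configured differently
# by the affordances of their local environments.
deep_water_cell = configure(core, ["low_light", "low_phosphate"])
surface_cell = configure(core, ["high_uv"])

print(deep_water_cell - surface_cell)  # accessory genes unique to the deep ecotype
```

The two resulting "ecotypes" share their core but differ in accessory content, mirroring how Prochlorococcus populations respond to local habitats, or how a generalist speaker acquires a specialist vocabulary as needed.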
8.6 A Network Lovely as a Tree

Descent with modification is how Charles Darwin summarized what we now know as natural selection. Living things reproduce, and their offspring are like them, mostly but not exactly. String together enough of these slightly off reproductive events and as the generations pile up you begin to see a pattern that looks like a tree. The tree of life has always been a powerful metaphor in biology, from the fanciful trees imagined by 19th century natural philosophers to the statistical trees based on sequence similarity reviewed in Chapter 1. In The Origin of Species, Darwin offers but a single illustration, a tree of descent.
Today’s tree of life is based on three assumptions about how DNA sequences replicate. First, when a cell divides, we assume that the entire genome is replicated all at once. It is not a piecemeal process; a complete copy of the linear pattern is passed down in what Wimsatt calls a bolus.83 Second, we assume that genome replication is a unitary, intergenerational event; during most of its lifetime the cell is not replicating its DNA. Finally, we assume that the genome of the dividing cell accounts for not only most of the genes in the organism but also most of the genes in the entire species. There will always be genetic diversity across wild populations, but the differences are overwhelmed by the similarities. These assumptions work pretty well for the familiar plants, animals, and fungi of our everyday world, but in bacteria the pervasiveness of horizontal gene transfer calls the assumptions into question, especially for the earliest days of evolution. “Life was then a community of cells of various kinds sharing their genetic information so that clever chemical tricks and catalytic processes invented by one creature could be inherited by all of them,” says Freeman Dyson. “Evolution was a communal affair, the whole community advancing in metabolic and reproductive efficiency as the genes of the most efficient cells were shared.”84 To be sure, bacterial cells are just like other cells and replicate their DNA when they divide, fulfilling the three assumptions mentioned earlier, but they also do much more. With horizontal gene transfer, the bacterial genome can be replicated not only in its entirety all at once but also in short snippets of sequence that are shipped out into the environment throughout the cell’s lifetime. Thus, replication is not just an intergenerational event, but an ongoing feature of the organism’s life, with sequence export and import taking place regularly within generations as well as between.
Finally, only a minority of genes in the species, the core genome, are found in all bacterial individuals; most of the diversity in the species comes from the genes floating in the information soup, the distributed pan-genome. This is no way to build a tree. “Trees are inadequate because they capture only a tiny amount of the underlying evolutionary dynamics,” says botanist William Martin.85 But if a tree doesn’t do the job, what will? Many researchers argue for a network model—a web of life—in which the nodes represent bacterial genomes, and the links between the nodes represent the ongoing exchange of sequence information among the genomes.86 Networks “present a means of reconstructing microbial genome evolution that accommodates the incorporation of foreign genes,” write microbiologist Tal Dagan and colleagues, “more realistically modeling the process as it occurs in nature.”87 The networked web of life gives structure to the information soup.88 In the Prochlorococcus network, for example, individual cells are the nodes and the HGT pathways connecting the cells are the links between the nodes. Unlike descent with modification in a tree, evolution in a network is a dynamic, distributed activity in which gene sequences are input, expressed, and replicated
in real time as individual cells respond to local affordances. Horizontal gene transfer “leads to genomes whose constituent genes have different evolutionary histories,” write evolutionary biologist Peter Gogarten and colleagues.89 Modeling the evolution of these networks is a challenge, not least because each node (cell) can also reproduce itself in its entirety.90 As a result it is necessary to graft a conventional tree-like topology (for the nodes) onto the network model. “The complete picture of evolution will necessarily combine trees and nets,” write computational biologist Pere Puigbò and colleagues. “The treelike and net-like processes of evolution are entangled.”91 HGT does not replace descent with modification; it supplements it. As Woese says, “vertically generated and horizontally acquired variation could be viewed as the yin and the yang of the evolutionary process.”92 The structure of the hybrid tree-network emerges from the two fundamental ways in which sequences constrain their interactors: construction (descent with modification) and configuration (horizontal gene transfer). The genomes of one generation guide construction of the cells of the next, but at the same time, each cell is being configured by bits of DNA from other genomes within its own generation. This kind of intertwined evolutionary model even has a name, reticulate evolution.93 “In reticulate evolution, there is no unique notion of genealogical descent,” write bacteriologist Kalin Vetsigian and colleagues. 
“Genetic content can be distributed collectively.”94 At the other extreme of complexity, that of human civilization, it is no secret that the flow of cultural information is more complex than the conventional tree-of-life topology of biology.95 Children may acquire the traditions of their culture vertically from their parents in a tree-like fashion, but they also acquire snippets of culture—memes—horizontally from a network of friends, other relatives, teachers, neighbors, and mass media. Children grow up in a recipe-laden information soup. “Ideas and other cultural constructs spread horizontally and obliquely, not just vertically,” says Kim Sterelny. “They are not replicated and transmitted en masse (like a computer being loaded with its system disk) but are drip-fed.”96 Like biology, cultural evolution is more reticulate than tree-like, with traditions not only passed down from generation to generation but also supplemented by innovation within generations.97 Reticulate it may be, but cultural evolution differs in one key respect from the reticulate evolution of bacteria. In Prochlorococcus both the tree-like and web-like elements comprise sequences of the same kind, that being DNA. When you observe the behavior of Prochlorococcus, you can be confident that its behavior is constrained by genes, whether by descent from a parent or horizontally from a neighbor. As we have seen, however, chimeric human phenotypes are constrained by an asymmetric inheritance system.98 Like Prochlorococcus, our treelike biological inheritance comprises sequences of DNA but, unlike Prochlorococcus, our web-like recipes arrive not as genes, but as sequences of language. When you observe an individual human, you often cannot tell whether the
boundary conditions constraining her behavior are genetic or linguistic, nature or nurture. Sequences constrain their interactors through construction and configuration. They not only guide the fabrication of their interactors, as genes do with enzymes and genomes with cells, but they also manipulate the behavior of preexisting interactors, as viruses do with cells, worms with crickets, and memes with humans. Just as von Neumann’s input sequences (X) need to both build (construct) and program (configure) automata, the reticulate tree-network of evolution requires both processes. However, construction is more fundamental. Before it can be configured, an interactor must first be constructed. You cannot configure something that does not exist. Whether it is Prochlorococcus configured to live in a low-light region of the sea or a cricket configured to jump into the pool or a human configured to stop at a red traffic light, initial construction of the Prochlorococcus or the cricket or the human is a prerequisite. Construction yields the tree-like element of the reticulate model. Whether simple bacterium or complex vertebrate, construction of an organism requires the complete core genome; all relevant sequences must be available at the same time and place, time bound and space bound, to orchestrate development.99 Configuration, on the other hand, corresponds to the network-like element of the model. Once constructed, the organism is subject to configuration by sequences or allosteric constraints in its environment, anything demonstrating knowledge of the code and access to the machinery.
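The reticulate model, a tree of vertical descent with network links grafted on, can be sketched as two data structures over the same nodes: parent pointers for construction, and donor-recipient pairs for configuration. Tracing a gene's history then mixes both, which is exactly why genes in one genome can have different evolutionary histories. The cells, genes, and events below are invented for illustration.

```python
# Toy reticulate model: tree edges record vertical descent (construction),
# network edges record horizontal transfer within a generation (configuration).
vertical = {          # child -> parent
    "cell_B": "cell_A",
    "cell_C": "cell_A",
    "cell_D": "cell_B",
}
horizontal = [        # (donor, recipient, gene)
    ("cell_C", "cell_D", "resistance_gene"),
]

def gene_history(cell, gene):
    """Follow at most one sideways HGT hop, then climb the tree of descent."""
    for donor, recipient, g in horizontal:
        if recipient == cell and g == gene:
            cell = donor  # the gene arrived sideways, not from a parent
            break
    lineage = [cell]
    while cell in vertical:
        cell = vertical[cell]
        lineage.append(cell)
    return lineage

print(gene_history("cell_D", "resistance_gene"))  # ['cell_C', 'cell_A']
print(gene_history("cell_D", "core_gene"))        # ['cell_D', 'cell_B', 'cell_A']
```

Two genes sitting in the same genome (cell_D) trace back along different paths, one through a neighbor and one through the parental line: a miniature of the entangled tree-and-net picture Puigbò and colleagues describe.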
8.7 Constraints, Institutionalized

Consider the sequence “Thou shalt not covet.” Here we have a translation, in beautiful 17th-century English, of a sequence of Hebrew characters dating back millennia and faithfully replicated by scribes ever since. It is a short, clear instruction, easily remembered if not so easily executed. Thus it makes a fine meme, a fine recipe, to be passed from person to person in our cultural information soup. You did not acquire this snippet of sequence through your genome; like the rest of your language you picked it up from your environment, most likely in early childhood. But it is more than just a solitary meme. The odds are very good that you picked it up as part of an extensive set of interlocking memes that have served to constrain the perception and behavior of billions of humans over thousands of years and across many continents. The set is passed on as a set, Wimsatt’s bolus.100 “Thou shalt not covet” is part of the cluster of sequences known as the Ten Commandments, which in turn are part of the sacred text of Judaism, the Torah. After governing the behavior of Jews for many generations, the sequences of the Torah were reclassified, some would say superseded, by sequences of the New Testament and began to constrain the behavior of Christians as well. After
several hundred more years, sequences of the Jewish and Christian Bibles were reclassified by sequences of the Qur’an and, in modified form, the Ten Commandments began guiding the behavior of Muslims. Like all world religions, Christianity and Islam have great collections of sequences, sacred foundational texts that serve to deploy their behavioral constraints across time and space. “A written text ensures that the word of God or of his associates can be transmitted unchanged over the generations,” says Jack Goody. “The word is preserved in the canonical text.”101 Only a small number of faith traditions have managed to distribute their sacred sequences worldwide. “The overwhelming majority of believers today are the cultural descendants of a very few such religions,” say psychologist Ara Norenzayan and colleagues.102 Foundational texts are always subject to further reclassification through extensive interpretation and commentary, and derivative texts like Canon Law, Tafasir, and Talmud are often accepted as universal boundary conditions. Calvin’s Catechism and its kin can be thought of as cultural genomes containing “in easily replicated form the information required to develop an adaptive community,” writes evolutionary biologist David Sloan Wilson. “They are short enough for detailed analysis, and many religious denominations have them, enabling the comparative study of religious organizations. Finally, single denominations periodically revise their catechisms, providing a neatly packaged ‘fossil record’ of their evolutionary change.”103 The time and space boundaries of “thou shalt not covet” differ markedly from the tunes or catch-phrases or clothes fashions that Dawkins proposes as potential memes. Nor does the commandment resemble a gene shuffled around a population of Prochlorococcus in response to local environmental challenges.
Rather, it is a small part of several large and durable institutions which have constrained the behavior of a considerable fraction of all humans who have ever lived. It is less like the pan-genome and more like the core genome of civilization. “While cross-cultural borrowing may be frequent for many peripheral components,” write Boyd and colleagues, “a conservative ‘core tradition’ in each culture is rarely affected by diffusion from other groups. In this view, cultures are hierarchically integrated systems, each with its own internal gradient of coherence. At one extreme in the gradient are the ‘core’ components of a culture—those ideational phenomena that constitute its basic conceptual and interpretive framework, and influence many aspects of social life. At the other are peripheral elements that change rapidly and (or) are widely shared by diffusion.”104 Most social scientists would agree with John Searle that “institutional power—massive, pervasive, and typically invisible—permeates every nook and
cranny of our social lives.”105 But what is institutional power? “Institutions are the humanly devised constraints that structure political, economic and social interaction,” says economist and Nobel laureate Douglass North. “They consist of both informal constraints (sanctions, taboos, customs, traditions, and codes of conduct), and formal rules (constitutions, laws, property rights).”106 Like the U.S. Army, all institutions have their unwritten rules as well as their formal boundary conditions defined by written sequences. Hodgson defines institutions as “durable systems of established and embedded social rules that structure social interactions, rather than rules as such. In short, institutions are social rule-systems, not simply rules.”107 An institution is a sequence-governed, interlocking set of social constraints, what economist Avner Greif calls “a system of social factors that conjointly generate a regularity of behavior.”108 Explaining how systems of sequences generate this regularity of behavior requires more than a simple description of how individual particulate memes and recipes hop from person to person.109 “The pattern and intensity of selection acting on fads and fashions will be quite different from that acting on established norms and institutions,” write Mesoudi and colleagues.110 We need an explanation of how large interlocking systems of sequences construct, configure, and perpetuate the pervasive institutions of society, the strong bonds. “It is quite clear,” says Wimsatt, “that we cannot understand absolutely crucial elements of the structure and some of the content of cultural genomes of individuals without considering the institutional structures through which they pass and which in their activities they help to constitute.”111 The great world religions comprise hierarchies of priests, imams, and other specialists who replicate and interpret the foundational texts; they are the system’s first-order interactors.
Like the white-collar enzymes of a gene regulatory network, they are created and governed by sequences. Their roles and power exist only by virtue of the sequential rules of the institution itself. Such equivalence classes are irreducible to the properties of the individuals involved.112 The equivalence class President of the United States has included Theodore Roosevelt and Andrew Jackson, but the affordances of the office are specified by institutional sequences, not by any biological attributes of the men. Such roles are “constituted by an extremely complex set of explicitly verbal phenomena,” says Searle.113
8.8 The State of Sequences

World religions are ancient institutions governed by well-studied collections of sequences. They are almost unmatched in the scale and scope of the boundary conditions they have imposed on the behavior of human primates over the centuries. I say almost unmatched because world religions have one serious competitor: the modern state and its text-based legal system that constrains us
to stop at red lights and much, much more. “The government is the ultimate institutional structure,” writes Searle (emphasis his).114 “Governments have their origin in a series of primitive biological phenomena, such as the tendency of most primate social groups to form status hierarchies, the tendency of animals to accept leadership from other animals, and, in some cases, the sheer brute physical force that some animals can exert over others.”115 We comply with the rules of the legal system most of the time, but ultimately the state derives its power from its monopoly on violence, sheer brute physical force, its authority to constrain us physically and limit our degrees of freedom, up to and including killing us in some jurisdictions.116 Prisons are nothing if not boundary conditions. In Western democracies such extreme constraints are enforced through the procedures of environmental pattern recognition and regulatory logic known as due process. States are “huge social interactors whose properties are, in crucial aspects, defined by their component replicators: the legal system and the written codification of the law,” write Hodgson and Thorbjørn Knudsen.117 If enzymes are the simplest examples of pattern recognition coupled to behavioral response, then legal systems may be the most complex. Like enzymes, legal systems recognize patterns (evidence) and respond with behaviors (judgments). Pattern recognition in the legal system relies on the perceptual apparatus of its human interactors whose evidential percepts are memorialized as sequences. The rules of evidence specify the salience of these percepts, classifying them into admissible and inadmissible equivalence classes, like eyewitness and hearsay. The system allows only certain environmental features to be perceived and “the jury shall disregard” any percepts falling outside the permitted zone.
Whether tasers, handcuffs, prisons, or the firing squad, the behavioral constraints of the system can be imposed only after the percepts of evidence have passed through an interlocking set of self-referential legal sequences as interpreted by judges and juries.118 If the evidence as perceived by the judge or jury matches the descriptions of criminal behavior in the sequences of law—if the affordance is recognized—then the appropriate behavioral constraints are triggered.119 The judicial appeals process in the United States gives judges near the top of the legal hierarchy the authority to reclassify verdicts and judgments of lower courts. It is strictly rate-independent. Judges review written descriptions of evidence and transcripts of arguments as perceived by the lower court—rather than the evidence and arguments themselves. While the judges and juries of trial courts may witness the rate-dependent histrionics of lawyers, the appeals court has access only to the abstract sequential descriptions. Gone are Anne Dyson’s “sensual qualities of speech.”120 In short, the legal system couples
regulated perceptual inputs to specific output constraints, with many levels of sequence processing in between.121 It’s quite an enzyme. Consider the sequence I now pronounce you married. This is a powerful constraint within the institutions of religion and government. It is expressed only if each potential spouse belongs to a well-defined equivalence class (unmarried, of legal age, etc.) and if the interactor speaking the sequence also belongs to a well-defined equivalence class (judge, priest, etc.) with institutional authority to effect the constraint (“by the authority vested in me”).122 Assuming the institutional rules are satisfied, I now pronounce you married acts as a great switchlike constraint, cascading through many environments, reclassifying the affordances of hospitals, workplaces, houses of worship, and courtrooms. Flip one big switch and hundreds or thousands of subsidiary switches are also flipped, reconfiguring the boundary conditions and affordances of the spouses’ lives. “Institutions of property, contract, rights, and law not only guide our thinking about social arrangements,” says Gallagher, “but allow us to think in ways that were not possible without such institutions.”123 The institutions of religion, government, and law express great reverence for tradition and precedent; major reclassifications of their sequential constraints occur rarely, and usually not for light and transient causes. This principle is exemplified by the Latin sequence stare decisis et non quieta movere, to stand by decisions and not disturb the undisturbed. The English common law has evolved slowly and incrementally for centuries, becoming more efficient as it does.124 Long-term conservation of complex systems is also found among genomic sequences in the cell. In Chapter 5 we met gene regulatory networks (GRNs), which orchestrate the development of multi-celled organisms.
Guided by homeotic sequences, each individual GRN governs the piece-by-piece build-out of part of the body plan. Like a wedding, expression or repression of a single sequence can trigger a cascade of downstream effects. As with the human institutions of religion and law, the core, kernel sequences of gene regulatory networks are ancient and conservative. Their self-referential architecture is inherited in its entirety, making successful modification difficult and uncommon.125 “Interference with expression of any one kernel gene will destroy kernel function altogether,” say Eric Davidson and Douglas Erwin. “The result is extraordinary conservation of kernel architecture.”126 We think of the animal kingdom as being enormously diverse, but thanks to these ancient gene regulatory networks, many of us are much the same at a higher, abstract level. Crickets, horseshoe crabs, moose, wolves, and humans all share similar plans of symmetry, segmentation, and digestive canal. It has been 500 million years since a major new phylum-level body plan came into existence. Every subsequent innovation has resulted from tweaks within phyla and classes.127 In the words of Davidson and Erwin, once the high-level GRNs were in place, “they could not be disassembled or basically rewired, only built on to.”128 So it is with the institutions of civilization.
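The shared logic of the wedding pronouncement and the GRN kernel, one high-level switch flipping many subsidiary switches, can be sketched as a cascade over a dependency graph. The switch names and wiring below are invented for illustration; they stand in for any downstream constraints, legal or genomic.

```python
# Hypothetical cascade: each switch lists the subsidiary switches that
# depend on it, as downstream genes depend on a kernel gene.
downstream = {
    "married": ["hospital_visitation", "tax_status", "inheritance"],
    "tax_status": ["joint_filing"],
    "inheritance": ["estate_rights"],
}

def flip(switch, state=None):
    """Set a switch and recursively flip every constraint that depends on it."""
    if state is None:
        state = {}
    state[switch] = True
    for dep in downstream.get(switch, []):
        flip(dep, state)
    return state

# One pronouncement reclassifies a whole tree of affordances at once.
print(sorted(flip("married")))
```

The same structure explains the conservatism of kernels: because everything downstream depends on the top-level switch, rewiring it would disturb the entire cascade, so it can only be built on to, not disassembled.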
The finely tuned gene regulatory networks of living things and the pervasive institutions of human culture follow an evolutionary trajectory distinct from the sequence snippets of horizontal gene transfer and the spread of memes like tunes and clothes fashions. The former are inherited and passed on as great interlocking boluses of sequences and the latter as independent, particulate functions and constraints. The former give us the tree-like element of the reticulate model of cultural evolution, and the latter provide the network-like element.
Notes

1. Police officers are sometimes called “the long arm of the law,” an acknowledgment that the elaborate sequences of legal texts ultimately require their own specialized interactors to function as effective constraints. In the movies the bad guys don’t say “Scram, it’s the law!” because they perceive a bunch of sequences. Rather, they are reacting to interactors constrained by those sequences.
2. Searle 1995
3. Recent genomic analyses, in fact, have confirmed “the chimeric origin of eukaryotes” (Rivera et al. 1998). That the sequence-bearing mitochondria and chloroplasts found in eukaryotes were once free-living bacteria is well known, but even eukaryotic nuclear DNA has chimeric origins. Different classes of genes—e.g., operational vs. informational—have been found to originate in different branches of the bacterial and archaeal lineage.
4. Skinner 1974
5. Searle 1995
6. In Consciousness Explained, Daniel Dennett calls the interplay of myriad boundary conditions in the mind a “pandemonium” (1991).
7. Although constraints in physics and constraints in computer science are different, the competition among constraints on our behavior resembles constraint satisfaction problems in artificial intelligence (e.g., Tsang 1993).
8. Deacon 1997
9. Cavalli-Sforza & Feldman 1981. Robert Boyd and Peter Richerson call these genetic and linguistic systems asymmetric. “Two inheritance systems are asymmetric when their patterns of transmission differ,” they say (1985).
10. Paul 2015
11. Lumsden & Wilson 1981
12. Paul 2015; emphasis mine.
13. Campbell 1982
14. Searle 1995
15. Dawkins 2004
16. Wong & Gerras 2015
17. Wong & Gerras 2015; emphasis mine.
18. Wong & Gerras 2015
19. Wong & Gerras 2015
20. Wong & Gerras 2015
21. Dickens, David Copperfield, Chapter 43.
22. “Most of us honor the big, important rules, like those prohibiting robbery, arson, rape, and murder,” write software engineer Kevin Simler and economist Robin Hanson.
“But we routinely violate small and middling norms. We lie, jaywalk, take office supplies from work, fudge numbers on our tax returns, make illegal U-turns, suck up to our bosses, have extra-marital affairs, and use recreational drugs” (2018).
The Institution of Sequences
23. “At the social level, why is it that detailed description of behavior alone does not allow us to derive the policy underlying this behavior?” asks Howard Pattee. “And why is it that a complete statement of policy does not allow us to derive the forces and dynamics under which these policies will operate?” (1978).
24. Campbell calls loopholes “the literal legalistic interpretations of the law that subvert announced legislative intent” (1982).
25. Hodgson 2006
26. Searle 2010
27. Hull 1982
28. Pagel 2006. We are, as economist Pavel Pelikan says, “still far from eliminating all randomness” (2011). In evolutionary biology, this concept is summarized in the second rule of Leslie Orgel, “evolution is cleverer than you are” (quoted in Dennett 1995) or, as Pattee tells his students, “natural selection bats last.” The failure of sequences to achieve their intended goals in a deterministic fashion opens the door to the variation which feeds creative evolution. Like Viagra, sometimes the side effect is where the money is. Originally developed in 1989 to treat high blood pressure and angina, its more commercially exciting effects were discovered during cardiovascular clinical trials (Osterloh 2004).
29. To get a sense of where things stand, try Laland 2017, Henrich 2016, Paul 2015, Lewens 2015, Hodgson & Knudsen 2010, Mesoudi et al. 2006, Morin 2015, Richerson & Boyd 2005, and Tamariz 2019a. In the founding canon of the field, two books stand out, Cavalli-Sforza & Feldman 1981 and Boyd & Richerson 1985. On Darwinian thinking in linguistics, see Gontier 2012.
30. Hull 2000
31. Aldrich et al. 2008
32. Boyd & Richerson 1985
33. Boyd & Richerson 1985
34. Mesoudi et al. 2006
35. Richerson & Boyd 2005
36. Dawkins 2006
37. See, for example, Hull 1982, Wilkins 1998, Blackmore 1999, Aunger 2001, and Sterelny 2006.
38. Paul 2015
39. Dennett 2001
40. “Economist Kenneth Boulding says, ‘a car is just an organism with an exceedingly complicated sex life.’ Like a technological virus, it takes over a complex social structure and redirects the resources of a large fraction of it to reproduce more of its own kind” (Wimsatt 1999).
41. Schiffer & Skibo 1987; Lyman & O’Brien 2003; Mesoudi & O’Brien 2008
42. O’Brien et al. 2010
43. O’Brien et al. 2010
44. Cloak 1975
45. Blackmore 1999
46. Bandura 1971, 1986
47. Schiffer & Skibo 1987
48. Laland 2017
49. Whittaker 1994. “Stone toolmaking, from the Oldowan on, requires bodily skills that cannot be acquired directly through observation,” writes archaeologist Dietrich Stout. “These pragmatic skills can only be developed through deliberate practice and experimentation leading to the discovery of low-level dynamics that would remain ‘opaque’ to observation alone” (2011).
50. “We have been selected to teach as well as learn,” says Kim Sterelny, “so the skilled need to recognize and diagnose errors in the operations of the less skilled, and because some of these skills are so complex, and have such little error tolerance, that to learn them we need to take crucial elements off line, and autocue their practice” (2012).
51. Laland 2017
52. Csibra & Gergely 2011. As Mark Lake writes, “My actions in imitating you fishing for termites will be very different from my actions imitating you writing instructions on how to fish for termites!” (1998).
53. Clark 2008
54. Wimsatt 1999. See also Dawkins’s essay “Viruses of the Mind” (1993).
55. Deacon 1997
56. Clark 2008
57. Raoult & Forterre 2008
58. Forterre 2016
59. Koonin & Wolf 2012
60. Forterre 2016
61. Forterre 2016
62. Claverie 2006. The virus hijacking the cell calls to mind Dennett’s description of how memes configure human brains: “The haven all memes depend on reaching is the human mind, but a human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication” (1991).
63. Many mechanisms are available for the horizontal transfer of DNA sequences among prokaryotes, from what is called naked DNA in the cellular environment, through cellular organelles like plasmids, to viruses and virus-like particles. See Thomas & Nielsen 2005, Lang & Beatty 2007, and Biller et al. 2014.
64. Salyers & Amábile-Cuevas 1997. For a thorough history of horizontal gene transfer and related topics in this chapter, see Quammen 2018.
65. Interestingly, local conditions can limit the use of HGT. Researchers studying microbes in geothermal springs in Mexico found that a shortage of environmental phosphorus leads bacteria to digest free DNA to gain access to its phosphorus content. Rather than interpret HGT sequences, these microbes eat them (Souza et al. 2008). For humans, messages in bowls of alphabet soup do not survive long for the same reason.
66. McInerney et al. 2011
67. Daubin & Szöllősi 2016
68. There is so much diversity within existing bacterial species that, if they were higher organisms, they would be distributed across many, many species. What is a species, anyway? In light of horizontal gene transfer, the concept needs some refinement. Should microbiologists identify bacterial species strictly with their core genomes, or should they also consider the distributed pan-genome, the external library of innovation-sharing diversity? “The genome complex that characterizes a bacterial species is much larger than can be contained within any single cell,” writes molecular geneticist Michael Syvanen (2012).
69. Young 2016
70. Young 2016
71. Biller et al. 2015
72. Biller et al. 2015
73. Chisholm 2017; Delmont & Eren 2018
74. Biller et al. 2015
75. Biller et al. 2015
76. Rohwer & Thurber 2009; Larkin et al. 2020
77. Gontier 2015
78. Pattee 1969
79. Rohwer & Thurber 2009
80. Economists might call the Prochlorococcus information soup a public good; the consumption of the good by an individual microbe does not reduce its availability to another microbe, and it is impossible or at least very difficult to exclude the good from being available to everyone (McInerney et al. 2011). “Many nucleotide sequences are best seen as public goods,” says Douglas Erwin, “with individual sequences recruited by microbial lineages as needed. Just as in economics, public biological goods are those where the acquisition and use of the good by one organism does not preclude its use by another organism” (2015). The same is true of the sequences of language.
81. McInerney et al. 2011
82. Woese 2000
83. Wimsatt 1999
84. Dyson 2007
85. Martin 2011. See also Puigbò et al. 2010; Bapteste et al. 2009; Franklin-Hall 2010.
86. Koonin & Wolf 2012
87. Dagan et al. 2008
88. Soucy et al. 2015
89. Gogarten et al. 2002
90. The subject is beyond the scope of this book, but it will be interesting to see if the computational toolkit used to model learning in neural networks might be applied to model the evolution of gene sequence networks in horizontal transfer (Sudasinghe et al. 2013; Ravenhall et al. 2015).
91. Puigbò et al. 2010
92. Woese 2000
93. Arnold 2009; Gontier 2015; Mallet et al. 2015
94. Vetsigian et al. 2006
95. Campbell 1979; Mesoudi et al. 2004; Gontier 2006
96. Sterelny 2001
97. These two elements of the reticulate tree-network model also represent different time scales in cultural evolution. Boyd and Richerson call them evolutionary (tree) and ecological (network) time scales. “Processes on an evolutionary time scale affect contemporary behavior because they determine the nature of the cultural traditions that characterize any society,” they write. “On the other hand, to understand evolutionary processes we must understand the forces that act on an ecological time scale to affect cultural variation as it is carried through time by a succession of individuals” (1985). For a discussion of reticulate evolution in speech, see Croft 2000.
98. Boyd & Richerson 1985
99. We do not know enough about brains to know whether separate construction and configuration mechanisms exist there. I treat the sequences of language as configuration mechanisms, not as constructors. But it is easy to imagine first-language acquisition as constructing permanent connections—something like strong bonds—in the brain. Subsequent sequential constraints would then be configuration mechanisms, the neuronal equivalent of weak bonds (Pattee, personal communication). Dennett makes a related argument about human consciousness: “Its successful installation is determined by myriad microsettings in the plasticity of the brain, which means that its functionally important features are very likely to be invisible to neuroanatomical scrutiny in spite of the extreme salience of the effects” (1991).
100. Nettle 1999; Wimsatt 1999
101. Goody 2000
102. Norenzayan et al. 2016
103. Wilson 2002
104. Boyd et al. 1997
105. Searle 1995
106. North 1991
107. Hodgson 2006
108. Greif 2006
109. Richerson writes: “Institutions tend to be hard to change because (1) individuals cannot unilaterally opt in or out of an institution, it takes a critical mass of people to collectively organize a new one; (2) institutions are complex and interlocking so that changing one institution often strains others; (3) institutional rewards and punishments tend to deter deviance. Individuals who privately detest an institution may conform to it because of these incentives; (4) institutions are hard to acquire by diffusion” (2017).
110. Mesoudi et al. 2006
111. Wimsatt 1999
112. Hodgson & Knudsen 2010
113. Searle 2010
114. Searle 2010
115. Searle 1995
116. The regulatory power of most world religions derives from their claim of a monopoly on violence in the afterlife, where your degrees of freedom can be curtailed should your mortal behavior not comply with their earthly constraints. An input perception (“coveting”) in this world is coupled to an appropriate output (“eternal hellfire”) in the next. The affordances span both worlds. Norenzayan and colleagues argue that the institutionalization of what they call prosocial religions overseen by omniscient Big Gods afforded an important evolutionary path to large-scale cooperation in human societies (2016).
117. Hodgson & Knudsen 2010
118. In fact, linguist Chris Knight argues that language and law co-evolved. “Language is dependent on civilized, rule-governed behavior,” he writes. Its “existence presupposes community-wide contractual commitment—something that for deep reasons is impossible in the animal world” (2007).
119. “In a court of law, evidence must be produced, and judgments must be based on that evidence following a set of rules,” write philosophers Shaun Gallagher and Anthony Crisafi. “Judgments may have to be based on the testimony of others who have information that you, as a judge or jury member, simply cannot have in the firsthand way. More than this, the whole case—and the judgments that get made—will depend on a body of law, the relevant parts of which only emerge (because of the precise particulars of the case) as the trial proceeds” (2009).
120. Dyson 2013
121. Gallagher & Crisafi remind us that “science itself, in the modern sense of the term, is an institution, and that the scientific method is a tool that we use to extend cognition. We use our labs in the same way as we use our courts” (2009; see also Hull 1988).
122. In How to Do Things With Words, philosopher J. L. Austin, who founded the field of speech act theory, calls such an utterance a performative, i.e., “in saying these words we are doing something” (1975). Its regulatory nature becomes clear because although it is a statement, it is neither true nor false in a conventional sense, but rather summons into existence the state of affairs it describes.
123. Gallagher 2013
124. Priest 1977
125. Another example of a set of related sequences being passed along as a unit is the supergene, a functional group of genes, sometimes as many as 100, that is also inherited as a package and is largely immune to recombination and other evolutionary disruption. “Reduced recombination within supergenes is central to their evolution,” write evolutionary biologists Scott Taylor and Leonardo Campagna (2016). “Supergene control of complex traits is widespread,” say geneticist Martin Thompson and evolutionary biologist Chris Jiggins (2014). See also Küpper et al. (2016) and Schwander et al. (2014).
126. Davidson & Erwin 2006
127. Says François Jacob: “Evolution does not produce novelties from scratch. It works on what already exists, either transforming a system to give it new functions or combining several systems to produce a more elaborate one” (1977a). Both are processes of regulatory reclassification.
128. Davidson & Erwin 2006; “The hierarchical structure of developmental GRNs controls the nature of the variation available, in effect ‘packaging’ entire GRN subcircuits for selection. Some of these sub-circuits have remained static for hundreds of millions of years, and this has forced evolutionary changes upstream and downstream in the GRN” (Erwin & Davidson 2009).
9 THE CONTINUUM OF ABSTRACTION
9.1 Universal at Both Ends

Collisions between rocks are governed by the universal, inexorable laws of physics. Collisions between wolves and moose are governed by specific local constraints that result from the historical process of natural selection. Newton’s laws are universal; teeth and tendons are not. A quite different form of universality is found in the algorithmic rules of the Turing Machine, which are sufficiently universal to compute any function that can be computed. How can it be that physics and computation are both universal, when they are so clearly different?

“The problem and the attraction of physics and computation as bases for models is that they are both universal, but complementary, modes of description,” writes Howard Pattee. “Everything must obey physical laws, we assume, even if our descriptions of these laws always have some inaccuracies. Computers are universal and conventional; everything can be described by a computer convention if it can be described by any other convention” (emphasis his).1

The connection between the two is as old as computation itself. One of the field’s pioneers, Ada Lovelace, notes that “in enabling mechanism to combine together general symbols in successions of unlimited variety and extent, a uniting link is established between the operations of matter and the abstract mental processes.”2 This was in 1843.

Physics is the quintessence of rate dependence and computation of rate independence, physics of the concrete and computation of the abstract. Imagine, then, a continuum of abstraction anchored at each extreme by one of these universal models, physics at one end and computation at the other or, as Lovelace would say, at one end the operations of matter and at the other the abstract mental processes.
Arrayed between the universal extremes on the continuum are the particulars of systems of sequences—Lovelace’s general symbols in successions of unlimited variety and extent—as they have evolved in the living world and human civilization. While at one end we find the pure rate dependence of physics and at the other the pure rate independence of computation, at intermediate points rate dependence and rate independence are more or less entangled. The more entangled they are, the more they resemble dynamic physical systems. Conversely, as they become more abstract, their rate-dependent and rate-independent elements become easier to distinguish, with the rate-independent element more closely resembling computation.3 Along the continuum between physics and computation reside three broad groupings: allosteric constraints, entangled sequences, and control hierarchies, with allosteric constraints near the physics terminus and control hierarchies near the computation terminus. Traveling from left to right, we see rate dependence slowly giving way to rate independence, Michael Tomasello’s drift to the arbitrary.4 Leftmost is physics, governed by laws which are universal, inexorable, and incorporeal, and which exhibit behaviors that are rate-dependent. We accept the laws with the understanding that, in this universe at least, they apply everywhere and cannot be other than what they are. Strip away every arbitrary constraint in the living world, every element of what we would recognize as improbable but coherent behavior, and you are left with good old physics, the foundation in which all systems of sequences are grounded. Moving to the right, allosteric constraints include things like traffic lights, alarm calls of monkeys, trail pheromones of ants, icons used in signage, gestures like pointing, and regulatory molecules that bind to allosteric enzymes. 
These are not linear patterns, but they nonetheless behave as specific constraints within—and only within—a larger system of sequences. The function of an allosteric constraint is to configure an interactor, to get the monkey to evade a predator or the ant to follow a different path, to redirect the gaze to a particular affordance, to activate or repress an enzyme’s function. Allosteric constraints can only configure interactors, and only reversibly; they cannot construct or replicate them. Thus, they rely on a population of prefabricated interactors. Examined in isolation, allosteric constraints can be explained in physical terms. Out of context, a regulatory molecule binding to an allosteric enzyme looks like ordinary chemistry. In the absence of drivers to be constrained, a traffic light is just electromagnetic radiation. In the absence of ants to be diverted, a pheromone is just a molecule. “A molecule becomes a message,” says Pattee, “only in the context of a larger system of physical constraints which I have called a ‘language’ in analogy to our normal usage of the concept of message” (emphasis his).5

FIGURE 9.1

Moving another step to the right on the continuum we find entangled sequences, in which there is no clear division of labor between dynamic interaction and sequential storage. The rate-dependent and rate-independent elements cannot always be readily distinguished. Two examples of entangled systems are the RNA world and preliterate human culture. The ribozyme can interact and replicate, but it cannot do both at the same time. In human speech, sequential information is accompanied by dynamic elements of paralanguage and gesture. Neither system supports scalable random access, which requires stable, long-term storage of one-dimensional patterns. And neither can generate orders-of-magnitude improvement in either the precision of pattern recognition or the power of catalytic manipulation. Systems of entangled sequences have evolved a limited degree of abstraction, but their rate independence is always infused with some rate-dependent dynamics. This limits how large, coherent, and complex they can become. Immediately to their right, John von Neumann’s threshold of complication remains uncrossed.

To cross the threshold, we need control hierarchies in which the rate-independent and rate-dependent elements are decisively separated. As with cells and our literate technological civilization, the storage/replication element has evolved to be independent of the interactive/functional element. However, even these systems fail to completely escape entanglement with their physical mechanisms. They depend on lots of improbable equipment, like ribosomes, spliceosomes, and human animals, as well as ineffable processes like protein and chromosome folding and whatever goes on in the brain. The division of labor between the rate-dependent and rate-independent allows these systems to cross von Neumann’s threshold, making them capable of open-ended evolutionary creativity, Lovelace’s unlimited variety and extent. They can scale up and diversify, and there is no upper limit to their complexity; further reclassification is always possible.
Here we get our first glimpse of Francis Crick’s Central Dogma: the division of labor between replication and interaction entails the one-way constraint of interactors by sequences. When we arrive at the right-hand pole, we find the formal sequence systems of computation. Defined relationships within and among the rate-independent linear patterns comprise the entire system; there is no entanglement. Sequences are manipulated and rewritten according to formal rules, which are sequences themselves. Meaning is not a system requirement. Algebra students can solve for x without knowing what x stands for. Semantics in computation is deliberate, in L. S. Vygotsky’s sense; if there is to be any meaning, it must be assigned from outside the system.6 And unlike control hierarchies, which require an elaborate and improbable hardware platform in order to operate, the sequence processing mechanism for formal sequences is arbitrary. There are many ways to instantiate a Turing machine. Like every model, the continuum is an oversimplification. Nonetheless, it provides a quick way to envision the evolution of complexity. At one end is symbolic information, at the other end physical systems, and arrayed between them are the mechanisms that have emerged to allow the former to actually get control of the latter.
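That the sequence-processing mechanism for formal sequences is arbitrary is easy to make concrete. Here is a minimal sketch (my illustration, not the author's; the rule table and tape encoding are arbitrary choices): a Turing machine whose "laws" are nothing but a rate-independent table of sequences, and which produces the same result no matter what hardware runs it or how fast.

```python
# A minimal Turing machine: behavior is fixed entirely by a
# rate-independent rule table, not by any physical substrate.

def run_turing_machine(rules, tape, state="start", pos=0):
    """Run until the machine enters the 'halt' state.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state).
    """
    cells = dict(enumerate(tape))            # sparse tape; '_' is blank
    while state != "halt":
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    width = max(cells) + 1
    return "".join(cells.get(i, "_") for i in range(width)).strip("_")

# A tiny rule table: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "10110"))   # prints 01001
```

Nothing here depends on Python; the same rule table could constrain relays, neurons, or marbles and yield the same output sequence, which is the sense in which there are many ways to instantiate a Turing machine.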
9.2 The Universality of One-Dimensional Patterns

This book began at the beginning, on a planet Earth devoid of life and governed by the inexorable laws of physics and chemistry. It has proceeded as an inquiry into the general principles that might allow such a barren planet to organize its matter, first into the living world and later into the civilized world. The instruments of that inquiry have been the one-dimensional patterns we call sequences, patterns that were nowhere to be found on the prebiotic Earth but are impossible to escape in our literate technological civilization. Back then sequences were nowhere; now they are everywhere. What can we surmise from this?

My conclusion is that the sequences of DNA that organize life and the sequences of text and code that organize civilization are not different things but rather different examples of one kind of thing. Taking this path, using sequences as a lens to view the evolution of life and civilization, leads to some surprising results, especially for our understanding of human language. Summarizing these sprawling results in a few sentences is not easy. Nonetheless, here are one dozen high-level themes that appear throughout the book:

1. Sequences are complex constraints. This is how they actually get control of physical systems, by harnessing and channeling the laws of nature, specifying this behavior rather than that. Constraints operate in the region between the certain and the impossible. They change probabilities. They can incline but not necessitate.
2. Sequences are rate-independent; their timeless patterns are important, not their physical makeup. A process is rate-independent if speeding it up or slowing it down does not affect the result. A process is rate-dependent if changes in rate can change the outcome. The world of behavior and physics is rate-dependent, always moving.
3. Sequences describe affordances, environmental patterns that offer opportunities for specific behaviors. These patterns and behaviors are independently described and arranged in the internal structure of sequences, in their grammars. The grammar of interaction allows unlimited composition of affordances, recombination of patterns to be perceived and behaviors to execute. The grammar of extension allows description of affordances distant in time and space, and subject to probabilities and contingencies. The grammar of abstraction allows sequences to be affordances for other sequences.
4. Affordances can be perceived at all scales of the living and civilized worlds, from the enzyme-substrate relationship to human institutions like the criminal justice system. Biological and cultural evolution are processes of affordance discovery, exploitation, and memorialization.
5. Sequences constrain the rate-dependent world through interactors, which recognize affordances and respond appropriately. Sequences govern interactors through either construction or configuration. Construction entails building interactors from elementary parts, as proteins are built from amino acids. Configuration entails changing the behavior of an interactor that already exists, as in parasitism and animal communication. If configuration is reversible, it is called allosterism.
6. As systems of sequences become larger and more complex, their corresponding interactors become more discriminating in the range of environmental patterns they can recognize and the power and precision of their behavioral responses. Protein enzymes are powerful and specific catalysts, and technologies of measurement and manipulation allow humans to extend their perception and behavior into distant time and space.
7. In biological and cultural evolution, sequential constraints often underdetermine the behavior of their interactors, leaving unconstrained degrees of freedom or don’t-care conditions. This minimizes the amount of sequence needed to execute a constraint, and admits variation that can lead to improvements in function.
8. Sequences rely on external mechanisms for their interpretation and replication. These mechanisms are universal, able to process any sequence regardless of its functional relevance. Syntax is independent of semantics. Some of these mechanisms, like ribosomes, spliceosomes, and brains, are the complex and arbitrary result of millions of years of evolution. Other mechanisms, like protein folding, are lawful and dynamic.
9. Interpretation and replication operate independently. Interpretation is functional and semantic; a sequence constrains an interactor through construction or configuration. Replication is quiescent and structural; a sequence is a passive template, and its meaning is of no consequence. Von Neumann’s self-reproducing automaton shows that a control mechanism is needed to switch a system of sequences between interpretation and replication.
10. Sequence replication can be holistic or piecemeal. In cell division the entire genome sequence is replicated all at once; in horizontal gene transfer and human communication, short subsequences are replicated differentially. Holistic replication leads to tree-like descriptions of sequence lineages. Piecemeal replication leads to network-like descriptions. Taken together, they yield a reticulate pattern of inheritance.
11. Sequences can be affordances for other sequences only if they are stable enough to be perceived. This gives an evolutionary advantage to permanent storage media like DNA and writing. Stability enables random access, self-reference at scale, and unlimited opportunities for reclassification of constraints. This allows systems of sequences to grow in complexity and cross the threshold of complication.
12. Sequences needed for critical functions tend to be conserved across evolutionary time. Some of these form closed classes of short sequences, like prepositions, that are needed to support grammatical function. Others are large sets of interlocking sequences, like genetic regulatory networks and text-based human institutions, that are passed down as a set.

DNA molecules, written texts, and computer code draw on very different alphabets, operate at different spatial and temporal scales, have different histories, and rely upon different mechanisms for interpretation and replication. However, the focus in this book has been on similarities rather than differences. What’s important are the observable regularities in the structure, behavior, and evolution of sequence-governed systems. As Carl Woese says, we should “think more about the process and less about the detailed forms it generates.”7

Ignoring the detailed forms and focusing on the process inevitably requires a change of perspective. Despite Woese’s plea, most academic researchers spend most of their time studying the detailed forms. Molecular biologists and biochemists try to account for how genes guide the living world. Neuroscientists, linguists, and anthropologists try to account for how language guides human behavior. Computer scientists design code to guide the operation of computers and robots. This book has been more about the process; I prefer a unified theory to multiple unrelated accounts.
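Theme 9, von Neumann's separation of interpretation from replication, can be sketched in a few lines. In this toy model (my illustration, not the book's), a single sequence is processed in two modes: copied as a passive template with its meaning ignored, or interpreted as a constraint that makes something happen.

```python
# One sequence, two modes of processing: replication treats the
# sequence as a passive template; interpretation treats it as a
# functional constraint.

sequence = "result = 2 + 2"

def replicate(seq):
    """Structural mode: copy the template verbatim; semantics irrelevant."""
    return str(seq)

def interpret(seq):
    """Functional mode: the sequence constrains behavior (here,
    trivially, by being executed as Python)."""
    env = {}
    exec(seq, env)
    return env["result"]

assert replicate(sequence) == sequence   # a faithful, meaning-blind copy
assert interpret(sequence) == 4          # meaning now matters
```

One way to see the asymmetry: a garbled sequence would still replicate perfectly well but would fail to interpret, so variation accumulates in templates and is tested only through the interactors they constrain.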
9.3 Feeling Special?

In the 16th century Nicolaus Copernicus changed the course of scientific inquiry by proposing that Earth orbits the Sun and not the other way around. Contrary to the prevailing opinion of the time, he argued that Earth is not special, not privileged, not the center of the universe. Thus began centuries of scientific progress in which it became ever more clear that very little about our planet or our species is special. Scientists learned that when there are competing theories—one assuming humans are special and one assuming we aren’t—the smart bet is on the not-so-special. After the mathematician who started it all, this long-term inclination from the special toward the not-so-special in scientific thinking is called the Copernican Principle.

“The Copernican Revolution taught us that it was a mistake to assume, without sufficient reason, that we occupy a privileged position in the Universe,” writes astrophysicist J. Richard Gott. “Darwin showed that, in terms of origin, we are not privileged above other species. Our position around an ordinary star in an ordinary galaxy in an ordinary supercluster continues to look less and less special.”8 The discovery of thousands of exoplanets makes it increasingly unlikely that Earth is all that special in its ability to support life.

Many scholars have long thought that language makes humans special, that speech is the distinguishing characteristic of our species. To this end, researchers, including Steven Pinker and Noam Chomsky, have prepared inventories
of the properties of speech versus other animal communication systems to ascertain what is special about human language. Chomsky and linguist Robert Berwick even named their book about language evolution the decidedly un-Copernican Why Only Us?9 So far, their question remains unanswered; there is no consensus on what is special about the special.10

My view is unabashedly Copernican: human language is neither special nor privileged. However, I do not come to this view by comparing human speech to birdsong, monkey calls, or bee dances. The not-so-specialness of our language is not revealed by comparing human speech with animal communication. But this is not to say that we are no more than social primates, or that speech has no unique features to distinguish it from other animal communication systems. Rather, I think human language is not-so-special precisely because it is a system of sequences, and in that it is not alone. We know of at least one other system of sequences with equivalent properties, the DNA-based genetic system that underpins life. Human language is not a different thing; it is a second example of a more general kind of thing. And when there are two examples of something, it becomes difficult to argue that one is more special than the other. Thus, the Copernican Principle can plant its flag in new territory.

So, what about all of those exoplanets? How many are barren, like the early Earth? How many of them harbor something that we would recognize as life? Why would we conclude that a collection of matter on a distant planet is a living system rather than just some arbitrary arrangement of complex dynamics? For astrobiologists, this is a scientific question, for humanity perhaps an existential concern. Answers are hard to come by; there is no general definition of life because, it is thought, we have only one example to work with.
As Kim Sterelny and philosopher Paul Griffiths ask, how can we possibly distinguish “between accidental and essential features of life with a sample size of one?”11 This is called the N = 1 Problem. “Thus far, we know only one example of a planet that bears life: Earth,” writes astrobiologist Douglas Vakoch. “Consequently, we search for life beyond Earth by drawing analogies to ‘life as we know it.’”12

Agreed, N = 1 is a problem. However, N = 2 is less of a problem, and the central conclusion of this book is that our sample size is twice as big as researchers have thought. Earth has evolved not just one system of sequences, the living world, but also a second—literate civilization—that piggybacks on the first. We still have no idea about the odds of a given planet crossing von Neumann’s threshold even once, much less twice, but it is increasingly hard to imagine such a crossing happening without sequences, one-dimensional patterns that actually get control of physical systems.
Notes
1. Pattee 1982b
2. Lovelace 1843
3. This continuum offers a framework for describing the diverging world views of Gunther Stent’s structurists and informationists in molecular biology, as endnoted in the Introduction (1968). Broadly speaking, the structurists view the continuum from its left end, the pole of physics. The informationists view it from its right end, the pole of computation.
4. Tomasello 2008
5. Pattee 1969
6. Vygotsky 1962
7. Woese 2002
8. Gott 1993; see also Barash 2018.
9. Berwick & Chomsky 2016. See also Moore 2017 and Gong et al. 2018.
10. One important dispute centers on whether speech depends on something special about the human brain or whether it relies on more general properties of cognition and social intelligence. Without getting into the details, the Copernican Principle alone would point away from the special and toward the general (Hauser et al. 2002; Pinker & Jackendoff 2005; Fitch et al. 2005; Jackendoff & Pinker 2005; Christiansen & Chater 2016).
11. Sterelny & Griffiths 1999
12. Vakoch 2013
APPENDIX: JUST ENOUGH MOLECULAR BIOLOGY
A.1 Why Mess With Molecules?

To explain how one-dimensional patterns choreograph the world we live in, we have no choice but to take a close look at how sequences function at the molecular level. As Howard Pattee says, “we should first test our basic concepts at the cellular level where we know more about how it works.”1

At one extreme we find the sequences of speech and writing, which we all know very well, so well, in fact, that we may even think we have good intuitions about how they do what they do. At the other extreme we find the sequences of DNA, RNA, and proteins in the cell. Here scientists have given us an ever-more-detailed understanding of how sequences go about their business. But unless you are a molecular biologist who keeps up with the literature, chances are good that this is a subject you never studied, or that you have studied but largely forgotten, or that you studied so long ago that the field has moved on.

I wish this book could have been written without discussing the details of how cells work but, as you will see, that was not possible. Fortunately, you don’t need to study all 912 pages of The Molecular Biology of the Gene2 to get up to speed; just read this brief chapter now and refer back to it as necessary. I have made every effort to pare it down to only those topics that come up in the book.
A.2 One-Dimensional Patterns in Biopolymers

Some molecules can bond to one another and assemble themselves end-to-end into long chains, hundreds, thousands, millions of molecules long. To get the idea, imagine an extremely long freight train in which each boxcar is coupled to
each of its neighbors. These chains are polymers, and their individual “boxcars” are monomers. All of our common plastics are made from these chains. Polymers put the “poly” in polyethylene, polypropylene, polystyrene, and PVC (polyvinyl chloride). These everyday polymer chains are just the same monomer over and over again ad infinitum, like a train in which all of the boxcars are identical. I call these chains rather than sequences because they are patternless; any chain 100 monomers long looks just like any other chain 100 monomers long.

Biopolymers are a different story because in the living world it is common for chains to be made from more than one kind of monomer, not just boxcars, but flatbeds, tankers, hoppers, etc. As a result, two chains 100 monomers long can exhibit different patterns; these patterned chains can properly be called sequences. The number of possible patterns depends on how many different kinds of monomers are in the alphabet, as well as the length of the chain. It does not take very many monomers or a very long chain to yield a huge number of possibilities.

Three biopolymers are of interest to us: DNA, RNA, and protein. We’ll start with DNA and RNA, which are close cousins; both are nucleic acids, made up of monomers known as nucleotides, nucleotide bases, or just bases. Each has four different kinds of nucleotide, four different letters in its molecular alphabet. The nucleotides in DNA are abbreviated A, C, G, and T; those in RNA are A, C, G, and U. You can see that DNA and RNA have three bases in common (A, C, G) and each has one that is different, T in DNA and U in RNA. When we describe sequences of DNA, the order of bases in the tiny molecule specifies the pattern of the abbreviations. For example, GATTACA represents a unique sequence of seven nucleotide bases, just one of many possible arrangements. In fact, there are 16,384 unique ways to arrange the four bases into such a seven-base sequence, which is a lot of diversity.
When you stop to consider that a typical gene can be hundreds of bases long, you quickly realize that the number of potential genes is mind-boggling, even with only a four-base alphabet. For now, let’s just say that a gene is a very long sequence of DNA that guides the construction of a single protein. The set of all genes in a cell is called its genome. Even simple cells contain thousands of genes and make thousands of proteins.

In most cases when we discuss sequences of DNA or RNA, we pay attention to their one-dimensional patterns and not to their three-dimensional shapes. There are interesting exceptions, however, and we will meet them in due course.

Proteins are the other major group of biopolymers; sometimes they are called polypeptides or peptide chains. Their alphabet comprises 20 different monomers, each a kind of amino acid, sometimes called amino acid residues or just residues. As we shall see, proteins are initially constructed as sequences, but they normally fold themselves up into three-dimensional shapes. When we discuss proteins, sometimes we pay attention to their linear pattern and sometimes to their shape, and sometimes to both.
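The counting behind these numbers is plain exponentiation: an alphabet of k monomers yields k-to-the-n chains of length n. A quick illustrative sketch (Python, mine rather than the book's):

```python
# Number of distinct sequences of length n over an alphabet of k monomers is k**n.
def sequence_count(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

# Four DNA bases, seven positions (like GATTACA): 16,384 possibilities.
print(sequence_count(4, 7))    # 16384

# The 20-letter amino acid alphabet over a modest 100-residue protein:
print(sequence_count(20, 100)) # a 131-digit number
```

Even short sequences over small alphabets open up enormous combinatorial spaces, which is the point of the text's "mind-boggling" remark.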
A.3 DNA Replication: Complementary Base-Pairing

Sequences of DNA and RNA have the important and unusual property of being easy to replicate. This is because their nucleotide bases can bond to other bases in more than one direction, not only horizontally (lengthwise) to form a linear pattern but also vertically, at a right angle to the direction of the sequence. In the usual horizontal direction, each base bonds equally well with any of the four varieties. However, in the vertical direction each will bond with only a single unique partner. This makes faithful replication possible.
FIGURE A.1 This stylized drawing shows the double helix of the DNA molecule, with nucleotide bases (A, T, C, G) matched with their complements.
Source: National Human Genome Research Institute (www.genome.gov/genetics-glossary/DoubleHelix)
For example, to form a horizontal sequence, an A will bond with another A, or a T, C, or G. However, in the vertical direction A bonds exclusively with T, and C only with G, and vice versa. The same applies to RNA, except that A bonds with U rather than T. Any two complementary bases bonded together like this are called a base pair.

Imagine a seven-monomer DNA sequence like GATTACA. If each nucleotide pairs vertically with its partner (C to G, A to T, etc.), then the result is the complementary sequence CTAATGT. This process is called template matching or base-pairing or sometimes Watson-Crick pairing, after the discoverers of DNA’s structure. The original sequence provides a template on which its complement is constructed. Scale this up to the billions of base pairs in the genome and this is how DNA replicates during cell division.

If you look at a DNA molecule in a cell, you will notice it exists in double-stranded form, two complementary sequences bonded to one another. When not busy with something else, each nucleotide base remains naturally bonded to its complement. This yields the famous double helix structure of DNA. It also adds another necessary step when it is time to replicate. Since every nucleotide is already bonded to its complement, that bond must be broken to allow a new strand to be formed. The double helix must be unwound and unzipped, and two new complementary sequences built upon both single-stranded templates. The job of unzipping the DNA and building the complementary
FIGURE A.2 A high-level view of DNA replication. The double helix is unwound and unzipped, and DNA polymerase constructs complementary sequences on the two exposed strands.
Source: National Human Genome Research Institute (www.genome.gov/Pages/Careers/Educational Programs/ShortCourse/2016ShortCourse/2016-08-03ShortCourseGeneticsandGenomicsPri mer_BW.pdf)
sequences falls to a team of molecules centered on the enzyme DNA polymerase (note the polymer in “polymerase”).

That’s the replication process, how cells divide. Start with a double-stranded DNA molecule, unwind and unzip the two strands, build two complementary sequences on the single-stranded templates, and the result is two identical double-stranded DNA molecules. One copy goes in one daughter cell and the other goes in the other.

DNA polymerase is the molecule responsible for accurate replication and it is very good at what it does. However, sometimes it screws up. One would think it a no-brainer to follow the clear and simple rules “A for T” and “C for G,” but once in about every billion base pairs or so, DNA polymerase will match the wrong nucleotide. We call this a mutation, and this is how genetic variation is introduced into the living world.
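The vertical pairing rules are simple enough to express as a lookup table. This sketch (mine, not the book's, and assuming error-free copying with no mutations) builds a complementary strand and shows how unzipping one duplex yields two identical daughters:

```python
# Vertical (Watson-Crick) pairing rules for DNA: A<->T, C<->G.
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Pair each base with its unique vertical partner."""
    return "".join(DNA_PAIR[base] for base in strand)

def replicate(strand: str, partner: str) -> list[tuple[str, str]]:
    """Unzip the duplex and build a new complement on each exposed template."""
    return [(strand, complement(strand)),        # old strand + new partner
            (complement(partner), partner)]      # new strand + old partner

print(complement("GATTACA"))  # CTAATGT, as in the text
# Both daughter duplexes are identical to the parent:
print(replicate("GATTACA", "CTAATGT"))
```

Because each base has exactly one vertical partner, the complement of the complement is the original strand, which is what makes the two daughter duplexes come out identical.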
A.4 Transcription: DNA to Messenger RNA (mRNA)

Though replicate it must, DNA has way more to do than just replicate. Its day job is to choreograph the cell’s operation. This is done through a three-step process called gene expression. At every moment, hundreds or thousands of genes in the cell’s genome are being expressed. The process begins with a one-dimensional sequence of DNA and results in a functional three-dimensional protein molecule. The three steps involved are transcription, translation,3 and folding. Let’s look at them in turn.

Transcription resembles replication because once again the double-stranded DNA molecule must be unzipped and a complementary copy created from the DNA template. But transcription differs from replication in two important ways. First, replication affects the entire DNA molecule all at once, but transcription targets only a short segment of the sequence, one gene, more or less. Second, the daughter sequence created in transcription is not a sequence of DNA but rather a sequence of RNA. Since DNA and RNA have slightly different alphabets, A does not pair to T (as it does in DNA) but rather pairs with U. The DNA sequence GATC may be replicated into DNA as CTAG, but it is transcribed into RNA as CUAG.

The lead molecule responsible for gene transcription is called RNA polymerase. This enzyme and its helpers govern the transcription of every gene in the genome, regardless of the gene’s function. Like its cousin DNA polymerase, it must unwind and unzip the double helix of the DNA. It also uses Watson-Crick base-pairing to build a complementary strand, but this time of RNA. However, RNA polymerase has to do a few things that DNA polymerase does not have to do. In replication every base pair in the entire genome of the cell is copied, but in transcription only a tiny fraction of the genome is targeted. DNA polymerase simply starts at the beginning and ends when it runs out of sequence.
By contrast, RNA polymerase must be able to accurately locate its
FIGURE A.3 The first step of gene expression is transcription. RNA polymerase unwinds and unzips the DNA molecule, and then constructs a messenger RNA transcript from the template of one of the exposed strands.
Source: National Human Genome Research Institute (www.genome.gov/genetics-glossary/Promoter)
starting point and its ending point, where it is to begin transcription and where it is to halt. In other words, it needs random access to the DNA sequence. Once it reaches its ending point, the polymerase releases the RNA it has just transcribed, which is now called messenger RNA, or mRNA. The polymerase also re-zips the double helix, leaving everything just as it found it.

In many cases the mRNA undergoes further surgery at the hands of a very large molecule called a spliceosome. The spliceosome excises unneeded sections of the mRNA (called introns), and stitches together the remaining sections (called exons) into a functional messenger RNA sequence. Transcription and splicing leave us with an mRNA transcript ready for the next step, which is translation.
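Transcription follows the same pairing logic as replication with one substitution, U for T, and splicing then keeps only the exons. A hedged sketch (the transcript and exon slice positions below are invented purely for illustration):

```python
# Transcription pairs a DNA template with RNA: A pairs with U rather than T.
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template: str) -> str:
    """Build the complementary messenger RNA from a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template)

def splice(transcript: str, exons: list[tuple[int, int]]) -> str:
    """Excise the introns by keeping only the exon slices, stitched in order."""
    return "".join(transcript[start:end] for start, end in exons)

print(transcribe("GATC"))  # CUAG, as in the text
# Keeping two invented exon regions, dropping the intron between them:
print(splice("CUAGGGGAUCC", [(0, 4), (8, 11)]))  # CUAGUCC
```

The one-line difference between this table and the DNA table is the whole chemical difference between replication and transcription at the level of pattern.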
A.5 The Genetic Code: Mapping RNA to Amino Acids

But before we can discuss translation, we need to understand what is being translated into what. Translation requires a code. Like Morse Code or any other coding system, the genetic code enables the mapping of sequences in one alphabet to sequences in a second alphabet or, in our case, from one set of monomers (nucleotide bases) to a second set of monomers (amino acids). In Morse Code, for example, “dot-dash” maps to the letter A. In the ASCII computer code, the binary sequence “100 0001” maps to the upper-case letter A.4 The genetic code is this kind of map, and it maps from the nucleotide sequences of mRNA to the polypeptide sequences of amino acids that constitute proteins. Figuring out the mapping from one to the other, deciphering the genetic code, was a major achievement of molecular biology during the 1950s and 1960s.

Twenty different amino acids are found in the protein alphabet, but only four nucleotide bases in the mRNA transcript. Obviously, this is a mismatch; there are too few nucleotide letters to map uniquely a 20-letter amino acid alphabet. If you tried it, you would have 16 orphan amino acids. Even if you map from
pairs of nucleotides (AA, AT, AC, AG, etc.), there are still not enough, just 16 ways to combine the four letters two at a time. That would leave you with four orphan amino acids. Building a coding map sufficient to account for all 20 amino acids requires a minimum of three letters (AAA, AAT, AAC, etc.), and this is how the genetic code does it, using triplets of nucleotide bases. The nucleotide triplets are called codons. But now there is a huge surplus, way too many triplets for a one-to-one
FIGURE A.4 This is the canonical representation of the genetic code, showing the 64 codons and the amino acids they code for.
Source: National Human Genome Research Institute (www.genome.gov/genetics-glossary/GeneticCode)
match. There are 64 ways to combine the four letters three at a time, far more unique combinations than are needed. As a result, there is redundancy in the code; most amino acids are forced to correspond to more than one codon. For example, six different triplets can represent the amino acid arginine and another six can represent leucine. These triplet sequences are synonymous; they all mean the same thing.

If you study the table closely you will see that there are three special codons, labeled stop codons. These are boundary markers for the end of the gene. Much like punctuation in human texts, they serve as signals to the molecules involved in transcription and translation.
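The arithmetic of the mismatch is easy to check by enumeration. A short sketch (Python, for illustration; the arginine and stop codons listed are the standard assignments from the table):

```python
from itertools import product

BASES = "ACGU"
for width in (1, 2, 3):
    print(width, len(list(product(BASES, repeat=width))))
# width 1 -> 4 codes (16 orphan amino acids)
# width 2 -> 16 codes (4 orphans)
# width 3 -> 64 codes (a surplus, hence redundancy)

# Redundancy: six synonymous triplets all mean arginine in the standard code.
ARGININE_CODONS = {"CGU", "CGC", "CGA", "CGG", "AGA", "AGG"}
STOP_CODONS = {"UAA", "UAG", "UGA"}  # boundary markers, not amino acids
print(len(ARGININE_CODONS), len(STOP_CODONS))  # 6 3
```

Triplets are thus the shortest words over a four-letter alphabet that can cover 20 amino acids, at the price of many-to-one synonymy.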
A.6 How Translation Happens: Transfer RNA (tRNA)

It is one thing to create a table that shows how abbreviations for codons are mapped to abbreviations for amino acids, but it is quite another to actually implement the code with real molecules in the cell. How does a sequence of codons in the messenger RNA actually guide assembly of a polypeptide sequence of amino acids? Time, then, to meet a second important variety of RNA, transfer RNA, or tRNA.

Although made of the same four nucleotide bases as messenger RNA, a tRNA looks different. First, it is short, less than 100 bases long, compared with the thousands of bases found in many mRNAs. Second, instead of remaining a one-dimensional sequence, the tRNA folds itself into a shape like a three-leaf clover. It does this by Watson-Crick base pairing with itself. Short sequences in the tRNA are the complements of other short sequences elsewhere. These sets of base pairs find each other and bond, creating the folded structure.

At the end of the middle cloverleaf is a three-nucleotide RNA sequence called an anticodon, which is the complement of one of the codons in the messenger RNA. On the opposite end is an attachment point for a specific amino acid. Since there are 20 amino acids, there are at least 20 unique tRNAs.5 Each bears an anticodon specific to its amino acid. For example, if the tRNA bears the anticodon AAA, then the corresponding codon in the incoming mRNA would be UUU. From the genetic code table, we know that UUU codes for the amino acid phenylalanine. Thus, the amino acid attached to the other end of the tRNA is phenylalanine. This pattern holds for all 20 amino acids; each has at least one tRNA to call its own, with the amino acid molecule attached to one end and the anticodon for that amino acid at the other. Transfer RNA is the physical mechanism that maps the nucleotide alphabet of RNA to the amino acid alphabet of protein.
tRNA is an actual molecular object with the anticodon at one end and the corresponding amino acid at the other. We have the genetic code we do because of the tRNAs we have. There does not appear to be any physical or chemical reason why the tRNA with
anticodon AAA must have phenylalanine at the other end. It just worked out that way. We could easily imagine alternative genetic codes in which AAA codes for something else because its tRNA is connected to a different amino acid.
A.7 Constructing the Finished Protein: Ribosomes

The construction of a polypeptide sequence from a series of tRNAs does not happen all by itself. For this you need the complex molecular machine known as the ribosome, an enormous conglomerate of proteins and nucleic acids that reads the mRNA transcript, recruits the corresponding tRNAs, and constructs the peptide chain that will become a functioning protein. The ribosome is the cell’s engine of translation. Ribosomes are present in largely the same form in every species of living thing, and because of their importance their structure has been conserved throughout billions of years of evolutionary history. For evolution, the ribosome is the paradigm case of if it ain’t broke, don’t fix it.6

Think of the ribosome as a machine floating in a reservoir of tRNAs. It reads an input sequence of messenger RNA, matches each consecutive codon with its tRNA anticodon, and forges the peptide bond between adjacent amino acids. Then it moves along to the next codon. Meanwhile, each empty tRNA is
FIGURE A.5 The ribosome reads the mRNA template, recruits the appropriate transfer RNAs, forms the peptide bond between adjacent amino acids, and releases the empty tRNAs and the growing protein sequence.
Source: National Human Genome Research Institute (www.genome.gov/genetics-glossary/TransferRNA)
released to be recharged with a fresh amino acid molecule. This process continues until the end of the mRNA is reached, whereupon the finished polypeptide chain is released into the cell’s interior.

Once released, the amino acid sequence undergoes the final step in gene expression, which is folding. The polypeptide chain spontaneously folds itself into a three-dimensional structure. Each unique one-dimensional chain of amino acids gives rise to exactly one three-dimensional structure, and the structure is responsible for the function. Folding is discussed in greater depth in Chapter 3.

There is a lot of detail here, but do not allow the detail to obscure one crucial fact: gene expression is a chain of specification. The molecular machinery at each step begins with a unique input pattern that specifies a unique output pattern. In transcription the DNA sequence specifies the mRNA sequence. In translation the mRNA sequence specifies the amino acid sequence. In folding the amino acid sequence specifies the structure of a three-dimensional protein. Coupling all of these inputs and outputs lets the cell begin with a unique linear pattern of DNA in its genome and wind up with a protein molecule with a unique three-dimensional shape. The reliability of this process gives us life as we know it.
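The chain of specification can be sketched end to end. This toy pipeline (mine, not the book's; the codon table holds only a four-entry slice of the standard code, and the 12-base template is invented) runs a DNA template through transcription and then codon-by-codon translation:

```python
# A four-entry slice of the standard genetic code, enough for a toy gene.
CODON_TABLE = {"UUU": "Phe", "AAA": "Lys", "GCU": "Ala", "UAA": "STOP"}
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template: str) -> str:
    """The DNA sequence specifies the mRNA sequence."""
    return "".join(DNA_TO_RNA[base] for base in template)

def translate(mrna: str) -> list[str]:
    """The mRNA sequence specifies the amino acid sequence, codon by codon."""
    residues = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino == "STOP":       # a stop codon marks the end of the gene
            break
        residues.append(amino)
    return residues

# Template AAA transcribes to codon UUU, which codes for phenylalanine:
print(translate(transcribe("AAATTTCGAATT")))  # ['Phe', 'Lys', 'Ala']
```

Each stage takes a unique input pattern to a unique output pattern, which is why composing the two functions is well defined; folding, the third stage, maps that residue sequence to a three-dimensional shape and has no comparably simple lookup table.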
Notes
1. Pattee 1982b
2. Watson et al. 2014
3. The adoption by molecular biologists of language-related terms like “transcription” and “translation” exemplifies François Jacob’s “linguistic model in biology” (1977b), as discussed in the Introduction.
4. Sequences of nucleotide bases are typically mapped to the Roman alphabet: A, C, G, T. They could just as easily be mapped to Morse Code or ASCII.
5. For reasons beyond the scope of this book, the number of unique tRNAs varies across species. Suffice it to say that the cell can get by with fewer than the 61 that you get by subtracting the three stop codons from the 64 possible combinations. In practice the minimum number is 31, which is still a lot of unique tRNAs. To complicate matters, there is a matching series of unique enzymes called aminoacyl tRNA synthetases whose sole function is to attach each amino acid to its corresponding tRNA.
6. Note that viruses do not have ribosomes. This is one reason they must infect cells in order to grow and reproduce.
REFERENCES
Abe, Namiko, Iris Dror, Lin Yang, Matthew Slattery, Tianyin Zhou, Harmen J. Bussemaker, Remo Rohs, and Richard S. Mann. 2015. Deconvolving the recognition of DNA shape from sequence. Cell 161: 307–318.
Abler, William L. 1989. On the particulate principle of self-diversifying systems. Journal of Social and Biological Structures 12: 1–13.
Adamo, Shelley A. 2012. The strings of the puppet master: How parasites change host behavior. In Host Manipulation by Parasites, eds. Hughes, David P., Jacques Brodeur, and Frédéric Thomas, 36–51. Oxford: Oxford University Press.
Adamo, Shelley A. 2013. Parasites: Evolution’s neurobiologists. Journal of Experimental Biology 216: 3–10.
Adleman, Leonard M. 1994. Molecular computation of solutions to combinatorial problems. Science 266: 1021–1024.
Aldrich, Howard E., Geoffrey M. Hodgson, David L. Hull, Thorbjørn Knudsen, Joel Mokyr, and Viktor J. Vanberg. 2008. In defence of generalized Darwinism. Journal of Evolutionary Economics 18: 577–596.
Altmann, Stuart A. 1967. The structure of primate social communication. In Social Communication Among Primates, ed. Altmann, Stuart A., 325–362. Chicago: University of Chicago Press.
Amano, Takanori, Tomoko Sagai, Hideyuki Tanabe, Yoichi Mizushina, Hiromi Nakazawa, and Toshihiko Shiroishi. 2009. Chromosomal dynamics at the Shh locus: Limb bud-specific differential regulation of competence and active transcription. Developmental Cell 16: 47–57.
Ambrose, Stanley H. 2001. Paleolithic technology and human evolution. Science 291: 1748–1753.
Andersen, Sandra B., Sylvia Gerritsma, Kalsum M. Yusah, David Mayntz, Nigel L. Hywel-Jones, Johan Billen, Jacobus J. Boomsma, and David P. Hughes. 2009. The life of a dead ant: The expression of an adaptive extended phenotype. American Naturalist 174: 424–433.
Anfinsen, Christian B. 1973. Principles that govern the folding of protein chains. Science 181: 223–230.
Antonarakis, Stylianos, and Bernardino Fantini. 2007. Introduction to special issue ‘History of Central Dogma of Molecular Biology’. History and Philosophy of the Life Sciences 28: 487–490.
Araújo, João P. M., Harry C. Evans, Ryan Kepler, and David P. Hughes. 2018. Zombie-ant fungi across continents: 15 new species and new combinations within Ophiocordyceps. I. Myrmecophilous hirsutelloid species. Studies in Mycology 90: 119–160.
Arbib, Michael A. 1969. Theories of Abstract Automata. Englewood Cliffs, NJ: Prentice-Hall.
Armstrong, David F., and Sherman E. Wilcox. 2007. The Gestural Origin of Language. Oxford: Oxford University Press.
Arnold, Michael L. 2009. Reticulate Evolution and Humans: Origins and Ecology. Oxford: Oxford University Press.
Aryani, Arash, Erin S. Isbilen, and Morten H. Christiansen. 2020. Affective arousal links sound to meaning. Psychological Science 1–9: 0956797620927967.
Atkins, John F., Raymond F. Gesteland, and Thomas Cech, eds. 2011. RNA Worlds: From Life’s Origins to Diversity in Gene Regulation. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Atkinson, Quentin D., and Russell D. Gray. 2005. Curious parallels and curious connections—Phylogenetic thinking in biology and historical linguistics. Systematic Biology 54: 513–526.
Auerswald, Philip E. 2017. The Code Economy: A Forty-Thousand Year History. Oxford: Oxford University Press.
Aunger, Robert, ed. 2001. Darwinizing Culture: The Status of Memetics as a Science. Oxford: Oxford University Press.
Aunger, Robert. 2010a. Types of technology. Technological Forecasting and Social Change 77: 762–782.
Aunger, Robert. 2010b. What’s special about human technology? Cambridge Journal of Economics 34: 115–123.
Austin, John L. 1975. How to Do Things With Words. Oxford: Oxford University Press.
Balchin, David, Manajit Hayer-Hartl, and F. Ulrich Hartl. 2016. In vivo aspects of protein folding and quality control. Science 353: aac4354.
Balkin, Jack M. 1998. Cultural Software: A Theory of Ideology. New Haven, CT: Yale University Press.
Baltimore, David. 1984. The brain of a cell. Science84 5: 149–151.
Ban, Nenad, Poul Nissen, Jeffrey Hansen, Peter B. Moore, and Thomas A. Steitz. 2000. The complete atomic structure of the large ribosomal subunit at 2.4 Å resolution. Science 289: 905–920.
Bancroft, Carter, Timothy Bowler, Brian Bloom, and Catherine Taylor Clelland. 2001. Long-term storage of information in DNA. Science 293: 1763–1765.
Bandura, Albert. 1971. Social Learning Theory. New York: General Learning Press.
Bandura, Albert. 1986. Social Foundations of Thought and Action. Englewood Cliffs, NJ: Prentice Hall.
Bapteste, Eric, Maureen A. O’Malley, Robert G. Beiko, Marc Ereshefsky, J. Peter Gogarten, Laura Franklin-Hall, François-Joseph Lapointe, John Dupré, Tal Dagan, Yan Boucher, and William Martin. 2009. Prokaryotic evolution and the tree of life are two different things. Biology Direct 4: 34–34.
Barash, David P. 2018. Through a Glass Brightly: Using Science to See Our Species as We Really Are. Oxford: Oxford University Press.
Barash, Yoseph, John A. Calarco, Weijun Gao, Qun Pan, Xinchen Wang, Ofer Shai, Benjamin J. Blencowe, and Brendan J. Frey. 2010. Deciphering the splicing code. Nature 465: 53–59.
Barbieri, Marcello. 2016. What is information? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374: 20150060.
Barbrook, Adrian C., Christopher J. Howe, Norman Blake, and Peter Robinson. 1998. The phylogeny of The Canterbury Tales. Nature 394: 839–839.
Barrett, Louise. 2011. Beyond the Brain: How Body and Environment Shape Animal and Human Minds. Princeton: Princeton University Press.
Basu, Sudipta, and Gregory B. Waymire. 2006. Recordkeeping and human evolution. Accounting Horizons 20: 201–229.
Bazerman, Charles. 2006. The writing of social organization and the literate situating of cognition: Extending Goody’s social implications of writing. In Technology, Literacy and the Evolution of Society: Implications of the Work of Jack Goody, eds. Olson, David R., and Michael Cole, 215–240. Mahwah, NJ: Lawrence Erlbaum.
Behr, Jean-Paul. 2008. The Lock-and-Key Principle: The State of the Art—100 Years On. New York: John Wiley & Sons.
Benkovic, Stephen J., and Sharon Hammes-Schiffer. 2003. A perspective on enzyme catalysis. Science 301: 1196–1202.
Benner, Steven A. 2014. Paradoxes in the origin of life. Origins of Life and Evolution of the Biosphere 44: 339–343.
Benner, Steven A., Petra Burgstaller, Thomas R. Battersby, and Simona Jurczyk. 1999. Did the RNA world exploit an expanded genetic alphabet? In The RNA World, 2nd Ed.: The Nature of Modern RNA Suggests a Prebiotic RNA World, eds. Gesteland, Raymond F., Thomas R. Cech, and John F. Atkins, 163–181. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Benner, Steven A., Andrew D. Ellington, and Andreas Tauer. 1989. Modern metabolism as a palimpsest of the RNA world. Proceedings of the National Academy of Sciences 86: 7054–7058.
Benner, Steven A., Hyo-Joong Kim, and Zunyi Yang. 2011. Setting the stage: The history, chemistry, and geobiology behind RNA. In RNA Worlds: From Life’s Origins to Diversity in Gene Regulation, eds. Atkins, John F., Raymond F. Gesteland, and Thomas R. Cech, 7–19. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Bennett, Charles H. 1979. Commentary on H. H. Pattee’s ‘The complementarity principle and the origin of macromolecular information’. Biosystems 11: 226.
Bennett, Charles H., and Rolf Landauer. 1985. The fundamental physical limits of computation. Scientific American 253: 48–56.
Bernstein, Basil. 1971. Class, Codes and Control. Volume 1: Theoretical Studies Towards a Sociology of Language. London: Routledge & Kegan Paul.
Berwick, Robert C., and Noam Chomsky. 2016. Why Only Us: Language and Evolution. Cambridge, MA: MIT Press.
Bickerton, Derek. 2009. Adam’s Tongue: How Humans Made Language, How Language Made Humans. New York: Farrar, Straus and Giroux.
Biller, Steven J., Paul M. Berube, Debbie Lindell, and Sallie W. Chisholm. 2015. Prochlorococcus: The structure and function of collective diversity. Nature Reviews Microbiology 13: 13–27.
Biller, Steven J., Florence Schubotz, Sara E. Roggensack, Anne W. Thompson, Roger E. Summons, and Sallie W. Chisholm. 2014. Bacterial vesicles in marine ecosystems. Science 343: 183–186.
Biron, David G., and Hugh D. Loxdale. 2013. Host-parasite molecular cross-talk during the manipulative process of a host by its parasite. Journal of Experimental Biology 216: 148–160.
Blackmore, Susan. 1999. The Meme Machine. Oxford: Oxford University Press.
Blasi, Damián E., Søren Wichmann, Harald Hammarström, Peter F. Stadler, and Morten H. Christiansen. 2016. Sound-meaning association biases evidenced across thousands of languages. Proceedings of the National Academy of Sciences 113: 10818–10823.
Bohr, Niels. 1933. Light and life. Nature 133: 457–459.
Bonev, Boyan, and Giacomo Cavalli. 2016. Organization and function of the 3D genome. Nature Reviews Genetics 17: 661–678.
Borghi, Anna M., Laura Barca, Ferdinand Binkofski, Cristiano Castelfranchi, Giovanni Pezzulo, and Luca Tummolini. 2019. Words as social tools: Language, sociality and inner grounding in abstract concepts. Physics of Life Reviews 29: 120–153.
Borghi, Anna M., Ferdinand Binkofski, Cristiano Castelfranchi, Felice Cimatti, Claudia Scorolli, and Luca Tummolini. 2017. The challenge of abstract concepts. Psychological Bulletin 143: 263–292.
Borghi, Anna M., Claudia Scorolli, Daniele Caligiore, Gianluca Baldassarre, and Luca Tummolini. 2013. The embodied mind extended: Using words as social tools. Frontiers in Psychology 4: 214–214.
Boyd, Richard. 1979. Metaphor and theory change: What is “metaphor” a metaphor for? In Metaphor and Thought, ed. Ortony, Andrew, 356–408. Cambridge, UK: Cambridge University Press.
Boyd, Robert, Monique Borgerhoff Mulder, William H. Durham, and Peter J. Richerson. 1997. Are cultural phylogenies possible? In Human by Nature: Between Biology and the Social Sciences, eds. Weingart, Peter, Sandra D. Mitchell, Peter J. Richerson, and Sabine Maasen, 355–386. Mahwah, NJ: Lawrence Erlbaum.
Boyd, Robert, and Peter J. Richerson. 1985. Culture and the Evolutionary Process. Chicago: University of Chicago Press.
Boza, Gergely, András Szilágyi, Ádám Kun, Mauro Santos, and Eörs Szathmáry. 2014. Evolution of the division of labor between genes and enzymes in the RNA world. PLoS Computational Biology 10: e1003936.
Breaker, Ronald R. 2002. Engineered allosteric ribozymes as biosensor components. Current Opinion in Biotechnology 13: 31–39.
Breaker, Ronald R. 2018. Riboswitches and translation control. Cold Spring Harbor Perspectives in Biology pii: a032797.
Breaker, Ronald R., and Gerald F. Joyce. 2014. The expanding view of RNA and DNA function. Chemistry & Biology 21: 1059–1065.
Brenner, Sydney. 1998. Biological computation. In The Limits of Reductionism in Biology, eds. Bock, Gregory R., and Jamie A. Goode, 106–116. New York: John Wiley & Sons.
Brenner, Sydney. 2009. Interview with Sydney Brenner by Soraya de Chadarevian. Studies in History and Philosophy of Biological and Biomedical Sciences 40: 65–71.
Brenner, Sydney. 2010. Sequences and consequences. Philosophical Transactions of the Royal Society of London B: Biological Sciences 365: 207–212.
Brighton, Henry, Simon Kirby, and Kenny Smith. 2005. Cultural selection for learnability: Three principles underlying the view that language adapts to be learnable. In Language Origins: Perspectives on Evolution, ed. Tallerman, Maggie, 291–309. Oxford: Oxford University Press.
References
Britten, Roy J., and Eric H. Davidson. 1969. Gene regulation for higher cells: A theory. Science 165: 349–357.
Brown, Theodore L. 2003. Making Truth: Metaphor in Science. Urbana, IL: University of Illinois Press.
Bühler, Karl. 1934. Theory of Language: The Representational Function of Language. Philadelphia: John Benjamins Publishing.
Bybee, Joan L. 2010. Language, Usage and Cognition. Cambridge, UK: Cambridge University Press.
Byrne, Richard W. 2016. Evolving Insight. Oxford: Oxford University Press.
Cai, Qiang, Lulu Qiao, Ming Wang, Baoye He, Feng-Mao Lin, Jared Palmquist, Sienna-Da Huang, and Hailing Jin. 2018. Plants send small RNAs in extracellular vesicles to fungal pathogen to silence virulence genes. Science 360: 1126–1129.
Calarco, John A., Yi Xing, Mario Cáceres, Joseph P. Calarco, Xinshu Xiao, Qun Pan, Christopher Lee, Todd M. Preuss, and Benjamin J. Blencowe. 2007. Global analysis of alternative splicing differences between humans and chimpanzees. Genes & Development 21: 2963–2975.
Callebaut, Werner, and Diego Rasskin-Gutman, eds. 2005. Modularity: Understanding the Development and Evolution of Natural Complex Systems. Cambridge, MA: MIT Press.
Campbell, Donald T. 1965. Variation and selective retention in socio-cultural evolution. In Social Change in Developing Areas: A Reinterpretation of Evolutionary Theory, eds. Barringer, Herbert R., George I. Blanksten, and Raymond W. Mack, 19–49. Cambridge, MA: Schenkman.
Campbell, Donald T. 1974. ‘Downward causation’ in hierarchically organised biological systems. In Studies in the Philosophy of Biology, eds. Ayala, Francisco J., and Theodosius Dobzhansky, 179–186. Berkeley, CA: University of California Press.
Campbell, Donald T. 1979. Comments on the sociobiology of ethics and moralizing. Behavioral Science 24: 37–45.
Campbell, Donald T. 1982. Legal and primary-group social controls. Journal of Social and Biological Structures 5: 431–438.
Carroll, Sean B. 2008. Evo-devo and an expanding evolutionary synthesis: A genetic theory of morphological evolution. Cell 134: 25–36.
Carter, Charles W., Jr. 2016. An alternative to the RNA world. Natural History 125: 28–33.
Catania, A. Charles. 1990. What good is five percent of a language competence? Behavioral and Brain Sciences 13: 729–731.
Cavalli-Sforza, Luigi L., and Marcus W. Feldman. 1981. Cultural Transmission and Evolution: A Quantitative Approach. Princeton: Princeton University Press.
Cech, Thomas R. 2011. The RNA worlds in context. In RNA Worlds: From Life’s Origins to Diversity in Gene Regulation, eds. Atkins, John F., Raymond F. Gesteland, and Thomas R. Cech, 1–5. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Cech, Thomas R., and Joan A. Steitz. 2014. The noncoding RNA revolution—Trashing old rules to forge new ones. Cell 157: 77–94.
Chafe, Wallace. 1982. Integration and involvement in speaking, writing, and oral literature. In Spoken and Written Language: Exploring Orality and Literacy, ed. Tannen, Deborah, 35–53. Norwood, NJ: Ablex.
Chafe, Wallace. 1994. Discourse, Consciousness, and Time: The Flow and Displacement of Conscious Experience in Speaking and Writing. Chicago: University of Chicago Press.
Chambers, Craig G., Michael K. Tanenhaus, and James S. Magnuson. 2004. Actions and affordances in syntactic ambiguity resolution. Journal of Experimental Psychology: Learning, Memory, and Cognition 30: 687–696.
Cheney, Dorothy L., and Robert M. Seyfarth. 1990. How Monkeys See the World: Inside the Mind of Another Species. Chicago: University of Chicago Press.
Chisholm, Sallie W. 2017. Prochlorococcus. Current Biology 27: R447–R448.
Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1980. Rules and Representations. New York: Columbia University Press.
Chomsky, Noam. 2005. Three factors in language design. Linguistic Inquiry 36: 1–22.
Christiansen, Morten H., and Nick Chater. 2016. Creating Language: Integrating Evolution, Acquisition, and Processing. Cambridge, MA: MIT Press.
Christiansen, Morten H., and Simon Kirby. 2003. Language evolution: The hardest problem in science? In Language Evolution, eds. Christiansen, Morten H., and Simon Kirby, 1–15. Oxford: Oxford University Press.
Church, George M., Yuan Gao, and Sriram Kosuri. 2012. Next-generation digital information storage in DNA. Science 337: 1628.
Churchill, Mair E. A., and Andrew A. Travers. 1991. Protein motifs that recognize structural features of DNA. Trends in Biochemical Sciences 16: 92–97.
Cisek, Paul. 1999. Beyond the computer metaphor: Behavior as interaction. Journal of Consciousness Studies 6: 125–142.
Cisek, Paul. 2019. Resynthesizing behavior through phylogenetic refinement. Attention, Perception, & Psychophysics 81: 2265–2287.
Clanchy, Michael T. 1983. Looking back from the invention of printing. In Literacy in Historical Perspective, ed. Resnick, Daniel P., 7–22. Washington: Library of Congress.
Clark, Andy. 2008. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press.
Clark, Andy, and David Chalmers. 1998. The extended mind. Analysis 58: 7–19.
Clark, Andy, and Jennifer B. Misyak. 2009. Language, innateness and universals. In Language Universals, eds. Christiansen, Morten H., Chris Collins, and Shimon Edelman, 253–260. Oxford: Oxford University Press.
Claverie, Jean-Michel. 2006. Viruses take center stage in cellular evolution. Genome Biology 7: 110.
Cloak Jr., F. T. 1975. That a culture and a social organization mutually shape each other through a process of continuing evolution. Man-Environment Systems 5: 3–6.
Collins, Lesley, and David Penny. 2005. Complex spliceosomal organization ancestral to extant eukaryotes. Molecular Biology and Evolution 22: 1053–1066.
Copley, Shelley D. 2003. Enzymes with extra talents: Moonlighting functions and catalytic promiscuity. Current Opinion in Chemical Biology 7: 265–272.
Corballis, Michael C. 2003. From Hand to Mouth: The Origins of Language. Princeton: Princeton University Press.
Cotterell, Brian, and Johan Kamminga. 1990. Mechanics of Pre-Industrial Technology: An Introduction to the Mechanics of Ancient and Traditional Material Culture. Cambridge, UK: Cambridge University Press.
Coventry, Kenny R. 2013. On the mapping between spatial language and the vision and action systems. In Language and Action in Cognitive Neuroscience, eds. Coello, Yann, and Angela Bartolo, 209–223. New York, NY: Psychology Press.
Coventry, Kenny R., and Simon C. Garrod. 2004. Saying, Seeing and Acting: The Psychological Semantics of Spatial Prepositions. Hove, UK: Psychology Press.
Crick, Francis H. C. 1958. On protein synthesis. Symposia of the Society for Experimental Biology 12: 138–163.
Crick, Francis H. C. 1968. The origin of the genetic code. Journal of Molecular Biology 38: 367–379.
Crick, Francis H. C. 1970. Central dogma of molecular biology. Nature 227: 561–563.
Croft, William. 1991. Syntactic Categories and Grammatical Relations: The Cognitive Organization of Information. Chicago: University of Chicago Press.
Croft, William. 2000. Explaining Language Change: An Evolutionary Approach. London: Longman.
Croft, William. 2002. The Darwinization of linguistics. Selection 3: 75–91.
Crone, Patricia. 1989. Pre-Industrial Societies: New Perspectives on the Past. Oxford: Basil Blackwell.
Csibra, Gergely, and György Gergely. 2011. Natural pedagogy as evolutionary adaptation. Philosophical Transactions of the Royal Society B: Biological Sciences 366: 1149–1157.
Dagan, Tal, Yael Artzy-Randrup, and William Martin. 2008. Modular networks and cumulative impact of lateral transfer in prokaryote genome evolution. Proceedings of the National Academy of Sciences 105: 10039–10044.
Damerow, Peter. 2012. The origins of writing and arithmetic. In The Globalization of Knowledge in History, ed. Renn, Jürgen, 153–174. Berlin: epubli GmbH.
Daniels, Peter T. 1996. The study of writing systems. In The World’s Writing Systems, eds. Daniels, Peter T., and William Bright, 1–17. Oxford: Oxford University Press.
Daubin, Vincent, and Gergely J. Szöllősi. 2016. Horizontal gene transfer and the history of life. Cold Spring Harbor Perspectives in Biology 8: a018036.
David, Paul A. 2001. Path dependence, its critics and the quest for ‘historical economics’. In Evolution and Path Dependence in Economic Ideas: Past and Present, eds. Garrouste, Pierre, and Stavros Ioannides, 15–40. Cheltenham, UK: Edward Elgar Publishing.
Davidson, Andrew. 2019. Writing: The re-construction of language. Language Sciences 72: 134–149.
Davidson, Eric H. 2010. Emerging properties of animal gene regulatory networks. Nature 468: 911–920.
Davidson, Eric H., and Douglas H. Erwin. 2006. Gene regulatory networks and the evolution of animal body plans. Science 311: 796–800.
Davies, Nicholas B. 2000. Cuckoos, Cowbirds and Other Cheats. London: A&C Black.
Dawkins, Richard. 1978. Replicator selection and the extended phenotype. Zeitschrift für Tierpsychologie 47: 61–76.
Dawkins, Richard. 1982. The Extended Phenotype. Oxford: Oxford University Press.
Dawkins, Richard. 1983. Universal Darwinism. In Evolution from Molecules to Men, ed. Bendall, Derek S., 403–428. Cambridge, UK: Cambridge University Press.
Dawkins, Richard. 1990. Parasites, desiderata lists and the paradox of the organism. Parasitology 100: S63–S73.
Dawkins, Richard. 1993. Viruses of the mind. In Dennett and His Critics: Demystifying Mind, ed. Dahlbom, Bo, 11–27. Malden, MA: Blackwell.
Dawkins, Richard. 2004. Extended phenotype—but not too extended. A reply to Laland, Turner and Jablonka. Biology and Philosophy 19: 377–396.
Dawkins, Richard. 2006. The Selfish Gene, Third Edition. Oxford: Oxford University Press.
Dawkins, Richard. 2012. Foreword. In Host Manipulation by Parasites, eds. Hughes, David P., Jacques Brodeur, and Frédéric Thomas, xi–xiii. Oxford: Oxford University Press.
Dawkins, Richard. 2016. Preface to the second edition. In The Selfish Gene, 4th Edition, xix–xxiii. Oxford: Oxford University Press.
Deacon, Terrence W. 1997. The Symbolic Species: The Co-Evolution of Language and the Brain. New York: W. W. Norton & Company.
Delbrück, Max. 1949. A physicist looks at biology. Transactions of the Connecticut Academy of Arts & Sciences 38: 173–190.
Delmont, Tom O., and A. M. Eren. 2018. Linking pangenomes and metagenomes: The Prochlorococcus metapangenome. PeerJ 6: e4320.
DeLong, J. Bradford, and A. Michael Froomkin. 2000. Speculative microeconomics for tomorrow’s economy. In Internet Publishing and Beyond: The Economics of Digital Information and Intellectual Property, eds. Kahin, Brian, and Hal R. Varian, 6–43. Cambridge, MA: MIT Press.
Demir, Ebru, and Barry J. Dickson. 2005. fruitless splicing specifies male courtship behavior in Drosophila. Cell 121: 785–794.
Dennett, Daniel C. 1988. Out of the armchair and into the field. Poetics Today 9: 205–221.
Dennett, Daniel C. 1991. Consciousness Explained. New York: Little, Brown.
Dennett, Daniel C. 1995. Darwin’s Dangerous Idea: Evolution and the Meanings of Life. New York: Simon & Schuster.
Dennett, Daniel C. 2001. The evolution of evaluators. In The Evolution of Economic Diversity, eds. Nicita, Antonio, and Ugo Pagano, 66–81. London: Routledge.
Dennett, Daniel C. 2003. Explaining the “magic” of consciousness. Journal of Cultural and Evolutionary Psychology 1: 7–19.
Dennett, Daniel C. 2017. What scientific term or concept ought to be more widely known? Edge. Retrieved from www.edge.org/response-detail/27002.
Diakonoff, Igor M. 1983. Some reflections on numerals in Sumerian: Towards a history of mathematical speculation. Journal of the American Oriental Society 103: 84–92.
Diamond, Jared, and Peter Bellwood. 2003. Farmers and their languages: The first expansions. Science 300: 597–603.
Dixon, Robert M. W., and Alexandra Y. Aikhenvald. 2002. Word: A typological framework. In Word: A Cross-Linguistic Typology, eds. Dixon, Robert M. W., and Alexandra Y. Aikhenvald, 1–41. Cambridge, UK: Cambridge University Press.
Doudna, Jennifer A., and Thomas R. Cech. 2002. The chemical repertoire of natural ribozymes. Nature 418: 222–228.
Drexler, Eric K. 1992. Nanosystems: Molecular Machinery, Manufacturing, and Computation. New York: John Wiley.
Dunbar, Robin I. M., N. D. C. Duncan, and Daniel Nettle. 1995. Size and structure of freely forming conversational groups. Human Nature 6: 67–78.
Dyson, Anne Haas. 2013. Relations between oral language and literacy. In The Encyclopedia of Applied Linguistics, ed. Chapelle, Carol A., 1–7. New York: John Wiley.
Dyson, Freeman J. 1979. Disturbing the Universe. New York: Basic Books.
Dyson, Freeman J. 2007. Our biotech future. The New York Review of Books 54: 12 (July 19, 2007).
Early, P., J. Rogers, M. Davis, K. Calame, M. Bond, R. Wall, and L. Hood. 1980. Two mRNAs can be produced from a single immunoglobulin μ gene by alternative RNA processing pathways. Cell 20: 313–319.
Eerkens, Jelmer W., and Carl P. Lipo. 2007. Cultural transmission theory and the archaeological record: Providing context to understanding variation and temporal changes in material culture. Journal of Archaeological Research 15: 239–274.
Eigen, Manfred, and Ruthild Winkler. 1981. Laws of the Game: How the Principles of Nature Govern Chance. New York: Alfred A. Knopf.
Eigner, Joseph, Helga Boedtker, and George Michaels. 1961. The thermal degradation of nucleic acids. Biochimica et Biophysica Acta 51: 165–168.
Ellis, Nick C., and Diane Larsen-Freeman, eds. 2009. Language as a Complex Adaptive System. Malden, MA: Wiley-Blackwell.
Enquist, M., S. Ghirlanda, A. Jarrick, and C.-A. Wachtmeister. 2008. Why does human culture increase exponentially? Theoretical Population Biology 74: 46–55.
Erwin, Douglas H. 2015. A public goods approach to major evolutionary innovations. Geobiology 13: 308–315.
Erwin, Douglas H., and Eric H. Davidson. 2009. The evolution of hierarchical gene regulatory networks. Nature Reviews Genetics 10: 141–148.
Evans, Nicholas, and Stephen C. Levinson. 2009. The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences 32: 429–448.
Everett, Daniel L. 2008. Don’t Sleep, There are Snakes: Life and Language in the Amazonian Jungle. New York: Knopf Doubleday.
Feynman, Richard P. 1960. There’s plenty of room at the bottom: An invitation to enter a new field of physics. Caltech Engineering & Science 23: 22–36.
Fica, Sebastian M., Nicole Tuttle, Thaddeus Novak, Nan-Sheng Li, Jun Lu, Prakash Koodathingal, Qing Dai, Jonathan P. Staley, and Joseph A. Piccirilli. 2013. RNA catalyses nuclear pre-mRNA splicing. Nature 503: 229–234.
Fischer, Emil. 1894. Einfluss der Configuration auf die Wirkung der Enzyme. Berichte der Deutschen Chemischen Gesellschaft 27: 2985–2993.
Fitch, W. Tecumseh, Marc D. Hauser, and Noam Chomsky. 2005. The evolution of the language faculty: Clarifications and implications. Cognition 97: 179–210.
Forterre, Patrick. 2012. Darwin’s goldmine is still open: Variation and selection run the world. Frontiers in Cellular and Infection Microbiology 2: 1–13.
Forterre, Patrick. 2016. To be or not to be alive: How recent discoveries challenge the traditional definitions of viruses and life. Studies in History and Philosophy of Science Part C: Biological and Biomedical Sciences 59: 100–108.
Franklin-Hall, Laura R. 2010. Trashing life’s tree. Biology and Philosophy 25: 689–709.
Fredericksen, Maridel A., Yizhe Zhang, Missy L. Hazen, Raquel G. Loreto, Colleen A. Mangold, Danny Z. Chen, and David P. Hughes. 2017. Three-dimensional visualization and a deep-learning model reveal complex fungal parasite networks in behaviorally manipulated ants. Proceedings of the National Academy of Sciences 114: 12590–12595.
Freitas, Robert A., and Ralph C. Merkle. 2004. Kinematic Self-Replicating Machines. Georgetown, TX: Landes Bioscience.
Fultot, Martin, and Michael T. Turvey. 2019. von Uexküll’s Theory of Meaning and Gibson’s Organism—Environment Reciprocity. Ecological Psychology 31: 289–315.
Furlong, Eileen E. M., and Michael Levine. 2018. Developmental enhancers and chromosome topology. Science 361: 1341–1345.
Furnham, Nicholas, Ian Sillitoe, Gemma L. Holliday, Alison L. Cuff, Roman A. Laskowski, Christine A. Orengo, and Janet M. Thornton. 2012. Exploring the evolution of novel enzyme functions within structurally defined protein superfamilies. PLoS Computational Biology 8: 1–13.
Gagniuc, Paul, and Constantin Ionescu-Tirgoviste. 2012. Eukaryotic genomes may exhibit up to 10 generic classes of gene promoters. BMC Genomics 13: 512.
Galas, D. J., T. B. L. Kirkwood, and R. F. Rosenberger. 1986. An introduction to the problem of accuracy. In Accuracy in Molecular Processes, eds. Kirkwood, T. B. L., R. F. Rosenberger, and D. J. Galas, 1–16. London: Chapman and Hall.
Gallagher, Shaun. 2013. The socially extended mind. Cognitive Systems Research 25–26: 4–12.
Gallagher, Shaun, and Anthony Crisafi. 2009. Mental institutions. Topoi 28: 45–51.
García, Ángel López. 2005. The Grammar of Genes: How the Genetic Code Resembles the Linguistic Code. Bern: Peter Lang.
Gardner, Martin. 1970. The fantastic combinations of John Conway’s new solitaire game “life”. Scientific American 223: 120–123.
Gayon, Jean. 2015. Enzymatic cybernetics: An unpublished work by Jacques Monod. Comptes Rendus Biologies 338: 398–405.
Gazzaniga, Michael S. 2018. The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind. New York: Farrar, Straus and Giroux.
Gee, James Paul. 2006. Oral discourse in a world of literacy. Research in the Teaching of English 41: 153–159.
Geertz, Clifford. 1966. Religion as a cultural system. In Anthropological Approaches to the Study of Religion, ed. Banton, Michael, 1–46. London: Tavistock Publications.
Gibbons, Ann. 1990. Evolving similarities-between disciplines. Science: 504–506.
Gibson, James J. 1966. The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
Gibson, James J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Gibson, James J. 1982. Notes on direct perception and indirect apprehension. In Reasons for Realism: Selected Essays of James J. Gibson, eds. Reed, Edward S., and Rebecca Jones, 289–291. Hillsdale, NJ: Lawrence Erlbaum.
Gibson, Kathleen R., and Tim Ingold, eds. 1993. Tools, Language and Cognition in Human Evolution. Cambridge, UK: Cambridge University Press.
Gifford, David K. 1994. On the path to computation with DNA. Science 266: 993–995.
Gilbert, Walter S. 1978. Why genes in pieces? Nature 271: 501.
Gilbert, Walter S. 1985. Genes-in-pieces revisited. Science 228: 823–824.
Gilbert, Walter S. 1986. Origin of life: The RNA world. Nature 319: 618–619.
Givón, Talmy. 1979. On Understanding Grammar. New York: Academic Press.
Givón, Talmy. 2002. Bio-Linguistics: The Santa Barbara Lectures. Philadelphia: John Benjamins Publishing.
Glenberg, Arthur M. 2009. Language and action: Creating sensible combinations of ideas. In The Oxford Handbook of Psycholinguistics, ed. Gaskell, M. Gareth, 361–370. Oxford: Oxford University Press.
Glenberg, Arthur M., Raymond Becker, Susann Klötzer, Lidia Kolanko, Silvana Müller, and Mike Rinck. 2009. Episodic affordances contribute to language comprehension. Language and Cognition 1: 113–135.
Gogarten, J. Peter, W. Ford Doolittle, and Jeffrey G. Lawrence. 2002. Prokaryotic evolution in light of gene transfer. Molecular Biology and Evolution 19: 2226–2238.
Goldin-Meadow, Susan. 1999. The role of gesture in communication and thinking. Trends in Cognitive Sciences 3: 419–429.
Goldman, Nick, Paul Bertone, Siyuan Chen, Christophe Dessimoz, Emily M. LeProust, Botond Sipos, and Ewan Birney. 2013. Towards practical, high-capacity, low-maintenance information storage in synthesized DNA. Nature 494: 77–80.
Goldstein, Herbert. 1980. Classical Mechanics. Reading, MA: Addison-Wesley.
Goldstine, Herman H. 1972. The Computer from Pascal to von Neumann. Princeton: Princeton University Press.
Gong, Tao, Lan Shuai, and Yicheng Wu. 2018. Rethinking foundations of language from a multidisciplinary perspective. Physics of Life Reviews 26: 120–138.
Gontier, Nathalie. 2006. Evolutionary epistemology and the origin and evolution of language: Taking symbiogenesis seriously. In Evolutionary Epistemology, Language and Culture: A Non-adaptationist, Systems Theoretical Approach, eds. Gontier, Nathalie, Jean Paul Van Bendegem, and Diederik Aerts, 195–226. Dordrecht: Springer.
Gontier, Nathalie. 2012. Selectionist approaches in evolutionary linguistics: An epistemological analysis. International Studies in the Philosophy of Science 26: 67–95.
Gontier, Nathalie. 2015. Reticulate evolution everywhere. In Reticulate Evolution: Symbiogenesis, Lateral Gene Transfer, Hybridization and Infectious Heredity, ed. Gontier, Nathalie, 1–40. New York: Springer.
Goody, Jack. 1977. The Domestication of the Savage Mind. Cambridge, UK: Cambridge University Press.
Goody, Jack. 1986. The Logic of Writing and the Organization of Society. Cambridge, UK: Cambridge University Press.
Goody, Jack. 2000. The Power of the Written Tradition. Washington: Smithsonian Institution Press.
Goody, Jack, and Ian Watt. 1963. The consequences of literacy. Comparative Studies in Society and History 5: 304–345.
Gott, J. Richard. 1993. Implications of the Copernican principle for our future prospects. Nature 363: 315–319.
Gould, Stephen Jay. 2002. The Structure of Evolutionary Theory. Cambridge, MA: Harvard University Press.
Gray, Russell D., and Quentin D. Atkinson. 2003. Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature 426: 435–439.
Greenblatt, Ethan J., and Allan C. Spradling. 2018. Fragile X mental retardation 1 gene enhances the translation of large autism-related proteins. Science 361: 709–712.
Greif, Avner. 2006. Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge, UK: Cambridge University Press.
Haeckel, Ernst. 1903. The Evolution of Man: A Popular Exposition of the Principal Points of Human Ontogeny and Phylogeny, Volume 1. New York: D. Appleton.
Hagerman, Randi J., Elizabeth Berry-Kravis, Heather Cody Hazlett, Donald B. Bailey Jr, Herve Moine, R. Frank Kooy, Flora Tassone, Ilse Gantois, Nahum Sonenberg, Jean Louis Mandel, and Paul J. Hagerman. 2017. Fragile X syndrome. Nature Reviews Disease Primers 3: 17065.
Haldane, J. B. S. 1953. Animal ritual and human language. Diogenes 1: 61–73.
Hallyn, Fernand, ed. 2000. Metaphor and Analogy in the Sciences. Dordrecht: Springer.
Hamming, Richard W. 1980. The unreasonable effectiveness of mathematics. American Mathematical Monthly 87: 81–90.
Harnad, Stevan. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena 42: 335–346.
Harris, Roy. 1986. The Origin of Writing. LaSalle, IL: Open Court.
Harris, Zellig S. 1968. Mathematical Structures of Language. New York: John Wiley.
Hauber, Mark E., Stefani A. Russo, and Paul W. Sherman. 2001. A password for species recognition in a brood-parasitic bird. Proceedings of the Royal Society of London B: Biological Sciences 268: 1041–1048.
Hauser, Kevin, Bernard Essuman, Yiqing He, Evangelos Coutsias, Miguel Garcia-Diaz, and Carlos Simmerling. 2016. A human transcription factor in search mode. Nucleic Acids Research 44: 63–74.
Hauser, Marc D., Noam Chomsky, and W. Tecumseh Fitch. 2002. The faculty of language: What is it, who has it, and how did it evolve? Science 298: 1569–1579.
Heine, Bernd, and Tania Kuteva. 2002. On the evolution of grammatical forms. In The Transition to Language, ed. Wray, Alison, 376–397. Oxford: Oxford University Press.
Heine, Bernd, and Tania Kuteva. 2007. The Genesis of Grammar: A Reconstruction. Oxford: Oxford University Press.
Henrich, Joseph. 2016. The Secret of our Success: How Culture is Driving Human Evolution, Domesticating our Species, and Making Us Smarter. Princeton: Princeton University Press.
Herken, Rolf. 1994. The Universal Turing Machine: A Half-Century Survey. Vienna: Springer-Verlag.
Herva, Vesa-Pekka, Kerkko Nordqvist, Antti Lahelma, and Janne Ikäheimo. 2014. Cultivation of perception and the emergence of the Neolithic world. Norwegian Archaeological Review 47: 141–160.
Hewes, Gordon W. 1973. Primate communication and the gestural origin of language. Current Anthropology 14: 5–24.
Hockett, Charles F. 1959. Animal “languages” and human language. Human Biology 31: 32–39.
Hockett, Charles F. 1960a. Logical considerations in the study of animal communication. In Animal Sounds and Communication, eds. Lanyon, W. E., and W. N. Tavolga, 392–430. Washington: American Institute of Biological Sciences.
Hockett, Charles F. 1960b. The origin of speech. Scientific American 203: 4–12.
Hockett, Charles F. 1966. The problem of universals in language. In Universals of Language, ed. Greenberg, Joseph H., 1–29. Cambridge, MA: MIT Press.
Hockett, Charles F., and Stuart A. Altmann. 1968. A note on design features. In Animal Communication: Techniques of Study and Results of Research, ed. Sebeok, Thomas A., 61–72. Bloomington, IN: Indiana University Press.
Hodgson, Geoffrey M. 2006. What are institutions? Journal of Economic Issues XL: 1–25.
Hodgson, Geoffrey M., and Thorbjørn Knudsen. 2010. Darwin’s Conjecture: The Search for General Principles of Social and Economic Evolution. Chicago: University of Chicago Press.
Hoffmeister, Robert J., and Catherine Caldwell-Harris. 2014. Acquiring English as a second language via print: The task for deaf children. Cognition 132: 229–242.
Hofstadter, Douglas R. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York, NY: Basic Books.
Hopper, Paul J., and Sandra A. Thompson. 1984. The discourse basis for lexical categories in universal grammar. Language 60: 703–752.
Huang, Biao, and Rongxin Zhang. 2014. Regulatory non-coding RNAs: Revolutionizing the RNA world. Molecular Biology Reports 41: 3915–3923.
Hughes, David P. 2008. The extended phenotype within the colony and how it obscures social communication. In Sociobiology of Communication: An Interdisciplinary Perspective, eds. d’Ettorre, Patrizia, and David P. Hughes, 171–190. Oxford: Oxford University Press.
Hughes, David P. 2013. Pathways to understanding the extended phenotype of parasites in their hosts. Journal of Experimental Biology 216: 142–147.
Hughes, David P., Jacques Brodeur, and Frédéric Thomas, eds. 2012. Host Manipulation by Parasites. Oxford: Oxford University Press.
Hull, David L. 1980. Individuality and selection. Annual Review of Ecology and Systematics 11: 311–332.
Hull, David L. 1982. The naked meme. In Learning, Development and Culture: Essays in Evolutionary Epistemology, ed. Plotkin, Henry C., 273–327. New York: John Wiley & Sons.
Hull, David L. 1988. Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. Chicago: University of Chicago Press.
Hull, David L. 2000. Taking memetics seriously: Memetics will be what we make it. In Darwinizing Culture: The Status of Memetics as a Science, ed. Aunger, Robert, 43–68. Oxford: Oxford University Press.
Hull, David L. 2002a. Species, languages and the comparative method. Selection 3: 17–28.
Hull, David L. 2002b. Introduction. Selection 3: 5–6.
Hurley, Susan L. 1998. Consciousness in Action. Cambridge, MA: Harvard University Press.
Hyman, Malcolm D., and Jürgen Renn. 2012. Survey: From technology transfer to the origins of science. In The Globalization of Knowledge in History, ed. Renn, Jürgen, 75–104. Berlin: Max Planck Research Library for the History and Development of Knowledge.
Innis, Harold. 1950. Empire and Communications. Oxford: Oxford University Press.
Irimia, Manuel, Jakob Lewin Rukov, David Penny, and Scott William Roy. 2007. Functional and evolutionary analysis of alternatively spliced genes is consistent with an early eukaryotic origin of alternative splicing. BMC Evolutionary Biology 7: 188.
Jablonka, Eva. 2004. From replicators to heritably varying phenotypic traits: The extended phenotype revisited. Biology and Philosophy 19: 353–375.
Jackendoff, Ray, and Steven Pinker. 2005. The nature of the language faculty and its implications for evolution of language (Reply to Fitch, Hauser, and Chomsky). Cognition 97: 211–225.
Jacob, François. 1977a. Evolution and tinkering. Science 196: 1161–1166.
Jacob, François. 1977b. The linguistic model in biology. In Roman Jakobson: Echoes of his Scholarship, eds. Armstrong, D., and C. H. V. Schooneveld, 185–192. Lisse: Peter de Ridder.
Jacob, François, and Jacques Monod. 1961. Genetic regulatory mechanisms in the synthesis of proteins. Journal of Molecular Biology 3: 318–356.
Jakobson, Roman, and Linda R. Waugh. 1979. The Sound Shape of Language. Berlin: Walter De Gruyter.
Japyassú, Hilton F., and Kevin N. Laland. 2017. Extended spider cognition. Animal Cognition 20: 375–395.
Ji, Sungchul. 1997. Isomorphism between cell and human languages: Molecular biological, bioinformatic and linguistic implications. BioSystems 44: 17–39.
Johnsen, Sönke, and Kenneth J. Lohmann. 2008. Magnetoreception in animals. Physics Today 61: 29–35.
Jose, Antony M., Garrett A. Soukup, and Ronald R. Breaker. 2001. Cooperative binding of effectors by an allosteric ribozyme. Nucleic Acids Research 29: 1631–1637.
Joyce, Gerald F. 1989. RNA evolution and the origins of life. Nature 338: 217–224.
Joyce, Gerald F. 2002. The antiquity of RNA-based evolution. Nature 418: 214–221.
Joyce, Gerald F. 2012. Bit by bit: The Darwinian basis of life. PLoS Biology 10: 1–6.
Joyce, Gerald F., and Leslie E. Orgel. 1999. Prospects for understanding the origin of the RNA world. In The RNA World, eds. Gesteland, Raymond F., Thomas R. Cech, and John F. Atkins, 49–77. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Joyce, Gerald F., and Leslie E. Orgel. 2006. Progress toward understanding the origin of the RNA world. In The RNA World, eds. Gesteland, Raymond F., Thomas R. Cech, and John F. Atkins, 23–56. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Judson, Horace Freeland. 1996. The Eighth Day of Creation. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press. Kalmus, Hans. 1962. Analogies of language to life. Language and Speech 5: 15–25. Kay, Lily E. 2000. Who Wrote the Book of Life?: A History of the Genetic Code. Palo Alto: Stanford University Press. Kay, Paul. 1977. Language evolution and speech style. In Sociocultural Dimensions of Language Change, eds. Blount, Ben G., and Mary Sanches, 21–33. New York: Academic Press. Keller, Evelyn Fox. 2009. Rethinking the meaning of biological information. Biological Theory 4: 159–166. Kemeny, John G. 1955. Man viewed as a machine. Scientific American 192: 58–67. Kendon, Adam. 2004. Gesture: Visible Action as Utterance. Cambridge, UK: Cambridge University Press. Kendrew, John C. 1967. How molecular biology started. Scientific American 216.3: 141–144. Keren, Hadas, Galit Lev-Maor, and Gil Ast. 2010. Alternative splicing and evolution: Diversification, exon definition and function. Nature Reviews Genetics 11: 345–355. Khaitovich, Philipp, Wolfgang Enard, Michael Lachmann, and Svante Pääbo. 2006. Evolution of primate gene expression. Nature Reviews Genetics 7: 693–702. King, Mary-Claire, and Allan C. Wilson. 1975. Evolution at two levels in humans and chimpanzees. Science 188: 107–116. Kirby, Simon. 1998. Fitness and the selective adaptation of language. In Approaches to the Evolution of Language, eds. Hurford, James R., Michael G. Studdert-Kennedy, and Chris Knight, 359–383. Cambridge, UK: Cambridge University Press. Knight, Chris. 2007. Language co-evolved with the rule of law. Mind & Society 7: 109–128. Knudsen, Thorbjørn. 2002. The significance of tacit knowledge in the evolution of human language. Selection 3: 93–112. Koonin, Eugene V. 2012. Does the central dogma still stand? Biology Direct 7: 27. Koonin, Eugene V., and Yuri I. Wolf. 2012. Evolution of microbes and viruses: A paradigm shift in evolutionary biology? 
Frontiers in Cellular and Infection Microbiology 2: 1–15. Koshland, Daniel E. 1958. Application of a theory of enzyme specificity to protein synthesis. Proceedings of the National Academy of Sciences 44: 98–104. Koshland, Daniel E. 1994. The key–lock theory and the induced fit theory. Angewandte Chemie International Edition in English 33: 2375–2378. Kosse, Krisztina. 2000. Some regularities in human group formation and the evolution of societal complexity. Complexity 6: 60–64. Krull, W. H., and C. R. Mapes. 1953. Studies on the biology of Dicrocoelium dendriticum (Rudolphi, 1819) Looss, 1899 (Trematoda: Dicrocoeliidae), including its relation to the intermediate host, Cionella lubrica (Müller). IX. Notes on the cyst, metacercaria, and infection in the ant, Formica fusca. Cornell Veterinarian 43: 389–410. Krumlauf, Robb. 2018. Hox genes, clusters and collinearity. International Journal of Developmental Biology 62: 659–663. Küpper, Clemens, Michael Stocks, Judith E. Risse, Natalie dos Remedios, Lindsay L. Farrell, Susan B. McRae, Tawna C. Morgan, Natalia Karlionova, Pavel Pinchuk, Yvonne I. Verkuil, Alexander S. Kitaysky, John C. Wingfield, Theunis Piersma, Kai Zeng, Jon Slate, Mark Blaxter, David B. Lank, and Terry Burke. 2016. A supergene determines highly divergent male reproductive morphs in the ruff. Nature Genetics 48: 79–83. Küppers, Bernd-Olaf. 1990. Information and the Origin of Life. Cambridge, MA: MIT Press.
Küppers, Bernd-Olaf. 2018. The Computability of the World: How Far Can Science Take Us? New York: Springer. Labov, William. 1990. On the adequacy of natural languages: I. The development of tense. In Pidgin and Creole Tense-Mood-Aspect Systems, ed. Singler, John Victor, 1–58. Philadelphia: John Benjamins. Lake, Mark. 1998. Digging for memes: The role of material objects in cultural evolution. In Cognition and Material Culture: The Archaeology of Symbolic Storage, eds. Scarre, Christopher, and Colin Renfrew, 77–88. Cambridge, UK: McDonald Institute for Archaeological Research. Laland, Kevin N. 2004. Extending the extended phenotype. Biology and Philosophy 19: 313–325. Laland, Kevin N. 2017. Darwin’s Unfinished Symphony: How Culture Made the Human Mind. Princeton: Princeton University Press. Laland, Kevin N., John Odling-Smee, and Marcus W. Feldman. 2004. Causing a commotion. Nature 429: 609. LaMonte, Gregory, Nisha Philip, Joseph Reardon, Joshua R. Lacsina, William Majoros, Lesley Chapman, Courtney D. Thornburg, Marilyn J. Telen, Uwe Ohler, Christopher V. Nicchitta, Timothy Haystead, and Jen-Tsan Chi. 2012. Translocation of sickle cell erythrocyte microRNAs into Plasmodium falciparum inhibits parasite translation and contributes to malaria resistance. Cell Host & Microbe 12: 187–199. Landau, Barbara, and Ray Jackendoff. 1993. “What” and “where” in spatial language and spatial cognition. Behavioral and Brain Sciences 16: 217–238. Landauer, Rolf. 1961. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development 5: 183–191. Lane, Nick, and William Martin. 2010. The energetics of genome complexity. Nature 467: 929–934. Lang, Andrew S., and J. Thomas Beatty. 2007. Importance of widespread gene transfer agent genes in α-proteobacteria. Trends in Microbiology 15: 54–62. Larkin, Alyse A., Catherine A. Garcia, Kimberly A. Ingoglia, Nathan S. Garcia, Steven E. Baer, Benjamin S. Twining, Michael W. Lomas, and Adam C. Martiny. 2020.
Subtle biogeochemical regimes in the Indian Ocean revealed by spatial and diel frequency of Prochlorococcus haplotypes. Limnology and Oceanography 65: S220–S232. Latour, Bruno, and Steve Woolgar. 1986. Laboratory Life: The Construction of Scientific Facts. Princeton: Princeton University Press. Lees, Robert B. 1979. Language and the genetic code. In The Signifying Animal, eds. Rauch, I., and G. F. Carr, 218–226. Bloomington: Indiana University Press. Leff, Harvey, and Andrew F. Rex. 2002. Maxwell’s Demon: Entropy, Classical and Quantum Information, Computing. Bristol, UK: Institute of Physics. Leu, Kevin, Benedikt Obermayer, Sudha Rajamani, Ulrich Gerland, and Irene A. Chen. 2011. The prebiotic evolutionary advantage of transferring genetic information from RNA to DNA. Nucleic Acids Research 39: 8135–8147. Levine, Michael, Claudia Cattoglio, and Robert Tjian. 2014. Looping back to leap forward: Transcription enters a new era. Cell 157: 13–25. Levinson, Stephen C., and Judith Holler. 2014. The origin of human multi-modal communication. Philosophical Transactions of the Royal Society B: Biological Sciences 369: 20130302. Lévi-Strauss, Claude. 1969. Conversations with Claude Lévi-Strauss. ed. Charbonnier, Georges. London: Jonathan Cape.
Lewens, Tim. 2015. Cultural Evolution: Conceptual Challenges. Oxford: Oxford University Press. Lewin, Roger. 1984. Why is development so illogical? Science 224: 1327–1329. Lewis, James J., Karin R. L. van der Burg, Anyi Mazo-Vargas, and Robert D. Reed. 2016. ChIP-seq-annotated Heliconius erato genome highlights patterns of cis-regulatory evolution in Lepidoptera. Cell Reports 16: 2855–2863. Lewontin, Richard C. 1978. Adaptation. Scientific American 239: 212–230. Lewontin, Richard C. 1991. Biology as Ideology: The Doctrine of DNA. Toronto: House of Anansi Press. Lewontin, Richard C. 1992. The dream of the human genome. New York Review of Books 39: 31–40. Lieberman-Aiden, Erez, Nynke L. van Berkum, Louise Williams, Maxim Imakaev, Tobias Ragoczy, Agnes Telling, Ido Amit, Bryan R. Lajoie, Peter J. Sabo, Michael O. Dorschner, Richard Sandstrom, Bradley Bernstein, M. A. Bender, Mark Groudine, Andreas Gnirke, John Stamatoyannopoulos, Leonid A. Mirny, Eric S. Lander, and Job Dekker. 2009. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science 326: 289–293. Lieberman, Erez, Jean-Baptiste Michel, Joe Jackson, Tina Tang, and Martin A. Nowak. 2007. Quantifying the evolutionary dynamics of language. Nature 449: 713–716. Lind, Johan, Patrik Lindenfors, Stefano Ghirlanda, Kerstin Lidén, and Magnus Enquist. 2013. Dating human cultural capacity using phylogenetic principles. Scientific Reports 3: 1785. Lindblad-Toh, Kerstin, Claire M. Wade, Tarjei S. Mikkelsen, Elinor K. Karlsson, David B. Jaffe, Michael Kamal, Michele Clamp, Jean L. Chang, Edward J. Kulbokas III, Michael C. Zody, Evan Mauceli, Xiaohui Xie, Matthew Breen, Robert K. Wayne, Elaine A. Ostrander, Chris P. Ponting, Francis Galibert, Douglas R. Smith, Pieter J. deJong, Ewen Kirkness, Pablo Alvarez, Tara Biagi, William Brockman, Jonathan Butler, Chee-Wye Chin, April Cook, James Cuff, Mark J. 
Daly, David DeCaprio, Sante Gnerre, Manfred Grabherr, Manolis Kellis, Michael Kleber, Carolyne Bardeleben, Leo Goodstadt, Andreas Heger, Christophe Hitte, Lisa Kim, Klaus-Peter Koepfli, Heidi G. Parker, John P. Pollinger, Stephen M. J. Searle, Nathan B. Sutter, Rachael Thomas, Caleb Webber, Broad Sequencing Platform members, and Eric S. Lander. 2005. Genome sequence, comparative analysis and haplotype structure of the domestic dog. Nature 438: 803–819. Linell, Per. 2005. The Written Language Bias in Linguistics: Its Nature, Origins and Transformations. New York: Routledge. Loemker, Leroy E., ed. 1989. Leibniz: Philosophical Papers and Letters. Dordrecht: Kluwer. Lovelace, Augusta Ada. 1843. Notes by A. A. L. In Taylor’s Scientific Memoirs, London, III: 666–731. Lumsden, Charles J., and Edward O. Wilson. 1981. Genes, Mind, and Culture: The Coevolutionary Process. Cambridge, MA: Harvard University Press. Lupas, Andrei N., Chris P. Ponting, and Robert B. Russell. 2001. On the evolution of protein folds. Journal of Structural Biology 134: 191–203. Lupyan, Gary, and Rick Dale. 2016. Why are there different languages? The role of adaptation in linguistic diversity. Trends in Cognitive Sciences 20: 649–660. Lyman, R. Lee, and Michael J. O’Brien. 2003. Cultural traits: Units of analysis in early twentieth-century anthropology. Journal of Anthropological Research 59: 225–250. Lynch, Kathleen S., Annmarie Gaglio, Elizabeth Tyler, Joseph Coculo, Matthew I. M. Louder, and Mark E. Hauber. 2017. A neural basis for password-based species recognition in an avian brood parasite. Journal of Experimental Biology jeb.158600.
Madden, Andrew D., Jared Bryson, and Joe Palimi. 2006. Information behavior in preliterate societies. In New Directions in Human Information Behavior, eds. Spink, A., and C. Cole, 33–53. Dordrecht: Springer. Mallet, James, Nora Besansky, and Matthew W. Hahn. 2015. How reticulated are species? BioEssays 38: 140–149. Mann, Charles C. 2018. The Wizard and the Prophet: Two Remarkable Scientists and Their Dueling Visions to Shape Tomorrow’s World. New York: Alfred A. Knopf. Margulis, Lynn (Sagan). 1967. On the origin of mitosing cells. Journal of Theoretical Biology 14: 255–274. Marr, David. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman. Martin, William F. 2011. Early evolution without a tree of life. Biology Direct 6: 36. Marzke, Mary W. 2013. Tool making, hand morphology and fossil hominins. Philosophical Transactions of the Royal Society of London B: Biological Sciences 368: 20120414. Masters, Roger D. 1970. Genes, language, and evolution. Semiotica 2: 295–320. Matera, A. Gregory, and Zefeng Wang. 2014. A day in the life of the spliceosome. Nature Reviews Molecular Cell Biology 15: 108–121. Maynard Smith, John. 2000. The concept of information in biology. Philosophy of Science 67: 177–194. Maynard Smith, John, and Eörs Szathmáry. 1995. The Major Transitions in Evolution. Oxford: Oxford University Press. McCloskey, Deirdre N. 2016. Bourgeois Equality: How Ideas, Not Capital or Institutions, Enriched the World. Chicago: University of Chicago Press. McInerney, James O., Davide Pisani, Eric Bapteste, and Mary J. O’Connell. 2011. The public goods hypothesis for the evolution of life on Earth. Biology Direct 6: 41. McMahon, April, and Robert McMahon. 2003. Finding families: Quantitative methods in language classification. Transactions of the Philological Society 101: 7–55. McNeill, David. 2012. How Language Began: Gesture and Speech in Human Evolution. 
Cambridge, UK: Cambridge University Press. Merleau-Ponty, Maurice. 2002. Phenomenology of Perception. London: Routledge Classics. Mesoudi, Alex, and Michael J. O’Brien. 2008. The learning and transmission of hierarchical cultural recipes. Biological Theory 3: 63–72. Mesoudi, Alex, Andrew Whiten, and Kevin N. Laland. 2004. Is human cultural evolution Darwinian? Evidence reviewed from the perspective of the Origin of Species. Evolution 58: 1–11. Mesoudi, Alex, Andrew Whiten, and Kevin N. Laland. 2006. Towards a unified science of cultural evolution. Behavioral and Brain Sciences 29: 329–347. Michaels, Claire F., and Claudia Carello. 1981. Direct Perception. Englewood Cliffs, NJ: Prentice-Hall. Michailidou, Anna. 2010. Measuring by weight in the Late Bronze Age Aegean: The people behind the measuring tools. In The Archaeology of Measurement: Comprehending Heaven, Earth and Time in Ancient Societies, eds. Morley, Iain, and Colin Renfrew, 71–87. Cambridge, UK: Cambridge University Press. Monod, Jacques. 1959. Cybernétique enzymatique, unpublished typewritten document deposited at the Pasteur Institute, Paris. Partially translated and quoted in Gayon 2015. Monod, Jacques. 1971. Chance and Necessity. New York: Alfred A. Knopf. Monod, Jacques, Jean-Pierre Changeux, and François Jacob. 1963. Allosteric proteins and cellular control systems. Journal of Molecular Biology 6: 306–329.
Moore, Richard. 2017. The evolution of syntactic structure. Biology & Philosophy 32: 599–613. Morange, Michel. 2006. The protein side of the central dogma: Permanence and change. History and Philosophy of the Life Sciences 28: 513–524. Morange, Michel. 2008. What history tells us XIII. Fifty years of the Central Dogma. Journal of Biosciences 33: 171–175. Morin, Olivier. 2015. How Traditions Live and Die. Oxford: Oxford University Press. Morley, Iain. 2010. Conceptualising quantification before settlement: Activities and issues underlying the conception and use of measurement. In The Archaeology of Measurement: Comprehending Heaven, Earth and Time in Ancient Societies, eds. Morley, Iain, and Colin Renfrew, 7–18. Cambridge, UK: Cambridge University Press. Morris, Kevin V., and John S. Mattick. 2014. The rise of regulatory RNA. Nature Reviews Genetics 15: 423–437. Mufwene, Salikoko S. 2001. The Ecology of Language Evolution. Cambridge, UK: Cambridge University Press. Mullins, Daniel A., Harvey Whitehouse, and Quentin D. Atkinson. 2013. The role of writing and recordkeeping in the cultural evolution of human cooperation. Journal of Economic Behavior and Organization 90S: S141–S151. Nettle, Daniel. 1999. Linguistic Diversity. Oxford: Oxford University Press. Newell, Allen. 1980. Physical symbol systems. Cognitive Science 4: 135–183. Newport, Elissa, Henry Gleitman, and Lila Gleitman. 1977. Mother, I’d rather do it myself: Some effects and non-effects of maternal speech style. In Talking to Children, eds. Snow, Catherine E., and Charles A. Ferguson, 109–149. Cambridge, UK: Cambridge University Press. Nicholson, Daniel J. 2019. Is the cell really a machine? Journal of Theoretical Biology 477: 108–126. Nilsen, Timothy W. 2003. The spliceosome: The most complex macromolecular machine in the cell? BioEssays 25: 1147–1149. Nilsen, Timothy W., and Brenton R. Graveley. 2010. Expansion of the eukaryotic proteome by alternative splicing. Nature 463: 457–463. 
Nissen, Poul, Jeffrey Hansen, Nenad Ban, Peter B. Moore, and Thomas A. Steitz. 2000. The structural basis of ribosome activity in peptide bond synthesis. Science 289: 920–930. Norenzayan, Ara, Azim F. Shariff, Will M. Gervais, Aiyana K. Willard, Rita A. McNamara, Edward Slingerland, and Joseph Henrich. 2016. The cultural evolution of prosocial religions. Behavioral and Brain Sciences 39: doi: 10.1017/S0140525X14001356. North, Douglass C. 1991. Institutions. Journal of Economic Perspectives 5: 97–112. Novikoff, Alex B. 1945. The concept of integrative levels and biology. Science 101: 209–215. O’Brien, Michael J., R. Lee Lyman, Alex Mesoudi, and Todd L. VanPool. 2010. Cultural traits as units of analysis. Philosophical Transactions of the Royal Society of London B: Biological Sciences 365: 3797–3806. Odling-Smee, F. John, Kevin N. Laland, and Marcus W. Feldman. 2003. Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press. Olson, David R. 1977. From utterance to text: The bias of language in speech and writing. Harvard Educational Review 47: 257–281. Olson, David R. 1994. The World on Paper: The Conceptual and Cognitive Implications of Writing and Reading. Cambridge, UK: Cambridge University Press.
Olson, David R. 2006. Oral discourse in a world of literacy. Research in the Teaching of English 41: 136–179. Olson, David R. 2016. The Mind on Paper: Reading, Consciousness and Rationality. Cambridge, UK: Cambridge University Press. Olson, David R. 2020. On the use and misuse of literacy: Comments on “The Mind on Paper”. Interchange 51: 89–96. Orgel, Leslie E. 1968. Evolution of the genetic apparatus. Journal of Molecular Biology 38: 381–393. Osterloh, Ian H. 2004. The discovery and development of Viagra® (sildenafil citrate). In Sildenafil, 1–13. New York: Springer. Oyama, Susan. 2000. The Ontogeny of Information: Developmental Systems and Evolution. Durham, NC: Duke University Press. Padgett, Richard A. 2012. New connections between splicing and human disease. Trends in Genetics 28: 147–154. Page, Michael I. 1986. The specificity of enzyme-substrate interactions. In Accuracy in Molecular Processes, eds. Kirkwood, T. B. L., R. F. Rosenberger, and D. J. Galas, 37–66. London: Chapman and Hall. Pagel, Mark. 2006. Darwinian cultural evolution rivals genetic evolution. Behavioral and Brain Sciences 29: 360. Pagel, Mark. 2009. Human language as a culturally transmitted replicator. Nature Reviews Genetics 10: 405–415. Pagel, Mark. 2016. Language: Why us and only us? Trends in Ecology & Evolution 31: 258–259. Pattee, Howard H. 1953. The scanning x-ray microscope. Journal of the Optical Society of America 43: 61–62. Pattee, Howard H. 1966. Physical theories, automata, and the origin of life. In Natural Automata and Useful Simulations, eds. Pattee, Howard H., Edgar A. Edelsack, Louis Fein, and Arthur B. Callahan, 73–104. Washington: Spartan Books. Pattee, Howard H. 1968. The physical basis of coding and reliability in biological evolution. In Towards a Theoretical Biology, Volume 1: Prolegomena, ed. Waddington, C. H., 67–93. Chicago: Aldine. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1969. How does a molecule become a message? 
Developmental Biology Supplement 3: 1–16. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1972a. Laws and constraints, symbols and languages. In Towards a Theoretical Biology, Volume 4: Essays, ed. Waddington, Conrad H., 248–258. Chicago: Aldine. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1972b. The nature of hierarchical controls in living matter. In Foundations of Mathematical Biology, ed. Rosen, Robert, 1–22. New York: Academic Press. Pattee, Howard H. 1973. The physical basis and origin of hierarchical control. In Hierarchy Theory: The Challenge of Complex Systems, ed. Pattee, Howard H., 73–108. New York: George Braziller. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1974. Discrete and continuous processes in computers and brains. In Physics and Mathematics of the Nervous System, eds. Conrad, Michael, Werner Güttinger, and Mario Dal Cin, 128–148. New York: Springer-Verlag. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1977. Dynamic and linguistic modes of complex systems. International Journal of General Systems 3: 259–266.
Pattee, Howard H. 1978. The complementarity principle in biological and social structures. Journal of Social and Biological Structures 1: 191–200. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1979. Complementarity vs. reduction as explanation of biological complexity. American Journal of Physiology 236: 241–246. Pattee, Howard H. 1980. Clues from molecular symbol systems. In Signed and Spoken Language: Biological Constraints on Linguistic Form, eds. Bellugi, Ursula, and Michael Studdert-Kennedy, 261–274. Weinheim: Verlag Chemie. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 1982a. The need for complementarity in models of cognitive behavior: A response to Fowler and Turvey. In Cognition and the Symbolic Processes, Volume 2, 21–34. Hillsdale, NJ: Lawrence Erlbaum. Pattee, Howard H. 1982b. Cell psychology: An evolutionary approach to the symbol-matter problem. Cognition and Brain Theory 5: 325–341. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 2001. The physics of symbols: Bridging the epistemic cut. BioSystems 60: 5–21. Pattee, Howard H. 2005. The physics and metaphysics of biosemiotics. Journal of Biosemiotics 1: 281–301. Pattee, Howard H. 2007. The necessity of biosemiotics: Matter-symbol complementarity. In Introduction to Biosemiotics, ed. Barbieri, Marcello, 115–132. Dordrecht: Springer. [reprinted in Pattee & Rączaszek-Leonardi (2012)] Pattee, Howard H. 2008. Physical and functional conditions for symbols, codes, and languages. Biosemiotics 1: 147–168. Pattee, Howard H. 2009. Response by HH Pattee to Jon Umerez’s paper: “Where does Pattee’s ‘how does a molecule become a message?’ belong in the history of biosemiotics?”. Biosemiotics 2: 291–302. Pattee, Howard H., Edgar A. Edelsack, Louis Fein, and Arthur B. Callahan, eds. 1966. Natural Automata and Useful Simulations. Washington: Spartan Books. Pattee, Howard H., and Joanna Rączaszek-Leonardi, eds. 2012. Laws, Language, and Life. 
New York: Springer. Paul, Robert A. 2015. Mixed Messages: Cultural and Genetic Inheritance in the Constitution of Human Society. Chicago: University of Chicago Press. Pearce, Ben K. D., Ralph E. Pudritz, Dmitry A. Semenov, and Thomas K. Henning. 2017. Origin of the RNA world: The fate of nucleobases in warm little ponds. Proceedings of the National Academy of Sciences 114: 11327–11332. Pelikan, Pavel. 2011. Evolutionary developmental economics: How to generalize Darwinism fruitfully to help comprehend economic change. Journal of Evolutionary Economics 21: 341–366. Peter, Isabelle S., and Eric H. Davidson. 2011. Evolution of gene regulatory networks that control embryonic development of the body plan. Cell 144: 970–985. Peter, Isabelle S., and Eric H. Davidson. 2015. Genomic Control Process: Development and Evolution. New York: Academic Press. Pinker, Steven. 2006. Deep commonalities between life and mind. In Richard Dawkins: How a Scientist Changed the Way We Think, eds. Grafen, Alan, and Mark Ridley, 130–141. Oxford: Oxford University Press. Pinker, Steven, and Paul Bloom. 1990. Natural language and natural selection. Behavioral and Brain Sciences 13: 707–727.
Pinker, Steven, and Ray Jackendoff. 2005. The faculty of language: What’s special about it? Cognition 95: 201–236. Planck, Max. 1925. A Survey of Physics. London: Methuen & Co. Ltd. Platnick, Norman I., and H. Don Cameron. 1977. Cladistic methods in textual, linguistic, and phylogenetic analysis. Systematic Zoology 26: 380–385. Platt, John R. 1962. A ‘book-model’ of genetic information transfer in cells and tissues. In Horizons in Biochemistry: Albert Szent-Györgyi Dedicatory Volume, eds. Kasha, Michael, and Bernard Pullman, 167–187. New York: Academic Press. Plotkin, Henry C. 1993. Darwin Machines and the Nature of Knowledge. Cambridge, MA: Harvard University Press. Plotkin, Henry C. 2003. The Imagined World Made Real: Towards a Natural Science of Culture. New Brunswick, NJ: Rutgers University Press. Polanyi, Michael. 1967. Life transcending physics and chemistry. Chemical & Engineering News 45: 54–66. Polanyi, Michael. 1968. Life’s irreducible structure. Science 160: 1308–1312. Popper, Karl R. 1972. Objective Knowledge: An Evolutionary Approach. Oxford: Oxford University Press. Port, Robert F. 2010. Language as a social institution: Why phonemes and words do not live in the brain. Ecological Psychology 22: 304–326. Post, Emil. 2004. Absolutely unsolvable problems and relatively undecidable propositions—account of an anticipation. In The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, ed. Davis, Martin, 340–406. Mineola, NY: Dover Publications. Powers, Simon T., Carel P. van Schaik, and Laurent Lehmann. 2016. How institutions shaped the last major evolutionary transition to large-scale human societies. Philosophical Transactions of the Royal Society B: Biological Sciences 371: 20150098. Pressman, Abe, Celia Blanco, and Irene A. Chen. 2015. The RNA world as a model system to study the origin of life. Current Biology 25: R953–R963. Priest, George L. 1977. 
The common law process and the selection of efficient rules. Journal of Legal Studies 6: 65–82. Progovac, Ljiljana. 2015. Evolutionary Syntax. Oxford: Oxford University Press. Puigbò, Pere, Yuri I. Wolf, and Eugene V. Koonin. 2010. The tree and net components of prokaryote evolution. Genome Biology and Evolution 2: 745–756. Pylyshyn, Zenon W. 1984. Computation and Cognition. Cambridge, MA: MIT Press. Quammen, David. 2018. The Tangled Tree: A Radical New History of Life. New York: Simon and Schuster. Radzicka, Anna, and Richard Wolfenden. 1995. A proficient enzyme. Science 267: 90–93. Raible, Wolfgang. 2001. Linguistics and genetics: Systematic parallels. In Language Typology and Language Universals, eds. Haspelmath, Martin, Ekkehard König, Wulf Oesterreicher, and Wolfgang Raible, 103–123. Berlin: Walter de Gruyter. Ramachandran, Vilayanur S., and Edward M. Hubbard. 2001. Synaesthesia—A window into perception, thought and language. Journal of Consciousness Studies 8: 3–34. Rambidi, Nicholas G. 2014. Molecular Computing. Vienna: Springer. Ranea, Juan A. G., Antonio Sillero, Janet M. Thornton, and Christine A. Orengo. 2006. Protein superfamily evolution and the last universal common ancestor (LUCA). Journal of Molecular Evolution 63: 513–525. Ransom, Robert. 1981. Computers and Embryos: Models in Developmental Biology. New York: John Wiley & Sons.
Raoult, Didier, and Patrick Forterre. 2008. Redefining viruses: Lessons from Mimivirus. Nature Reviews Microbiology 6: 315–319. Ratner, Vadim A. 1974. The genetic language. Progress in Theoretical Biology 3: 144–228. Ravenhall, Matt, Nives Škunca, Florent Lassalle, and Christophe Dessimoz. 2015. Inferring horizontal gene transfer. PLoS Computational Biology 11: 1–16. Reed, Edward S. 1986. James J. Gibson’s revolution in perceptual psychology: A case study of the transformation of scientific ideas. Studies in History and Philosophy of Science Part A 17: 65–98. Reed, Edward S. 1991. Cognition as the cooperative appropriation of affordances. Ecological Psychology 3: 135–158. Reed, Edward S. 1996. Encountering the World: Toward an Ecological Psychology. Oxford: Oxford University Press. Renfrew, Colin, and Iain Morley. 2010. Measure: Towards the construction of our world. In The Archaeology of Measurement: Comprehending Heaven, Earth and Time in Ancient Societies, eds. Morley, Iain, and Colin Renfrew, 1–4. Cambridge, UK: Cambridge University Press. Rich, Alexander. 1962. On the problems of evolution and biochemical information transfer. In Horizons in Biochemistry: Albert Szent-Györgyi Dedicatory Volume, eds. Kasha, Michael, and Bernard Pullman, 103–126. New York: Academic Press. Richerson, Peter J. 2017. Recent critiques of dual inheritance theory. Evolutionary Studies in Imaginative Culture 1: 203–211. Richerson, Peter J., and Robert Boyd. 2005. Not by Genes Alone. Chicago: University of Chicago Press. Ritt, Nikolaus. 2004. Selfish Sounds and Linguistic Evolution: A Darwinian Approach to Language Change. Cambridge, UK: Cambridge University Press. Rivera, Maria C., Ravi Jain, Jonathan E. Moore, and James A. Lake. 1998. Genomic evidence for two functionally distinct gene classes. Proceedings of the National Academy of Sciences 95: 6239–6244. Robbins, Philip, and Murat Aydede, eds. 2008. The Cambridge Handbook of Situated Cognition. 
Cambridge, UK: Cambridge University Press. Robertson, Michael P., and Gerald F. Joyce. 2011. The origins of the RNA world. In RNA Worlds: From Life’s Origins to Diversity in Gene Regulation, eds. Atkins, John F., Raymond F. Gesteland, and Thomas R. Cech, 21–42. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press. Robertson, Michael P., and Gerald F. Joyce. 2014. Highly efficient self-replicating RNA enzymes. Chemistry and Biology 21: 238–245. Roederer, Juan G. 2005. Information and Its Role in Nature. Berlin & Heidelberg: Springer-Verlag. Rohwer, Forest, and Rebecca Vega Thurber. 2009. Viruses manipulate the marine environment. Nature 459: 207–212. Rosen, Robert. 1969. Hierarchical organization in automata theoretic models of biological systems. In Information Processing in the Nervous System, ed. Leibovic, K. L., 179–199. New York: Springer-Verlag. Sabour, Davood, and Hans R. Schöler. 2012. Reprogramming and the mammalian germline: The Weismann barrier revisited. Current Opinion in Cell Biology 24: 716–723. Salyers, Abigail A., and Carlos F. Amábile-Cuevas. 1997. Why are antibiotic resistance genes so resistant to elimination? Antimicrobial Agents and Chemotherapy 41: 2321–2325.
Sanger, Frederick. 1988. Sequences, sequences, and sequences. Annual Review of Biochemistry 57: 1–29. Sankoff, David. 1973. Parallels between genetics and lexicostatistics. In Lexicostatistics in Genetic Linguistics: Proceedings of the Yale Conference, Yale University, April 3–4, 1971, ed. Dyen, Isadore, 64–73. The Hague: Mouton. Sapir, Edward. 1921. Language: An Introduction to the Study of Speech. New York: Harcourt, Brace. Sarkar, Sahotra. 1996. Decoding “coding”: Information and DNA. Bioscience 46: 857–864. Sarkar, Sahotra. 2005. Molecular Models of Life: Philosophical Papers on Molecular Biology. Cambridge, MA: MIT Press. Schieber, Mark H., and Marco Santello. 2004. Hand function: Peripheral and central constraints on performance. Journal of Applied Physiology 96: 2293–2300. Schiffer, Michael B., and James M. Skibo. 1987. Theory and experiment in the study of technological change. Current Anthropology 28: 595–622. Schleif, Robert. 1988. DNA binding by proteins. Science 241: 1182–1187. Schmandt-Besserat, Denise. 1987. Oneness, twoness, threeness. The Sciences 27: 44–48. Schmandt-Besserat, Denise. 1996. How Writing Came About. Austin, TX: University of Texas Press. Schmandt-Besserat, Denise. 2010. The token system of the ancient Near East: Its role in counting, writing, the economy and cognition. In The Archaeology of Measurement: Comprehending Heaven, Earth and Time in Ancient Societies, eds. Morley, Iain, and Colin Renfrew, 27–34. Cambridge, UK: Cambridge University Press. Schneider, Thomas D. 2002. Consensus sequence Zen. Applied Bioinformatics 1: 111. Schneider, Thomas D., and R. Michael Stephens. 1990. Sequence logos: A new way to display consensus sequences. Nucleic Acids Research 18: 6097–6100. Schwander, Tanja, Romain Libbrecht, and Laurent Keller. 2014. Supergenes and complex phenotypes. Current Biology 24: R288–R294. Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. 
New Haven, CT: Yale University Press. Scott, James C. 2017. Against the Grain: A Deep History of the Earliest States. New Haven, CT: Yale University Press. Scott-Phillips, Thom. 2014. Speaking Our Minds: Why Human Communication is Different, and How Language Evolved to Make it Special. New York: Palgrave MacMillan. Searle, John R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3: 417–457. Searle, John R. 1995. The Construction of Social Reality. New York: Simon and Schuster. Searle, John R. 2010. Making the Social World: The Structure of Human Civilization. Oxford: Oxford University Press. Searls, David B. 1997. Linguistic approaches to biological sequences. Computer Applications in the Biosciences: CABIOS 13: 333–344. Searls, David B. 2002. The language of genes. Nature 420: 211–217. Searls, David B. 2003. Trees of life and of language. Nature 426: 391–392. Sebeok, Thomas A. 1991. A Sign is Just a Sign. Bloomington, IN: Indiana University Press. Sereno, Martin I. 1991. Four analogies between biological and cultural/linguistic evolution. Journal of Theoretical Biology 151: 467–507. Shannon, Claude E. 1948. A mathematical theory of communication. Bell System Technical Journal 27: 379–423. Shapiro, James A. 2009. Revisiting the central dogma in the 21st century. Annals of the New York Academy of Sciences 1178: 6–28.
Sharp, Phillip A. 1994. Split genes and RNA splicing. Angewandte Chemie International Edition in English 33: 1229–1240.
Sherrington, Charles Scott. 1953. Man on His Nature: The Gifford Lectures Edinburgh 1937. Cambridge, UK: Cambridge University Press.
Shin, Jaeweon, Michael Holton Price, David H. Wolpert, Hajime Shimao, Brendan Tracey, and Timothy A. Kohler. 2020. Scale and information-processing thresholds in Holocene social evolution. Nature Communications 11: 1–8.
Shumaker, Robert W., Kristina R. Walkup, and Benjamin B. Beck. 2011. Animal Tool Behavior: The Use and Manufacture of Tools by Animals. Baltimore: JHU Press.
Simler, Kevin, and Robin Hanson. 2018. The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford: Oxford University Press.
Simon, Herbert A. 1969. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Simon, Herbert A. 1973. The organization of complex systems. In Hierarchy Theory: The Challenge of Complex Systems, ed. Pattee, Howard H., 1–27. New York: George Braziller.
Simon, Herbert A. 2005. The structure of complexity in an evolving world: The role of near decomposability. In Modularity: Understanding the Development and Evolution of Complex Systems, eds. Callebaut, Werner, and Diego Rasskin-Gutman, ix–xiii. Cambridge, MA: MIT Press.
Skinner, B. F. 1957. Verbal Behavior. New York: Appleton-Century-Crofts.
Skinner, B. F. 1974. About Behaviorism. New York: Knopf.
Smith, Stuart. 1994. The animal fatty acid synthase: One gene, one polypeptide, seven enzymes. FASEB Journal 8: 1248–1259.
Soucy, Shannon M., Jinling Huang, and J. Peter Gogarten. 2015. Horizontal gene transfer: Building the web of life. Nature Reviews Genetics 16: 472–482.
Souza, Valeria, Luis E. Eguiarte, Janet Siefert, and James J. Elser. 2008. Microbial endemism: Does phosphorus limitation enhance speciation? Nature Reviews Microbiology 6: 559–564.
Spitz, François, and Eileen E. M. Furlong. 2012. Transcription factors: From enhancer binding to developmental control. Nature Reviews Genetics 13: 613–626.
Steedman, Mark. 2009. Foundations of universal grammar in planned action. In Language Universals, eds. Christiansen, Morten H., Chris Collins, and Shimon Edelman, 174–199. Oxford: Oxford University Press.
Stegmann, Ulrich, ed. 2013. Animal Communication Theory: Information and Influence. Cambridge, UK: Cambridge University Press.
Stent, Gunther S. 1968. That was the molecular biology that was. Science 160: 390–395.
Sterelny, Kim. 2001. Niche construction, developmental systems, and the extended replicator. In Cycles of Contingency: Developmental Systems and Evolution, eds. Oyama, Susan, Paul E. Griffiths, and Russell D. Gray, 333–350. Cambridge, MA: MIT Press.
Sterelny, Kim. 2003. Thought in a Hostile World. Oxford: Blackwell Publishing.
Sterelny, Kim. 2006. Memes revisited. British Journal for the Philosophy of Science 57: 145–165.
Sterelny, Kim. 2012. The Evolved Apprentice. Cambridge, MA: MIT Press.
Sterelny, Kim, and Paul E. Griffiths. 1999. Sex and Death: An Introduction to Philosophy of Biology. Chicago: University of Chicago Press.
Sterelny, Kim, Kelly C. Smith, and Michael Dickison. 1996. The extended replicator. Biology and Philosophy 11: 377–403.
Stockbridge, Randy B., Charles A. Lewis, Yang Yuan, and Richard Wolfenden. 2010. Impact of temperature on the time required for the establishment of primordial biochemistry, and for the evolution of enzymes. Proceedings of the National Academy of Sciences 107: 22102–22105.
Stout, Dietrich. 2011. Stone toolmaking and the evolution of human culture and cognition. Philosophical Transactions of the Royal Society of London B: Biological Sciences 366: 1050–1059.
Strong, Michael, and Philip M. Prinz. 1997. A study of the relationship between American Sign Language and English literacy. Journal of Deaf Studies and Deaf Education 2: 37–46.
Sudasinghe, P. G., C. R. Wijesinghe, and A. R. Weerasinghe. 2013. Prediction of horizontal gene transfer in Escherichia coli using machine learning. In Proceedings of International Conference on Advances in ICT for Emerging Regions, 118–124. Colombo: IEEE.
Syvanen, Michael. 2012. Evolutionary implications of horizontal gene transfer. Annual Review of Genetics 46: 341–358.
Szathmáry, Eörs. 2015. Toward major evolutionary transitions theory 2.0. Proceedings of the National Academy of Sciences 112: 10104–10111.
Szustakowski, Joseph D., Simon Kasif, and Zhiping Weng. 2005. Less is more: Towards an optimal universal description of protein folds. Bioinformatics 21: ii66–ii71.
Takeuchi, Nobuto, Paulien Hogeweg, and Eugene V. Koonin. 2011. On the origin of DNA genomes: Evolution of the division of labor between template and catalyst in model replicator systems. PLoS Computational Biology 7: e1002024.
Talmy, Leonard. 2000. Toward a Cognitive Semantics, Volume 1. Cambridge, MA: MIT Press.
Talmy, Leonard. 2003. The representation of spatial structure in spoken and signed language. In Perspectives on Classifier Constructions in Sign Language, ed. Emmorey, Karen, 169–195. Hove, UK: Psychology Press.
Tamariz, Monica. 2019a. Replication and emergence in cultural transmission. Physics of Life Reviews 30: 47–71.
Tamariz, Monica. 2019b. Action replication ultimately supports all cultural transmission. Physics of Life Reviews 30: 86–88.
Tanford, Charles, and Jacqueline Reynolds. 2001. Nature’s Robots: A History of Proteins. Oxford: Oxford University Press.
Taylor, Frederick Winslow. 1911. The Principles of Scientific Management. New York: Harper & Brothers.
Taylor, Scott, and Leonardo Campagna. 2016. Avian supergenes. Science 351: 446–447.
Thomas, Christopher M., and Kaare M. Nielsen. 2005. Mechanisms of, and barriers to, horizontal gene transfer between bacteria. Nature Reviews Microbiology 3: 711–721.
Thomas, Frédéric, A. Schmidt-Rhaesa, Guilhaume Martin, C. Manu, P. Durand, and F. Renaud. 2002. Do hairworms (Nematomorpha) manipulate the water seeking behaviour of their terrestrial hosts? Journal of Evolutionary Biology 15: 356–361.
Thompson, Martin J., and Chris D. Jiggins. 2014. Supergenes and their role in evolution. Heredity 113: 1–8.
Todd, Annabel E., Christine A. Orengo, and Janet M. Thornton. 2001. Evolution of function in protein superfamilies, from a structural perspective. Journal of Molecular Biology 307: 1113–1143.
Tomasello, Michael. 2008. Origins of Human Communication. Cambridge, MA: MIT Press.
Traut, Thomas. 2008. Allosteric Regulatory Enzymes. New York: Springer.
Tsang, Edward. 1993. Foundations of Constraint Satisfaction. New York: Academic Press.
Tsukamoto, Yusuke, John S. Mattick, and Salih J. Wakil. 1983. The architecture of the animal fatty acid synthetase complex. Journal of Biological Chemistry 258: 15312–15322.
Tucker, Mike, and Rob Ellis. 1998. On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance 24: 830–846.
Turchin, Peter, Harvey Whitehouse, Andrey Korotayev, Pieter Francois, Daniel Hoyer, Peter Peregrine, Gary Feinman, Charles Spencer, Nikolay Kradin, and Thomas E. Currie. 2018. Evolutionary pathways to statehood: Old theories and new data. SocArXiv doi:10.31235/osf.io/h7tr6
Turing, Alan M. 1936. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42: 230–265.
Turner, J. Scott. 2002. The Extended Organism: The Physiology of Animal-Built Structures. Cambridge, MA: Harvard University Press.
Turvey, Michael T. 2019. Lectures on Perception: An Ecological Perspective. New York: Routledge.
Vakoch, Douglas A. 2013. Astrobiology, History, and Society. New York: Springer.
Venter, J. Craig. 2014. Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life. New York: Penguin.
Vetsigian, Kalin, Carl R. Woese, and Nigel Goldenfeld. 2006. Collective evolution and the genetic code. Proceedings of the National Academy of Sciences 103: 10696–10701.
von Frisch, Karl. 1974. Animal Architecture. New York: Harcourt Brace Jovanovich.
von Neumann, John. 1961. The general and logical theory of automata. In John von Neumann Collected Works, Vol. 5, ed. Taub, A. H., 288–326. Oxford: Pergamon.
von Neumann, John. 1966. Theory of Self-Reproducing Automata, ed. Burks, Arthur W. Urbana, IL: University of Illinois Press.
von Uexküll, Jakob. 1957. A stroll through the worlds of animals and men. In Instinctive Behavior, ed. Schiller, Claire H., 5–80. Madison, CT: International Universities Press.
Vygotsky, L. S. 1962. Thought and Language. Cambridge, MA: MIT Press.
Vygotsky, L. S. 1978. Mind in Society. Cambridge, MA: Harvard University Press.
Wächtershäuser, Günter. 1997. The origin of life and its methodological challenge. Journal of Theoretical Biology 187: 483–494.
Waddington, Conrad H. 1960. Man as an organism. In Evolution After Darwin, Volume 3: Issues in Evolution, eds. Tax, Sol, and Charles Callendar, 145–174. Chicago: University of Chicago Press.
Waddington, Conrad H., ed. 1968. Towards a Theoretical Biology, Volume 1: Prolegomena. Chicago: Aldine.
Waddington, Conrad H. 1969a. Paradigm for an evolutionary process. In Towards a Theoretical Biology, Volume 2: Sketches, ed. Waddington, Conrad H., 106–128. Chicago: Aldine.
Waddington, Conrad H., ed. 1969b. Towards a Theoretical Biology, Volume 2: Sketches. Chicago: Aldine.
Waddington, Conrad H., ed. 1970. Towards a Theoretical Biology, Volume 3: Drafts. Chicago: Aldine.
Waddington, Conrad H. 1972a. Epilogue. In Towards a Theoretical Biology, Volume 4: Essays, ed. Waddington, Conrad H., 283–289. Chicago: Aldine.
Waddington, Conrad H., ed. 1972b. Towards a Theoretical Biology, Volume 4: Essays. Chicago: Aldine.
Wagman, Jeffrey B. 2020. A guided tour of Gibson’s theory of affordances. In Perception as Information Detection: Reflections on Gibson’s Ecological Approach to Visual Perception, eds. Wagman, Jeffrey B., and Blau, Julia J. C. New York: Routledge.
Wagman, Jeffrey B., and Julia J. C. Blau, eds. 2020. Perception as Information Detection: Reflections on Gibson’s Ecological Approach to Visual Perception. New York: Routledge.
Walker, Sara Imari, and Paul C. W. Davies. 2017. The “hard problem” of life. In From Matter to Life: Information and Causality, eds. Walker, Sara Imari, Paul C. W. Davies, and George F. R. Ellis, 19–37. Cambridge, UK: Cambridge University Press.
Wang, Zefeng, and Christopher B. Burge. 2008. Splicing regulation: From a parts list of regulatory elements to an integrated splicing code. RNA 14: 802–813.
Washburn, Sherwood L. 1960. Tools and human evolution. Scientific American 203: 63–75.
Waters, Dennis P. 1979. Correspondence. ETC: A Review of General Semantics 36: 298–304.
Waters, Dennis P. 1990. Natural Symbol Systems: The Evolution of Linguistic Constraints on Fit and Function. Ph.D. Dissertation, Binghamton University, Binghamton, NY.
Waters, Dennis P. 2011. Von Neumann’s theory of self-reproducing automata: A useful framework for biosemiotics? Biosemiotics 5: 1–11.
Waters, Dennis P. 2012a. The interplay of languaging and literacy: Clues from an RNA world. Presented at First International Conference on Interactivity, Language and Cognition, University of Southern Denmark, Odense.
Waters, Dennis P. 2012b. From extended phenotype to extended affordance: Distributed language at the intersection of Gibson and Dawkins. Language Sciences 34: 507–512.
Waters, Dennis P. 2014. How does language acquire children? Morten Christiansen and the road to a Gibsonian science of language. Poster at Finding Common Ground: Social, Ecological, and Cognitive Perspectives on Language Use, University of Connecticut, Storrs, CT.
Watson, James D., Tania A. Baker, Stephen P. Bell, Alexander Gann, Michael Levine, and Richard Losick. 2014. Molecular Biology of the Gene, 7th Edition. New York: Pearson.
Watson, James D., and Francis H. C. Crick. 1953. Molecular structure of nucleic acids. Nature 171: 737–738.
Watson, James D., and Cyrus Levinthal. 1965. Molecular Biology of the Gene. New York: W.A. Benjamin.
Weaver, Warren. 1949. Recent contributions to the mathematical theory of communication. In The Mathematical Theory of Communication, eds. Shannon, Claude E., and Warren Weaver, 3–28. Urbana, IL: University of Illinois Press.
Wesenberg-Lund, Carl. 1931. Contributions to the Development of the Trematoda Digenea, Volume 1: The Biology of Leucochloridium Paradoxum. Copenhagen: Høst i Komm.
Whittaker, John C. 1994. Flintknapping: Making and Understanding Stone Tools. Austin, TX: University of Texas Press.
Wigner, Eugene. 1964. Events, laws of nature, and invariance principles. Science 145: 995–999.
Wilkins, John S. 1998. What’s in a meme? Reflections from the perspective of the history and philosophy of evolutionary biology. Journal of Memetics—Evolutionary Models of Information Transmission 2: 1–21.
Williams, George C. 1985. A defense of reductionism in evolutionary biology. In Oxford Surveys in Evolutionary Biology, 1–27. Oxford: Oxford University Press.
Wilson, David Sloan. 2002. Darwin’s Cathedral. Chicago: University of Chicago Press.
Wiltschko, Wolfgang, and Roswitha Wiltschko. 2005. Magnetic orientation and magnetoreception in birds and other animals. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology 191: 675–693.
Wimsatt, William C. 1980. The units of selection and the structure of the multilevel genome. In Proceedings of the Biennial Meeting of the Philosophy of Science Association, Volume Two: Symposia and Invited Papers, 122–183. Chicago: University of Chicago Press.
Wimsatt, William C. 1999. Genes, memes and cultural heredity. Biology and Philosophy 14: 279–310.
Withagen, Rob, and Anthony Chemero. 2009. Naturalizing perception: Developing the Gibsonian approach to perception along evolutionary lines. Theory & Psychology 19: 363–389.
Wittgenstein, Ludwig. 1958. Philosophical Investigations. New York: Prentice-Hall.
Woese, Carl R. 1973. Evolution of the genetic code. Die Naturwissenschaften 60: 447–459.
Woese, Carl R. 1998. The universal ancestor. Proceedings of the National Academy of Sciences 95: 6854–6859.
Woese, Carl R. 2000. Interpreting the universal phylogenetic tree. Proceedings of the National Academy of Sciences 97: 8392–8396.
Woese, Carl R. 2002. On the evolution of cells. Proceedings of the National Academy of Sciences 99: 8742–8747.
Wolfenden, Richard. 2011. Benchmark reaction rates, the stability of biological molecules in water, and the evolution of catalytic power in enzymes. Annual Review of Biochemistry 80: 645–667.
Wong, Leonard, and Stephen J. Gerras. 2015. Lying to Ourselves: Dishonesty in the Army Profession. Carlisle Barracks, PA: Strategic Studies Institute and U.S. Army War College Press.
Wray, Alison. 1998. Protolanguage as a holistic system for social interaction. Language & Communication 18: 47–67.
Yarus, Michael. 2011. Getting past the RNA world: The initial Darwinian ancestor. In RNA Worlds: From Life’s Origins to Diversity in Gene Regulation, eds. Atkins, John F., Raymond F. Gesteland, and Thomas R. Cech, 43–50. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press.
Young, J. Peter W. 2016. Bacteria are smartphones and mobile genes are apps. Trends in Microbiology 24: 931–932.
Zhang, Linlin, Anyi Mazo-Vargas, and Robert D. Reed. 2017. Single master regulatory gene coordinates the evolution and development of butterfly color and iridescence. Proceedings of the National Academy of Sciences 114: 10707–10712.
Zipf, George Kingsley. 1935. The Psycho-biology of Language. New York: Houghton Mifflin.
Zuckerkandl, Emile, and Linus Pauling. 1965. Molecules as documents of evolutionary history. Journal of Theoretical Biology 8: 357–366.
INDEX
abstraction 6–7, 31, 34, 63, 76, 96, 106, 111–112, 114, 118–119, 131n58, 132n65, 138, 142, 144, 168–170, 173, 176–177, 181n87, 183n112, 204–205, 212–216
abstraction, grammar of see grammar of abstraction
action at a distance 22, 91, 94
Adamo, Shelley 90
affordances 6, 9, 70–83, 85n49, 85n62, 87–89, 91–106, 108n33, 111–115, 126, 129, 144, 156, 164, 171–174, 177, 182n110, 185–187, 193–198, 200, 203–205, 210n116, 213, 216–217
algorithm 10, 127, 135, 145, 212
allosterism 69, 107n26, 114–119, 123–124, 126, 130n12, 130n15, 138, 144, 160, 162, 178n19, 185, 201, 213–214, 217
alphabet 3, 13, 15, 22–23, 32–33, 36n37, 36n46, 56, 105, 127, 146, 149, 164, 173–174, 218, 222, 225–226, 228, 230n4
alternative splicing 81–82, 86nn88–89, 96, 105, 123, 125–127, 131n46
Altmann, Stuart 24, 36, 37n65, 67, 108n42
Anfinsen, Christian 64
animal communication 6, 9, 21, 24, 94, 108n41, 217, 219
astrobiology 6, 8, 177, 219
asymmetric inheritance 144, 200, 206n9; see also chimera
Atkinson, Quentin 19
Aunger, Robert 89, 175, 183n121
Baltimore, David 4
behavioral manipulation 89–91, 93–95, 97–99, 102, 106, 107n24, 107n26, 201
Benner, Steven 37n48, 62, 159, 161, 178n23, 180n50, 182n94
Bennett, Charles 4, 130n27
Biller, Steven 197
Blackmore, Susan 135
blue- and white-collar 31, 33, 117–119, 122–128, 131, 141, 176, 203
Bohr, Niels 150n2
bonds, strong and weak 64, 84nn18–19, 160, 188–190, 203, 209n99
Borghi, Anna 132n65
boundary conditions 47–51, 53, 58n25, 64, 72, 89, 99–100, 118, 134, 184–188, 190–191, 193, 201–205, 206n6; see also constraints
Boyd, Richard 24, 191, 202, 206n9, 209n97
Brenner, Sydney 5–6, 44, 65, 68, 113, 145, 167
Campbell, Donald 50, 153n54, 188, 207n24
Carello, Claudia 72–73
catalysis 62, 65–71, 73–74, 77–81, 84n26, 112–116, 118, 126, 130n12, 156–162, 170, 176–177, 178n23, 178n26, 179n40, 180n50, 182n94, 199, 215, 217
catalytic manipulation 69, 73–74, 77–78, 113, 115, 118, 157–158, 161, 215
Cech, Thomas 157–158, 161
central dogma 6, 142, 146, 148, 152n47, 161–162, 215
Chafe, Wallace 35n16, 96
Chater, Nick 6, 100–102
Chemero, Anthony 71, 122
chimera 90, 105, 186–188, 196, 201, 206n3; see also asymmetric inheritance
Chomsky, Noam 11n29, 38n67, 56, 110n81, 129n3, 218–219
Christiansen, Morten 6, 18, 100–102
Clark, Andy 107n4, 110n81, 154n61, 195
classification 43–47, 57, 58n20, 61, 68, 72, 79, 90, 123, 127, 169–171, 187, 196, 201–202, 204–205, 211n127, 215, 218; see also decisions; equivalence classes
Cloak, F.T., Jr. 193, 198
code, computer 2–5, 10n1, 23, 25, 27, 33–34, 57, 130n27, 173, 216, 218
code, genetic 13, 27, 37n47, 56, 105, 109n73, 117, 147, 179n36, 226–229
codons 13, 56, 105, 117, 147, 227–229, 230n5
communication, animal 6, 9, 21, 24, 94, 108n41, 217, 219
complexity, evolution of 23, 31, 57, 59n47, 105, 127, 129, 136–138, 155–156, 163, 165, 214–215, 217
complication, threshold of 135–137, 141, 155–156, 159–165, 170, 175–177, 179n37, 214–215, 217
computation 16, 19, 70–71, 121, 137–139, 145, 149, 173, 212–215, 220n3
configuration 49, 61–62, 69, 89–94, 96–97, 101, 107n26, 115–118, 126, 129, 138–141, 143–144, 148–150, 152n36, 164, 187, 193–198, 200–201, 203, 208n62, 209n99, 213–214, 217; see also construction
consciousness 8–9, 11n26, 102, 206n6, 209n99
constraint 5–6, 9, 17, 29, 35n5, 35n13, 35n17, 39–59, 61, 69, 77–78, 80, 83n4, 84n19, 87–88, 93–96, 99, 104, 114–115, 120, 123, 126–128, 130n15, 134, 137, 148–150, 154n62, 169–170, 173, 184–191, 194–196, 201–205, 206n1, 206n7, 209n99, 210n116, 212–213, 215–217
construction 61–62, 69–70, 74, 77–79, 83, 88–91, 106, 116–117, 125–126, 129, 135–137, 139–141, 143–145, 147, 161, 164, 176, 187, 196, 198, 200–201, 203, 209n99, 213, 217, 222–224, 229; see also configuration
constructor, universal 139, 140, 144, 153, 164, 196, 198
continuum of abstraction 34, 212–220
control automaton C 141–143, 146, 151n28, 161, 217
control hierarchy 5, 46–47, 51, 53, 56–57, 74, 96, 99, 106, 112, 119–120, 123, 125–126, 141, 143, 150, 158, 170, 213–215
copying 26, 126, 133–135, 139–143, 145, 149–150, 159, 179n31, 193, 198; see also self-reproduction
core genome 197–199, 201–202, 205, 208n68
Crick, Francis 5, 13, 28, 134–135, 142, 146–148, 150, 152–153, 161–162, 215, 224–225, 228
Croft, William 9, 97, 132n71, 209n97
cultural evolution 21, 152n45, 175, 191–194, 200–203, 206, 209n97, 216–217
Darwin, Charles 46, 198, 218
Darwinian 151, 160, 179, 191, 207
Davidson, Eric 123–125, 130n26, 205
Dawkins, Richard 5, 18, 20, 40, 61–62, 67, 87–92, 95, 102–103, 106, 107n4, 107n10, 120, 147, 178n21, 188, 192–193, 202, 208n54
Deacon, Terrence 6, 67, 100–102, 187, 195
decisions 43–45, 47, 58n13, 58n19, 58n25, 61, 68, 141, 143, 161, 169, 171
deictic 103–106, 122, 127; see also grammar of extension
Delbrück, Max 28–29
Demir, Ebru 81–82
Dennett, Daniel 8, 11n26, 65, 85n49, 108n38, 192, 206n6, 208n62, 209n99
descriptive hierarchies 52–54, 56
descriptive sequences 13–16, 18, 21–23, 30–33, 34n4, 35n6, 35n12, 44, 61–62, 64, 74
development 13–14, 23, 27, 48, 55, 80, 105, 118–119, 121, 123–126, 128–129, 131n56, 178n19, 187, 201, 205, 211n128
Dickson, Barry 81–82
displacement (Hockett design feature) 21, 96, 104, 180
domains, protein 78–81, 83, 123, 126, 175
don’t-care conditions 6, 44–45, 47, 53–55, 58n14, 187, 191, 194, 217; see also underdetermination
Doudna, Jennifer 156
downward causation 50
dual control 48–50, 53–54, 56
duality of patterning (Hockett design feature) 23, 36n37, 56
Dyson, Anne 166, 204
Dyson, Freeman 135, 144, 164, 199
ecological psychology 6, 9, 10n13, 71–72
economics 6, 26, 135, 141, 196, 209n80
Eigen, Manfred 62, 151n22
energy degeneracy 25, 33, 36n25, 37nn47–48, 131n61, 151n25, 179n46
entanglement 34, 97, 108n42, 121–122, 124–125, 142, 149, 157, 161–162, 165–166, 170, 176–177, 181n71, 181n75, 195, 213–215
enzymes 5, 21, 24, 43, 55, 62–63, 65–82, 84n24, 84n39, 105, 107n26, 112–128, 130n7, 130n12, 130n17, 130n26, 134, 138, 142–144, 152n47, 156–161, 165, 170–171, 174, 188, 201, 203–205, 213, 216–217, 225
equivalence classes 43–44, 47, 93–94, 112, 116–117, 203–205; see also decisions
Erwin, Douglas 205–206, 209n80, 211n128
Escherichia coli 119, 196–197
eukaryotes 86, 131, 178–179, 206
evolution, cultural 21, 152n45, 175, 191–194, 200–203, 206, 209n97, 216–217
evolution, reticulate 200–201, 206, 209n97, 217
exons 79–83, 105–106, 147, 226
extended phenotype 5, 87–92, 94–95, 98, 103, 106n2, 107n10, 108n33, 171–172, 178n21
extension, grammar of 96, 102, 104–106, 111, 121, 129, 171, 216
Feynman, Richard 58n11
folding 62–65, 70, 74, 78, 83n12, 83n16, 125, 145, 147, 152n34, 156–157, 160–162, 176, 179n40, 179n46, 180n48, 180n50, 188, 215, 217, 222, 225, 228, 230
formal systems 32–34, 56–57, 135, 137, 142, 145, 150n6, 173, 215
Forterre, Patrick 179n37, 195–196
fru gene 82
fungal parasites of ants 90–93, 102, 107nn23–24, 196; see also behavioral manipulation
Gallagher, Shaun 205, 210n199, 210n121
Geertz, Clifford 3
gene regulatory networks 56, 125, 158, 203, 205, 211n128
genes in pieces 78, 80, 96, 105, 126
genetic code 13, 27, 37n47, 56, 105, 109n73, 117, 147, 179n36, 226–229
Gerras, Stephen 188–189
gesture 39–40, 115, 121, 148, 166, 183n126, 195, 213, 215
Gibson, James 6, 9, 70–74, 76, 85n49, 85n51, 85n56, 91–92, 94–95, 144, 172, 182nn100–101
Gifford, David 4
Gilbert, Walter 78, 80, 157
Givón, Talmy 96, 101, 181n87
Glenberg, Arthur 97–98
Gontier, Nathalie 198, 207n29
Goody, Jack 6, 30, 32, 154n59, 163, 165, 169, 182n94, 202
Gott, J. Richard 218
government 28, 76, 166, 189, 204–205
grammar 3, 5–6, 9, 23, 31–32, 56–57, 79–81, 83, 96–99, 102–106, 109n61, 111, 114, 121, 126–129, 171, 216
grammar of abstraction 96, 106, 111, 114, 128–129, 216
grammar of extension 96, 102, 104–106, 111, 121, 129, 171, 216
grammar of interaction 79–81, 83, 96–98, 102–103, 105, 109n61, 111, 126, 129, 216
GRN see gene regulatory networks
grounding of semantics 30, 34, 76, 101, 106, 126–129, 173, 213
Harnad, Stevan 45, 127–128, 180n57
Harris, Roy 36n46, 163, 168
Harris, Zellig 22, 31–32, 37n55, 151n18, 154n60
Heine, Bernd 97, 109n49, 110n95
hierarchy, control 5, 46–47, 51, 53, 56–57, 74, 96, 99, 106, 112, 119–120, 123, 125–126, 141, 143, 150, 158, 170, 213–215
hierarchy, descriptive 52–54, 56
hierarchy, statistical 52–53
Hockett, Charles 6, 21–24, 31, 56, 65, 87, 107n24, 108n39, 165, 180n61
Hodgson, Geoffrey 190, 203–204
holophrases 96–97, 108n41, 115, 214; see also allosterism
Hopper, Paul 78, 96
horizontal gene transfer 196–197, 199–200, 206, 208n63, 208n65, 208n68, 217
Hughes, David 90, 107n20
Hull, David 6, 8, 15, 18, 36n27, 61, 83n5, 83n7, 142, 160, 190–191
incorporeal (laws of physics) 26, 28, 213
inexorable (laws of physics) 1, 9, 12, 28–30, 40, 43, 48, 52, 162, 212–214, 216
inheritance, asymmetric 144, 200, 206n9; see also chimera
initial conditions 41–43, 60
institutions 30, 35n14, 44, 166, 169, 184–190, 202–206, 210n109, 210n116, 210n121, 217–218
interaction, grammar of 79–81, 83, 96–98, 102–103, 105, 109n61, 111, 126, 129, 216
interactor 6, 24, 61–62, 64, 72–73, 75, 77, 83n5, 83n7, 114–115, 118, 126, 143, 147, 156, 162, 172, 201, 205, 213, 217
introns 79–82, 105, 147, 226
irregular verbs 20, 123
Jablonka, Eva 126
Jackendoff, Ray 103–104, 182n110, 220n10
Jacob, François 3–4, 7, 119, 211n127, 230n3
Jakobson, Roman 4
Joyce, Gerald 37n48, 157–161, 177, 178n16, 178n19, 179n31, 179n41, 179n45
Kemeny, John 138
Kendon, Adam 166, 183n26
Kirby, Simon 18, 103
Knudsen, Thorbjørn 83n4, 204
Koonin, Eugene 152n47, 195
Koshland, Daniel 68
Küppers, Bernd-Olaf 35n17, 49, 58n25, 84n18
Kuteva, Tania 97, 109n49, 110n95
lac operon 119–121, 123–124
Laland, Kevin 89, 107n6, 194–195, 207n29
Landau, Barbara 103–104, 182n110
Landauer, Rolf 4, 58n10
language evolution 6, 96–97, 99–102, 104–106, 109n60, 129, 159
laws and rules 27–30, 42
Legos 78–80, 82, 125–126; see also domains, protein
Leibniz, Gottfried 40, 42, 64, 190
Levinson, Stephen 103, 163, 166
Lévi-Strauss, Claude 183n131
Lewontin, Richard 26, 73–75, 130n27
Lieberman-Aiden, Erez 123
literacy 6, 30, 134, 143, 145, 148–150, 154n58, 156, 163–171, 176–177, 180n57
Lovelace, Ada 212–213, 215
Lumsden, Charles 84n19, 187
magnetoreception 75, 144, 171
manipulation 34, 69, 73–74, 77–78, 83n5, 89–91, 93–95, 97–99, 102, 106, 107n24, 107n26, 113, 115, 118, 157–158, 161, 164–165, 170–171, 174, 176–177, 183n126, 201, 215, 217
manipulation, behavioral 89–91, 93–95, 97–99, 102, 106, 107n24, 107n26, 201
manipulation, catalytic 69, 73–74, 77–78, 113, 115, 118, 157–158, 161, 215
manipulation, technologies of 156, 164–165, 170–171, 174, 176–177, 183n126, 217
Martin, William 36n39, 199
mathematics 8, 33–34, 56–57, 118, 149, 173
Mattick, John 158, 178n19
Maynard Smith, John 4, 10n6, 44, 155–156, 179n36
McCloskey, Deirdre 85n8, 100
McInerney, James 26, 198, 209n80
measurement 14, 18, 21, 35n12, 53, 58n12, 75, 80, 90–91, 134, 149, 154n62, 156, 170–177, 182n102, 182n108, 182n110, 217
measurement, technologies of 75, 156, 170–177, 182n102, 182n108, 182n110, 217
memes 6, 192–196, 198, 200–203, 206, 208n62
Merleau-Ponty, Maurice 92, 180n62
Mesoudi, Alex 55, 192, 203
metabolic pathway 23–24, 55, 112–114, 119–121
metalanguage 31–32, 105, 127; see also self-reference
metaphor 3–4, 7–8, 68, 70, 125, 178, 195, 199
Michaels, Claire 72–73
modularity 55–56, 78, 83, 123, 125, 197
monkey, vervet 94–97, 99, 108n38, 115, 118, 144, 167, 213, 219; see also behavioral manipulation
Monod, Jacques 5, 6, 50, 53, 66, 114, 119–120
Morse Code 13, 121, 226, 230n4
mutation 25, 47, 55, 63, 82, 105, 141–142, 153n54, 159, 170, 179n31, 225
networks 55, 95, 113, 124–126, 158, 162, 193, 197–201, 203, 205–206, 209n90, 209n97, 217–218
networks, gene regulatory 56, 125, 158, 203, 205, 211n128
Neumann, John von 6, 135–142, 144–148, 150, 150n5, 151n8, 151n14, 151n24, 151nn27–28, 152nn35–36, 153n52, 155–157, 159, 161–164, 176–177, 179n37, 196, 198, 201, 215, 217, 219
Newell, Allen 38n73
niche construction 88–89, 99–100, 107n10, 108n33
nonholonomic constraint 57n8
Norenzayan, Ara 202, 210n116
North, Douglass 203
noun 31–32, 78, 80, 83, 96–97, 99, 103, 105–106, 108n47, 110n95, 111, 126–127, 129
numerals 173, 183n12
Olson, David 6, 164, 167–169
openness (Hockett design feature) 23, 31
optix gene 128–129
Orgel, Leslie 157, 159, 207n28
p21ras gene 115
Pagel, Mark 40, 100, 191
pan-genome 197–199, 202, 208n86, 209n90; see also horizontal gene transfer
parasites 29, 89–93, 95, 98, 102, 106, 107n24, 107n26, 110n81, 186, 192, 196, 217; see also behavioral manipulation
Pattee, Howard 2, 5, 9, 10nn12–13, 11n27, 16, 26–28, 30, 35n13, 36n45, 37n47, 37n57, 42–44, 46–47, 50–53, 57n8, 58n10, 59n34, 62–65, 78, 109n49, 130n12, 136, 139, 144–145, 147, 150, 151n8, 180n46, 198, 207n23, 207n28, 209n99, 212, 221
pattern recognition 67–68, 73, 77–78, 170, 204, 215; see also perception
Paul, Robert 187–188
perception 9, 14, 22, 45–46, 49, 61–62, 68, 70–78, 84n47, 85n56, 87–103, 105–106, 108n26, 108n33, 111, 113, 115, 122, 126–128, 148, 164, 170–174, 176–177, 182n100, 193, 201, 204–205, 210n116, 217; see also pattern recognition; measurement
phenotype, extended 5, 87–92, 94–95, 98, 103, 106n2, 107n10, 108n33, 171–172, 178n21; see also grammar of extension
phrasal templates 80, 85n80, 149
phylogeny 20, 34, 84n48, 131n56, 154n58
Pinker, Steven 3, 218
Planck, Max 9, 67
pleiotropic 118, 128–129; see also grammar of abstraction
Plotkin, Henry 51, 83n6
Polanyi, Michael 47–49, 64
polymerase 102, 105, 116, 126, 130n17, 130n19, 133, 143, 146–147, 178n16, 224–226
preliterate 165, 167–171, 176–177, 180n57, 181n84, 184, 194, 214–215
prepositions 17–18, 98, 103, 105–106, 218; see also grammar of extension
probability 9, 20–21, 25, 40, 42, 61, 65, 67, 89, 103–106, 108n33, 120, 129, 213, 215–216
Prochlorococcus 197–202, 209n80; see also horizontal gene transfer
Progovac, Liljana 99, 104, 109n61
protein domains 78–81, 83, 123, 126, 175
psychology, ecological 6, 9, 10n13, 71–72
random access 17, 32, 57n1, 121, 123–124, 158, 160, 162, 164, 167–170, 176, 215, 217, 226
rate dependence 16–18, 22, 30, 35n13, 35nn15–16, 44, 50, 58n19, 63, 70, 109n49, 117–118, 121–122, 125, 147–148, 150, 157, 162, 166, 167, 170, 177, 191, 193–195, 204, 212–216
rate independence 16–18, 21, 30, 33, 35nn13–15, 58n19, 63, 70, 104, 109n49, 117, 121–122, 125, 130n21, 134, 140, 147, 157, 162, 166, 169–170, 176–177, 181n76, 191–195, 204, 212–216
recipes 13, 15, 17, 21, 23, 26, 140–143, 145, 149, 191, 193–198, 200–201, 203
reclassification and classification 43–47, 57, 58n20, 61, 68, 72, 79, 90, 123, 127, 169–171, 187, 196, 201–202, 204–205, 211n127, 215, 218
Reed, Edward 54, 71, 73, 75
regulatory genes 31, 82, 118, 123–125, 129; see also blue- and white-collar; gene regulatory networks
replication see copying; self-reproduction
reticulate evolution 200–201, 206, 209n97, 217
ribosome 11n25, 81–82, 102, 105, 116–117, 130n16, 140, 143–144, 153n33, 156–158, 195–196, 215, 217, 229
ribozyme 156–162, 166, 170, 174, 176–177, 178n16, 179n40, 179n46, 180n48, 215
Richerson, Peter 24, 191, 206n9, 209n97, 210n109
Ritt, Nikolaus 9, 102
Robertson, Michael 160–161, 179n31
robots 8, 21, 33–34, 62, 69–71, 84n44, 145, 171, 218
Rosen, Robert 52–54
rules and laws 27–30, 42
Sanger, Frederick 10n1
scalability 62, 74, 95, 113, 115, 129, 138, 155, 159, 161, 165, 169, 176, 180n67, 215, 217
Schmandt-Besserat, Denise 165, 173, 182n102
Scott-Phillips, Thom 39
Searle, John 6, 35n14, 38n66, 76, 117–120, 132n64, 186–188, 190, 202–204
second-hand perception 22, 94–95
self-reference 31–33, 37n65, 38n71, 46, 57, 58n20, 96, 111, 120, 123, 127, 138, 167–170, 173, 176, 217
self-reproducing automata 6, 135–145, 147–148, 150nn5–6, 151n24, 151nn27–28, 152n34, 152n36, 153n52, 155, 157, 161, 164, 198, 201, 217
self-reproduction 133, 137, 139–142, 144, 150n6, 155, 159, 161, 164, 198, 217
Shannon, Claude 56, 127
Sharp, Phillip 80–81
Simon, Herbert 6, 29, 54–55, 59n41, 59n47, 78, 83n12, 94, 112, 125
situatedness 60–61, 95–97, 104, 107n4, 154n61
Skinner, B.F. 24, 57n3, 152n35, 186
specialization (Hockett design feature) 24, 66, 107n24
spliceosome 11n25, 79–82, 102, 122, 157–158, 215, 217, 226
split genes see genes in pieces
stability 14, 18, 29–30, 32, 74–75, 84n18, 97–98, 105–106, 109n49, 121, 123, 125–126, 159–160, 162, 164–165, 167, 169–170, 176–177, 178n26, 214–215, 217
statistical hierarchy 52–53
Steedman, Mark 97, 129
Steitz, Thomas 156–157
Stent, Gunther 10n6, 152n47, 220n3
Sterelny, Kim 6, 17, 61, 107n4, 108n38, 113, 152n45, 159, 200, 207n50, 219
structural genes 31, 118, 128, 130n26; see also blue- and white-collar
substrate 43, 68–69, 71, 73–74, 77, 84n39, 112, 114–117, 119–120, 160, 171, 174, 216
switch 24, 114–116, 128, 130n12, 184, 205
symbol grounding 30, 34, 76, 101, 106, 126–129, 173, 213
syntax see grammar
Szathmáry, Eörs 155–156, 179n36
Talmy, Leonard 98, 104–105, 131n58
Tamariz, Monica 11n30, 207n29
Taylor, Frederick 44
technologies of manipulation 156, 164–165, 170–171, 174, 176–177, 183n126, 217
technologies of measurement 75, 156, 170–177, 182n102, 182n108, 182n110, 217
Thompson, Sandra 78, 96
threshold of complication 135–137, 141, 155–156, 159–165, 170, 175–177, 179n37, 214–215, 217
Tomasello, Michael 96, 101, 104, 108n41, 168, 213
transitivity 112–113, 129n3, 154n62
Traut, Thomas 112, 125
TSRA see self-reproducing automata
Turing, Alan 4, 33, 137–139, 148, 151n14
Turing Machine 4, 137–140, 142, 145, 148, 212, 215
Turvey, Michael 10n13, 85n51, 85n62
Uexküll, Jakob von 85n51, 85n56, 170
umwelt 85n51, 85n56
underdetermination 6, 44, 58n19, 144, 194, 217; see also don’t-care conditions
universal constructor A 139, 140, 144, 153, 164, 196, 198
universal copier B 140, 143, 147, 151n25, 196
universality 9, 28, 30, 33, 41, 51, 65, 85n62, 101, 105, 111, 127, 138–139, 142, 148–149, 173–174, 212–214
Universal Turing Machine 138–140, 148
UTM see Universal Turing Machine
verbs 20, 62, 78, 80, 83, 96–97, 99, 103–105, 108n47, 109n60, 111, 123, 129
verbs, irregular 20, 123
vervet monkey 94–97, 99, 108n38, 115, 118, 144, 167, 213, 219
viruses 27, 102, 107n5, 178n78, 179n31, 192, 195–196, 201, 207n54, 207nn62–63, 230n6
Vygotsky, L.S. 57, 148, 167, 215
Waddington, C.H. 3, 10n12, 83n6, 129, 178n26
Wagman, Jeffrey 84n47, 85n4
watchmaker 55, 78, 83n12, 125
Watson, James 13, 135, 152n47, 224–225, 228
white- and blue-collar 31, 33, 117–119, 122–128, 131, 141, 176, 203
Wilson, David Sloan 202
Wilson, E.O. 84n19, 187
Wimsatt, William 182n111, 195, 199, 201, 203, 207n40
Woese, Carl 5, 6, 19, 61, 109n73, 114–115, 151n12, 155–156, 160, 179n37, 198, 200, 218
Wong, Leonard 188–189
writing see literacy