The Carousel of Time: Theory of Knowledge and Acceleration of Time 9781786304605, 1431441511, 1786304600, 9781119681502, 1119681502

Based around the image of a carousel, this book uses epistemological theory to tackle the paradoxical acceleration and d


English Pages 304 [309] Year 2020


Table of contents :
Cover......Page 1
Half-Title Page......Page 3
Dedication......Page 4
Title Page......Page 5
Copyright Page......Page 6
Contents......Page 7
Foreword......Page 11
Acknowledgments......Page 17
Introduction......Page 19
PART 1: Foundations......Page 29
1. Information, Communication and Learning......Page 31
1.1. Claude Shannon’s model......Page 32
1.1.1. Ralph Vinton Hartley, Claude Shannon’s forerunner......Page 33
1.1.2. Claude Shannon’s formula and the two fundamental theorems of telegraphic communication......Page 34
1.1.3. The eight main characteristics of the Shannonian theory of communication......Page 37
1.2.1. The immanent mind and the Batesonian definition of information......Page 39
1.2.2. The Batesonian categorization of learning......Page 41
1.2.3. The eight main characteristics of Batesonian communication theory......Page 44
2.1. Self-organization and information creation......Page 51
2.2.1. Order from noise versus organizational noise......Page 59
2.2.2. Complexity and complication......Page 61
2.2.3. Meaning of information in a hierarchical system......Page 64
3. Human Memory as a Self-organized Natural System......Page 69
3.1.1. The theory of functional localization......Page 70
3.1.2. Against functional localization......Page 73
3.2. Neural Darwinism and inventive memory......Page 75
4. Hypotheses Linked to the Model......Page 91
4.1. Six hypotheses relating to the structure of the network......Page 92
4.2. Eight hypotheses relating to the evolution of the network......Page 98
4.2.1. Assumptions related to inter-individual communication......Page 99
4.2.2. Hypotheses related to intra-individual cognition......Page 102
PART 2: Space......Page 109
5. Scope, Dimensions, Measurements and Mobilizations......Page 111
5.1. Inter-individual communication and learning......Page 113
5.2. Categorization and learning......Page 120
5.2.1. The creative analogy of weak novelty: the example of Planck’s formula......Page 123
5.2.2. The creative analogy of radical novelty: Gregory Bateson’s “grass syllogism”......Page 129
6. Provisional Regionalization and Final Homogenization......Page 141
6.1. Formation of clusters of actors and regionalization of the network space......Page 142
6.2. Instability and erasure of regions within the network......Page 152
6.3. Evolution of information production at the level of the global network and at the level of each cluster of actors......Page 160
PART 3: Time......Page 169
7. Propensities to Communicate, the Specious Present and Time as Such, the Point of View from Everywhere and the Ancestrality’s Paradox......Page 171
7.1. Propensities to communicate and the specious present......Page 172
7.2. Subjective time, objective time and time as such......Page 179
7.3. A point of view from nowhere or a point of view from everywhere?......Page 184
7.4. On an alleged “ancestrality’s paradox”......Page 189
8. Déjà-vu and the Specious Present......Page 199
8.1. A history of interpretations of the déjà-vu phenomenon......Page 200
8.2. Déjà-vu and the specious present: an interpretation......Page 207
9. The Acceleration of Time, Presentism and Entropy......Page 215
9.1. Historical time, irreversibility and end of time......Page 216
9.2.1. A psychological interpretation of the acceleration of time......Page 221
9.2.2. A socio-historical interpretation of the acceleration of time......Page 225
9.3. Irreversibility of time and entropy of the network......Page 230
9.3.1. A brief presentation of the genesis of the entropy concept......Page 231
9.3.2. The entropy law and network trajectory......Page 233
9.3.3. Entropy theory and trajectory of the complex socio-cognitive network of individual actors......Page 237
10. Temporal Disruptions......Page 241
10.1. The translation of beliefs......Page 244
10.2. Revisions of beliefs and the possible worlds semantics......Page 247
10.3. The weak transformation of beliefs: learning and normal science......Page 250
10.4. The radical transformation of beliefs: learning and scientific revolution......Page 254
Conclusion......Page 263
References......Page 277
Index......Page 297
Other titles from iSTE in Interdisciplinarity, Science and Humanities......Page 303
EULA......Page 307

The Carousel of Time

To Michèle

Series Editor Bernard Reber

The Carousel of Time Theory of Knowledge and Acceleration of Time

Bernard Ancori

First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address: ISTE Ltd 27-37 St George’s Road London SW19 4EU UK

John Wiley & Sons, Inc. 111 River Street Hoboken, NJ 07030 USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2019 The rights of Bernard Ancori to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988. Library of Congress Control Number: 2019945757 British Library Cataloguing-in-Publication Data A CIP record for this book is available from the British Library ISBN 978-1-78630-460-5

Contents

Foreword   ix
Acknowledgments   xv
Introduction   xvii

Part 1. Foundations   1

Chapter 1. Information, Communication and Learning   3
  1.1. Claude Shannon’s model   4
    1.1.1. Ralph Vinton Hartley, Claude Shannon’s forerunner   5
    1.1.2. Claude Shannon’s formula and the two fundamental theorems of telegraphic communication   6
    1.1.3. The eight main characteristics of the Shannonian theory of communication   9
  1.2. Gregory Bateson’s model   11
    1.2.1. The immanent mind and the Batesonian definition of information   11
    1.2.2. The Batesonian categorization of learning   13
    1.2.3. The eight main characteristics of Batesonian communication theory   16

Chapter 2. Self-organization and Natural Complexity   23
  2.1. Self-organization and information creation   23
  2.2. Meaning of information in a hierarchical system   31
    2.2.1. Order from noise versus organizational noise   31
    2.2.2. Complexity and complication   33
    2.2.3. Meaning of information in a hierarchical system   36

Chapter 3. Human Memory as a Self-organized Natural System   41
  3.1. The theory of functional localization or invented memory   42
    3.1.1. The theory of functional localization   42
    3.1.2. Against functional localization   45
  3.2. Neural Darwinism and inventive memory   47

Chapter 4. Hypotheses Linked to the Model   63
  4.1. Six hypotheses relating to the structure of the network   64
  4.2. Eight hypotheses relating to the evolution of the network   70
    4.2.1. Assumptions related to inter-individual communication   71
    4.2.2. Hypotheses related to intra-individual cognition   74

Part 2. Space   81

Chapter 5. Scope, Dimensions, Measurements and Mobilizations   83
  5.1. Inter-individual communication and learning   85
  5.2. Categorization and learning   92
    5.2.1. The creative analogy of weak novelty: the example of Planck’s formula   95
    5.2.2. The creative analogy of radical novelty: Gregory Bateson’s “grass syllogism”   101

Chapter 6. Provisional Regionalization and Final Homogenization   113
  6.1. Formation of clusters of actors and regionalization of the network space   114
  6.2. Instability and erasure of regions within the network   124
  6.3. Evolution of information production at the level of the global network and at the level of each cluster of actors   132

Part 3. Time   141

Chapter 7. Propensities to Communicate, the Specious Present and Time as Such, the Point of View from Everywhere and the Ancestrality’s Paradox   143
  7.1. Propensities to communicate and the specious present   144
  7.2. Subjective time, objective time and time as such   151
  7.3. A point of view from nowhere or a point of view from everywhere?   156
  7.4. On an alleged “ancestrality’s paradox”   161

Chapter 8. Déjà-vu and the Specious Present   171
  8.1. A history of interpretations of the déjà-vu phenomenon   172
  8.2. Déjà-vu and the specious present: an interpretation   179

Chapter 9. The Acceleration of Time, Presentism and Entropy   187
  9.1. Historical time, irreversibility and end of time   188
  9.2. On the sensation of acceleration of time and presentism   193
    9.2.1. A psychological interpretation of the acceleration of time   193
    9.2.2. A socio-historical interpretation of the acceleration of time   197
  9.3. Irreversibility of time and entropy of the network   202
    9.3.1. A brief presentation of the genesis of the entropy concept   203
    9.3.2. The entropy law and network trajectory   205
    9.3.3. Entropy theory and trajectory of the complex socio-cognitive network of individual actors   209

Chapter 10. Temporal Disruptions   213
  10.1. The translation of beliefs   216
  10.2. Revisions of beliefs and the possible worlds semantics   219
  10.3. The weak transformation of beliefs: learning and normal science   222
  10.4. The radical transformation of beliefs: learning and scientific revolution   226

Conclusion   235
References   249
Index   269

Foreword

In these tormented times when time itself is swirling, this book is like a breath of fresh air, even as it warms the heart: it is about the knowledge to which the sciences give us access, but not only this, insofar as it is difficult to get rid of a few doses of more or less reliable beliefs which sometimes enter it surreptitiously. For Bernard Ancori’s epistemology, which encompasses but goes beyond a philosophy of science, a set of logical and critical reflections on the nature of knowledge leads us into what Spinoza calls “a certain kind of eternity”, i.e. into timelessness. But it is this very fact that allows us to further question our diverse experiences of the passing of time. And, in particular, a certain acceleration that would characterize our present time, making it lean towards a kind of permanent present. Already at the end of the 19th Century, as Bernard Ancori points out, William James, a pioneer in psychophysics, had shown the specious, almost misleading, nature of our perceptions of the present between past and future. The concept of propensity to communicate introduced by Bernard Ancori constitutes a possible formalization of the notion of a specious present, and extends this notion to the spatial dimension of the network of individual actors whose model he proposes. As a result, this concept constitutes the pivotal point on which the spatial and temporal dimensions of this network are articulated, which justifies the notion of space/time of the latter, as it is highlighted here. The timeless aspect mentioned above is not an aboveground level, which would take us out of this world. We recognize in it a way of approaching one of the crises that the knowledge accumulated by the human species over the past two centuries has been going through. This is the gap reported by the chemist and novelist C.P. Snow in the 1950s between the “two cultures”, that of the natural sciences and that of the social sciences and the humanities in general.
From this point of view, we can ask ourselves whether the 21st Century will really be a new era or only a continuation of the physical and biological revolutions


experienced by the 20th Century. Probably both, because the division of history’s time into hundreds of solar years is after all only a convention that is not without arbitrariness. And everything happens as if Bernard Ancori took up the challenge of closing this gap in his own original way, by building a bridge between these two aspects of Homo sapiens: a living being, the object of biology, and a psychosocial being of language. His multidisciplinary background, confronted with the applications of the mathematical theory of information, allows him first to identify, at the level of this meeting, a breaking point. Seeing it as at least one of the origins of the growing gap between the two cultures, he has found a way to clear a path and, in a way, to mend them. In doing so, he contributes to the ongoing realization of John von Neumann’s prediction of the evolution of 20th Century science. Von Neumann, a mathematician and co-inventor of the electronic computer, also predicted in the 1950s that this century would be for the sciences the century of complexity, just as the 19th Century had been the century of energy. This prediction seems to be coming true, albeit with some delay, since we have entered the 21st Century. This raises a question about the passage of time and its possible creative role, which constitutes the basis of Bernard Ancori’s reflection and gives the book its title. This ambitious work addresses the diverse nature of our psychological, social, physical and temporal experiences, combined with the different ways in which we learn about things. Only the past is rigorously the object of knowledge, constituted by bases memorized and connected in different ways. The present seems to be perceived as such, felt or sensed, before being forgotten or memorized. As for the future, it is imagined or predicted with varying degrees of success based on projections from the past, while it is shaped in a concrete and largely unconscious way.
But from all this results a form of timeless knowledge, that of what philosophers have called “eternal truths”, of which mathematics serves as a model. The use of reason tends to bring scientific activity closer to this ideal, in a more or less approximate way depending on the disciplines. The result is the creation of new concepts in the history of science, which seem to be all the closer to this ideal because they are supported by mathematical formulation and operations. But not all objects of investigation are equally suitable because the times of experience and experimentation, with their difficulties to overcome, cannot be neglected. Thus, according to von Neumann’s prediction, scientific knowledge has already been enriched in physics by the notion of information, mathematized in the eponymous theory, in relation to that of complexity – as well as of energy through that of entropy. And it is here that there is indeed a breaking point and a possible meeting between its uses in the natural sciences, first physical and then extended to biological physico-chemistry, and its possible but more problematic applications in the social sciences.


Indeed, as in the case of force in the 18th Century and energy in the 19th Century, information has been rigorously defined as a physico-mathematical quantity by borrowing the word from everyday language. But the latter is still a purely qualitative and relatively vague notion, used in the psychosocial context of interpersonal, language and other relationships. And the statistical theory of information produced by Claude Shannon and his successors, as well as the theory of algorithmic complexity of Kolmogorov–Chaitin’s computer programs, makes it undergo a transformation, by which it becomes precise and univocal enough to enter the language of the natural sciences. But this transformation makes it lose what was thought to be its very essence, namely, the meaning of the information processed, sent and received. In other words, the definition by theory has moved from the vagueness and polysemy of natural language to the univocal precision of the logico-mathematical form and its techno-scientific uses, first in telecommunications engineering, then in computer science. But this passage, as is often the case, makes us lose in semantic richness what it makes us gain in precision and operational efficiency. And what is lost in this case is precisely the meaning of the information usually transmitted in communications between speakers of a natural language. In exchange, mathematical theory, which allows its quantification and measurement, thus extends its applications to all kinds of non-human, physical and biological entities, known as “information carriers” in the sense of theory and treated as if they were telecommunications channels and computer machines. It is in this sense that the great successes of molecular biology have benefited from the discovery of molecules carrying information in the linear structure of DNA, RNA and proteins, which have been treated as sets of alphabetical or numerical letters. 
The genetic code has thus been treated as a communication channel whose physical material is recognized in the chemical mechanisms of protein synthesis. But in all this, as in computer science, the meaning of the information thus quantified is not taken into account. In the mathematical theory, the amount of information is expressed by a number of bits that say nothing about its meaning. This is why we can say that an algorithm or a computer does not understand what it does because the transmission of meaning involves speakers who understand it. This flaw in the theory is not such when dealing with message communication systems between speakers who transmit and receive, and are assumed to understand their meaning, without the need to formalize it in the theory. Similarly, algorithmic complexity does not suffer from the seemingly paradoxical fact that its definition implies maximum complexity for a random sequence of 0 and 1, as if it were meaningless.
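The point that a bit-count says nothing about meaning can be made concrete with a minimal sketch (an editorial illustration, not part of Ancori’s or Shannon’s own apparatus): applying Shannon’s formula H = −Σᵢ pᵢ log₂ pᵢ to the character frequencies of a message assigns identical information content to any two messages with the same symbol statistics, whatever those messages mean.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Shannon's H = -sum(p_i * log2(p_i)), in bits per symbol,
    estimated from the character frequencies of `message`."""
    n = len(message)
    return sum(-(count / n) * log2(count / n)
               for count in Counter(message).values())

# Identical symbol statistics yield an identical bit-count, meaning aside:
h1 = shannon_entropy("listen")   # 6 distinct letters: log2(6), about 2.585 bits/symbol
h2 = shannon_entropy("silent")   # an anagram: exactly the same entropy
h0 = shannon_entropy("aaaaaa")   # a fully repetitive message: 0.0 bits/symbol
```

The anagram pair illustrates the foreword’s point: the measure quantifies statistical surprise in the symbol stream, while whatever the words mean to a speaker lies entirely outside the theory.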


This is why the transmission of meaning in communication channels and computer programs is carried out by specific additional operations: those of coding through several levels of programming languages down to the “machine language” reduced to sequences of 0 and 1, at the input of artificial machines designed and manufactured for this purpose, followed by decoding at the output, for the use of human speakers. Hence the search, by analogy, for such coding systems in self-organized natural machines, particularly those constituted by organisms. The discovery of what is called the “genetic code”, the same in all organisms – which is in fact only a projection of the linear structures of DNA on those of proteins and thus carries out a transmission of information in the strict sense of the theory – is probably the most spectacular success of this research, although it is not, strictly speaking, the coding of a computer program, contrary to what was long believed. Indeed, the meaning of genetic information here is metaphorically reduced to the effects observed at the output of the communication pathway of a particular protein synthesis and its effects on the structure and functioning of the cells where it takes place; but we now know that these effects, because of the three-dimensional structure of proteins, depend only partially on their linear structure, the only one coded by that of DNA. Bernard Ancori contrasts, or rather associates, the “telegraphic” theory of the engineer Claude Shannon with that of the anthropologist Gregory Bateson. The latter is the author of a more qualitative theory of communication, called “orchestral” by some of his commentators, and of two volumes on an immanent ecology of the mind. He inspired Paul Watzlawick and the famous Palo Alto school, among others. For Bernard Ancori, Bateson complements Shannon advantageously in his own theory of knowledge.
In particular, he highlights the role of self-organization models by “complexity from noise” (Atlan) in attempts to formalize the creation of meanings. Incidentally, we are brought into contact with one of the excesses of considerations about cosmological time, measured in billions of years, when it is conceived within the framework of an unbridled idealism that makes meaningful human consciousness play a truly creative role in scientific objectivity. This is the so-called “paradox of ancestrality”, whose exposure as a paralogism is welcome here. Astrophysical theories on the origins of the universe would be paradoxical – both true and not true – in that they would concern a reality that existed before the appearance of the human life and consciousness that established its reality, in times when the human species did not exist. The paradox is here dismantled step by step by showing the various confusions on which it is itself built, including that between objectivity and intersubjectivity.


Judging by another paralogism that has flourished among some physicists, it would seem that the time of origins, in relation to human capacities to take cognizance of it, is such as to derail reason. This is the “anthropic principle” in its so-called “strong” interpretation. The universal physical constants are such that only they could have allowed the evolution of the universe as it occurred, with the appearance of life and of the human species, which, as we can obviously only observe after the fact, is capable of developing awareness. This observation gave rise to the idea of an initial adaptation of the universal constants to the future appearance of a conscious being capable of knowing them, with all that followed until the appearance of this being. According to this finalist interpretation, which is in no way binding, the universe would thus have been formed from its origin in such a way that human consciousness could appear. However, the so-called “weak” interpretation – so named from the point of view of the supporters of this teleological conception where the origin of the universe would be determined by a kind of divine project involving the subsequent appearance of humanity – is, in fact, quite simply reasonable in its use of the counterfactual, i.e. things being as they are, if physics were determined by other universal constants, another universe might have evolved, bringing to light beings otherwise organized, with the possibility for some, or not, to acquire a form of knowledge, which we can possibly imagine using other universe models. The question of information coding is currently building on the successes recorded by the cognitive neurosciences, thanks to the extraordinary developments of functional explorations of brain activities correlated with subjective mental states that can only be expressed in the language of the subjects, the objects of observation and experimentation.
After several decades of research, the question, much more complex than that of coding genetic information, about the existence and nature of possible neural codes remains unanswered. On the one hand, there are brain activities described in the physico-chemical language of electrical and chemical activities, where we can identify transmissions of information, in the technical sense of the term, between neural circuits, and on the other hand, human or animal cognitive activities, expressed and described to some extent, including by experimenters, in vernacular or psychological languages with the practically infinite diversity of their semantic components. This is what may have made us speak about human cognition and its models of self-organization in the past as “machines for making meaning”. But we are dealing here with a confusion, whose heuristic interest is certainly not to be neglected, between the two notions of information, the technical and the everyday. This is why the use of information and coding concepts, which do not have the same meaning in the two fields of investigation, cerebral and cognitive, only provides a very limited bridge from one language to the other.


Thus, the meaning of information appears to be a place where the problems of the transition from the physico-chemical to the psychosocial are linked. Some models of functional and even intentional self-organization, as well as attempts at formalizing an algorithmic complexity that carries meaning, are presented by Bernard Ancori as ways to better formalize this passage. But like machine translation programs that can only work if they are limited to predefined fields, they are for the moment only attempts to build a bridge on both sides of the border between the two cultures, demonstrating the challenges and difficulties. Perhaps, before these difficulties can be overcome, we need to revisit those still posed by the classical philosophical problem of the relationship between body and mind. This problem is still far from being solved as long as we remain in the context, still largely prevalent, of dualistic ontologies, more or less assumed, failing to see in it on the contrary the expression of a radical monism, where the very question of interactions between matter and thought disappears into the unitary conception of their union. The original encyclopedic approach developed by Bernard Ancori is not the least of the interests of this book.

Henri ATLAN
MD, PhD, Biologist and Philosopher, Professor Emeritus of Biophysics, University of Paris VI and the Hebrew University of Jerusalem, and Director of Studies at the École des hautes études en sciences sociales (EHESS, Paris)

Acknowledgments

The writing of a book, though achieved by solitary labor in its progressive creation from the initial idea to the final product, is obviously a collective work. This is constructed through a succession of concentric waves centered on the author’s person. The most peripheral is occupied by colleagues, students or friends who, often without their knowledge, change with a word or a sentence a path that previously seemed secure. It is they who must be thanked here. The influence of the following wave on the work is already more significant because this median wave is that of colleagues who were kind enough to invite the author to express himself in an institutional framework that would allow him to confront his ideas inter pares. For the past 15 years or so, Simon Laflamme, Pascal Roggero and Claude Vautier have allowed me to publish some of the aspects developed in this book in Nouvelles perspectives en sciences sociales, the excellent journal they edit in Sudbury and Toulouse. They must be thanked warmly for this because many of these aspects have been taken up and made consistent within a common perspective. More recently, Carlos Alberto Lobo and Bernard Guy gave me the opportunity to speak at their respective seminars, at the ENS in Paris and at Jean Moulin University Lyon 3. I would also like to thank them very much for this. The closest and most decisive wave is made up of colleagues and friends who, through their daily encouragement, or through a careful review of the various stages of the manuscript, have allowed it to mature and then prove publishable. In this regard, I would like to thank, in particular, my colleagues Isabelle Bianquis, Bernard Carrière, Patrick Cohendet, Jean-Luc Gaffard and Anne Kempf who have kindly temporarily taken off their anthropologist, physicist or economist hats to examine the transdisciplinary kaleidoscope I was giving them with a critical and benevolent eye.


I would also like to thank Bernard Reber for having welcomed my book into the collection he manages at ISTE. My most important intellectual debt is probably the one I contracted, through his work, with Henri Atlan. I thank him for the honor of writing a foreword to this book. Wherever he is, I thank my dear friend Jean Gayon who would have been, alas too briefly, a part of all these waves, and especially the closest one. Finally, I would like to thank my wife, Michèle, who has been with me on this carousel for a long time.

Introduction

The actors of our societies say they feel a phenomenon of time acceleration, and this phenomenon would paradoxically lead to a new regime of historicity: a presentism (Hartog 2003). This presentism is often interpreted as being the symptom of a capitalism more eager than ever for immediate profitability, and whose ingrained passion for the short term would culminate in a perfect match between the past, present and future. Of the past as a field of experience, it would definitively make a clean slate, and of the future as a horizon of expectation, it would keep only the promise of an endless repetition (Laïdi 2000; Augé 2011)1, and yet nobody would have more time to themselves (Baier 2002; Rosa 2010, 2012; Birnbaum 2012; Baschet 2018). How can this sensation of time acceleration be combined with a tendency towards perfect immobility? What is the true meaning of the expression “acceleration of time” thus used? Does it have only one, given that the notion of acceleration is precisely defined in relation to time? Would it then only point to a kind of vertigo (Jeanneney 2001)? The purpose of this book is to answer these questions within the framework of a theory of knowledge. As we will see, this response combines a psychological and an historical explanation of the perceived variations in the speeds of time, and a socio-historical explanation of the phenomenon of acceleration that our

1 The concepts of “field of experience” and “horizon of expectation” are borrowed from Koselleck (1979). The presentism in question here is distinct from philosophical presentism, the doctrine according to which only the present is real and which is the temporal analogue of the modal actualist doctrine according to which everything is actual. This actualism is opposed to possibilism, according to which some things are possible without actually existing. On this point, see Theodore Sider (1999).
On the genealogy of the notion of “regime of historicity” and, beyond its meanings for historians, its declinations in anthropology, psychoanalysis and geography, see Delacroix et al. (2009).

xviii

The Carousel of Time

contemporaries say they are experiencing while seeming to be getting used to this eternal present. The field of the theory of knowledge proposed to explain this carousel of time goes far beyond that of the scientific sphere alone. From Rudolf Carnap to Karl Popper, the philosophy of science has failed to provide a certain foundation (empirical or methodological) for the statements produced by scientists’ activity so that the distinction between knowledge (as true beliefs reliably justified) and representations (as beliefs that may prove false) has little empirical relevance: it certainly remains analytically useful in a normative perspective, but, once the analysis is positive, it is more a matter of consensus among actors. However, from this point of view, science is not absolutely distinct from other types of knowledge within a broader set of representations among the members of a given society. We therefore agree with Susan Haack’s (2003) pragmatist position, using the expression “long arm of common sense” introduced by Gustav Bergmann, that scientific research is in perfect continuity with other types of empirical research, especially with those that everyone conducts when they wish to answer a question that arises for them: certainly, “scientists have devised many and varied ways to extend and refine the resources we use in our everyday empirical research” (op. cit., p. 297), but they are not of a fundamentally different nature, for at least three reasons. Firstly because knowledge is so widely distributed in our societies that it has become difficult to continue to establish the existence of a single hierarchy of knowledge, at the top of which would be a new clergy of scholars dominating a shapeless mass of ignorant people. This is why many voices are now calling for a more horizontal vision of the distribution of knowledge and the mobilization of all in our common adventure of exploring reality (Calame 2011; Lipinski 2011). 
Secondly, because the cognitive processes implemented and the argumentation regimes used are similar for all actors, academic or not. Thus, the modes of belief revision follow the same paths everywhere: although often more sophisticated than those of the layperson, scientists’ representations result identically from rearrangements of current beliefs in the face of new information, rearrangements in which Homo academicus and Homo vulgaris give the same priority to new information while demonstrating a concern for conservatism and memory, according to a mix that depends on the context (Zwirn and Zwirn 2003). As for argumentation regimes, they draw on the same figures of rhetoric in both cases: omnipresent in the cognitive processes of all actors (Lakoff and Johnson 1980, 1999; Hofstadter and Sander 2013), metaphors and analogies are mobilized identically to support positions by the average
individual (Ortony 1993; Gineste 1997) and by those who want to promote artistic and scientific creation (Miller 2000)2. Finally, the “long arm of common sense” extends into the scientist’s sophisticated hand because learning processes are set in motion in the same way by all actors: in this matter, scientists and non-scientists demonstrate the same capacities, hierarchized into a theoretical plurality of logical levels (Bateson 1977, 1980, 1984, 1996)3. This is why this book deals with epistemology in the English sense of the term (i.e. the theory of knowledge in general) rather than in the narrower sense of the French tradition (i.e. the philosophy of science)4. Without refraining from illustrating our discussion with examples from the history of science, or from using the philosophy of science to shed light on this or that point, we will therefore consider in this book a global population of individual actors (scientific or not) whose representations will be analyzed in terms of their formation, structure and evolution, whether or not these representations are transformed into knowledge recognized as such.

Such a position is not entirely self-evident, since the analysis of scientific phenomena has long been one of the reserved domains of philosophy, and, for some, this should still be the case today. Whether in the French school of thought or in the English-language approach, the philosophy of science has mainly focused on the conditions conferring scientific legitimacy on the statements in which knowledge is materialized. To give only two particularly salient examples, it is in this normative perspective, focused on “accomplished” science, that Gaston Bachelard studied the formation of the scientific mind, and that Karl Popper attempted to define a criterion for distinguishing between scientific and non-scientific statements.
More recently, the Social Studies of Science movement has emerged, focusing on situating science and technology in their socio-historical contexts of production, as well as on assessing the societal implications of their developments. Taking a positive view of the production of scientific and technological representations as so many “sciences in action”, the anthropology and sociology of science have extended and amplified – even diverted – the approach initiated by Thomas Samuel Kuhn (1972), bringing out the field of science–technology–society studies and then multiplying their analyses of the latter (Vinck 1995, 2007; Pestre 2006).

2 Whether in our most everyday exchanges or in arguments deployed in the natural sciences or in the humanities and social sciences, analogies and metaphors are used everywhere to carry conviction (de Coster 1978; Lichnerowicz et al. 1980; Hallyn 2004; Durand-Richard et al. 2008).
3 It is on the basis of this categorization that Erving Goffman (1974) introduced frame analysis in the social sciences: what he calls “primary frameworks” and “frame transformations” refer to Batesonian learning at levels 2 and 3, respectively.
4 On the evolution of the different meanings of the term “epistemology” in German, English and French, see Chevalley (2004).

Normative or positive: in order to distinguish these two perspectives, it long seemed convenient to describe them as internalist and externalist, respectively. Such a dichotomy quickly turned out to be outdated, and many voices rose up to ask that it be overcome. Thus, the call made by Anouk Barberousse et al. (2000, p. 175) assumes that the normative (i.e. philosophy of science) and descriptive (i.e. social studies of science) traditions “can, and even must, converge”. Indeed, if it is obvious that scientific activity takes place in a social and historical context, it is no less obvious that it is a cognitive activity of human beings: “Doing science means at least observing phenomena, trying to explain them, acting by building experimental systems to test these explanations, communicating the conclusions to other members of a community” (ibid., pp. 175–176). The convergence between normative and descriptive traditions therefore requires a unified analysis of cognitive (observing, explaining), architectural (building) and social (communicating) gestures. We propose to place this analysis within the framework of a complexity paradigm5 that is particularly appropriate for our subject, notably because it exhibits certain analogies with the theory of evolution. Consider Murray Gell-Mann’s (1995, p. 33 sq.) description of complex adaptive systems. According to this author, such systems obtain information about their environment and their interactions with it, identify regularities within this information and condense these regularities into models on whose basis they act in the real world. In each case, several models are in competition, and action in the real world has a retroactive influence on this competition.
More precisely, each of these models is then enriched with additional information, including information that had been overlooked when extracting regularities from the initially observed data stream, in order to obtain a result applicable to the “real world”: the description of an observed system, the prediction of events or the indication of a behavior for the complex adaptive system itself.

5 For a long time largely ignored by the traditional philosophy of science (Morin 1990), the concept of complexity is relatively recent – the first occurrence of the word in the title of a scientific text dates back only to Warren Weaver (1948). But since then, the multiple faces of complexity (Klir 1986) have taken over most disciplines (Fogelman-Soulié 1991; Capra 2003, 2004), to such an extent that one can legitimately wonder whether complexity will not constitute the epistemological framework favored by the 21st Century (according to the title of a special issue of La Recherche, December 2003).

Without being reducible to it, this very general description applies, in particular, to what Gell-Mann calls “the scientific enterprise”. Models are theories here, and what happens in the “real world” is the confrontation between theories and observations. New theories can compete with existing ones, engaging in a competition based on the coherence and degree of generality of each, whose outcome will ultimately depend on their respective capacities to explain existing observations and to correctly predict new ones. Each theory of this kind constitutes a highly condensed description of a very large class of situations and must therefore be supplemented by a detailed description of each particular situation in order to make specific predictions (ibid., p. 94).

The theory of knowledge proposed in this book will therefore have an evolutionary character, in the analogical (rather than literal) sense of this term: without identifying human cognitive faculties with the product of a biological process of variation and natural selection, we adopt here a mode of explanation similar to that of evolutionary biological theories (Soler 2000a). It will also belong to a naturalized epistemology in the sense of Willard Van Orman Quine (1969), being in continuity with certain scientific results currently in force, in particular those of the cognitive sciences and of a renewed sociology of networks (Latour 2006). In this perspective, we will propose a model of the structure and evolution of a complex socio-cognitive network of individual actors, thus aiming to formulate a theory of the construction of our cognitive space–time. As this dash indicates, the categories of space and time are absolutely inseparable here, and this model is also that of our cognitive space–time in a dual sense.
First, because, in line with the approach described above, it concerns “field” or “experiential” knowledge as much as what is called “scientific knowledge”, so that it is the cognitive territory common to all these representations that is ours. Secondly, because the space–time thus shared by all the individual actors takes on a particular meaning in our current situation, characterized by the acceleration of felt time and by the presentism mentioned at the outset of this introduction.

The structure of this book follows from the global perspective we have just outlined. It is divided into three parts. The first part, entitled “Foundations”, presents the main ingredients of our model and is divided into four chapters. The realization of the cognitive and social gestures mentioned above implies the realization of learning processes by the individual or collective subject, and these processes lead to gains in information that can be communicated. The conceptual basis for the convergence sought between normative and positive approaches thus consists of the critical integration of a nebula of notions organized around those of information, communication and learning. It is therefore with the search for this critical integration that this book begins, and we confront in this regard two alternative approaches to this nebula of notions: that of the engineer
Claude Elwood Shannon and that of the anthropologist Gregory Bateson, each presenting advantages and disadvantages (Chapter 1). The following chapter proposes to discern a first synthesis of these two approaches in the paradigm of self-organization of complex systems developed by Henri Atlan. This is a natural complexity, which characterizes systems not built by humans and whose purpose, if it exists, is unknown to the observer – such as biological and social systems – and not the algorithmic complexity introduced by Andrei Kolmogorov and Gregory Chaitin, which essentially concerns the world of theoretical computing. This paradigm of natural complexity contains a concept of structural and functional self-organization that makes it possible to preserve the formalism used by C. Shannon while integrating into it effects of meaning similar to those put forward by G. Bateson (Chapter 2).

We will then show that the Atlanian paradigm is an excellent model of how human memory works, as demonstrated by Israel Rosenfield’s analysis based on the work of Gerald Maurice Edelman. It shares three main characteristics with this operation:

– both consider learning to be non-directed, in the sense that it does not respond to any pre-established program, either in the system under consideration (e.g. human memory) or in the environment of that system;

– the efficient cause of this learning lies in the random encounter between the system and certain factors in its environment;

– the product of this learning consists of the construction of patterns (H. Atlan) or psychological categories (I. Rosenfield), as well as of an ever finer differentiation of the products thus constructed, whose list, and whose very mode of construction, are likely to be called into question at each stage of the process.

The combination of these three shared characteristics also makes it possible to rediscover the richness of the approach to the concepts of communication, information and learning proposed by G.
Bateson (Chapter 3). The path taken by these first three chapters leads to the following two stages:

– the quantitative theory of communication introduced by C. Shannon, enriched by the analysis of the meaning of information and learning developed by G. Bateson, lends itself to the first steps of a synthesis within the natural complexity paradigm proposed by H. Atlan;

– this paradigm then provides an appropriate analytical framework for the functioning of human memory, as conceived by I. Rosenfield and G. M. Edelman.

On the basis of the range of conceptual tools gathered in these two stages, we are able to propose a series of hypotheses concerning the structure and evolution of the
complex socio-cognitive network of individual actors whose model constitutes the central subject of this book. This model is generic rather than explanatory6: far from putting in order experimental data whose detail would be sufficiently well known to enable us to control the mechanisms of the observed phenomena in their entirety, it is content to produce some conditions of possibility. Its usefulness is to suggest a likely logical structure for a global phenomenon that is too complex to be analyzed in all its details, namely, the space–time in question. Thus, although it occasionally encounters some empirical data, this model does not rely on specific experimental data (Chapter 4).

Despite the ontologically inseparable nature of the space and time of our network, these two dimensions must be explored successively during the progressive construction of our model. The analysis of the spatial aspects of the network is developed in the second part of this book, entitled “Space” and divided into two chapters. We first specify the nature of the boundaries delimiting the perimeter of the network, by defining their dimensions and giving them a measure, and then by analyzing their modes of mobilization. Once the network space is thus circumscribed in relation to its environment, we introduce the two driving forces governing its evolution, namely, inter-individual communication and analogy as a source of categorization7. The comparison of the weak novelty resulting from the first and the strong novelty created by the second leads to three possible types of network trajectories (Chapter 5).

The following chapter focuses on the analysis of the internal structure of the network space thus delimited and measured, as well as on the evolution of this structure. We propose a concept of propensity to communicate between the actors.
Under the assumption that communication is the only driving force behind the evolution of the network, the model shows that the actors tend to merge by engaging in a cumulative process of forming cognitive clusters, with actors increasingly similar within a given cluster and simultaneously increasingly dissimilar, cognitively, from the actors contained in all the other clusters. This cumulative process of spatial regionalization is quickly accompanied by a tendency towards homogenization, which eventually prevails over the partition of the network into distinct regions, so that the network adopts one of the three trajectories previously analyzed (Chapter 6).

The third part of this book, entitled “Time”, is divided into four chapters. Thanks to our concept of the propensity to communicate, we begin by linking the temporality of the network to its spatial characteristics, because this concept is both a marker of the internal structure of the network’s space and a marker of the evolution of this structure. It represents a possible formalization of the notion of the specious present, popularized in the 19th Century by William James and taken up by our modern cognitive sciences, while the space of the network is nothing other than the gathering of all the specious presents of the individual actors. This space thus reveals itself to be only a moment of time as such, a time which encompasses both the subjective time of the actors observed within the network and the objective time of the latter’s observer. The combination of the point of view of a particular observed actor with that of this observer makes it possible to conceive the asymptotic existence of a point of view from everywhere, as well as to affirm the primacy of subjective time over objective time by revisiting, in a critical mode, the paradox of ancestrality that claims to deny it (Chapter 7).

6 We borrow this distinction between generic and explanatory models from H. Atlan (2011, p. 9).
7 For a recent perspective on the relationship between the social sciences and the cognitive sciences, see Kaufmann and Clément (2011).
Our model explains such a phenomenon while dispelling the apparent paradox of its conjunction with the presentism mentioned above, and it proposes an interpretation of the content that the concept of entropy could take on in this respect (Chapter 9). The last chapter of our book explains the theoretical articulation between the succession of discrete states made up of the specious presents of individual actors and the continuity of the flow of thought that William James never ceased to proclaim. This articulation is based on the distinction between two hierarchical levels of inscription of psychological categories in the individual memories of the actors: a conscious level, at which such categories are combined in order to ensure the overall semantic consistency of their registration in memory; and a non-conscious level, that of the psychological meta-categories that govern the mode of selection and organization of the categories present at the conscious level. As long as the composition of this meta-categorical level is invariant, the continuity of the actors’ flow of thought reflects that of the mode of selection and organization of the psychological categories consciously recorded in memory, although this flow is simultaneously divided, at
the level of the contents of these categories thus selected and organized, into a succession of discrete states. But when this meta-category level is modified, the type of learning then achieved causes a temporal disruption in individual actors’ flow of thoughts and consequently in the evolution of their network. This form of temporal disruption marks the Kuhnian periods of “extraordinary science” (Kuhn op. cit.), which illustrate in an exemplary way the learning processes that everyone achieves when their confrontation with a new problem forces them to change their point of view on the world around them (Chapter 10).

PART 1

Foundations

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.

1 Information, Communication and Learning

Today, there are many approaches to human communication that deal with its multiple aspects at various levels of abstraction and delimit what has become the field of information and communication sciences. The domain is divided into many regions, in line with the diversity of its objects. To name only the most important: communication issues and models, interpersonal communication, communication in groups and organizations, communication and media, digital communication, and communication and social networks all give rise to research and publications, academic or otherwise, that increasingly reflect the growing importance of the communication phenomenon in our hyper-connected societies1. It is therefore natural that these sciences have found an institutional foundation in the academic world of developed countries. In France, this has taken the form of a section of the Conseil national des universités (French National Council of Universities) responsible for assessing the work produced in these various fields of research and teaching, and of the Institut des sciences de la communication (French Institute of Communication Sciences), created in 2007 as part of the Centre national de la recherche scientifique (French National Center for Scientific Research) with the mission of conducting and supporting interdisciplinary research focused on communication issues at the interfaces between science, technology and society. Since 1974, there has also been a Société française des sciences de l’information et de la communication, SFSIC (French Society for Information and Communication Sciences), which brings together about 500 academic researchers (universities, CNRS, INA) and has a growing number of members from professional circles (media, social communication, scientific and technical information).

1 On the invention of the concept of communication, see Mattelart (1994). For a genealogy of the communication-world that forms the technical and political context for the rise of communication thinking, see Mattelart (1999).

It would be perfectly futile to attempt here, even in outline, an overview of the subjects, problems and methods developed today by the information and communication sciences2. It would also be beside the point, because our subject encourages us to focus on two approaches that everything seems to oppose: on the one hand, the mathematical model of communication introduced by the pioneering work of C. Shannon (Shannon 1948; Weaver and Shannon 1949), which remains a canonical model to this day, and on the other hand, the anthropological vision of communication proposed by G. Bateson (1977, 1980, 1984, 1996), which is at the heart of a “new communication”3. We will thus have the necessary ingredients to understand fully, in our next chapter, how the theory of self-organization of complex natural systems formulated by H. Atlan can constitute a starting point for a synthesis of these two approaches. As for the concept of learning, considered here from a psychological point of view, it was long defined as a permanent change in behavior following an experience undergone by the individual subject. Only more recently has this concept come to integrate mental, and especially cognitive, activities (Tiberghien 2002, pp. 30–35), and it is mainly under this last aspect that we will approach learning phenomena here. Telegraphic communication versus orchestral communication: these two terms were introduced by Y. Winkin (1996) to contrast the Shannonian (“telegraphic”) and Batesonian (“orchestral”) theories of communication. We will successively present the pioneering model introduced by the engineer C. Shannon at the end of the 1940s and the one proposed by the anthropologist G. Bateson and his successors from the 1970s onwards (Winkin 1988).

1.1. Claude Shannon’s model

The founding article of C.
Shannon’s mathematical theory of communication was first published in 1948 as two successive issues of the Bell Telephone Laboratories’ internal journal, before being reprinted the following year in one volume with an introduction by Warren Weaver. Undertaken with the aim of improving the performance of the telegraph lines deployed by this company, this article generalized the results published 20 years earlier in the same journal by Ralph V. Hartley (1928). A good understanding of the scope and limits of the Shannonian approach to communication therefore requires a brief presentation of this pioneering work4.

2 Although relatively old, the presentation of communication models proposed by Gilles Willett (1992) is still very useful. For a selection of important texts on information and communication sciences, see Bougnoux (1993), and for an introduction to communication sciences, see Bougnoux (2001). For a recent and very comprehensive overview of the current issue of communication, from interpersonal relationships to social networks, see Dortier (2016).
3 According to the very title of a book by Yves Winkin, La nouvelle communication (1981).

1.1.1. Ralph Vinton Hartley, Claude Shannon’s forerunner

Twenty years before C. Shannon, R.V. Hartley was employed as an engineer at Bell Telephone Laboratories and was looking for a theory of information transmission. On this occasion, he was the first to use the expression “quantity of information”, which implied a notion of measuring it. More precisely, the problem was that of measuring the amount of information transmitted by a message written with the symbols of a repertoire (or alphabet) containing n of them, these symbols being equiprobable and this repertoire being identically known by the sender and the receiver of the message in question. He considered that receiving one and only one of these symbols constitutes an event for the message’s receiver and that the mere fact that this event has occurred, regardless of its meaning, provides the receiver with a certain amount of information. How, then, to measure the latter? The natural idea here was to consider that the occurrence of this event conveys all the more information the more unforeseen it was: an event whose occurrence was a priori certain in our eyes would not tell us anything by occurring, and it would therefore bring us a strictly zero amount of information. On the other hand, any event not of this kind would tell us something by occurring. This “something” was information, and it was measurable whenever the initial probability of the event in question was measurable. However, it was clear that the occurrence of each event, among the n equiprobable events consisting of receiving this or that symbol of the repertoire considered, was all the less probable a priori the larger n was.
The amount of information associated with receiving one and only one symbol from a repertoire containing n equiprobable symbols, noted I, was therefore an increasing function of n: I = f(n), with f′(n) > 0. The problem now was to determine the precise form of this function. For this purpose, let us consider the simultaneous reception of two independent messages, each containing one and only one symbol, one of these messages being written in a repertoire containing n1 equiprobable symbols and the other in another repertoire containing n2 equiprobable symbols. According to the above, the first message provides us with a quantity of information I1 equal to f(n1), and the second a quantity of information I2 equal to f(n2), with f′(n1) > 0 and f′(n2) > 0. What, then, is the total amount of information, i.e. the amount of information provided by the reception of these two messages together? R.V. Hartley adopted here a crucial assumption of additivity of information by positing that this total information, noted here It, was equal to the sum of I1 and I2: It = I1 + I2. It should be strongly emphasized that nothing, absolutely nothing, justified this precise hypothesis: any expression of It maintaining the increasing relationship established above between the values of I and n would have been acceptable here; Hartley could just as easily have written, for example, It = I1.I2. The choice of this additivity hypothesis was, in fact, based on the fact that it had the advantage of leading to a simple determination of the precise form of the function f(n). Let us now consider the same experiment from another point of view, positing that, instead of consisting of simultaneously receiving the two previous messages, it is reduced to one and only one reception of a message written in a repertoire containing n1.n2 equiprobable symbols. The amount of information provided by the reception experiment thus considered is then equal to f(n1.n2). And since it has no reason to be different from the total amount of information obtained by the receiver of the two previous messages – it is indeed the same experiment – we can write f(n1.n2) = f(n1) + f(n2). However, the function that transforms products into sums is by definition the logarithmic function, from which it follows that I = a logb n, with a > 0 and b > 1, where b is a constant depending on the units of measurement.

4 For a genealogy of the Shannonian concept of information, see Segal (2003, pp. 67–142) and Triclot (2008, pp. 48–70). For the theoretical presentation of this concept, we use the very pedagogical version proposed by H. Atlan (1972, pp. 5–61).
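Hartley’s argument can be checked numerically. The following sketch (the function name is ours, not Hartley’s) verifies that the base-2 logarithm satisfies both the additivity requirement f(n1.n2) = f(n1) + f(n2) and the monotonicity in n assumed above:

```python
import math

def hartley_information(n, base=2):
    """Hartley's amount of information from receiving one symbol
    drawn from a repertoire of n equiprobable symbols: I = log_base(n)."""
    return math.log(n, base)

# Additivity: one symbol from a repertoire of n1 plus one symbol from a
# repertoire of n2 carries as much information as one symbol from n1*n2.
n1, n2 = 8, 32
assert math.isclose(
    hartley_information(n1) + hartley_information(n2),
    hartley_information(n1 * n2),
)

# m symbols from the same repertoire carry m times the information of one.
m = 5
assert math.isclose(m * hartley_information(n1), hartley_information(n1 ** m))

# Monotonicity: the larger the repertoire, the more information per symbol.
assert hartley_information(64) > hartley_information(16)
```

Note that the multiplicative alternative mentioned above (It = I1.I2) would also respect monotonicity; only the additivity convention singles out the logarithm.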
If we put a = 1, the amount of information provided by receiving one and only one symbol from a repertoire of n equiprobable symbols, as defined by R.V. Hartley, is as follows: I = logb n. And the amount of information provided by m symbols of this type is equal to m.I, i.e. m.logb n. Finally, if b = 2, we obtain I = log2 n, which is a convenient specification of the amount of information for computer scientists, who use binary codes of 0s and 1s, because it provides a unit of measurement for information: I = 1 for n = 2. A bit of information (a binary digit, contracted as “bit”) is therefore the amount of information provided by the reception of a message containing one and only one symbol drawn from a repertoire containing exactly two of them, these being equiprobable.

1.1.2. Claude Shannon’s formula and the two fundamental theorems of telegraphic communication

It is on this basis that C. Shannon proposed, in 1948, a model generalizing the concept introduced 20 years earlier by R.V. Hartley, by extending it to any probability

distribution of the symbols of the repertoire under consideration5. First of all, it should be noted that the result obtained by R.V. Hartley in terms of the number of symbols in the repertoire concerned can easily be written in terms of the equiprobability p (i) of these symbols: since I = log2 n and p (i) = 1/n, we immediately have I = – log2 p (i) – where we verify that I = 0 for p (i) = 1. Let us call f (i) the “self-information” associated with the reception of the symbol i drawn from a repertoire of n signs whose probabilities are arbitrary: f (i) = – log2 p (i). Since the probabilities of the symbols in this repertoire are arbitrary, the “self-information” of symbol j, whose probability p (j) is generally different from p (i), will be different from that of symbol i: if p (j) ≠ p (i), then f (j) ≠ f (i). C. Shannon considered the experiment consisting of receiving a large number of messages, each containing one and only one symbol, and he showed that the amount of information obtained on average by the receiver of these messages was equal to the average of the “self-information” of the n symbols in the repertoire, weighted by the respective probabilities of the latter. Thus, if n is reduced to symbols i and j, the amount of information is equal to H = [– log2 p (i)].p (i) + [– log2 p (j)].p (j)6. More generally, with any n, C. Shannon’s formula is written:

H = − ∑ p (i).log2 p (i), the sum being taken over i = 1 to i = n

It is easy to verify that this expression constitutes a generalization of R.V. Hartley’s result by setting p (i) = 1/n, ∀i: there are then n probabilities p (i) of this kind. 5 C. Shannon validated and generalized the logarithmic definition proposed by Hartley, since the logarithmic function is the only one that fulfils a number of fundamental properties that he considered essential in order to account for messages transmitted over a discrete and noiseless channel from a source that is stationary (its properties do not change over time, such as AEAEAEAEAE, unlike AEAAEEAAAEEE) and ergodic (a special case of a stationary source, whose sequence averages are equal to the ensemble average, such as a coin or an unbiased die). On all this, see Triclot (op. cit., pp. 48–56). 6 With a two-symbol alphabet, 0 and 1, H = – p (0).log2 p (0) – p (1).log2 p (1). It is obvious that H = 0 for p = 0 or for p = 1 (because the event that occurs, whether it is 0 or 1, is then certain). The function H (p), with p (0) = p and p (1) = 1 – p, reaches a maximum of 1 bit for p = ½. A bit is therefore the maximum amount of information per binary symbol, reached when the probabilities of these symbols are equal. As shown in the library example developed by Emmanuel Dion (1997, pp. 58–63), a bit is also the amount of information that reduces uncertainty by half. Moreover, H. Atlan insists that “what is important to understand is that the H function applies, not to a particular message, but to a set of messages that all use the same number of different symbols with the same probability distribution for these symbols” (1972, p. 12).
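The behavior of H described in footnote 6 is easy to check numerically. The Python sketch below is our own illustration (the function name `entropy` is an assumption, not the author's notation):

```python
import math

def entropy(probs) -> float:
    """Shannon's formula: H = -(sum over i) p(i).log2 p(i), in bits.
    Terms with p(i) = 0 are skipped, since p.log2 p tends to 0 with p."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A certain event carries no information at all (H = 0 for p = 1):
assert entropy([1.0, 0.0]) == 0

# For a binary alphabet, H(p) peaks at exactly 1 bit for p = 1/2:
assert entropy([0.5, 0.5]) == 1.0
assert entropy([0.9, 0.1]) < 1.0   # a biased coin is less "surprising"

# The equiprobable case recovers Hartley's log2 n (here n = 8 symbols):
assert math.isclose(entropy([1/8] * 8), math.log2(8))
```

The last assertion is the verification carried out in the text: when all n probabilities equal 1/n, Shannon's H collapses to Hartley's log2 n.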

C. Shannon’s formula is then written H = − n.(1/n).log2 (1/n) = − log2 (1/n) = log2 n. Finally, as in the particular case analyzed by R. V. Hartley, the additivity of the information implies that the receipt of m symbols from a repertoire that contains n symbols whose probabilities are now arbitrary corresponds to a quantity of information equal to m.H. This conception of information measurement only interested C. Shannon from the perspective of transmitting messages along a channel between a source and a destination. This transmission is carried out through an encoding operation between the source and the channel, and a decoding operation between the channel and the destination, the code used obviously being common knowledge between the source and the message’s recipient. Written with the Latin alphabet, for example, messages are coded as 0 and 1 and transmitted in this form. The coding problem consists of transcribing in a faithful and unambiguous way a message written with a repertoire containing n symbols (such as the Latin alphabet with its 26 letters) using another repertoire containing n' symbols (such as the binary alphabet with its two symbols), with generally n ≠ n’. The general outline of a communication system is then as follows: the source transmits a message to a coder that transforms it into binary signals transmitted by a sender along the channel to a receiver and transformed by a decoder into a message delivered to a destination. The part between the output of the source and the input of the destination is called the communication channel, and it is on this part that the Shannonian theory of communication mainly focuses its attention (Triclot op. cit., p. 51). This is presented as shown in Figure 1.1.

[Figure 1.1 depicts the chain: information source → sender (emitted signal) → channel → receiver (received signal) → destination, with the message passing from source to sender and from receiver to destination, and a noise source acting on the channel between sender and receiver.]

Figure 1.1. Diagram of a communication system7

Regarding the transmission of information through this channel, C. Shannon demonstrated more than 20 theorems, the two most significant for us being the “noiseless coding theorem” and the “noisy-channel coding theorem”. The first considers a message transmission channel that is not disrupted by anything so that the 7 This diagram is shown on p. 35 of the French edition of W. Weaver and C. Shannon’s book (1949), published in 1975.

message perceived by the recipient is always strictly identical to the message from the source. The theorem then establishes that, whatever the number n of symbols contained in a repertoire to be coded and the probability distribution of these symbols, Shannon’s formula expresses the minimum average number of binary symbols to be used per symbol to be coded. This minimum is reached immediately whenever the probabilities of the n symbols to be coded are all integer powers of ½, and it is the limit towards which we tend when we replace the individual coding of the n symbols by that of groups bringing together an increasingly large number of them. Shannon’s formula can therefore be interpreted in two ways: it expresses the uncertainty removed by the occurrence of an event (or a group of events), as well as the minimal representation of this event (or group of events) in a binary language. The second theorem takes into account the possibility of a noise occurring on the channel – i.e. a phenomenon occurring during a communication that does not belong to the intentional message sent. The message perceived by receiver Y is then different from the one from source X, and it contains a different amount of information, H (y), than the one contained in the latter, noted H (x). The amount of information of the input when the output is determined, H (x/y), is called the equivocation, and symmetrically, the amount of information of the output when the input is determined, H (y/x), is called the ambiguity. 
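The equality case of the noiseless coding theorem can be checked in a few lines. The sketch below is our own illustration (the names `entropy`, `probs` and `lengths` are assumptions): when every probability is an integer power of ½, giving each symbol a codeword of length −log2 p (i) – as a Huffman code would – reaches the theorem's lower bound at once.

```python
import math

def entropy(probs):
    """H = -(sum) p(i).log2 p(i), in bits per source symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Probabilities that are integer powers of 1/2: the optimal code is
# immediate, symbol i receiving a codeword of length -log2 p(i).
probs = [1/2, 1/4, 1/8, 1/8]
lengths = [-math.log2(p) for p in probs]          # 1, 2, 3, 3 bits
avg_length = sum(p * l for p, l in zip(probs, lengths))

# The minimum average number of binary symbols per symbol to be coded
# equals Shannon's H exactly:
assert math.isclose(avg_length, entropy(probs))   # both equal 1.75 bits
```

For probabilities that are not powers of ½, this equality becomes an inequality (average length > H), and H is approached only by coding increasingly large groups of symbols, as the text indicates.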
Since the capacity C of a channel is defined as the maximum amount of information that can be transmitted by the latter, the theorem establishes that when messages are transmitted from a source whose amount of information is H in a channel of capacity C:
– if H ≤ C, there is a coding method such that messages can be transmitted with an error frequency (equivocation) as low as desired;
– if H > C, there is a coding method such that the equivocation may be less than H – C + ε, where ε is as small as desired, but there is no coding method such that this equivocation could be less than H – C8.
In this sense, the transmission of a coded series of symbols can never create information.

1.1.3. The eight main characteristics of the Shannonian theory of communication

As it has just been presented, following H. Atlan’s (1972) interpretation of its original version, this model of communication, information and learning has eight

8 The coding methods whose existence is thus confirmed are all based on the introduction into the input message of redundancies equal to the equivocation to be eliminated; the reduction in the amount of information per symbol of the message implied by this introduction is compensated in the coding by the grouping of symbols.

characteristics, which we state in order to better contrast them later with those of G. Bateson’s model: 1) This is clearly a linear or unidirectional model: the flow of information conveyed by the message goes from the source and sender to the receiver and destination, with no return. 2) It is an intentional communication whose modality is not specified, but which can be thought of, in view of the construction of Shannon’s formula and the two theorems cited here, as mainly verbal. 3) The main focus is on the pole of message transmission, the pole of message reception being only its inverted figure: C. Shannon’s central problem was to configure the input messages in such a way that, despite the possibility of noise on the channel, the output messages are as accurate as possible, it being understood that source and destination share the same cognitive universe (common knowledge of the codes used). 4) The laws governing information are additive in nature – it is on the basis of this hypothesis, introduced in the pioneering work of R. V. Hartley, that we have seen the famous Shannon formula constructed. 5) The model is totally indifferent to the meaning of the messages: a text of 100 symbols typed on a computer keyboard by a monkey contains exactly the same amount of information as a text of 100 symbols from an article by Albert Einstein. This is a guarantee of the generality of the model: whatever the meaning of the message, this quantity remains identical to itself; but at the same time, it is what makes the model incapable of accounting for a social communication that is never indifferent to the meaning of the messages exchanged9. 6) The model does not allow us to think about the creation of information: it merely analyzes the transport from one place to another of existing information – hence the question of the origin of information raised earlier by R. Ruyer (1954). 
7) The learning achieved by the recipient is reduced to a stack of symbols that they passively receive, without prior filtering or subsequent restructuring of their beliefs. 8) As indicated by the title of the book published by W. Weaver and C. Shannon, this is a quantitative theory of communication10. 9 Shannon’s problem is sometimes compared to the postman’s problem of delivering the “right” messages to the “right” mailboxes, without having to worry about their content. However, it is precisely these contents that are generally exchanged during social communication, and not addresses. 10 It is remarkable that the fascination of the humanities and social sciences for the mathematical formalization of statements has succeeded in masking, behind the appearance of

1.2. Gregory Bateson’s model

Anthropologist G. Bateson (1904–1980) was the inspiration and central character of what has been called “new communication” (Winkin 1981, 1988, 1996)11, whose many aspects he explored over a period of about 40 years12.

1.2.1. The immanent mind and the Batesonian definition of information

The conception of communication developed by G. Bateson focuses on the notion of mind that we can introduce here through a question that the precursors of cognitive sciences have all asked themselves (Pélissier and Tête 1995): can we say that a computer thinks? Bateson’s answer to this question is negative, not to deny this ability to the computer but rather to deny that humanity has a monopoly on it: according to him, it is the human–computer–environment system that is the subject of the act of thinking as it is engaged in a process of trial and error, and it is therefore this

scientificity linked to its eighth and final characteristic, the fundamental inability of C. Shannon’s “telegraphic communication” model to account for social communication, revealed by its first seven characteristics. In fact, a plethora of books applying this model to problems in psychology, linguistics, economics, organizational theory, etc., quickly emerged, triggering an extremely lively scientific debate. As early as his 1948 article, however, C. Shannon himself warned against such an operation, insisting that the technical problem should not be confused with the semantic problem, and he returned to this subject in a 1956 article, writing that his theory of information was being oversold (Dion op. cit., pp. 26, 39–40). Yet, it is this model that underlies the current conception of popular science: it is symptomatic that Abraham Moles, who wrote a long preface to the French edition of W. Weaver and C. Shannon’s book (1975, pp. 11–27), both contributed to this application of Shannonian theory to various problems of psychology and sociology (Moles 1972, 1986) and was the one who described the popularizer as a “third party” located between the learned and the ignorant (Moles and Oulif 1967). For a radical critique of this conception, see Ancori (2016). 11 From 1952 to 1959, G. Bateson was a member of the famous Palo Alto school he had helped found (Wittezaele and Garcia 1970). This school developed a systemic approach to psychotherapy (family therapy and brief psychotherapy), popularized by Paul Watzlawick, G. Bateson’s best-known student, in a series of books he wrote alone (Watzlawick 1978, 1980, 1991), or with others (Watzlawick et al. 1972; Watzlawick et al. 1975; Watzlawick and Weakland 1981), and even edited (Watzlawick 1988). 12 We will use here the works published in 1977, 1980 and 1996 which collected 57 of his articles written between the 1930s and 1970s, as well as one from 1984, which is the last work entirely written by him and constitutes his intellectual will. 
We can reasonably consider that this corpus of texts is representative of his thinking. We will thus leave aside other texts because they were written before his encounter with cybernetics, so important in the rest of his work (G. Bateson 1971), or in collaboration (G. Bateson and Ruesch 1988), or by his daughter after his death using notes (G. Bateson 1989).

system that is the place of the mind (Bateson 1980, p. 240)13. Let us immediately point out that the latter is not transcendental: it is a mental process immanent to certain physical structures of appropriate complexity and which circulates within systems operating as units. To clarify this point, G. Bateson proposes the example of a man who fells a tree with an axe: “Each stroke of the axe is modified or corrected, according to the shape of the cut face of the tree left by the previous stroke. This self-corrective (i.e. mental) process is brought about by a total system: tree-eye-brain-muscles-axe-stroke-tree; and it is this total system that has the characteristics of the immanent mind” (Bateson 1977, p. 233; 1996, pp. 232–233). What are these characteristics? A list is quickly compiled in G. Bateson (1980, p. 240) and then taken up in a very developed form in G. Bateson (1984, pp. 97–136). There are six of them, four of which can already be mentioned: – a mind is a set of interacting parts or components; – the interaction between the parts of a mind is triggered by difference, a non-material phenomenon. To use the example above, the system of interactions would be: (differences in the tree)-(differences in the eye)-(differences in the brain)-(differences in the muscles)-(differences in the movements of the axe)-(differences in the tree); obviously, the differences mentioned here are those that separate an observed value from an expected value14, and their perception leads to feedback at each point of the system, thus configured in a circuit, intended here to reduce or even cancel them; – the mental process occurs in the world of form and requires collateral energy, unlike the world of substance; G. 
Bateson very clearly opposes these two worlds, governed according to him by very different types of laws, and it is exclusively the first that interests him: cybernetic thinking does not concern events or objects, but the information conveyed by these events and objects. Like Kant, G. Bateson posits that we have no direct access to the object, but only to a form (1996, p. 253), and what he seeks consists of laws of evolution of forms, entirely different from those 13 G. Bateson thus appears to be a precursor of the actor–network theory now developed in the sociology of science and technology, in that it puts human and non-human actors (instruments, graphs, laboratory notebooks, etc.) on the same level in terms of scientific production (Akrich et al. 2006). 14 The notion of underlying information is therefore very similar to that which was mathematically formalized by R. V. Hartley and then C. Shannon: the common point here is the surprise effect of receiving information. However, whereas in C. Shannon’s case the pragmatics of this reception concern a symbol; in G. Bateson’s case, it calls for a difference.

governing the substance with which it would, he says, be a major epistemological error to confuse them (ibid., pp. 233 sq.); – the mental process requires circular (or more complex) determination chains. These first four characteristics lead to G. Bateson’s definition of information: “a unit of information [...] can be defined as a difference that produces another difference. Such a difference that moves and transforms successively in a circuit is the elementary idea” (1977, p. 231; 1996, p. 228 sq., author’s translation). This definition can be found throughout his work – notably in the glossary that closes his last work (Bateson 1984, p. 234) and even in the draft of what he would have liked to call his “last conference”, written in September 1979 and included in G. Bateson (1996, pp. 404–412). It is closely linked to the form/substance distinction mentioned above. Indeed, in the world of substance, the cause of an event is a force, or an impact, that a certain part of the material system exerts on another – for example, a billiard ball hitting another one is the cause of the event consisting of the displacement of the latter. In the world of ideas, in the sense given above to this term, it takes a relationship between two parts (or between a part at a time t and the same part at a time t+1) to activate a third part, called the receiver. The latter reacts to a difference or change. Thus, the expected notch in the movement of the axe and the notch produced by the same movement are the two components of the difference, external to the eye, that activates it: the difference in the tree is encoded (or transformed, or converted) into a difference in the eye and then into a difference in the brain, etc. Hence, a fifth characteristic of the immanent mind: in mental processes, the effects of difference must be considered as transformations (coded versions) of the difference that preceded them. 
In this sense, each difference is interpreted as sending information to the next one, so that each point in the circuit is both an information sender (for the next point) and an information receiver (from the previous point).

1.2.2. The Batesonian categorization of learning

In order to be able to state the sixth and last characteristic of the immanent mind, we must quickly expose the Batesonian theory of the categories of learning. This theory is based on an idea that G. Bateson claims to borrow from Samuel Butler, according to which the better an organism knows something, the less conscious it is of this knowledge: “there is a process by which knowledge (or ‘habit’, whether of action, perception or thought) sinks to deeper and deeper levels of the mind” (1977, p. 146). Any organism would thus push back into the depths of the unconscious certain knowledge in order to keep at a conscious surface level only that which would allow it a certain flexibility of adaptation. More specifically, “the economics of the system, in fact, pushes organisms towards sinking into the unconscious these generalities of relationship which remain

permanently true and towards keeping within the conscious the pragmatics of particular instances” (ibid., p. 146; 1996, pp. 153–154, 161–162, 237–238). Based on this idea, G. Bateson argues that any continuous learning process involves at least two levels: primary learning, where the subject receives simple information in a given context and responds conditionally to it, and deutero-learning, where the subject shows an increasing ability to treat given contexts as if they were to be expected in their universe. In short, primary learning is about learning and deutero-learning is about learning to learn (1996, p. 274): in a learning process whose different phases are repeated, each time we learn, we learn to learn, so that our performance improves with each iteration. These two levels of learning correspond to two types of changes in any system. The first occurs within a system, thus affecting the variables of the system, as with a change in the external temperature of a house with a thermostatically controlled heating system: “the temperature of the house will oscillate, it will get hotter and cooler according to various circumstances, but the setting of the mechanism will not be changed by those changes” (ibid. p. 178). The second occurs at the level of the system as a whole, thus affecting the parameters of the system themselves, as with a change in the critical temperature set on the thermostat in the previous example. These two types of change result, respectively, in a displacement along a given curve – the parameters of this curve are fixed; only the values of the variables change – and in a displacement of the curve itself – it is here that its parameters change. In short, if the first type of change corresponds to changes in values that are members of a given class, the second type corresponds to changes in these value classes themselves. According to G. 
Bateson, the conception of learning must therefore obey the formalism of the theory of logical types developed in the Principia Mathematica published in Cambridge by Bertrand Russell and Alfred North Whitehead in the years 1910–1913, in the sense that this theory carefully distinguishes the logical level of the members of a class from that of the class itself, on the basis of two assumptions15: a class cannot be a member of itself – “the elephant class has no trunk and is not itself an elephant” (G. Bateson 1977, p. 178, author’s translation); a class is not a member of the class of elements that are its non-members – the chair class is not an element of

15 B. Russell recounts his trying experience of an entire summer spent sitting down every morning in front of a blank sheet of paper, trying to solve the Cretan liar paradox, and getting up every evening in front of this still blank page, until he had the idea of formulating these two postulates (Russell 1961, pp. 92–126). G. Bateson (1996, p. 252) notes that this paradox – the Cretan says that all Cretans are liars: if he is a liar, then he tells the truth; if he tells the truth, then he lies – is only a paradox if the “if-then” relationship is a logical operator; on the other hand, the paradox disappears if this relationship is causal and temporal: “By introducing time into the if-then relationship, all classical logic becomes obsolete” (op. cit., p. 253, author’s translation).

the non-chair class (ibid., p. 254). The combination of these two assumptions implies that the concept of class is of a higher logical level than the concept of membership, and on this basis Bateson distinguishes at least five logical levels of learning, each of which corresponds to a change in learning at the next lower level (ibid., pp. 253–282). Level 0 learning is that in which an “entity shows minimal change in its response to a repeated item of sensory input” (ibid., p. 257). The response here is stereotyped: the link between stimulus and response is invariable, owing to the absence of trial-and-error processes. Learning then consists of: “the simple receipt of information from an external event, in such a way that a similar event at a later (and appropriate) time will transmit the same information: through the factory siren, I learn that it is noon” (ibid., pp. 257–258, author’s translation). Level 1 learning refers to a change in level 0 learning, i.e. cases where “an entity gives a different response at time 2 than it did at time 1” (ibid., p. 260, author’s translation). This is the learning at work in Pavlovian conditioning where, before being conditioned, the dog did not salivate when hearing the sound of a bell (time 1), while after being conditioned, it salivated in response to its perception of this sound (time 2)16. More generally, this level of learning implies a “change in specificity of response by correction of errors…” (ibid., author’s translation), and the implementation of this type of correction constitutes the mark of a trial-and-error process. Level 2 learning corresponds to a change in level 1 learning. It is the deutero-learning mentioned above, which consists of the progressive installation of a cognitive routine, a process that can go as far as the perfect anchoring of the latter, which then completely ceases to be conscious (Bateson 1996, p. 239). 
Let us add for our part that, when anchored at this point, a given cognitive routine paradoxically has a double face: on the positive side, it maximizes the saving of cognitive resources that it is intended to achieve, and frees the maximum amount of resources thus made available for learning other tasks; on the negative side, the more anchored a routine is, the more difficult it is to extract when the need arises – for example, when the

16 Let us insist with G. Bateson (1977, p. 262) on the absolute necessity of considering the contexts of times 1 and 2 as strictly identical, failing which no classification into logical types could be given. This simply means that change can only be assessed against a background of permanence: without such a background, there would be no change as a process, but only a set of situations entirely different from each other, each welcoming only level 0 learning.

environment changes radically and abruptly. As a result, maximizing short-term advantage can result in maximizing long-term disadvantage (Bateson 1996, p. 154, p. 290 sq). A perfectly anchored level 2 learning is similar here to level 0 learning, in that it implies, like the latter, the absence of trial-and-error processes – non-existent from the outset in level 0 learning, eliminated at the end of level 2 learning. Ultimately, these two types of learning thus constitute a negation of the very concept of learning. Level 3 learning corresponds precisely to a change in level 2 learning, i.e. a “corrective change in the set of alternatives from which choice is made” (1996, p. 266)17. For a given entity, it is a question of learning to learn how to learn, i.e. of giving itself the capacity to change its vision of the world each time the world itself changes18. Finally, level 4 learning would theoretically correspond to a change in level 3 learning, i.e. a change of change of change of change which, following G. Bateson, we may readily admit “probably does not occur in any adult living organism on this earth” (ibid.). This categorization of learning finally leads to the sixth and final characteristic of the immanent mind: the description and classification of the processes of transformation of differences specific to mental processes reveal a hierarchy of logical types, immanent to the phenomena that we call thought, evolution, ecology, life and learning.

1.2.3. The eight main characteristics of Batesonian communication theory

Unlike the Shannonian model, which focuses on the technical aspect of communication by concentrating its attention on the sole communication channel located between the output of the source and the input of the destination, the Batesonian model emphasizes the social aspect of communication by affirming that any communication system is necessarily hierarchical, so that the encompassing level is 17 If level 2 learning is what scientists achieve in the long periods of “normal science” as defined by T. S. Kuhn (1972), level 3 learning is put into practice by them in the short episodes of “revolutionary science”. See Ancori (2008b) and Chapter 10 of this book. 18 As G. Bateson states, “the unit of survival is organism in environment, not organism against environment” (Bateson 1996, p. 241, author’s use of italics). The survival of the organism may then depend on the speed of the change thus achieved, which must be at least equal to that of the change occurring in its environment. It then demonstrates the “dynamic flexibility” put forward by economists theorizing the world of radical uncertainty and the reactivity of the firm, which is that of today’s economies (Cohendet and Llerena 1992).

that of the enunciation of the communication actors and the encompassed level that of their statements. Let us clarify this point by borrowing an example from G. Bateson himself (1977, p. 144; 1996, p. 192). Suppose I say “it is raining” to my interlocutor and he looks out the window to check the conformity of my assertion with the reality of its extralinguistic referent. In doing so, the receiver of my message obtains information about the nature of our relationship – I tell the truth or I lie, I joke or I do not, etc. This situation can be represented as follows: [(“it’s raining”/drops of rain)/relationship between him and me] In this representation, the redundancy (or the absence of redundancy) materialized by the bar located inside the universe contained in the round brackets is a message within the larger universe contained in the square brackets. Thus, if he gives me his trust a priori and he actually sees raindrops when he looks out the window, thereby verifying the truth of my assertion, the relationship between him and me is not modified in any way: he gives me his trust a posteriori, as well as a priori. In this case, in fact, the bar located in the universe contained in the round brackets expresses the redundancy existing between my message and its extralinguistic referent, and this redundancy confirms to my interlocutor the idea of our relationship that he had formed before receiving this message, so that the bar located in the universe contained in the square brackets also expresses a redundancy. It is exactly the same if he was suspicious of me a priori and now sees dry weather through the window when I have just told him “it’s raining”. On the other hand, in the other two possible cases – he trusted me and realizes that I am lying, or he was suspicious of me and discovers that I am telling the truth – the absence of redundancy in the message in the universe contained in the round brackets implies the absence of redundancy in the message in the larger universe contained in the square brackets, against which this message therefore constitutes information: a difference (between my message and its extralinguistic referent) is the cause of another difference (in the relationship between him and me, considered successively at the moment t preceding my message and then at the moment t+1 following the emission of the latter). In the context of social communication, any message exchanged between the protagonists about a third-party object is also, and first of all, a message about the relationship established between these protagonists themselves. Let us now contrast the eight main characteristics of the theory of “orchestral” communication developed by G. Bateson with those we recognized above in the “telegraphic” communication of C. Shannon: 1) This is at least a circular model: the flow of information conveyed by the message goes from the source constituted by each given point of the communication system to the destination constituted by the next point, with a possibility of return to the initial point – which can be any point of the circuit thus formed – in the form of
On the other hand, in the other two possible cases – he trusted me and realizes that I am lying, he was suspicious of me and discovers that I am telling the truth – the absence of redundancy in the message in the universe contained in the round brackets implies the absence of redundancy in the message in the larger universe in brackets, against which this message therefore constitutes information: a difference (between my message and its extralinguistic referent) is the cause of another difference (between the relationship between him and me, considered successively at the moment t preceding my message and then at the moment t+1 following the emission of the latter). In the context of social communication, any message exchanged between the protagonists about a third-party object is also, and first of all, a message about the relationship established between these protagonists themselves. Let us now contrast the eight main characteristics of the theory of “orchestral” communication developed by G. Bateson with those we recognized above in “telegraph” communication by C. Shannon: 1) This is at least a circular model: the flow of information conveyed by the message goes from the source constituted by each given point of the communication system to the destination constituted by the next point with a possibility of return to the initial point – which can be any point of the circuit thus formed – in the form of
positive feedback (which G. Bateson describes as symmetrical, such as that involved in the arms race during the Cold War) or negative feedback (which he describes as complementary, such as that which informs the physiological process of perspiration leading to homeostasis).

2) It is a communication that may be intentional or unintentional, and that may take various modalities (verbal, iconic, kinesthetic, etc.). In connection with the above, G. Bateson emphasizes the joint development in humans, during evolution, of the verbal and iconic modalities of communication, the former being better adapted to meanings related to the universe “message-plus-environment” and the latter to those related to the universe “organism-plus-one-other-organism” (Bateson 1980, p. 180). Thus, in the example used above, the message referring to the universe in round brackets is expressed more effectively in verbal form, while an iconic expression would be more appropriate for the one referring to the universe in square brackets – for example, my erubescence could better reflect my awareness of seeing a possible lie uncovered than the statement of an admission.

3) G. Bateson places the main emphasis on the reception pole of messages, not on their transmission: on several occasions, he explicitly agrees with Bishop Berkeley’s philosophy of knowledge – what happens in the forest is meaningless if no one is there to be affected by it (1980, p. 73; 1984, p. 106; 1996, p. 262) – and argues that a difference is only effective if it is perceived (1980, p. 42, p. 236). In other words, as long as a transmitted message is not received, this transmission has no impact and everything happens as if it had never existed.

4) The laws governing the world of form – that of information, communication and learning – are of a multiplicative or combinatorial nature. G. Bateson refers here to the example of what neurophysiologists call “synaptic addition”: two neurons A and B each having a synaptic link with a third neuron C, the firing of A or B separately is not sufficient to fire C, unlike the combined impulses of A and B when they fire simultaneously (1980, p. 214; 1996, p. 230). In the world of substance, where additive laws operate, it will be said that the “addition” of A to B (or vice versa) makes it possible to exceed the firing threshold C* of C. In the world of form, governed by combinatorial laws, it will be said that: “what happens is that the system operates to create differences. There are two differentiated classes of firings by A: those firings which are accompanied by B and those which are unaccompanied. Similarly there are two classes of firings by B. From this point of view, the so-called ‘addition’, when both neurons are excited, is not an additive process. It is rather the formation of a logical product, a process of fractionation rather than an addition” (ibid., author’s translation).
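The same event can thus be described twice over. The minimal sketch below (our own toy encoding, with arbitrary numbers; nothing here comes from the source) contrasts the additive description of “synaptic addition” with the combinatorial one, in which what counts is the logical product “A fired AND B fired”:

```python
# Additive ("world of substance") description: impulses sum past a threshold C*.
def fires(a: int, b: int, threshold: int = 2) -> bool:
    """C fires when the summed impulses of A and B reach the threshold."""
    return a + b >= threshold

# Combinatorial ("world of form") description: what matters is the class
# of the event -- the logical product "A fired AND B fired" -- not a sum.
def fires_form(a_fired: bool, b_fired: bool) -> bool:
    return a_fired and b_fired

# Both descriptions pick out the same events; they differ in how the
# event is read, which is precisely Bateson's point.
for a_fired in (False, True):
    for b_fired in (False, True):
        assert fires(int(a_fired), int(b_fired)) == fires_form(a_fired, b_fired)
```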


The description of the world of form will always be hierarchical, according to an order of complexity absent from the world of substance: “each effective difference denotes a demarcation, a line of classification, and all classification is hierarchic” (ibid.).

5) The Batesonian model of communication deals mainly with the meaning of information. Indeed, in a hierarchical system, this notion can be defined in a very general way “as the effect of the receipt of this information by its recipient. This effect may appear either in the form of a change of state or as an output of this recipient itself considered as a subsystem” (Atlan 1979, p. 86). The very strong emphasis placed by G. Bateson on the receiver of information – through his conception of the event, which only makes sense if it is perceived, in perfect coherence with his definition of information as a difference that produces another difference – shows that the effect of the receipt of information by its recipient is, in his view, absolutely essential. Moreover, in his model, this effect simultaneously takes the two forms mentioned here by H. Atlan: each point of the system changes state following its receipt of information, because of the difference thus perceived by it; and, as a subsystem, each of these points transmits to another point of the global system an “output” in the form of a difference consisting of a coded (or transformed, or converted) version of the one corresponding to the information received.

6) The model developed by G. Bateson implicitly allows us to think of a form of information creation: each transformation of a perceived difference into an emitted difference throughout the communication system can include a part of creation19.

7) As we have seen, the recipient’s learning leads here to a categorization of learning along the lines of the logical type theory of B. Russell and A.N. Whitehead.
From a purely theoretical point of view, the number of learning levels involved is infinite, since any class can be a member of a class at a higher level (called “superordinate”), and each element of a given class can itself constitute a class of elements located at a lower logical level (called “subordinate”): there is no class of all classes that could limit their extension. From an empirical point of view, however, it seems reasonable to stop at level 3. For a given entity, learning at this level consists of restructuring beliefs as a result of receiving a message that is inconsistent with previous beliefs. In turn, the restructured beliefs constitute a kind of filter through which the information received thereafter is welcomed and evaluated. As long as the latter does not

19 As we will see in our next chapter, one of H. Atlan’s major contributions (1972, 1979, 1991, 2011) is precisely to link meaning and creation of information.


contradict the filter formed by the current beliefs of the entity concerned, the latter carries out level 2 learning20.

8) The Batesonian theory of information, communication and learning remains qualitative. Despite the inspiration that logical type theory provided for its categorization of learning, and despite the influence of cybernetics on his thinking, G. Bateson never developed a mathematical theory of communication; although he knew the work that C. Shannon presented at the seventh Macy conference (March 1950)21, he never mentioned it once.

In summary, these two models are opposed point by point on eight characteristics (Ancori 2016), as shown in Table 1.1.

20 While level 2 learning can be formalized with the tools of classical logic, the restructuring of beliefs following the reception of information that contradicts current beliefs must use possible worlds semantics (Ancori 2008b, p. 37 sq.). We will return to this point at length in Chapter 10. It should be noted for the time being that the Batesonian categorization of learning, initially formulated in an article published in 1964 and included in the last chapter of G. Bateson (1977, pp. 253–282), anticipated much of the notion of contextual effects contained in the theory of relevance presented by Dan Sperber and Deirdre Wilson (Sperber and Wilson 1989): what are the contextual effects (implication, reinforcement, contradiction) represented in the classificatory conception of relevance, if not different levels of Batesonian learning – respectively, levels 1, 2 and 3? See, in particular, “Le message du renforcement”, written in 1966 and included in G. Bateson (1996, p. 192 sq.). On the other hand, however, D. Sperber and D. Wilson follow in the footsteps of the philosopher Paul Grice, according to whom a principle of economy is at work in language, which thus aims to say only what is relevant. The inferential theory of communication then defines communication as regulated by the principle of inference: a sign signifies when, combined with the context, an interlocutor can infer its meaning. And it is then up to the producer of this sign to ensure that this is the case – to produce messages with contextual effects. In this sense, relevance theory focuses on the sender of messages, unlike the Batesonian approach, which focuses on the receiver of messages. In addition, D. Wilson and D. Sperber (2004) use the principle of relevance to show that any statement exploits a similar relationship between itself and a thought.
21 In May 1942, Bateson participated in Warren McCulloch’s conference on brain inhibition in New York, which was the founding event of the 10 famous Macy conferences held from March 1946 to April 1953. In September 1946, he himself organized a special sub-conference on teleological social mechanisms, designed to enable social science researchers (sociologists Talcott Parsons and Robert K. Merton, as well as anthropologist Clyde Kluckhohn) to interact with representatives of the “hard” sciences – Norbert Wiener and John von Neumann. On this occasion, it was recommended to the main group of Macy conferences to clarify the notion of Gestalt, and this notion was indeed the central purpose of the second Macy conference which followed in October 1946, and then gave rise to an intervention by Wolfgang Köhler on this theme at the fourth conference in October 1947 – an intervention considered to be lacking empirical foundations and severely criticized as such by Walter Pitts and Warren McCulloch (Segal 2003, pp. 176 sq.).


| | C. Shannon’s model | G. Bateson’s model |
|---|---|---|
| Flow of information | Linear | At least circular |
| Communication | Intentional, verbal | Intentional or not, verbal or iconic |
| Main focus | Transmission of messages | Receiving messages |
| Information laws | Additive | Combinatorial |
| Meaning of the information | Indifferent | Central |
| Creation of information | None | Implicit |
| Learning | Level 0 learning by simply stacking symbols: no prior filtering of messages, no restructuring of beliefs after their reception | Hierarchical categorization: receipt of messages (level 1 learning) filtered by a cognitive context that is thus reinforced (level 2 learning), as well as likely to be restructured (level 3 learning) |
| Type of theory | Quantitative | Qualitative |

Table 1.1. Eight characteristics opposing the models of C. Shannon and G. Bateson

As this table shows, the title of W. Weaver and C. Shannon’s work is misleading, because that book focuses on a theory of measuring the information transmitted and received in a “telegraphic” transmission rather than on a theory of communication itself. As a counterpoint, Bateson’s work analyzes the multiple aspects of such communication, including the meaning of the information received during communication and the different levels of learning induced by this reception. We therefore have, on the one hand, a quantitative model of communication whose semantics are extremely poor and, on the other hand, a theory whose semantics are very rich but which remains purely qualitative. One of the merits of the theory of complex and self-organizing natural systems developed by H. Atlan is that it allows us to think about the related notions of creation and meaning of information, as well as that of levels of learning, in a formal framework that fundamentally remains Shannonian, i.e. quantifiable.

2 Self-organization and Natural Complexity

This chapter aims to explain the notions of information, communication and learning in the context of complex adaptive systems theory, in the version proposed by M. Gell-Mann that was presented in the introduction to this book. Within the class of such systems, we will focus our attention on complex and self-organized natural systems as analyzed by H. Atlan (1972, 1979, 1991, 2011, 2018). The model gradually developed and enriched by the latter constitutes a starting point for a synthesis of the two approaches to communication presented in the previous chapter.

We will first detail the epistemological context and the formal definition of self-organization according to H. Atlan, because this definition results from a re-interpretation of the Shannonian noisy channel theorem, applied to a system organized into several hierarchical levels. Such a re-interpretation allows us to think of a phenomenon of information creation that, as we have seen, is totally ignored by the Shannonian theory but implicitly introduced by the Batesonian approach to communication. We will then gradually reconstruct Atlan’s notion of the meaning of information in a hierarchical system, by contrasting the principles of order from noise and organizational noise, and by accordingly distinguishing the concepts of complexity and complication. This progressive construction will finally lead to a notion of the meaning of information very similar to that of G. Bateson.

2.1. Self-organization and information creation

According to H. Atlan, biological organizations, being dynamic and not totally controlled by humans (because not built by them), constitute a kind of compromise between two extremes: “a perfectly symmetrical repetitive order, of which crystals are the most classical physical models, and an infinitely complex and unpredictable

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.


variety in its details, such as the evanescent forms of smoke” (Atlan 1979, p. 51).

It is then a question of understanding this type of organization, and more broadly unbuilt systems (natural or social), while avoiding two pitfalls: first, that of vitalism, which consists of denying that the phenomenon of life is explicable in physicochemical and mechanistic terms – a purely negative and sterile attitude, according to H. Atlan; second, the teleonomic notion of the program, introduced by Jacques Monod (1970) in order to avoid the finalism implicitly present in the discourse of biologists until the early 1960s, but which only displaces the problem without solving it (Atlan 1979, pp. 14–18). In fact, the questions raised by molecular biology, like the cybernetic metaphors applied to biology, reflect a new language and call for answers involving: “not a reduction from the living to the physico-chemical, but an extension of the latter to a biophysics of organized systems applicable to both artificial and natural machines” (ibid., pp. 23–24).

This is the direction in which all the work on the logic of self-organization applied to living beings2 is heading. In such a framework, the fundamental premise on which this logic is based is that the most extraordinary performances of living organisms are the result of particular cybernetic principles that must be discovered and specified:

“As particular principles, they must reflect the specific character of the living organisms that exhibit these performances. But as cybernetic principles, they are postulated in conjunction with the other domains of cybernetics, the best-known of which are those that apply to artificial automata. The consequences of this premise are twofold: (a) the specificity of living organisms is related to organizational principles rather than to irreducible vital properties; (b) once discovered, nothing should prevent their application to artificial automata, whose performance would then become equal to that of living organisms” (ibid., p. 24, author’s use of italics).

In this perspective, H. Atlan brings a rigorous formalism to the notion of the self-organization of a system disturbed by a random “noise” coming from its environment and triggering:

1 The citations taken from Atlan are translations from French.
2 In this section, we mainly use H. Atlan (1979), which largely develops ideas initiated in H. Atlan (1972, pp. 242–284) and then repeated and extended in H. Atlan (2011, pp. 69–97). For a genealogy of the concept of self-organization, from Alan Turing’s (1952) work on flow-coupled morphogenesis to the multiple figures under which this concept is known today, see H. Atlan (2011, pp. 7–32).


“a process of increasing both structural and functional complexity resulting from a succession of recaptured disorganizations, followed each time by recovery to a higher level of variety and lower redundancy” (Atlan 1979, p. 49).

Each term in this general definition of self-organization takes on a particular meaning by borrowing the notions of complexity, structure, function, organization, variety and redundancy from systems theory, cybernetics and information theory, and reinterpreting them within the self-organizational paradigm that H. Atlan wants to promote. It is therefore important to understand their precise meanings in such a context.

To this end, let us start from an expression formally expressing the self-organizing property presented by certain systems. Let H be the amount of information contained in a message, R the degree of redundancy of that message, and Hmax the maximum amount of information that the message would contain if its redundancy were zero. By definition:

R = (Hmax – H)/Hmax, so that R = 0 if H = Hmax

We can therefore write H = Hmax · (1 – R) and, by hypothesis, this quantity H varies over time due to random factors:

dH/dt = d[Hmax · (1 – R)]/dt

After a series of elementary transformations, this becomes:

dH/dt = – Hmax · dR/dt + (1 – R) · dHmax/dt

The interpretation of this formula requires a return to C. Shannon, and more particularly to the “noisy channel theorem” mentioned in the previous chapter, according to which: “the amount of information of a message transmitted in a noise-disrupted communication channel can only decrease by an amount equal to the ambiguity introduced by this noise between the input and output of the channel. Error-correcting codes introducing a certain redundancy in the message may reduce this ambiguity so that, at the limit, the amount of information received will be equal to the quantity transmitted; but in no case may it be greater” (ibid., pp. 44–45).
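The “series of elementary transformations” is simply the product rule for differentiation applied to H = Hmax · (1 – R). Under arbitrary smooth choices of Hmax(t) and R(t) (our choices, purely illustrative), the decomposition can be checked numerically with a short stdlib-only script:

```python
import math

# Illustrative choices (ours, not Atlan's): smooth decaying Hmax(t) and R(t).
def Hmax(t): return 100.0 * math.exp(-0.1 * t)
def R(t):    return 0.5 * math.exp(-t)
def H(t):    return Hmax(t) * (1.0 - R(t))   # H = Hmax · (1 - R)

def deriv(f, t, h=1e-6):
    """Central finite-difference approximation of df/dt."""
    return (f(t + h) - f(t - h)) / (2.0 * h)

t = 1.3  # any time point works
lhs = deriv(H, t)                                              # dH/dt
rhs = -Hmax(t) * deriv(R, t) + (1.0 - R(t)) * deriv(Hmax, t)   # decomposition
assert abs(lhs - rhs) < 1e-6   # the two sides agree to numerical precision
```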
Applied to a message composed of a sequence of symbols and transmitted in a communication channel disturbed by noise, this theorem therefore excludes any possibility of an organizational role for that noise. But what if we extend C. Shannon’s formalism to the theory of organized systems, by measuring the degree of organization of such a system by the corresponding amount of information? A priori, such an extension seems all the more legitimate since the noisy channel theorem applies quite generally, in the sense that it both:


– refers to any sequence of symbols composing a message, as well as to its parts;
– and its validity does not depend in any way on the physical nature of the transmission channel in question.

Although it was famously established in order to solve certain telecommunication problems, this theorem therefore seems to fit quite naturally into a theory of the organization of general systems, applying in particular to the case in which the transmission channel is established between the system considered and its observer. In such a context, there is nothing to prevent us from assimilating any sequence of elements of this system to a message sent by a source to a receiver–observer, nor from characterizing this sequence mathematically using C. Shannon’s formula.

It is therefore possible to show that, unlike what happens at the level of an elementary transmission channel – where, in the absence of error-correcting codes, noise has the inevitable effect of reducing the amount of information transmitted – at the level of the channel established between the system and the observer considering the latter as a whole, the ambiguity introduced by noise factors can generate information. To show this, let us start by linking more precisely the complexity (or degree of organization) of a system to the total amount of information contained in a message. We know that this quantity measures, over a large number of messages written in a given language with a given alphabet, the average amount of information per symbol of that alphabet, multiplied by the number of symbols in the message. The average amount of information per symbol is precisely the quantity measured by C. Shannon’s formula. Let us extend this formalism to a system composed of parts.
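Shannon’s own measures make this extension concrete. In the sketch below (our toy distributions; neither the numbers nor the function names come from the source), the conditional entropy H(B|A) plays the role of the “ambiguity”: at the channel level it subtracts from the transmitted information I(A;B) = H(B) – H(B|A), while at the observer level it adds to the total H(A) + H(B|A) = H(A,B) delivered by the pair of subsystems:

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def joint_measures(joint):
    """joint[a][b] = P(A=a, B=b). Returns H(A), H(B|A) (ambiguity), H(A,B)."""
    h_a = entropy([sum(row) for row in joint])
    h_ab = entropy([p for row in joint for p in row])
    return h_a, h_ab - h_a, h_ab        # chain rule: H(B|A) = H(A,B) - H(A)

# Total constraint: B is a copy of A -> zero ambiguity, observer gains only H(A).
copy = [[0.5, 0.0], [0.0, 0.5]]
# No constraint: A and B independent -> maximal ambiguity, nothing crosses the channel.
indep = [[0.25, 0.25], [0.25, 0.25]]

h_a, amb, total = joint_measures(copy)
assert abs(amb) < 1e-12 and abs(total - h_a) < 1e-12   # observer total = H(A) = 1 bit

h_a, amb, total = joint_measures(indep)
assert abs(amb - 1.0) < 1e-12 and abs(total - 2.0) < 1e-12
# Channel view of the same case: I(A;B) = H(B) - H(B|A) = 1 - 1 = 0 bits.
```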
The amount of information in such a system is then defined: “from the probabilities that can be assigned to each type of its components on a set of systems assumed to be statistically homogeneous to each other, or from the set of combinations that can be achieved with its components, which constitutes the set of possible states of the system. In all cases, the amount of information in a system measures the degree of improbability that the assembly of the different components is the result of chance. The more a system is composed of a large number of different elements, the more information it contains, because the more improbable it is to constitute it as it is by randomly assembling its constituents. This is why this magnitude could be proposed as a measure of the complexity of a system, in that it is a measure of the degree of variety of the elements that constitute it” (Atlan 1979, p. 45).

Although such a measure is unsatisfactory, because of the static and solely structural nature of the complexity it envisages by confining itself to the assembly of
the elements of the system in question, while the functional and dynamic complexity due to the interactions between these elements should also be taken into account, it clearly highlights the link between the quantity of information contained in a message and that contained in an organized system.

Let us call such a system S, and suppose that it comprises two subsystems A and B connected by a communication channel transmitting a flow of information from A to B. For S to truly possess the organized overall character of a system, this transmission of information must correspond to neither of the following two cases:

– a total absence of constraint from A to B, in the sense that all the information transmitted by A would be lost during its transmission to B;
– the existence of a total constraint from A to B, in the sense that this transmission would take place without any loss of information.

Indeed, the first case would be that of a total independence of B’s structure from A’s: when transmitting from A to B, the ambiguity would be equal to the total quantity of information of A, and the quantity of information of the set formed by A and B would be obtained by adding that of A to that of B. To the extent that the very existence of the system S depends on the effective establishment of a relationship between A and B through the communication channel connecting them, the situation just described would involve the destruction of S as a system. In the second case, B would be a simple copy of A, and the amount of information in the set formed by A and B would be equal to that of A (or of B): instead of the system S, we would then only have a replicated subsystem.

In short, the notion of an organized system only makes sense in the intermediate situations between these two extreme cases, i.e. those where the structure of A actually exerts a constraint on that of B, without this constraint being total: the notion of organization of the system S assumes that a non-zero quantity of information is transmitted between A and B (we are not in the case of a total absence of constraint from A to B) and that this transmission takes place with a non-zero ambiguity (we are not in the case of a total constraint from A to B).

Once this dual condition is met, it can be shown that ambiguity plays diametrically opposed roles depending on whether one considers the elementary information transmission channel (between A and B) or the channel established between the system S as a whole and an external observer. It is indeed obvious that the “noisy channel theorem” applies to the elementary transmission channel considered here: ambiguity measures the uncertainty that remains about subsystem B when subsystem A is known, in that it reflects the lack of constraint exerted by A on B – a lack that would be maximal if A and B were perfectly independent, and zero if B were a strictly faithful copy of A. Akin to the exercise of a constraint from one subsystem on another, the transmission of information from A to B refers here to the primary philosophical meaning of the verb “to inform” – from the Latin word informare, which means “to
give a form, to put in a form”. By definition, the greater the uncertainty remaining about B knowing A (the higher the ambiguity), the smaller the amount of information transmitted in the channel: since ambiguity plays this negative role on the amount of information transmitted at the level of the elementary channel between A and B, we can agree to associate it here with a negative algebraic sign.

It is exactly the opposite that prevails when we consider the system as a whole, at the level of the information transmission channel established between the system and an external observer, and no longer from within that system. At this level, the constraint exerted by A on B plays a role diametrically opposed to the previous one: if this constraint were total (zero ambiguity), the knowledge of B knowing A would bring no additional information to the observer, since the second subsystem would only constitute an identical copy of the first; and if this constraint were on the contrary zero (maximal ambiguity), the knowledge of B knowing A would bring the observer an additional quantity of information strictly equal to the quantity of information contained in B.

Thus, in the case of a true organized system, where the constraint is by assumption neither total nor null, there is always a certain ambiguity about B when the observer only knows A. But this remaining uncertainty, measured by the amount of information of B independently of A, no longer represents lost information here: the amount of information transmitted to the observer by successively considering A and B is equal to the sum of the amount of information of A and that provided by the observation of B even though the observer already knows A. Far from reflecting a loss of information, the difference between A and B generates information for the observer of the system by increasing the variety of the system in their eyes. We must therefore give ambiguity a positive algebraic sign here. H.
Atlan marks this bivalent role of noise with the concepts of destructive ambiguity (< 0) and ambiguity–autonomy (> 0), and it is in terms of these concepts that we must interpret the formal expression, given above, of the self-organizing property of certain systems, because each element of the sum appearing on the right-hand side of this expression corresponds to one of the opposite roles of ambiguity just highlighted. In this formal expression, the organization of the system concerned is defined by a structural character, measured by the quantity H, and by a functional character, expressed by dH/dt:

dH/dt = – Hmax · dR/dt + (1 – R) · dHmax/dt

Consider the right-hand side of this equation, which expresses the instantaneous evolution of the amount of information contained in the system S. Its first term, – Hmax · dR/dt, measures the variation in the redundancy of this system over time. If this redundancy is initially high enough, it will decrease under the effect of ambiguity–autonomy: dR/dt < 0. And since Hmax is by definition always positive, the
product formed in the first term of the right-hand side, i.e. – Hmax · dR/dt, will be positive, which will result over time in an increase in H. This first term thus reflects a process of differentiation of the subsystems, or an increase in the structural and functional complexity of the system, by measuring the increase in the variety of the latter: acting within the system, “noise” thus has a positive effect on the channel established between the system and the observer.

As for the second term, (1 – R) · dHmax/dt, it measures the variation of Hmax over time, i.e. the instantaneous variation in the maximum complexity that the system can achieve, without taking its degree of redundancy into account. According to H. Atlan, the process of disorganization relative to a given (instantaneous) organizational state, under the effect of destructive ambiguity, is expressed by a decrease of Hmax over time, so that dHmax/dt, and hence the entire second term, is negative, contributing to a decrease in the amount of information H in the system. In short, “noise” acts here directly on the transmission channel from the system to the observer by reducing the amount of information conveyed by it: the effect of destructive ambiguity is therefore decidedly negative from the observer’s point of view.

Ultimately, the functional organization of a system can be described by the rate of change dH/dt, obtained by adding two time functions, one of which expresses the rate of decrease in the redundancy of the system and the other the rate of decrease in the maximal amount of information in the system. Under certain conditions of initial structural redundancy and system reliability, the system is able to integrate into its own organization any “noise” caused within it by random factors from its environment. The consequent reduction in the redundancy of the system and the corresponding increase in its complexity then reflect the self-organizing nature of this system.
From a formal point of view, this situation corresponds to the first of the three possible trajectories of a naturally complex system:

dH/dt > 0 if, and only if, – Hmax · dR/dt > – (1 – R) · dHmax/dt
dH/dt < 0 if, and only if, – Hmax · dR/dt < – (1 – R) · dHmax/dt
dH/dt = 0 if, and only if, – Hmax · dR/dt = – (1 – R) · dHmax/dt

Self-organization is therefore not conceived here as a state, but as an uninterrupted process of disorganization–reorganization. Two important parameters are involved in the two time functions whose sum gives the rate dH/dt describing the functional organization of the system: the initial structural redundancy R0, and the reliability, a functional characteristic that expresses the organization’s effectiveness in resisting random changes. More precisely, while the reliability of the system depends on its redundancy, it is not reduced to it. We have seen that the initial redundancy of
a system must have a minimum value for it to have self-organizing properties consisting of an increase in complexity through the destruction of redundancy. These conditions are absolutely necessary for the variation curve H(t) to have an increasing initial part: “Reliability will also measure the duration of this ascending phase, i.e. the time tm at the end of which the maximum is reached, which is all the longer as the reliability is greater” (Atlan 1979, p. 52).

This produces a curve representative of the evolution of the amount of information in the system over time: up to a date tm, ambiguity–autonomy prevails over destructive ambiguity, so that in total dH/dt > 0; at date tm, the curve reaches a maximum and dH/dt = 0; after date tm, destructive ambiguity prevails over ambiguity–autonomy and dH/dt < 0. The profile of such a curve is quite typical of living organisms: growth and maturation (dH/dt > 0 for t < tm) and then, after a maximum (dH/dt = 0 for t = tm), aging and death (dH/dt < 0 for t > tm).

It should be noted that it is always random factors from the system’s environment that feed the system’s development and then lead to its death. These factors are random because, at the moment they cause “noise” within the system – whose effect is successively organizational (t < tm) and then disorganizing (t > tm) – they do not correspond to any pre-established program contained in the environment and intended to organize or disorganize the system. We are dealing here with an unintentional noise, whose occurrence is therefore a priori meaningless, but whose effect consists, in the growing part of H(t), of creating information by increasing the degree of organization of the system, i.e. by increasing the degree of differentiation of the subsystems that compose it.
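A toy trajectory makes the profile of H(t) concrete. The functional forms and parameter values below are entirely our own choices (the source commits to none): redundancy decays under ambiguity–autonomy, Hmax decays under destructive ambiguity, and the resulting H(t) = Hmax(t) · (1 – R(t)) rises to a maximum at tm before declining:

```python
import math

# Illustrative parameters (ours): initial redundancy R0, decay rates a and b,
# initial maximum information M0. R(t) = R0·e^(-a·t), Hmax(t) = M0·e^(-b·t).
R0, a, b, M0 = 0.5, 1.0, 0.1, 100.0

def H(t):
    """Amount of information H(t) = Hmax(t) · (1 - R(t))."""
    return M0 * math.exp(-b * t) * (1.0 - R0 * math.exp(-a * t))

# Locate the maximum on a fine time grid.
ts = [i * 0.01 for i in range(3001)]
values = [H(t) for t in ts]
tm = ts[values.index(max(values))]

# Growth and maturation, a maximum at tm, then aging and decline.
assert H(0.0) < H(tm) > H(30.0)
# Setting dH/dt = 0 analytically gives tm = ln(R0·(a+b)/b)/a ≈ 1.70 here.
assert abs(tm - math.log(R0 * (a + b) / b) / a) < 0.01
```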
While the Shannonian theory of information focuses exclusively on the conditions for the effective transmission of existing information, the problem of information creation becomes conceivable within an expansion of this theory such as the self-organization paradigm proposed by H. Atlan. As a direct extension of Shannonian formalism, which it in no way contradicts (in particular the "noisy channel theorem"), this paradigm admits the possibility of an increase in the amount of information transmitted from the system to the observer under the effect of a noise creating an ambiguity–autonomy greater than the destructive ambiguity. This possibility results from a simple change of point of view on the role played by ambiguity with respect to the system, and it depends on explicitly including the observer in the very definition of systems not consciously and deliberately constructed by people: it is by alternately considering the system from within (from the point of view of transmission between subsystems A and B within S) and from outside (from the point of view of transmission between the system S and the observer) that a destructive ambiguity (first point of view) has been opposed to an ambiguity–autonomy (second point of view). More generally, these different
points of view actually correspond to the different levels of organization of any hierarchically integrated system: "The introduction of the observer's position is not only a logical step in the reasoning: this observer outside the system is in fact, in a hierarchical system, the higher (encompassing) level of organization in relation to the elementary systems that constitute it; it is the organ in relation to the cell, the organism in relation to the organ, etc. It is in relation to this that the effects of noise on a channel within the system can, under certain conditions, be positive. In other words: for the cell that looks at the communication channels that constitute it, the noise is negative. But for the organ that looks at the cell, the noise in the channels inside the cell is positive (as long as it does not kill the cell) in that it increases the degree of variety, and therefore the regulatory performance, of its cells" (Atlan 1979, p. 70, author's use of italics).

2.2. Meaning of information in a hierarchical system

The above developments suggest that the meaning of information changes according to the hierarchical level considered with regard to a system. In order to fully understand this point, we will proceed here in three steps: the difference between Heinz von Foerster's principle of order from noise and H. Atlan's principle of organizational noise will allow us to establish a clear distinction between the concept of complication and that of complexity, which will in turn lead to the notion of the meaning of information in a hierarchical system.

2.2.1. Order from noise versus organizational noise

As initially presented by H. Atlan (1972, pp. 245–249; 1979, pp. 83–85), the model proposed by H. von Foerster (1960) is a qualitative one, in which the order of a system is considered as a repetition (or redundancy) that increases following the disturbance of that system by a "noise".
This system consists of magnetic cubes whose initial arrangement within a container is that of a shapeless cluster, and which are agitated randomly together with the container holding them. As this agitation, synonymous with noise, proceeds, the cubes arrange themselves in an increasingly "ordered" way because of the magnetization of their attracting faces. We immediately notice that this process presupposes that the mechanism of form construction is known from the outset: the observer knows from the beginning of the experiment that the cubes will come together according to the forces of attraction due to magnetization. Noise therefore has the function and effect of allowing the potential constraints, constituted by these attraction forces, to be realized, and the system finally constructed corresponds exactly to the knowledge that we had
a priori of the construction mechanisms. It is indeed the redundancy of the system that increases due to noise: compared to the initial formless cluster, within which the position of a given cube could not, by definition, be logically deduced from those of the other cubes, the state of the system evolves in an increasingly predictable manner for the observer. From the observer's point of view, it is indeed increasingly easy to predict the position of a given cube from the knowledge one has of the positions of the other cubes3.

The principle of organizational noise introduced by H. Atlan is the opposite: one of its essential interests lies precisely in the fact that here noise, far from increasing redundancy, reduces it by increasing complexity. If we do not know the mechanism of form construction, as is the case in unbuilt systems, where learning takes place without a prior program, then the form that gradually appears is more complex than the initial cluster. Indeed, in the initial cluster the "pieces" were perfectly interchangeable without the overall form being modified, so that the number of different elements that had to be specified to reconstruct a statistically identical cluster was relatively small compared to the case of a given geometric form, where each "piece" occupies a determined place – where the portions of space are not interchangeable with regard to their respective probabilities of being empty or occupied by this or that "piece". The redundancy of the system has decreased, and its complexity has increased, as a result of organizational noise.

Between H. von Foerster's principle of order from noise and H. Atlan's principle of organizational noise, the difference is that the first concerns systems that evolve in an increasingly predictable way in the eyes of the observer, whose knowledge of the rules of evolution of these systems allows them to predict more and more accurately the evolution of the corresponding subsystems, while the second envisages systems whose law of evolution is not known to the observer, for whom it is therefore more and more difficult to predict the evolution of the corresponding subsystems.

3 H. Atlan (2011, pp. 26–27) describes the model of H. von Foerster's cubes as "self-organization in a weak sense". He transposes it to a model of watches that wind themselves under the effect of the random movements of wrists, thus adding the realization of a function to this model of shape construction. In Chapter 6 of this book (pp. 181–217), he presents a typology of functional self-organizations in three categories: weak, strong and intentional. This typology is based on the more or less partially programmed nature of the emerging functions and their effects of significance on the external observer: from "weak" to "strong", then to "intentional", the degree of self-organization increases all the more as the emerging properties and effects of meaning are less well-explained. In the former, such as most applications of neural network computing to artificial intelligence techniques for building learning and distributed memory machines, functions are defined in advance and the model is designed to make them appear. However, this is still self-organization, because the details of the structures performing the desired function are not explicitly programmed, and these structures therefore partially emerge through self-organization (op. cit., pp. 194–199). In "strong sense" self-organizations, the task to be accomplished is itself an emerging property of the machine's evolution. This is the case in natural systems where the emergence of macroscopic structures and functions is observed from constraints at the microscopic level: the absence of a predefined goal leads to the emergence of what appears after the event as a meaningful functional behavior; this is also the case with the emergence of goals in some expert systems or of classification procedures in Boolean networks (ibid., pp. 199–205). Finally, the analysis of "intentional" self-organizations implies, unlike the previous ones, taking into account the observer's point of view – in the sense of the objective conditions of observation and measurement (ibid., pp. 205–207). This is the case with human systems, comprising both machines (programmed or self-organizing) and the designers/observers of these machines: their condition as observed observers requires that the question of the creative intentionality of projects be taken seriously, and "intentional" self-organization models must therefore "mechanically simulate our experiences of conscious intentionality by integrating in particular the position of the observed observer, i.e. self-observation mechanisms" (op. cit., p. 207).

2.2.2. Complexity and complication

This difference between order from noise and organizational noise immediately leads to another difference, fundamental to our purpose: the one that distinguishes complication from complexity. Ultimately, the complication of a system expresses only the number of steps necessary to fully describe it from its components. It is therefore an attribute of systems built, or constructible, by humans, who know and fully understand their structure and functioning. In this sense, if it contains a large number of cubes, the system illustrating H. von Foerster's principle of order from noise can be described as complicated. However, the same system cannot be considered complex, since: "complexity is recognized as a negative notion: it expresses that we do not know, or do not understand, a system despite a global knowledge foundation that makes us recognize and name this system. A system that can be explicitly specified, whose detailed structure is known, is not really complex.
Let's just say it can be more or less complicated. Complexity implies that we have an overall perception of it with the perception that we do not control it in detail. This is why it is measured by the information that is not available and that would be needed to specify the system in detail" (Atlan 1979, p. 76, author's use of italics). H. Atlan here joins Léon Brillouin's (1959) conception of information, interpreting the amount of information defined by Shannon's formula as a measure of the information that the observer lacks in order to explicitly specify a system. Ultimately, however complicated it may be, a perfectly known system can be reduced to a single element: its construction program. In this case, for H. Atlan as
for L. Brillouin, H = 0 (and R = 1): an ordered complexity, in the form of a construction program assumed to be known, is no longer a complexity but only a complication. Conversely, a disorder is not necessarily synonymous with complexity, or is complex only in relation to an order that is supposed to exist and that we are seeking; now, "complexity is an order whose code we do not know" (Atlan 1979, p. 78, author's use of italics).

Let us illustrate the negative nature of this notion by giving three measures of the complexity of a system, which differ according to the degree of knowledge we have of the latter. The first is the complexity of a system composed of a large number of elements and whose construction mechanism is unknown. In short, this is the canonical situation of complexity, the one in which it is greatest: all we know is the number n of elements composing the system. In this situation, complexity is measured by the variety of the system, as defined by W. R. Ashby (1958): this is a particular case of Shannon's H function with n different elements, whose respective probabilities are not taken into account because they are not known. More precisely, such ignorance is expressed by considering these elements as equiprobable. This is the case analyzed by R. V. Hartley:

∀i, ∀j, pi = pj = 1/n, and H = log2 n = H1

Let us now imagine that we also know the probability distribution of the elements of the system, with arbitrary probabilities pi. We then simply find C. Shannon's formula:

H = − Σ(i = 1 to n) p(i) . log2 p(i) = H2

It is clear that H2 < H1. Indeed, the additional information consisting of the knowledge of the probability distribution of the system's elements, which we did not have in the previous case, corresponds to a better knowledge of this system's code, and therefore implies a reduction in its complexity compared to the previous case: we are faced here with zero redundancy (the n elements of the system are perfectly independent), so that R = 0 and H2 = Hmax. Finally, we may also be aware of internal constraints established between the system's elements. Compared to the previous case, the complexity is further reduced, due to the redundancy that measures these constraints (whose formal expression consists of conditional probabilities of the type p(i/j)). This complexity is then measured by the quantity:

H3 = Hmax (1 – R) = H2 (1 – R)

In the end, H3 < H2 < H1, where H1 corresponds to the maximum of H2 (due to the equiprobability of the distribution), and where H2 corresponds to the maximum of H3 (due to the independence of the elements): H1 expresses a maximum maximorum of ignorance or complexity.

We can thus outline a first approach to the meaning of information. Since information is measured by the Shannon formula, which expresses the amount of information that the system's observer lacks in order to reconstruct the system identically, this measure depends critically on the choice of what is considered to be the components of the system. For example, in the case of a biological system, this quantity will vary depending on whether the observation is made at the level of elementary particles, atoms, molecules, macromolecules, organelles, cells, organs or the entire organism, considered as a system encompassing the previous subsystems. According to H. Atlan, it is precisely in these terms that we must pose the problem of the meaning of information in a hierarchical system. Indeed: "it is precisely here that the meaning (for the system) of the information (for the observer, i.e. the information they do not have) is located, even though it is measured in a probabilistic way that ignores meaning: it comes from the fact that it is, again, information that we do not have about the system. The choice, at the beginning, is left to the observer's decision" (Atlan 1979, p. 75). With H. Atlan, let us consider, for example, the observer of a biological system who would choose to describe it either from its constituent atoms or from its molecules. In the first case: "H will measure the additional information required when only the type of atoms encountered in a statistically homogeneous set of identical systems and their frequency in that set are known. This necessary information is obviously very large compared to the information that would be required if the system were described from its molecules. This is because in this case we would use additional information that we already have on how atoms are associated in molecules.
This is all the complexity that disappears compared to the previous case" (ibid.). The more we know about how the elements are assembled to build the system, the more the H function decreases.

It is now easy to understand the conceptual links between redundancy, complexity and coding in systems not built by humans. Perfectly dual to the notion of complexity, the notion of redundancy translates into information terms the effect, on the observer, of the existence of constraints internal to such a system: because of these constraints, established between the subsystems that are the elements of the system in question, the observer's knowledge of one of these elements influences the knowledge they may have of the others. As a result: "redundancy is a measure of simplicity and 'order'. Thus the order would be rather repetitive or redundant. It does not have to be physically repetitive, as in a crystal, in the sense of a single element or pattern repeated many times. It only needs to be redundant, i.e. deductively repetitive: knowledge of one element provides us with some information about others (by reducing uncertainty about them) and this is what makes us perceive an order" (ibid., p. 79).

A priori, the conceptual relationship between coding and complexity may seem more puzzling: it seems paradoxical that the H function expresses the amount of information transmitted in a channel established between the system and the observer, even though it measures the amount of information that the observer does not possess, since it is precisely the information they would need to specify the system. In reality, there is no paradox here: to dissolve it, it is enough to stress once again that what the system transmits to the observer is always a lack of information4. It is thus information without code that is transmitted during the observation of the system, which obviously does not imply that the notion of the meaning of information is absent from the paradigm of self-organization according to H. Atlan – quite the contrary, as will now be shown by the full explanation of this notion in the context of a hierarchical system.

2.2.3. Meaning of information in a hierarchical system

In a very general way, the meaning of a given piece of information can be defined in the way already mentioned above5. More precisely: "as the effect of the receipt of such information by its recipient. This effect may appear either as a change of state or as an output of the recipient itself as a subsystem" (ibid., p. 86).

4 The quantification of what is transmitted in this way does not directly take into account the coding (at the input of the channel) or the decoding (at the output) of the meaning of the messages. The measured information is that which is transmitted from the system to the observer from a strictly Shannonian perspective, i.e. within a channel: in the calculation itself of the quantity H, the boundaries of the transmission channel are almost ignored, because Shannon is only interested in the communication channel – see the diagram of a communication system, Chapter 1, Section 1.2.
5 See Chapter 1, Section 1.2.
It is obvious that a given event, conveying a certain amount of information to those who perceive it (and who thus act as recipients of the associated information), has different meanings for different recipients. In particular, the event consisting of the occurrence of a "noise" disrupting a self-organizing system has different meanings according to the hierarchical level at which one is placed when observing this system: as we have seen, this event has a different meaning depending on whether one considers the elementary channels constituting the system itself, or the encompassing level at which this system is considered as a whole by the observer. At the level of the elementary channels, the connotation of this organizational noise is clearly positive, since the system uses it to rebuild itself in a new way, increasing its complexity by reducing its redundancy. Although this noise is a priori random, in the sense that its occurrence from the environment neither corresponds to a pre-established program intended to organize or disorganize the system, nor is predetermined by the current state of the system, the notion of randomness loses its relevance a posteriori. In fact, the system is self-organizing precisely because it is able to integrate this noise into its own organization, and as soon as it is integrated in this way, this noise is no longer random. On the contrary, its occurrence means that the system now has better information about itself, which eventually allows it to function better within its environment: "The information that a system would have about itself, that is, the information which, as we have seen, can increase under the effect of what appears to us as noise (and which we then measure as information that we lack), is what allows the system to function and even to exist as a system. It therefore refers to all the structural and functional effects of receiving the information transmitted in the system, on the different subsystems and the different levels of organization of the system. This is the meaning of this information for the system" (ibid., p. 87). In short, although the system is devoid of any prior plan, everything happens as if it were making use of an a priori random noise in order to learn by itself, seeing the subsystems that constitute it become more distinct. The significance of this noise must therefore be considered positive for the system itself. The situation is different at the observer's encompassing level. In the latter's view, organizational noise is the result of chance, and its occurrence produces effects on the system such that the observer's knowledge of the system decreases: following the occurrence of noise, the increase in the complexity of the system is synonymous with an increase in the observer's lack of information about it, since the observer now lacks even more information than before in order to specify it. For the observer, the connotation of noise is therefore clearly negative, and the meaning of the information for the observer is the opposite of what this information means for the system itself.
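This double reading of noise can be made concrete with a minimal simulation, whose entire setup (16 subsystems, a mutation probability of 0.5, the alphabet, the seed) is an illustrative assumption rather than anything in Atlan: a maximally redundant "system" is perturbed at random, and its empirical Shannon entropy – the information the observer lacks – increases, which is precisely the increase in variety, and hence in regulatory capacity, that the encompassing level can exploit:

```python
import math
import random

# Minimal sketch of the two readings of 'organizational noise'.
# All parameters here (size, mutation probability, alphabet, seed)
# are illustrative assumptions.

def empirical_entropy(states):
    """Shannon entropy (bits) of the empirical distribution of states."""
    n = len(states)
    counts = {}
    for s in states:
        counts[s] = counts.get(s, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
system = ["a"] * 16                  # maximal redundancy: H = 0
h_before = empirical_entropy(system)

# A priori random noise: each subsystem may be differentiated.
noisy = [random.choice("bcdefg") if random.random() < 0.5 else s
         for s in system]
h_after = empirical_entropy(noisy)

# Entropy has increased: negative for the observer (a larger deficit of
# information), positive for the system (a larger variety of subsystems).
print(h_before, h_after)
```

The same number thus carries opposite meanings at the two levels: read from outside, h_after measures a larger lack of information; read from inside, it measures a finer differentiation of the subsystems.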
More generally, any variation in the quantitative expression of information measured by Shannon's H function can have radically opposite meanings depending on the level at which the observation of a hierarchical system is located – here, with only two levels: since this quantity is always captured as a deficit, what is the destruction of information at an elementary (encompassed) level can appear as the creation of information at an encompassing level. The principle of complexity from noise, which lies at the heart of the concept of self-organization proposed by H. Atlan, therefore represents a roundabout way of introducing effects of meaning into a quantitative theory of organization: "Of course, the meaning is only there in a negative way, like its shadow, since it is only theorized through the effects of noise, i.e. a negation of information. But it is still there because it is the negation of a negation, since everything happens as if Shannonian information denied meaning, which is another way of saying that it measures what we do not understand about the system" (ibid., p. 88).

Applied to the simplest case, where there are only two levels within a hierarchical system – that of the elementary channels between the n elements of the system and that of the observer – the self-organization model proposed by H. Atlan can ultimately be summarized as follows. At the initial date t0, the state of the system is characterized by a redundancy R0 and by the corresponding complexity H0, measured by the Shannon formula. This complexity varies depending on whether or not the observer knows the probability distribution of the system's elements and/or the existence of initial constraints exerted by some elements on others. At the same time, random factors from the environment of the system cause noise inside the system. This noise modifies the system, which thus finds itself in a new state at the next date, t1 = t0 + dt. If the system is self-organizing and reliable in the sense of H. Atlan (if R0 is high enough and if t < tm), this new state will be such that the redundancy of the system will have decreased (R1 < R0) and, correlatively, its complexity will have increased (H1 > H0). Measuring the net excess of the value of ambiguity–autonomy over the absolute value of destructive ambiguity, this increase in complexity has a positive meaning for the system, which now has better knowledge of itself, and a negative meaning for the observer of this system, whose information deficit with respect to it has increased. The general idea of this model is therefore that, under certain conditions, a qualitative factor, such as organizational noise, induces both qualitative and quantitative modifications in the representation of a system not constructed by the observer: this modification is qualitative, since it results in the effects of meaning just recalled, but it is also quantitative, since these effects of meaning are measurable within Shannonian formalism. While maintaining this formalism, with the advantages associated with the quantitative expression of any
theory, this model paves the way to a rigorous analysis of the creation and meaning of information within a hierarchical system. By taking into account precisely those concepts that C. Shannon left out of his theory, H. Atlan's self-organization model is indeed an improvement over the mathematical theory of information previously proposed.

Finally, by focusing his analysis on the reception of messages, H. Atlan proposes a concept of the meaning of information that is very similar to that of G. Bateson. The first theorist states that the meaning of a given piece of information can be defined as the effect of the receipt of this information by its recipient, either in the form of a change of state or in the form of an output of this recipient itself, considered as a subsystem. This formulation exactly matches that of the second, who defines a unit of information as a difference that produces another difference: this unit of information moves and undergoes successive modifications in a circuit in which each point is both receiver and transmitter and, as such, changes its state following the information received from the previous point of the circuit and then produces an output consisting of information transmitted to the next point of the same circuit.

The self-organization model of complex natural systems proposed by H. Atlan also has a significant advantage over other models of dynamic systems, such as I. Prigogine's principle of order by fluctuation: unlike the latter, it is not a deterministic model, but a principle of probabilistic representation of poorly known systems. In this sense, it seems well-suited to the representation of social systems, which are all historical and whose structure and functioning remain imperfectly known to us, although some of their aspects lend themselves to a probabilistic interpretation. To better appreciate the richness of this model, our next chapter will confront it with the theory of memory proposed by the neurologist and philosopher I. Rosenfield, based on the work of the biologist G. M. Edelman.
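To close the chapter, its formal apparatus can be condensed into a few lines of arithmetic. Every numerical value below is an illustrative assumption; the point is only that the nested measures satisfy H3 < H2 < H1, and that destroying redundancy between two dates (R1 < R0) yields dH > 0, the net excess of ambiguity–autonomy over destructive ambiguity:

```python
import math

# Arithmetic recap of the chapter's formal apparatus.  Every number
# below is an arbitrary illustrative choice, not a value from Atlan.

# --- Three nested measures of complexity -------------------------------
n = 4
p = [0.5, 0.25, 0.125, 0.125]       # assumed known distribution

variety = math.log2(n)                            # H1 (Ashby): only n is known
h_shannon = -sum(pi * math.log2(pi) for pi in p)  # H2: distribution known
R = 0.3                                           # assumed redundancy
h_constrained = h_shannon * (1.0 - R)             # H3 = H2 (1 - R)

assert h_constrained < h_shannon < variety  # each extra piece of knowledge
                                            # lowers the measured complexity

# --- Two-date summary of the self-organization model -------------------
H_MAX = 10.0         # bits; held constant between the two dates for simplicity
R0, R1 = 0.6, 0.5    # organizational noise destroys redundancy: R1 < R0
H0 = H_MAX * (1.0 - R0)        # complexity at t0
H1 = H_MAX * (1.0 - R1)        # complexity at t1 = t0 + dt

dH = H1 - H0   # > 0: positive meaning for the system, negative for observer
print(variety, h_shannon, h_constrained, H0, H1, dH)
```

Read H0 → H1 in both directions at once: as the observer's growing information deficit, and as the system's growing information about itself.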

3 Human Memory as a Self-organized Natural System

The emphasis placed by H. Atlan, like G. Bateson, on the reception of messages during communication between subsystems leads to a conception of learning, and more generally of human memory, surprisingly close to that proposed by I. Rosenfield (1989, 1990, 1996) on the basis of the work of G. M. Edelman (1987, 1992). Although relatively old, this theory of memory now appears as a pioneering work, in that most of the concepts it introduced were later adopted by the majority of authors. In addition to its connection with the Atlanian paradigm of self-organization, this is why we continue to refer to it more than a quarter of a century after our first publication on this subject (Ancori 1992)1.

1 On the history of theories of memory, see Bergé (2010), Draaisma (2010) and Lieury (2013); on two centuries of history concerning three major aspects of the modern study of memory, see Dupont (2005); for a recent typology of the different forms of memory, see Tiberghien (2002, pp. 165–175); for a presentation of the two main families of models of human memory – computational-symbolic models built on the computer metaphor, and connectionist models that refer to the brain metaphor – see Tiberghien (1997); for a synthesis (contemporary with the work of I. Rosenfield) of all theoretical and practical knowledge concerning human memory, Alan Baddeley's classic work (1992) remains a useful reference, completed today by the synthesis of Francis Eustache and Béatrice Desgranges (2012); for an interesting intertwining of concepts of memory from literary analysis and some advances in neuroscience, see Tadié and Tadié (1999); for a comparison of Marcel Proust's work with current advances in cognitive psychology, see Didierjean (2015); for an analysis of explicit memories and implicit memory, see Schacter (1999) as well as Lejeune and Delage (2017); for a recent overview of the cognitive sciences, see Andler (2004), including the conclusion of this book, where D. Andler lists an impressive series of new fields open to these sciences at the dawn of their second half-century (op. cit., pp. 609–700); for an interesting interpretation of the notion of the cognitive unconscious, see Buser (1998); finally, Jacques Ninio (2011a, 2011b) proposes new conceptions of memory based on very numerous visual memory tests.

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.
The invention of memory: it is under this title, with its double meaning – memory invented as well as inventive – that the mathematician, doctor and philosopher Rosenfield (1989) proposed new views on the functioning of the human brain and on the neurophysiological substrate of this functioning, before extending the perspective thus opened up towards a theory of consciousness and language (1990, 1996). These views were new in that they contradicted the traditional conception of memory, associated with the theory of the functional localization of the brain. After outlining the latter, we will mention some experimental data that Rosenfield considered likely to refute it. We will then reinterpret these data from the perspective of Edelman's "neural Darwinism", within which Rosenfield explicitly situates his own contribution.

3.1. The theory of functional localization or invented memory

3.1.1. The theory of functional localization

More or less explicitly, this theory conceives the human brain as a mosaic of areas (centers) where the images and symbols forming what is known as memory would be printed and permanently deposited. Thus, each of us would constantly carry a stock of memories consisting of identifiable, classified and recorded traces of past perceptions, presenting themselves as exact images of events that occurred in our environment. It is from this stock of memories that the brain function called recognition, considered distinct from perception, would be carried out: at any moment, our brain would be dividing the world into sensory data that it would compare, in order to recognize them, with previously acquired and recorded images. Although it does not answer the question of how the first images, with which new sensory data are compared, were themselves engraved – even though it implies that there is no perception without prior learning – this theory benefits from two types of support: one relatively old and the other more recent.
The older support is that of Paul Broca, who showed in 1861 that the loss of certain uses of speech, observed in speech disorders, corresponded to a small lesion of a certain area of the left brain – since called "Broca's area". More precisely, following this work, the loss of the auditory image of words appeared quite naturally to be due to the alteration of a center where the memory of the movements necessary for articulated language would be located. By extension, as pointed out by Rosenfield: "Other language centres were later discovered, with specific action on speech, and brain areas controlling the motor skills of certain parts of the body: hands, fingers, tongue, etc. By the end of the 19th century, many neurologists had deduced that the brain was made up of a set of highly specialized regions that controlled speech, motor activity or

Human Memory as a Self-organized Natural System

43

vision. In addition to localization and functional specialization, there was a division of memory into many specialized subunits. There were memory centers related to ‘visual images of words’, ‘auditory images of words’; memory loss can then be explained by the disappearance of a specific memory image (or center), or by the brain’s inability to ‘consult its files’ due to a disruption of nerve conduction pathways […] On a theoretical level, the most significant consequence of this research was the doctrine of functional localization” (Rosenfield 1989, p. 20, author’s translation).

The theory of functional localization therefore considers as established the existence of a whole series of rigid relationships between specific locations in the brain and certain major functions performed by the body. Such a conception was at the root of brilliant clinical studies that predicted the exact location of brain damage from the symptoms of dysfunction presented by patients, predictions that were confirmed at autopsy. Without the belief in memories constantly present in memory, the theory of functional localization would not have been conceivable; conversely, the effect of these clinical studies was in turn to reinforce and extend that very belief. We note at once that the concept of memory which thus underlies the theory of functional localization, and which that theory in turn reinforces, has a very broad extension: it covers both the type of memory necessary to (re)produce, from the auditory image of words, a sequence of sounds articulated into a meaningful spoken chain, and the type each of us needs to find our way back through a city we have already visited. It is therefore both a declarative or explicit memory, deliberately and consciously activated, and a non-declarative or implicit memory – in particular procedural memory – involuntarily and unconsciously activated.
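The conception just described – recognition as the comparison of new sense data against a permanent stock of recorded traces – can be caricatured in a few lines of code. The sketch below is entirely our own illustration (the feature encodings, names and matching threshold are invented for the purpose), not a model defended by any of the authors discussed:

```python
# Toy caricature of the "stored traces" conception of memory:
# recognition reduced to comparing a new stimulus against a fixed
# stock of recorded images, with no role for context.
# All names, encodings and the threshold are illustrative only.

def similarity(a, b):
    """Fraction of matching features between two stimuli."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

# The permanent "stock of memories": exact traces, recorded once.
memory_store = {
    "sun":   (1, 1, 0, 0),
    "wheel": (1, 0, 1, 0),
}

def recognize(stimulus, threshold=0.75):
    """Return the stored label whose trace best matches, or None."""
    best_label, best_score = None, 0.0
    for label, trace in memory_store.items():
        score = similarity(stimulus, trace)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

print(recognize((1, 1, 0, 0)))  # exact stored trace -> "sun"
print(recognize((0, 0, 1, 1)))  # no trace close enough -> None
```

As the text observes, such a system leaves unanswered how the first traces were engraved, and it can return nothing at all for a stimulus that resembles no stored image.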
All types combined, this memory is ideally considered by the theory of functional localization to be perfectly accurate – faithful point by point, we could say. Such a memory records the exact image of a perceived stimulus-event and keeps this image indefinitely without any alteration. Carefully classified and recorded, it is therefore ready at any time to be confronted – by a recognition function presumed distinct from the perception function – with the image of a stimulus-event that is a candidate to be recognized as identical to the previous one. A stock of specific memories permanently present in our brain, independent of its current context and immediately available to the recognition function: this is the conception of memory inherent in the theory of functional localization. Such features are very similar to those we usually attribute to computer memory. Hence, the theory of functional localization has more recently found support in the development of computer-related activities – more specifically, noted Rosenfield (1989, p. 22), in the recent extension of computer simulation. Such support seemed
so natural that it hardly needed to be explained. With Pierre Lévy (1987, p. 90), it is sufficient to extend the concept of calculation beyond its limited mathematical meaning of sets of arithmetic operations – bringing together under this term any operation of sorting, classification, permutation, combination, comparison, substitution or transcoding – for human memory to become exemplary of what this author calls the paradigm of calculation (op. cit., pp. 88–118). This paradigm lies in the wake of the famous article by W.S. McCulloch and W. Pitts (1943), which highlighted the isomorphism between the dynamics of the states of a network of idealized or formal neurons and the propositional calculus in B. Russell’s system, and demonstrated the following:

“In addition, a network of formal neurons (i.e. a simplified model of the brain) has the same computational capabilities as a universal Turing machine, provided that it is equipped with a tape of infinite length and a read-write head. This condition seems reasonable, since most men know how to trace mnemotechnical signs on different media and read them again. In short, physical automatons can perform all the intellectual operations describable in a finite and unambiguous way, and the brain could well be one of these automatons. This opens up both the program of part of contemporary neurobiology and that of artificial intelligence” (Lévy op. cit., p. 95, author’s translation).

Like the computer, our brain would thus house a device that transforms input messages into output messages according to the logical structure previously imprinted in it by an algorithm for sorting, classifying, etc. In the rest of his article, P.
Lévy’s perspective is clearly critical of certain extensions of this form of neomechanism, which conceives the physical universe as composed exclusively of automata all performing processing operations in the mode implied by the computation paradigm, and which thus presents the entire universe as omnicalculant2.

2 We say this form of neomechanism, and not neomechanism in general, because we adhere to the neomechanism specific to H. Atlan’s model outlined in the previous chapter.

Nevertheless, despite his visible reluctance towards this computational metaphysics, he points out the following: “It is clear that the scientific and philosophical debate is increasingly polarizing around the problems and in the language that neomechanism has succeeded in imposing”3 (op. cit., p. 99).

3 As Jean-Noël Missa (1995) shows, the theory of functional localization is closely linked to a materialistic philosophy, and allows it to fight any form of spiritualism.

However, these problems and this language directly refer to the conception of memory associated with the
functional localization theory. Although now supported by the evidence of the operational successes of the tools built on its model, as well as by certain clinical interpretations of the pathology of memory, such a conception is nonetheless open to criticism4. According to Rosenfield, it is indeed possible to reinterpret in a totally different way the clinical observations on which the theory of functional localization was initially based, and even to explain, on these new bases, many of the observations neglected by that theory.

3.1.2. Against functional localization

A good third of Rosenfield’s book (1989, pp. 27–93) is devoted, in a first chapter, to a critical reinterpretation of the clinical data that had apparently supported the conception of memory linked to the theory of functional localization for over a century. It is certainly not a question of listing here exhaustively the elements of the long debate opposing the supporters of that theory – or of functional modularity5 – to other authors who have defended, in more dispersed order, a holistic theory of brain functioning, or an approach emphasizing the importance of context and meaning for the activity of remembering. Rosenfield considers and reinterprets a body of data6 that essentially leads to two related proposals:

4 For the intellectual context of the birth of the cognitive sciences in the West immediately after World War II, see Pélissier and Tête (1995). It is not within our scope to discuss in detail the merits of what we call “computational metaphysics”, nor its subsequent abandonment in favor of conceptions that contrast the functioning of the human brain – and of the human memory closely linked to it from a reductionist perspective – with that of a computer. We will limit ourselves here to referring to I. Rosenfield’s criticism of the conception of memory that this metaphysics supports.

5 Introduced by Jerry A. Fodor (1986) under the inspiration of Noam Chomsky, functional modularity is a modern version of localization, in the sense that it conceives the brain as composed of specialized functional units called modules, which may or may not be anatomically localizable. At the end of the 1980s, according to I. Rosenfield, this point of view was widely shared by most psychologists and neuroscientists (op. cit., p. 83). Indeed, many authors enthusiastically endorsed it – at the very time that J. A. Fodor (2003) himself revised his position by stating that only part of our cognitive abilities functioned according to this principle, some “higher processes” being able to derogate from it! On modularity as a psychological hypothesis, see Andler (2004, pp. 113–195), whose second part devotes three chapters to this subject.

6 This body of data is gathered around three points. The first briefly examines the case of “the man who could not read what he had written”, reported in 1892 by Jules Dejerine (pp. 42–66). The second refers to the controversy between the English neurologist John Hughlings-Jackson and Sigmund Freud (pp. 69–80), whose views – particularly on the very important role of affects in memory and recall – were indirectly confirmed by the mnemonic “flashbacks” highlighted by W. Penfield as early as 1933, and whose analytical exploitation was completed in 1982 by Peter Gloor (pp. 152–156). The third point recalls the experimental results obtained by Abraham A. Low and Alexandre R. Luria (pp. 83–87).

– memory activated by the human brain cannot consist of specific memory traces organized into a collection of still images that a recognition function would compare with those of newly perceived objects;

– recall is not independent of the context in which the brain is invited to (re)produce a memory; on the contrary, it depends closely on the emotional aspects surrounding both the “memorization” and the “extraction” of the memory.

The resulting conception of memory is then very different from that associated with the theory of functional localization, to which it is fundamentally opposed on four related points:

1) perception and recognition are not two independent brain functions, as the theory of functional localization claims, but two different aspects of the same function: it is by a single gesture that the subject perceives and recognizes;

2) this gesture involves an ability to organize perceived stimuli into coherent pieces of information, not an ability to confront each of these stimuli with a specific memory trace recorded and stored permanently in our brain;

3) the subject’s performance in a recollection situation is never independent of the precise context in which it is produced, unlike memory as conceived since P. Broca’s work.

It is clear that these first three points are closely linked: it is because memories cannot be both fixed and variable according to the context that Rosenfield was led to abandon the notion of permanent memory, and it is this abandonment that encouraged him to attribute recognition and perception to a single brain function. From the combination of these three points then emerges a fourth difference, opposing to the rigid aspect of the absolutely accurate memory associated with the theory of functional localization – a memory that would hardly allow us to survive in an ever-changing world, Rosenfield noted (op. cit., p. 79) – the essentially dynamic nature of memory7.

7 The phenomenon of hypermnesia provides a kind of proof by absurdity of the inanity of the conception of a memory both rigid and absolutely faithful in a changing environment. The sad history of the man whom A. Luria observed for nearly 30 years, and whom he named Veniamin, bears witness to this. With a prodigious memory that was totally indifferent to the meaning of the objects in the thousands of lists that his activity as a “mnemonist” required him to keep in mind – a pictorial and synesthetic memory – Veniamin showed an exceptional imagination, but one very different from the creative imagination that generates coordinated actions in response to changes in the outside world: an awakened dreamer’s imagination, whose elements of the inner world thus created concealed the real world, making Veniamin incapable, for example, of reacting properly to a fire breaking out if he had not already seen it in his imaginary world (Luria 1995, pp. 193–305). For a very recent and more “general public” review and description of hypermnesic people, see Portiche and Gerkens (2018).

According to this conception, memories – far from being immutable – must be considered as reconstructions of the past in perpetual reworking, which give us a sense of continuity, the feeling of existing in the past, the present and the future. Rather than discrete units perpetuating themselves over time, memories constitute a dynamic system in the strongest sense of the term: this system perpetuates itself only by evolving. Memory thus conceived proceeds by generalizations, mobilizing categories which are themselves directly at the origin of the above-mentioned feeling of continuity, and which specific memories would be powerless to suggest. Always taking place in the context of the very moment, hic et nunc, recall consists in the implementation of operational processes aimed at continuously integrating new stimuli into pre-existing categories, themselves more or less deeply modified and reworked in the process: produced by imagination, such reconstruction is a vision of the past adapted to the present moment. This is the overall conception that emerges from the negative evidence in Rosenfield’s review of the experiments and clinical data that were previously held to support the conception associated with the theory of functional localization. After this refutation of a memory that is, properly speaking, invented, it remains to provide positive evidence in favor of the inventive memory whose notion he wants to promote.

3.2. Neural Darwinism and inventive memory

After showing in a very short second chapter (pp. 95–108) that meaning always goes from the general to the particular in any process of perception and learning (especially a child’s learning of speech), and that the perception of stimuli depends on the category we assign them and on their organization relative to other stimuli – not on the structure specific to each stimulus (categorization versus localization) – Rosenfield devotes the whole of Chapter 3 (pp. 109–145) to automatic pattern recognition and its limits. By contrasting the obvious ability of our brain to continually create new operating processes with the proven inability of automata to imitate us on this point, Rosenfield underlines the limits of the attempts – then (i.e. in the 1980s) the most recent – made in this field by computer science. For the reasons already mentioned, we will hardly develop this point, except to link it to the influence of the Darwinian problem of evolution on the conception of human memory proposed by Rosenfield. Indeed, it is no coincidence that it is also because of the inadequacy of the notion of an exact and fixed memory to a world in perpetual change that this author rejects such a notion: the evolutionary problem lies at the very foundation of his conception of memory, in the sense that he links it very
closely to the need, continually experienced by our organism, to adapt to a changing environment: according to him, to recognize by perceiving is to perceive what can be useful for our survival in the current environment (the context), by organizing what is perceived in a way that is consistent with our past experiences. With such a conception, our memory would make each of us a member of Gell-Mann’s class of complex adaptive systems, and we will see that within this class, this conception places us more precisely in Atlan’s subclass of complex natural systems. It should be noted that, in the 1980s, no computer was able to match the performance of human memory: the memory of the traditional computer made it possible to recognize, by comparing a perceived stimulus with a stimulus it had previously recorded, but not to generalize (to “recognize” and classify a stimulus that it had never “seen”); and that of the PDP (Parallel Distributed Processing) or neural machines (Personnaz, Dreyfus and Guyon 1988) certainly made it possible to generalize, but always in the same pre-established way. In the sense of the categories of learning we saw in Bateson, the first was able to learn, and the second to learn to learn, but no machine of this type then showed the ability apparently unique to the human brain: learning to learn in order to learn. Following Rosenfield, it is mainly on this third hierarchical level of the learning and functioning of human memory that we must focus our efforts to understand it. For our survival, it is indeed necessary that our memory allow us to decide, according to the context, that a given circle represents the sun, a wheel, a table top, or something else that we soon designate with a new name, absent until then from any lexicon – which no computer was then able to do.
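The three levels contrasted in this passage can be made concrete with a deliberately naive sketch, which is entirely our own illustration (the category names, coordinates and novelty threshold are invented): level I recognizes only what it has recorded; level II generalizes, but always in the same pre-established way; level III invents a new category when nothing known fits:

```python
# Toy contrast between three levels of "learning", loosely following
# the Bateson hierarchy invoked in the text. Categories are points in
# a feature space; distances and thresholds are arbitrary.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

categories = {"sun": (1.0, 1.0), "wheel": (0.0, 1.0)}

def recognize_exact(stimulus):
    """Level I: recognize only stimuli already recorded verbatim."""
    for name, prototype in categories.items():
        if stimulus == prototype:
            return name
    return None

def generalize(stimulus):
    """Level II: always assign the nearest known category, in one
    pre-established way (as the PDP machines of the 1980s did)."""
    return min(categories, key=lambda n: distance(stimulus, categories[n]))

def categorize_inventively(stimulus, threshold=0.5):
    """Level III: if nothing known is close enough, invent a new
    category on the spot -- the ability the text reserves to brains."""
    nearest = generalize(stimulus)
    if distance(stimulus, categories[nearest]) <= threshold:
        return nearest
    new_name = f"category_{len(categories)}"
    categories[new_name] = stimulus
    return new_name

print(recognize_exact((1.0, 1.0)))         # "sun"
print(generalize((0.9, 1.1)))              # nearest known -> "sun"
print(categorize_inventively((5.0, 5.0)))  # far from all -> "category_2"
```

Only the third function modifies its own repertoire of categories – a crude stand-in for the “learning to learn in order to learn” that the text reserves for the human brain.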
We must account for this capacity in a relevant theory of human memory, one that helps to establish “our ability to adapt to environments as different as the troglodyte thirty thousand years ago and the world of computing” (Rosenfield 1989, p. 119). We thus end up with the theory of “neural Darwinism” of Edelman8, on which Rosenfield drew very heavily – the very name of this theory, combined with the previous considerations, sufficiently indicates the reasons for such an approach – to build his own theory of memory. It was by combining Darwin’s theory of evolution with his own work on the human immune system – for which he was awarded the Nobel Prize in 1972 – that Edelman was led to formulate a novel explanation of how our brain works9. This explanation centers on the idea that the brain functions as a selective system – what we call learning is a form of selection – and rests on three assumptions:

– during the cerebral development of the embryo, an extremely variable and individualized pattern of connections would be formed between the neurons;

– after birth, some combinations of connections in this pattern, rather than others, would be selected in response to the sensory stimulation received by the brain;

– this selection would occur more specifically at the level of groups of neurons linked in layers, i.e. “maps” interacting with each other to constitute categories of things and events (Rosenfield op. cit., p. 159).

8 The notion of neuronal (or neural) Darwinism was introduced in (Edelman 1987) and then taken up in several subsequent works (Edelman 1992, pp. 109–129; 2004, pp. 49–65; 2007, pp. 37–49; Edelman and Tononi 2000, pp. 99–113).

9 Contrary to Linus Pauling’s 1940 theory, according to which the presence of a bacterium in the body would determine the nature of the antibody produced, G.M. Edelman showed in 1969 that all animals are born with a complete repertoire of antibodies: instead of considering the organism as a system learning to fight a specific germ by making a specific antibody, it must be conceived as immediately endowed with a molecule whose elements are likely to be modified to produce the millions of different types of antibodies necessary to protect the body from foreign organisms. The presence of the bacterium does not determine the nature of the antibody produced – it already exists in the body – but only its quantity. It is as if the bacterium or virus selected the appropriate antibody, which is different for each individual. I. Rosenfield recalls Darwin’s observation in this regard, pointing out that there is no such thing as a typical animal or plant in the world of biology, but only singular beings whose common qualities are abstractions that we invent by neglecting inter-individual variations. Yet it is on the basis of variations in the population that selection can operate. It is the variation that is real, not the average (op. cit., p. 156).

Far from any genetic determinism, and equally far from attributing behavioral and adaptive flexibility to a learning process that would produce brain organization patterns in response to external stimuli in the manner of programs providing instructions to computers, the aim is to explain thought and action as effective and infinitely varied responses to the unexpected conditions presented by the environment. The root of the problem is the extent of variability in brain structure – sufficient, despite a limited set of genes, to allow the body to adapt to an unpredictable environment. According to Edelman, this is only possible because, during embryonic development, cell differentiation is not determined solely by genetic mechanisms, but also depends on where each cell is at the beginning of the differentiation process, as well as on the positions it previously occupied. The inevitable variation in forms and movements from one individual to another makes it impossible to predict accurately the position of a given cell at a given time: a differentiation completely determined by genes would have disastrous consequences for the organism if a single cell were poorly located. In contrast to the architectural rigidity associated with genetic determination alone, Edelman therefore allows a certain degree of flexibility in cell organization patterns. These are the result of processes that shape cellular groups into structures capable of more complex functions, and whose shape is determined by signals from other groups. Limits are created as tissues develop in the embryo, and
these limits determine the different functional areas of the body, through intercellular adhesives called “cell adhesion molecules” or CAMs. There are different types of CAMs, whose molecular structure is determined by specific genes, but whose exact amount and degree of adhesiveness (which varies continuously during embryonic development) depend on the current and past locations of the cells that carry them. Boundaries are formed between groups of cells agglutinated by different CAMs, and these cells differentiate once these boundaries are formed. As this differentiation progresses, new boundaries appear, then new modifications are induced by the signals emitted by cellular groups towards the genes activating CAMs and those responsible for differentiation. Depending on the stages through which two cellular groups thus formed have passed, the signals exchanged between them determine the subsequent formation of extremely varied cell types on either side of the boundary. As Rosenfield pointed out: “The function of this boundary between groups with different CAMs therefore depends on the context: the environment of the cells and their past. In addition, the rules governing CAM response are generally the same for brain neurons and other body structures. Because the boundaries of cellular groups depend on the dynamics of movement, there will be individual variations that are not simply determined by genes, and whose diversity will ensure a structure specific to each brain. However, the resulting overall structure and the similar consequences of embryonic development could explain why the brains of individual members of a species are similar” (op. cit., p. 163). The infinite potential diversity of contexts would thus be answered by the infinite potential diversity of the individuals of each species.
Hence it is impossible to predict in detail the definitive morphology of an animal from knowledge of its entire genetic heritage; this is also what makes it possible to understand that two brains cannot be absolutely identical. Even with a common genetic heritage, as in the case of identical twins, this would require that the context and cell development of each individual determine strictly identical CAM mechanisms, which is practically impossible. On the contrary, these mechanisms are vectors of diversity among the brain functions of different individuals, and this, Edelman adds, occurs through a process of selection of neural groups. This selection does not play the same role before and after birth. Indeed, we have seen that at the embryonic stage, selection bears on the pattern of organization into distinct neural groups: the selection unit corresponding to a set of interconnected neurons operating simultaneously, the changes that occur in the dynamics of CAMs determine the variety of groups, each of which will respond differently to the same stimulus captured by the sensory receptors of light, touch and
sound, located at the eye, skin and ear. After birth, selection bears on the degree of connection between neurons, rather than on their organizational pattern: since a vast repertoire of different neural groups has already been created, changes in the intensity of neural connections now determine the circuits that neural signals will use. One group, responding more actively than another to a stimulus from the environment, will see the synaptic junctions between its elements strengthened. It is as if the stimulus selected particular variants within the population of neural groups: “Indeed, the response of a group is likely to be amplified. As its connections are strengthened, it is possible to modify the intensity of its links with other groups and, by competing with them, to integrate their neurons into its own response activity. The strengthening of synaptic zones creates what Edelman has called a secondary repertoire, consisting of neural groups that respond better to certain stimuli because they have been selected and their connections strengthened” (ibid., p. 166). However, stimuli are not organized from the outset into coherent elements of information: those that are more specifically relevant to the higher brain functions, such as the one that leads us to place two different concretely recognized tulips in the same abstract class “tulip”, must be organized by the body in order to make sense for it10.

10 As Rémy Lestienne puts it, the brain is a machine for creating coherence (2016, pp. 74–76).

According to Edelman, it is the role of the brain maps made up of neural groups to create such an organization: “A map brings together several of these groups arranged in the brain, in order to preserve the pattern of relationships that unite either a layer of sensory receptors (such as those on the skin surface of the hand) and a layer of brain nerve tissue receiving sensory stimuli, or two layers of nerve tissue between them. Groups are arranged in ‘interactive’ maps to create categories of objects or events. There are various types of maps in different areas of the brain, and the study of their interactions – called reentries – represents what is essential and definitive in Edelman’s theory” (op. cit., p. 167). Brain maps sort the afferent stimuli according to their similarities and combine their properties: they organize the stimuli into patterns that help the body to cope with its environment. According to Edelman, there must be a permanent interaction between them in order to make categorization possible. Some experiments show, for example, that the location of objects in space is obtained through the interaction of a
large number of sensory maps classifying and combining sound, visual and other stimuli, leading to an activity pattern that triggers a motor response. This is how the brain creates its categories and generalizations, and reshapes them if necessary – the organization of this mapping being not immutable, but able to rearrange itself when needed. Thus briefly summarized, the theory of neural Darwinism provides positive evidence in favor of Rosenfield’s theory of human memory. According to Rosenfield, Edelman’s explanation would constitute the biological basis of a new psychology, so that it would be possible to extend to the higher brain functions governing mental representations and their expression through articulated language the results obtained on less sophisticated forms of perception-recognition and expression11.

11 Suggested as early as 1989, such an extension was only fully assumed by I. Rosenfield in a later work (Rosenfield 1990). Nevertheless, since The Invention of Memory, he has supported the extension of G. M. Edelman’s theory to the higher brain functions (governing language) by referring to the existence of “Darwin II”, an automaton built by the latter and George N. Reeke Jr. in accordance with the principles of neural Darwinism. In fact, the performance of this automaton seemed to go far beyond that obtained by the traditional computer approach or by the PDPs of the 1980s: intended to simulate the mapping activity of the brain, Darwin II was apparently able to abstract (to organize stimuli into categories) without first being provided with specific instructions for doing so (Rosenfield 1989, pp. 171–176, 195–196).

Ultimately, what we perceive is not the result of an analysis of given data, but of what we have perceived and experienced in the past or at that very moment: our first contacts with the outside world encourage us to test different ways of organizing our sensations, and those that allow rational or useful behavior are later reinforced. Experiences, feelings and thoughts differ radically from one person to another, which would remain inexplicable in terms of processes as immutable as data processing. In reality, these variations are the result of the construction of ever-singular combinations organizing the various stimuli from an ever-changing environment: we must be able to cope with it, not by means of invariable images memorized in advance, but in a way that takes into account the new and the unexpected as well as the individual character of our past experiences. We therefore do not use saved images, but operational processes, to understand and transform the world. Mobilized by our confrontation with a variable environment, memory is an invention of categories that classify current perception, not a matching of old images with stimuli organized from the outset. At the beginning of this chapter, we stressed the close and reciprocal link between the theory of functional localization and the conception of memory that we have just seen radically refuted by Rosenfield: without the belief in memories
constantly present in memory, the theory of functional localization would not have been conceivable; conversely, the effect of the clinical studies carried out in the 19th and 20th Centuries was to strengthen and extend such a belief by “discovering” memory centers related to visual images of words, auditory images of words, etc., encouraging neuroscientists to divide memory into many specialized subunits. Such a conception still largely exists today among these specialists, but with a serious qualification brought by the discovery of neural (or cerebral) plasticity by Eric Kandel, winner of the Nobel Prize for Medicine in 2000. Following observations on a cat deprived of the vision of shapes since birth, whose territories in principle dedicated to vision turned out to be the site of auditory and tactile projections, as well as on the monkey, even adult, whose tactile projection of the hand could be enlarged after repeated solicitation of one of the segments represented – here, a finger: “The idea has been imposed of a flexibility of connections, which it must be recognized that the earlier authors had not foreseen, and which is related to a somewhat magical function of the central nervous system: that of ‘plasticity’. The principle of localizations was not questioned, but this plasticity could encourage caution on the part of experimenters too convinced of the fixed nature of traditional topography” (Buser 1998, p. 24, author’s translation). As the power to modify its rules of functioning over time and under certain external or internal influences, neural plasticity shows that the brain is a dynamic system in perpetual reconfiguration, and that this reconfiguration is based on the saving of lasting traces imprinted by its interactions with the environment12.
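The principle that use strengthens connections can be sketched with a generic Hebbian-style update – a textbook rule offered here purely as an illustration of plasticity, not as Kandel’s or Buser’s actual model; all quantities are arbitrary:

```python
# Minimal Hebbian-style sketch of "use strengthens connections":
# repeated co-activation of two units increases the weight linking
# them, so the network's effective topography is reshaped by
# experience. A generic illustration, not a model of any real cortex.

def hebbian_step(weight, pre, post, rate=0.1):
    """Increase the connection weight when the pre- and post-synaptic
    units are active together (1 = active, 0 = silent)."""
    return weight + rate * pre * post

w_finger = 0.2  # connection serving one finger's tactile projection
w_idle = 0.2    # connection serving a never-stimulated segment

# Repeated solicitation of the finger (both units co-active):
for _ in range(10):
    w_finger = hebbian_step(w_finger, pre=1, post=1)
    w_idle = hebbian_step(w_idle, pre=0, post=1)

print(round(w_finger, 2), round(w_idle, 2))  # 1.2 0.2
```

Repeated solicitation enlarges the weight serving the finger’s projection, while the unsolicited connection is left untouched – a minimal analogue of the cortical remapping described above.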
So if we accept the reality of the close and reciprocal link between the theory of functional localization and the ancient conception of memory, we must also accept that the plasticity introduced today into the topography postulated by the former must respond to an evolutionary dynamic substantially similar to that advanced by Rosenfield and Edelman for the latter13. This is all the more so since this memory theory has

12 The discovery of neural plasticity replaces the “interactive model”, which saw the environment acting on the expression of the genotype in the phenotype, with a “plasticity model” in which genotype and environment combine through plasticity to produce a unique phenotype (Ansermet and Magistretti 2004, pp. 22–23). Plasticity and epigenetics are thus linked, and on this link another link can be established, between neurosciences and psychoanalysis. According to F. Ansermet and P. Magistretti (op. cit., pp. 24–25), by making it possible to go beyond the genetic determination of human behavior as proposed by H. Atlan (1999), whom they explicitly mention on this subject, plasticity would make it possible to carry out a scientific revolution in the sense of T. S. Kuhn (op. cit.).

13 It is significant in this regard that a recent book could be entitled L’Erreur de Broca (Broca’s error). Written by a neurosurgeon who has shown, after more than 600 brain tumor operations, that the removal of the brain area supposed to be that of language – the famous

The Carousel of Time

several characteristics in common with Atlan’s self-organizational paradigm, even though the latter advocates a radically monistic ontology of the relationship between body and mind – the famous mind–body problem – originating in Baruch Spinoza. According to this radical monism, which we also adopt, body and mind do not proceed from two different substances, as in René Descartes, but constitute one and the same thing, expressed in two ways (Atlan 2011, pp. 249–266; 2018)14. Let us now identify the characteristics shared by the memory theory proposed by Rosenfield and the self-organization paradigm developed by Atlan, in order to outline a problematic of non-directed learning. There are three such characteristics. The first consists of the rejection of any pre-established program in the learning carried out by the entities constituting the respective objects of study of Rosenfield and Atlan. The analysis of certain operations of unbuilt systems (natural or social) in terms of self-organization led Atlan to formulate a model of non-directed learning, in the strongest sense of this expression: not only does he immediately reject any finalism, but he also explicitly constructs his model against the concept of teleonomy introduced by J. Monod in order to counter such finalism. Even more than being opposed to any notion of purpose, plan or execution of a prior program in the functioning of unbuilt systems, Atlan’s contribution is in complete disagreement with any conception that would assimilate the functioning of our brain – this particular case of an unbuilt system – to that of traditional computers. In his latest overview of the concept of self-organization, Atlan (2011) thus returns to the crucial distinction he has drawn between complication, such as that of a computer program, which can be measured within the framework of A. N. Kolmogorov and G. J. Chaitin’s algorithmic complexity (Kolmogorov 1965; Chaitin 1969, 1977; Li and Vitányi 1997), and natural complexity, such as that

“Broca’s area” – did not deprive patients of speech, the book concludes that there are no brain areas each rigidly dedicated to a single cognitive function, and confirms the brain’s organization into dynamic interactive networks endowed with surprising plasticity; see Hugues Duffau (2016).

14 For an overview of the different conceptions, past and present, of the relationship between “mind and brain” – the very title of the chapter concerned – see P. Buser (op. cit., pp. 13–41). For an enlightened – although not scientific in the strict sense – vision of the aporias that the body/mind problem still conceals today, see Hustvedt (2018). We reject here the dualist position of G. Bateson, who takes up the contrast introduced by Carl Gustav Jung, in Seven Sermons to the Dead, between the world of the pleroma and that of the creatura, in order to distinguish the world of substance from that of form (see Chapter 1, section 2.1.3, Hypothesis 4 of G. Bateson’s model). For an analysis of the difference between this Batesonian position and that of Claude Lévi-Strauss, see Ingold (2013, pp. 25–33), who joins and extends in a dynamic sense the static monism of the latter.

of a system neither constructed nor constructible by humans, which we have seen measured by extending C. Shannon’s canonical model15. It is on this precise point that Atlan (1972, 1979) anticipated the conception of memory proposed by Rosenfield (1989). For the latter makes a strong case against any theory which, like that of traditional functional localization, suggests that memory works in the same way as the data processing carried out by a conventional computer (PDPs and neural machines included). On the contrary, our memory is inventive. Instead of searching for information previously recorded, classified and permanently stored in a specific place in order to compare it with the information currently offered to its collection according to a list of preliminary instructions, it combines the structural features of stimuli – which it recognizes at the same time as it perceives them – and simultaneously classifies them into categories according to arrangements and procedures that are eminently evolving. No one has instilled, in any particular brain, the instructions for this uninterrupted reconstruction of memory, and it therefore operates without any pre-established program. Nevertheless, the memory theory proposed by Rosenfield is largely based on Edelman’s neural Darwinism, and thus seems to reintroduce – at the level of the species – the finalism it claims to have expelled at the level of the individual. If no particular memory can be credited with behavior aimed at the precise realization of an explicit program, the ultimate objective of the functioning of any memory would be the best possible adaptation of the human organism to its environment. In short, this theory would be alien not to any idea of finalism, but only to that associated with the notion

15 With H. Atlan, let us give an example of the need to distinguish complication and complexity. In the first case, a random binary sequence (i.e. one without apparent meaning) achieves maximal algorithmic complexity when it is impossible to represent it by a sequence shorter than itself, and is thus defined by an incompressibility property. This is the case with the sequences generated in the context of Turing machines, where a binary sequence can be processed indifferently as a program or as data. The meaning here remains implicit, in the form of the objective or program implemented by the programmer, and although it exists it does not have to be taken into account by the theory. However, this measure of meaningless complexity is insufficient when it comes to natural objects, whose purpose, if it exists, remains unknown to us: here, a random sequence must be considered as carrying zero meaningful complexity, and formalizing a complexity that carries meaning requires distinguishing between the program and data parts of a description, because it is the program part that makes a meaning explicit by defining a class of objects sharing the same structure, while the data part specifies a given object within that class. “Sophistication” is then defined as the minimum length of the program part alone of the minimal description. Consequently, a long random sequence with a high algorithmic complexity can have almost no sophistication if, to reproduce it as it is, its minimal description contains a program part reduced to the “print” instruction, and a data part consisting of the sequence itself (Koppel and Atlan 1991; Atlan 2011, pp. 60–63). Batesonian level 3 learning can then be interpreted as being produced by an algorithm of infinite sophistication (Atlan 2011, pp. 213–217).

of a computer program; it explicitly rejects the latter only the better to accommodate the adaptive purpose apparently underlying the Darwinian theory of evolution. Without reopening an old debate here, let us note that this dynamic adjustment of the individual to its environment refers to the notion of the homeostasis of a system: “Since Claude Bernard’s time, there have been two definitions of homeostasis […]: 1) homeostasis as an end, or state, more precisely the existence of a certain constancy despite (external) changes; 2) homeostasis as a means, i.e. the negative feedback mechanisms that serve to mitigate the effects of a change. The ambiguity of this dual use, and consequently the very extensive applications of this term, often very vague, have obscured its convenience as precise analogy and explanatory principle. It is better nowadays to talk about the stable or constant state of a system, a state that is generally maintained through negative feedback mechanisms” (Watzlawick, Helmick-Beavin and Jackson op. cit., p. 144, author’s translation). Similarly, the adaptation of the individual to his environment expresses both the supposed purpose of the individual in the functioning of his memory and the adequacy of the means deployed in that same functioning. Beyond the disadvantages caused by such polysemy, the latter is in itself revealing of the fact that the presumed purpose can perfectly well transform itself into a pure and simple condition of existence – in the sense that being adapted simply means being alive. Indeed, to deny such an equivalence would immediately imply the circularity of reasoning pinned down by Cornelius Castoriadis: “We cannot say that the global functioning of living organisms aims to preserve something definable. It cannot be said to be aimed at the conservation of the individual, because this would be circular (the functioning of the living individual would be aimed at the conservation of the individual as a living individual, obviously) […] in short: conservation is invoked by neglecting the fact that such conservation, if it is anything, is the conservation of a state that would only be definable by reference to conservation” (1978, p. 182, author’s translation). The preceding developments show that Rosenfield’s model of memory loses none of its meaning or scope by dispensing with the behavioral hypothesis of an adaptive purpose. Ultimately, this model simply describes the normal functioning of memory in a living being as a complex adaptive system, whose very ontology includes an adaptive dimension as an obvious quality. The learning perspective associated with Rosenfield’s conception of memory is, therefore, as free of any finalist connotation


as that of the self-organization model proposed by Atlan: neither of them bases learning on the notion of a prior program, and this is the first point in common between their respective theories16. This point immediately leads to the second characteristic shared by the theory of memory proposed by Rosenfield and the paradigm of self-organization developed by Atlan. If they both refuse to consider learning in terms of a pre-established program, it is because Rosenfield and Atlan agree on the considerable importance of the role of the environment in the evolution of the systems they consider. With regard to the process of recognition-perception that the former sees at work in the functioning of our memory, we have seen the importance of the current context in the recollection of the past, whereby this recollection inevitably acquires an aspect of reconstruction, even of invention: no recollection is possible without an appropriate context, and such a context necessarily leaves its mark on the memory currently produced. Recall, therefore, consists of a constant reassessment of the past in light of the present context. The latter, however, is nothing more than the relevant part of our environment when we are in a situation of recollection. This is the broad meaning that should be associated with the notion of context so readily put forward by Rosenfield: the one that brings together the multiple channels – emotional as well as purely sensory – by which we are affected by the environment in which we are immersed at the precise moment of our recollection. A similar logic underlies the phenomenon of self-organization, analyzed by Atlan in the context of unbuilt systems in general – beyond that of memory alone. Indeed, it is because a system of this kind is able to integrate into its own organization the noise produced within it by random factors from the environment that it is self-organizing. And for such a system, having this self-organizing property means nothing more than being able to learn by itself, by seeing its degree of complexity increase to the exact extent that its degree of redundancy decreases. Conversely, without organizational noise there is no ambiguity-autonomy and no increase in complexity: no learning, therefore, without an environment that is a source of noise factors. It should also be noted that here, as in Rosenfield’s conception, the environment is no more programmed than the system to react in the form of recollection or increased complexity. Just as Rosenfield constantly points out that the stimuli from our environment are not organized from the outset – on the contrary, the brain’s maps have the task of organizing them by categorizing them – so Atlan constantly insists on the random nature of organizational noise, whose occurrence corresponds to no pre-established program, in the environment, intended to organize or disorganize

16 See on this point the discussion of the relationship between structure and function – an old question that runs through the entire history of biology (Atlan 2011, pp. 184–186).

the system. In the end, for these two authors, the learning carried out by an unbuilt system, particularly that carried out by our memory during recollection, always results from an unscheduled interaction between such a system and its environment. One of the major implications of this second characteristic shared by their respective theories is that it opens up the possibility of analyzing the phenomenon of innovation. Indeed, Rosenfield and Atlan both pay particular attention to the problems raised by the emergence of new meanings in their respective objects of investigation. For the first, such a concern responds to an obvious necessity: as he repeatedly states, a relevant theory of memory must be able to explain our dynamic adaptation to an ever-changing world. Such a theory is therefore necessarily that of an evolutionary system, whose learning is achieved by a permanent integration, into its normal mode of operation, of the unexpected aspects under which the environment presents itself to it. Hence Rosenfield declares himself so seduced by Edelman’s neural Darwinism: by denying the reality of a strict genetic determinism in the embryogenesis of the brain as well as in its subsequent functioning, neural Darwinism ipso facto instills in our conception of memory the degree of lability it must show in order to be the instrument in the service of our survival that Rosenfield sees in it. From Atlan’s perspective, the same fundamental preoccupation – explaining the appearance of the new – is present in a slightly different light: translated into much more abstract terms, it seems intended to respond to a purely theoretical objective rather than to a stated need for realistic description. Not that Atlan’s concept of self-organization is devoid of any concern for realism; quite the contrary, since with this concept he proposes to analyze the functioning of genuinely unbuilt systems. But this concept seems mainly dedicated to solving, within the formalism developed by Shannon, three kinds of problems that the latter had left in suspense: “Three kinds of problems are difficult to avoid when it comes to information. (1) Those related to the creation of information: Shannon’s second theorem, on the noisy channel, explicitly states that information transmitted in a channel cannot be created, since it can only be destroyed by the effects of noise or, at best, preserved. (2) Those related to the meaning of the information: Shannon’s formula only allows the quantification of the average information per symbol of a message if the possible meaning of the message is neglected. (3) Finally, those related to hierarchical forms of organization: insofar as Shannon’s formula could have been used as a measure of organization, it totally ignores the problems of the interlocking of different levels of organization, more or less integrated into each other” (1979, p. 65, author’s translation).
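Atlan’s response to the first of these problems is the principle of complexity from noise invoked above: the Shannon entropy H of a system rises, and its redundancy R = 1 − H/Hmax falls, when random perturbations differentiate an initially redundant organization. The following sketch is only a numerical illustration of that inequality; the alphabet, the skewed distribution, the 40% noise rate and the seed are assumptions of ours, not Atlan’s own formalism:

```python
import math
import random

def entropy(counts):
    """Shannon entropy H, in bits per symbol, of an empirical distribution."""
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c)

def redundancy(counts, alphabet_size):
    """Redundancy R = 1 - H/Hmax, with Hmax = log2(alphabet size)."""
    return 1 - entropy(counts) / math.log2(alphabet_size)

alphabet = "abcd"
rng = random.Random(0)

# A highly redundant "organization": one symbol dominates.
system = ["a"] * 900 + ["b"] * 50 + ["c"] * 30 + ["d"] * 20

def tally(seq):
    return {s: seq.count(s) for s in alphabet}

r_before = redundancy(tally(system), len(alphabet))

# Organizational noise: random factors from the environment
# perturb 40% of the symbols.
noisy = [rng.choice(alphabet) if rng.random() < 0.4 else s for s in system]
r_after = redundancy(tally(noisy), len(alphabet))

# Noise has lowered R, i.e. raised H toward Hmax.
print(f"R before: {r_before:.2f}, R after: {r_after:.2f}")
```

Under these toy assumptions, R falls and H rises: the destruction of redundancy by noise is what Atlan reads, at the level of the whole system, as a creation of information.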


It is remarkable that Rosenfield – who is as concerned as Atlan with the problems of the creation and the meaning of information, although he does not pose these questions in terms of a hierarchical system – responds, despite the slight difference in approach just mentioned, by invoking essentially the same type of logic: in his theory of memory, as in Atlan’s theory of self-organization, the growth of information is always accompanied by differentiation and complexity – never by homogenization. This is the third characteristic shared by the conceptions of these two authors, and on this particular point it would be difficult to be more explicit than Atlan himself: “When it comes to non-directed learning, two properties, consequences of the principle of complexity from noise, can be recognized. The first is that the learning process can be understood as the creation of patterns by reduction of redundancy, where particular pattern specifications exclude others. So, to the question: what is increasing and what is decreasing in learning? According to this principle, we can answer that what increases is the differentiation, the specificity of the patterns learned, and this therefore implies an increase in variety, in heterogeneity; on the contrary, what decreases is the redundancy of the whole system, its undifferentiated character […]. A second aspect of the principle of complexity from noise in non-directed learning mechanisms is that patterns, once created, are compared with new stimuli or, more accurately, are projected onto and applied to them. To the extent that patterns and new stimuli can coincide, we say that we ‘recognize’ new patterns in the environment. But since the stimuli are really new, this coincidence can only be approximate. There is an ambiguity in this application, in this projection of the patterns onto the new stimuli, and this ambiguity itself has a positive role to play, in that it leads to a feedback action on the patterns themselves, i.e. a modification of the initial patterns. These, once modified, will then be projected again onto new stimuli, and so on. We can thus picture these non-directed learning mechanisms as a kind of back and forth between patterns that are created, then projected onto random stimuli, and these stimuli which, to the extent that they cannot coincide exactly with the former, then modify the class of patterns that will serve as a reference, and so on. In other words, everything happens as if our cognitive apparatus were a kind of creator, once again, of an increasingly differentiated order, of a complexity based on noise” (1979, pp. 145–146, author’s translation, italics in the original). If we have quoted this long passage from Atlan almost in full, it is for two related reasons. First, because it explicitly addresses the type of learning

performed by our cognitive system, taken as an example of an unbuilt system and therefore as presenting the self-organizing properties with which this type of system is more generally credited by Atlan. In short, Atlan is speaking here of precisely the same subject as Rosenfield. Secondly, and above all, because it is particularly striking to see that, on this identical subject, Atlan’s comments are exactly in line with Rosenfield’s. Apart from inevitable differences of a purely lexical nature, not a single word of this quotation would need to be changed for it to appear to come from Rosenfield’s text. To convince oneself of this perfect identity of views between the two authors, it is sufficient to consider the analysis of the recognition and acquisition of speech by children – a significant case of a “reliable” self-organizing system in the sense of Atlan. Based on a large amount of experimental data, Rosenfield shows that the learning at work in children, during the recognition and acquisition of speech, proceeds by generalization based on different visual and spoken clues (1989, pp. 95 sq.). In a way, the child interprets these clues by categorizing them – by constructing patterns, Atlan would say: he translates his perception of the sounds associated with certain labial mimicry into a combination that he tries to reproduce during his articulation tests. According to Rosenfield, the child first identifies the “outlines” of meaningful speech, including its prosodic outlines, although he or she may not understand isolated words or sentences: “Some young children start, for example, by imitating variations in the intonation and intensity of the adult’s speech. They sometimes reproduce the various inflections so well that adults think they hear them say authentic sentences, when in fact the syllable sequences do not form real words” (op. cit., pp. 96–97). Rising melody of questioning and surprise, falling melody of statement, etc.: these are the prosodic outlines of sound that the child first tries to reproduce in babbling, in which adults recognize only gibberish, a scrap of information and a burst of their own voice. After this intonative mimicry comes the acquisition of syllables and of the phonetic segments (consonants and vowels) composing the syllables. Segments and syllables gradually appear as the child learns to master, by synchronizing them correctly, the components of the articulatory movements necessary for the correct production of sounds. The important point here is that this type of learning strongly evokes the one described above by Atlan, which ultimately consists of two complementary operations: a categorization operation that can always be seen at work in the synchrony of the learning process, and a differentiation operation that reflects the diachrony of the same process. Thus, the successive stages of the latter concern objects obtained by an ever finer partitioning of the set of categories corresponding to each stage – a set itself qualitatively modifiable through the always approximate nature of the projection of the old categories onto the new


stimuli. Indeed, let us recall one last time Rosenfield summarizing the logic of children’s learning of speech: “[The child] therefore begins by recognizing the overall outlines, but lacks the necessary synchronization for the sequence of movements, although he has detected it […] He does not immediately grasp the details of this organization, nor all its variations from one utterance to another. But it is striking to note that children establish categories of movements whose importance they perceive in adult speech, and that they try to reproduce words using these categories. Then an articulatory scheme, a series of movements, makes it possible to define a category for a specific phoneme or sound unit. Thus, the meaning of a set of movements evolves in a sequence from the general to the specific (children first acquire a general ability to imitate intonations), just as happens with the acquisition of meaning. (Indeed), children first acquire the semantic part common to two words, and only later the semantic part that differentiates the two words”17 (1989, pp. 98–99). Finally, self-organization according to Atlan and the functioning of human memory according to Rosenfield have three important points in common: (1) both are non-directed forms of learning, in the sense that they do not respond to any pre-established program, either in the system under consideration (for example, human memory) or in the environment of that system; (2) the efficient cause of this learning lies in the random encounter between the system and certain factors from its environment; (3) the product of this learning consists of the construction of patterns (or psychological categories), as well as of an ever finer differentiation of such categories, whose list and very mode of construction are likely to be called into question at each stage of the process. All these characteristics make the functioning of human memory an outstanding example of the natural complexity of a self-organized system in the sense of Atlan. And we easily recognize in this example the hierarchy of the different categories of learning analyzed by Bateson: the random encounter between the (memory) system and its environment provokes primary learning in the form of a categorization of what is perceived, while the depth of anchoring of the categories thus produced and differentiated simultaneously initiates (or reinforces) deutero-learning, itself always liable to be challenged by level 3 learning. On the basis of the combination of these concepts shared by Bateson, Atlan and Rosenfield, while maintaining a Shannonian formalism, our next chapter will now set out a number of hypotheses on the basis of which we will then construct a formal model of the space–time of a complex socio-cognitive network of individual actors.

17 For an approach to the “theory of mind” in children, see Astington (1999).
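Before moving on, the three shared points can be caricatured in a few lines of code. The sketch below is only an illustration under assumptions of ours (one-dimensional stimuli, a fixed tolerance, a simple weighted-average update); it is not the formal model of the next chapter. It shows non-directed learning in the spirit described above: no category exists in advance, patterns are created by the random encounter with stimuli, and each approximate projection feeds back on the patterns themselves.

```python
import random

def learn(stimuli, tolerance=0.2):
    """Toy non-directed learning: prototypes (category patterns) are
    created and refined solely by the stream of random stimuli."""
    prototypes = []                      # (1) no pre-established program
    for s in stimuli:
        if prototypes:
            nearest = min(range(len(prototypes)),
                          key=lambda i: abs(prototypes[i] - s))
            if abs(prototypes[nearest] - s) <= tolerance:
                # Approximate coincidence: the pattern "recognizes" the
                # stimulus and is modified in return (feedback).
                prototypes[nearest] = 0.8 * prototypes[nearest] + 0.2 * s
                continue
        # (3) No pattern fits: differentiation creates a new category.
        prototypes.append(s)
    return prototypes

rng = random.Random(1)
stimuli = [rng.random() for _ in range(500)]   # (2) random environment
cats = learn(stimuli)
print(len(cats), [round(p, 2) for p in sorted(cats)])
```

Run on 500 uniform stimuli, the loop settles on a handful of self-organized categories; nothing in the code says how many there should be or where they should lie.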

4 Hypotheses Linked to the Model

In this chapter, we will begin to construct our space–time model of the socio-cognitive network of individual actors on the basis of the lessons learned from the previous three chapters. To do this, we will formulate a number of hypotheses about the structure and evolution of this network. As announced from the outset in our general introduction, the model we aim to develop in this way is generic rather than explanatory. Although it occasionally meets some empirical data, this model does not originate from precise experimental data; it merely suggests a likely logical structure for a global phenomenon that is too complex to be analyzed in detail, namely the space–time in question. This logical structure concerns the two dimensions, social and cognitive, that we know to be at work in an adaptive system whose complexity is natural in the sense of H. Atlan: those of inter-individual communication and of intra-individual cognition, which we identify here with categorization. Communications and categorizations cause certain structures to emerge at the global level of our complex socio-cognitive network, and these structures in turn influence the individual level from which they originate. Moreover, this model is situated in the space of possibilities, not in that of realization: in the wake of the Marxian concept of “possible consciousness” (Goldmann 1965)1, what interests us is not so much what the actors communicate,

1 Lucien Goldmann thus translates the expression “Zugerechte Bewuβtsein”, which he indicates literally means “calculated consciousness”. This expression, which he describes as “Marx’s most fruitful discovery” (op. cit., p. 47), was introduced by Marx in a famous passage of The Holy Family to explain the difference between the real and the possible consciousness of the proletariat – between what the proletariat conceives and what it can conceive. Goldmann uses this concept to emphasize the privilege that should be given to the reception pole in the analysis of communication: what is the receiver of information capable of conceiving, or what can he receive of what is transmitted to him from elsewhere? This, in his view, and in line with G. Bateson’s position, is the central question of communication, particularly

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.

the precise objects of their cognition and the resulting learning contents, as what they can communicate, their possible cognitive objects and the types of learning that can result from them. In other words, our object consists of a plausible mechanism for the structure and evolution of a set of potential cognitive and social gestures of the actors, together with their possible individual and collective consequences. This mechanism traces the shifting boundaries within which the deliberate and voluntary strategies of the actors can be deployed, and it is these boundaries that we want to analyze, rather than the strategies themselves2. We will first propose six hypotheses concerning the structure of the network. These hypotheses will clarify our formalization of the cognitive universes of individual actors. We will postulate that these universes are composed of repertoires containing sets of combinations of psychological categories, as well as semantic memories that constitute a part of these repertoires. We can then represent any state of the network in matrix form, associate it with a date, and give it a measurement in Shannonian terms. We will then introduce eight additional hypotheses concerning the evolution of the network. First, we will define the conditions of possibility of inter-individual communications carrying messages, whose generic form we will specify. Then, we will introduce a concept of propensity to communicate between individual actors, whose formal definition will imply, by construction, that among all possible inter-individual communications the most likely are dyadic. The first driving force behind the evolution of the network consists of such communications, which thus control the most likely evolution of the network by promoting certain forms of individual and collective learning. The categorization processes carried out by individual actors represent a second driving force of the network’s evolution and, like the first, they produce learning with both an extensive and an intensive dimension.

4.1. Six hypotheses relating to the structure of the network

HYPOTHESIS 4.1.– The individual actors, denoted Ai, i = 1, …, m, are represented in terms of their mental representations, which constitute beliefs that may prove to be false, in contrast to knowledge, defined as true beliefs that are reliably justified. This choice allows us to avoid the tricky question of what exactly is meant by the notion of “true belief” supposed to distinguish knowledge from opinion since Plato’s

for the sender concerned about the effectiveness of his message, having regard to the intention behind it.

2 For a recent interpretation of the philosophical tradition around the concepts of possibility and realization, see Debru (2003, pp. 19–96).

Theaetetus (151b–210a), as well as that which concerns a notion of “reliable justification” of which no known epistemology has yet succeeded in giving a rigorous definition, and to which Edmund Gettier (1963) seems to have dealt a fatal blow3. The structure of these representations is propositional (each proposal is composed of a predicate and one or more arguments: A1 believes that p, A2 believes that A3 believes that q, etc.), and it is constructed by assembling representations that have a concept structure. These mental representations constitute the semantic memory of the individual actor4. HYPOTHESIS 4.2.– The representations of actor Ai are composed of psychological categories, i.e. meanings of mental concepts seen in terms of their extension (the extension of the concept is the set of all flowers). Categorization is defined as the implementation of the relationship “is one” (“this is a flower”), and therefore consists of assigning a particular representation-occurrence (“this object, currently in front of me…”) to a generic representation, which is the category (…“is a flower”) (Le Ny 2005, p. 160 sq.). The perception appears here to be entirely woven from categorizations, in the sense that what the actors perceive are objects (things, individuals, events, etc.) grouped into perceptual categories: perceiving is recognizing an occurrence falling within a category (ibid., pp. 164 sq.). In accordance with the position of I. Rosenfield that we have seen considering perception and recognition as two aspects of the same cognitive function, this conception of categorization also joins the older work of Jerome S. Bruner (1958) according to which any perceptive experience represents the final product of a

3 In a three-page article that has given rise to a very extensive literature on the "Gettier problem", the latter uses two examples to show how ambiguous the notion of justification is. The notion of knowledge is traditionally based on three conditions that are supposed to be necessary and sufficient: subject S knows that P if, and only if: (1) P is true; (2) S believes that P is true; and (3) S is justified in believing that P is true. E. Gettier shows that these three conditions are not sufficient, because it may be that P is true, that S believes that P is true, and that S is justified in believing that P is true, but for the wrong reasons, so that strictly speaking we cannot affirm that S knows that P.
4 Long-term memories are usually divided into three categories: episodic memory, which stores information about events that occurred at a particular time (Tulving 1983); semantic memory, which includes knowledge that is not related to a specific time, and whose degree of abstraction gives this knowledge a character of transferability to many cases and situations; and procedural memory, which consists of skills such as cycling, playing tennis or juggling (Didierjean op. cit., pp. 63–79). We favor semantic memory here because, as experiments led by Stanley B. Klein and Cynthia E. Gangi with patients who have suffered certain injuries have shown (Klein and Gangi 2010), it is mainly this memory that contains the information summarizing what our own personality is, and that thus expresses for each of us the feeling of being ourselves (Didierjean op. cit., pp. 145–154). For a presentation of the different memory registers, see Squire (2004).


The Carousel of Time

categorization process5. Perception and categorization are thus indissolubly linked by a process involving a decision, the use of clues and an inference operation – the purpose of which is to enable the organism, through categorization, to adapt to its environment: "[Perception] is based on a decision-making process (the perceiver decides that what is perceived is one thing and not another) […] The decision involves the use of discriminatory clues […] The process of using clues involves the inference operation […] A category can be considered as a set of specifications concerning events that can be grouped as equivalent, and categories vary in their accessibility [which depends on] the person's expectation of the likelihood of events in the environment, and [the] research requirements imposed on the organism by its needs and ongoing undertakings…"6 (Bruner op. cit., pp. 16–19, author's translation). The last part of this quotation refers in a rather vague way to the activity that we know to be that of a complex adaptive system. "The research requirements imposed on the organism by its current needs and undertakings" refers to the more precise formulation that such a system obtains information about its environment and its interactions with it, in order to derive indications as to the behavior it should adopt to meet its needs and carry out its current undertakings – in short, to adapt to its environment. As for the reference to "the set of specifications concerning events that can be grouped as equivalent", contained in the first part of the quotation, it is clearly part of the analytical understanding of categorization which, since Aristotle (Topics 1.9, 103b20–25; Categories 4, 1b25–2a4), has defined

5 Alongside Gestalt theory, J. S. Bruner's approach identifying perception and categorization is one of the two major research streams that have strongly influenced work on social perception.
According to this approach, assigning a stimulus to a class of stimuli constitutes the inductive aspect of categorization, and associating the characteristics of that class with an item belonging to it constitutes the deductive aspect (Dubois 2005, p. 10 sq.). The current literature on the psychological operation of categorization is immense. For theoretical links and experimental contributions between categorization and cognitive development, see Houdé (1992); for relationships between classification and categorization, see Estes (1994); for a praxeological approach borrowed from various social sciences, see Fradin et al. (1994); for links between categorization and cognition, see Dubois (1997); for philosophical studies of the relationships between concepts and categories, see Longy and Le Du (2004). For an original approach to the relationship between categorization and analogy, see Sander (2000). Concerning the analysis of the cognitive processes involved in categorization, enriched by many examples from the history of science, the most recent book, and in our opinion the most stimulating, is that of Hofstadter and Sander (2013). 6 For an overview of the different theories of perception, from G. Berkeley to J. McDowell, see Dokic (2004).


the category as a set of discrete entities grouped on the basis of a set of common characteristics. Another approach to categorization, which had not yet emerged at that time, is based on the prototype theory introduced by Eleanor Rosch (1973). This approach proposes a more gradual vision of categorization than the previous one, assuming that each category is centered on a prototypical element while admitting within it those elements considered sufficiently close to it. At the level of generality that we adopt here, this distinction between analytical and prototypical approaches to categorization is of little importance. On the other hand, it is essential to stress that, analytically or prototypically, categorization always takes place from a certain point of view and from a certain angle, which is that of a given individual actor using a precise criterion at a given time to group distinct entities into categories. Whatever the object in question, it can therefore always lend itself to as many categorizations as there are possible points of view and angles from which to operate this cognitive gesture, which is at the basis of the actors' representations7. In our model, each category (indexed by j) constructed at date t by actor Ai is noted Cqj, where q represents the number of representations-occurrences encountered by actor Ai, from his most distant past to date t, which fall within the category Cj. Such a category is part of a set Si(t) whose cardinal is denoted by I. This categorization process corresponds to the synchronic dimension of learning that we know is common to Atlan and Rosenfield, as well as to the dialectic between stimuli and patterns common to H. Atlan and G. Bateson, where this dialectic covers learning at levels 1 and 2.
The diachronic dimension of the conception of learning shared by these three authors consists of a differentiation of the psychological categories thus formed, as well as a possible questioning of the mode of selection and organization of these categories in the individual memories, which corresponds to Batesonian level 3 learning.

HYPOTHESIS 4.3.– Each psychological category Cqj element of Si is defined in the model according to the highest current discrimination power of actor Ai. In each state of the network, this power of discrimination is finite, and the finest Cqj categories within it are called "elementary". Thus, the category "flower" is elementary for Ak in the state of the network considered if this actor's power of discrimination goes no further than distinguishing flowers from vegetables. This is not the case for Al who, in the same state of the network, has just distinguished two finer categories within it. While the cognitive repertoire of the first actor contains the elementary category (Cqj), that of the second now contains, in addition to this category increased by an additional occurrence (Cq+1j) – which is no longer an elementary category – the new elementary categories, noted Cj.1 and Cj.2.

7 The importance of this point will become apparent when we discuss the significance of the entropy phenomenon in our model (see Chapter 9, section 9.3).

HYPOTHESIS 4.4.– c(Si)(t), the set of all parts of the set Si(t), formalizes the notion of Ai's cognitive repertoire at date t. This repertoire is the set of all formal combinations of the Cqj categories listed in Si(t). Generally, only a subset p(Si)(t) of c(Si)(t) is semantically relevant for Ai in the corresponding state of the network, and it is such a subset that constitutes the individual memory of Ai. This memory is likely to change within a given cognitive repertoire, because some of its parts – which are meaningless in some states of the network – may acquire a meaning in other states of the network, and vice versa8. This hypothesis thus reflects what cognitive psychology calls the attentional filter model: our perception is selective. Indeed, according to a hypothesis introduced by Donald Broadbent (Broadbent 1958), then confirmed by many observations from daily life (Didierjean op. cit., pp. 33–43), our cognitive system includes a filter that selects incoming information by letting only one piece of information pass through at a time – the others being stored for a short time: some are processed in a deferred manner, while most are lost and deleted from memory. The majority of authors believe that this filter blocks information before its meaning is processed. Hence, in our model, the information immediately processed constitutes a representation-occurrence immediately stored in a generic representation, i.e. in a psychological category that is integrated, at the date considered, into the cognitive repertoire of the individual actor concerned. For such a category to appear in this actor's individual memory, it must have a meaning in the latter's eyes that is consistent with the combinations of categories already included in it.
Indeed, our prior knowledge plays a fundamental role in the selection and interpretation of what we perceive. As A. Didierjean writes, “we ‘see’ above all the knowledge we have in memory about the world” (op. cit., p. 45), and this enrichment of perception by knowledge, highlighted by Helene Intraub and Michael

8 For example, the following combination – dissection table, sewing machine, umbrella – mentioned in canto VI of Lautréamont's Chants de Maldoror, was not relevant until 1869, and started to be relevant after that date. Conversely, Aristotle's "grave" (the heavy body), a category relevant until the emergence of classical mechanics in the 18th Century, ceased to be so with this emergence, and then shifted from the register of theoretical physics to that of the history of this discipline. For a variation on this theme, see Poplin (1996–1997). This distinction between cognitive repertoire and semantic memory has no effect on most of our developments: it only becomes operational when new information contradicts the current state of a given actor's memory. It then causes the latter to learn at level 3, as we will see in Chapter 10 by analyzing the radical transformation of a paradigm during a scientific revolution in the sense of T.S. Kuhn (op. cit.).


Richardson (Intraub and Richardson 1989), and named by them "field extension", is carried out at a non-conscious level in our cognitive system. Under hypotheses 4.1, 4.2, 4.3 and 4.4, the state of the network at time t is representable by a Boolean matrix [aij](t) whose rows are the individual actors Ai, i = 1, …, m, and whose columns are the elementary categories Cqj, j = 1, …, n. The coefficients of this matrix are such that aij = 1 if, and only if, the set Si(t) contains the category Cqj, and aij = 0 in the opposite case. It should be noted that this matrix does not show the distribution of a preliminary global stock of representations among the actors (with a possible remainder, so that there could be one or more columns of [aij](t) such that aij = 0, ∀i), but instead represents the composition of a network of representations from the individual representations, so that each column of the matrix contains at least one "1". Let us give a simple example of such a matrix, by posing m = 2, n = 5 and q = 1, ∀j. The state of the network at time t is then representable as shown in Table 4.1.

      C1   C2   C3   C4   C5
Ak     1    0    1    0    1
Al     0    1    1    1    0

Table 4.1. The network at date t
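The two rows of Table 4.1 can be enumerated with a short script. It anticipates the global memory Γ and the measures H, Hmax and R defined in hypotheses 4.5 and 4.6 below; all the variable names are ours, not the author's, and the script is only a minimal sketch of the bookkeeping involved:

```python
from itertools import combinations

def powerset(s):
    """All 2**|s| parts of s, empty set included: the cognitive repertoire c(Si)."""
    items = sorted(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

S_k = {"C1", "C3", "C5"}   # elementary categories of Ak (row Ak of Table 4.1)
S_l = {"C2", "C3", "C4"}   # elementary categories of Al (row Al of Table 4.1)

c_Sk, c_Sl = powerset(S_k), powerset(S_l)
gamma = c_Sk | c_Sl                    # global memory Γ(t): union of repertoires

H = len(gamma)                         # structural complexity, in bits
H_max = len(c_Sk) + len(c_Sl)          # maximum quantity of information
R = 1 - H / H_max                      # redundancy, computed as 1 - H/Hmax

print(len(c_Sk), len(c_Sl))            # 8 8  (2**3 parts each)
print(H, H_max, R)                     # 14 16 0.125
```

Computing R as 1 − H/Hmax reproduces the 12.5% redundancy given in the text for date t; the two repertoires overlap only in the two parts {ø} and {C3}, which is why Γ(t) has 8 + 8 − 2 = 14 parts.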

According to the notations defined above:

Sk(t) = {C1, C3, C5}; |Sk(t)| = K = 3
c(Sk)(t) = {{ø}, {C1}, {C3}, {C5}, {C1C3}, {C1C5}, {C3C5}, {C1C3C5}}; |c(Sk)(t)| = 2^3
Sl(t) = {C2, C3, C4}; |Sl(t)| = L = 3
c(Sl)(t) = {{ø}, {C2}, {C3}, {C4}, {C2C3}, {C2C4}, {C3C4}, {C2C3C4}}; |c(Sl)(t)| = 2^3

HYPOTHESIS 4.5.– At the global level of the network, we have no reason to distinguish between "memory" and "cognitive repertoire", because at this level no combination of the Cqj categories has any particular semantic relevance. Indeed, it is a network, in itself devoid of any purpose, unlike a system, whose purpose would constitute one of the five characteristics whose congruence defines the concept of


system (Le Moigne 1994, p. 62). We can therefore define the "global memory" of the network at date t by Γ(t), the union of the individual cognitive repertoires at this same date: Γ(t) = ∪i c(Si)(t). In our example, Γ(t) contains 14 parts: Γ(t) = c(Sk) ∪ c(Sl) = {{ø}, {C1}, {C2}, {C3}, {C4}, {C5}, {C1C3}, {C1C5}, {C3C5}, {C1C3C5}, {C2C3}, {C2C4}, {C3C4}, {C2C3C4}}.

HYPOTHESIS 4.6.– The volume of this global memory is measurable in the language of Shannonian information theory. Indeed, another matrix [Aij](t) is easily deduced from the matrix [aij](t): its rows are still the individual actors, but its columns are now the set of all the parts of the sets Si(t), i.e. all the combinations of elementary categories that appear in at least one c(Si). In our example, there are, at date t, 14 such columns corresponding to the 14 parts of Γ(t). It is this matrix [Aij](t) that truly represents the state of the network, and it is to it that the network observer is supposed to have access by observing individual actors. Each "1" he observes indicates to him that the actor located in the corresponding row has in his cognitive repertoire the combination of elementary categories indexed by the corresponding column. This observation constitutes an event for him, and since its a priori probability was equal to 1/2 (it might not have occurred), this event brings 1 bit of information to the observer by occurring. In other words, the network appears to the latter as a message composed of a series of combinations of categories, coded in binary language and contained in at least one individual cognitive repertoire. At each date considered (t), the number of bits contained in the global memory of the network is therefore equal to H(t) = |Γ(t)|, and this number measures the structural complexity of the network. In addition, the maximum quantity of information contained in the network, noted Hmax(t), is equal to |c(Sk)(t)| + |c(Sl)(t)|. In our example, Hmax(t) = 16 bits, and the structural complexity of the network being equal to 14 bits, its redundancy R(t) = 1 – H(t)/Hmax(t) is 12.5%.

4.2. Eight hypotheses relating to the evolution of the network

HYPOTHESIS 4.7.– The evolution of the network results, on the one hand, from communications between individual actors and, on the other hand, from those cognitive processes internal to the actors that constitute their categorization operations. This distinction between inter-individual communication and intra-individual categorization is analytical, because in reality any communication concretely established between real actors has an intentionality dimension9 that itself

9 As H. Atlan states (2011, pp. 219–221), there are two meanings to the notion of intention: the first covers the aims or intentions that distinguish voluntary actions from those that are


implies a part of cognition, and any categorization, as more generally any individual cognitive process, can be interpreted as a communication that the actor concerned establishes with himself. Nevertheless, a communication established between individual actors produces direct effects on each of them as they receive the messages exchanged on this occasion, whereas an internal cognitive process only has a direct effect on the actor who puts this process into action. Inter-individual communication and intra-individual categorization thus produce different effects at the level of the network as a whole, and it is therefore important to distinguish them. Communication and categorization involve different types of learning, leading to different individual and collective information gains.

4.2.1. Assumptions related to inter-individual communication

HYPOTHESIS 4.8.– As we said in the introduction to this chapter, inter-individual communication is conceived here in terms of the messages that can be exchanged, or not, by individual actors, themselves never just anyone, because they are always located in the space and time of the network. The notion of possibility thus works on two levels in this aspect of our model: the identity of the actors who can (or cannot) communicate, and the messages that these actors can (or cannot) exchange with each other on this occasion. The necessary (but not sufficient) condition for establishing communication at time t between actors is the existence of a common language between them. This common language is formalizable by the intersection of the sets of elementary categories at the date in question: Sc(t) = ∩i Si(t), with C = |Sc(t)|. If this intersection is empty, communication is impossible between the actors concerned10. In our example, Sc(t) = Sk(t) ∩ Sl(t) = {C3}, and communication is therefore possible between the actors Ak and Al.

HYPOTHESIS 4.9.– The proverb "birds of a feather flock together" is largely confirmed by sociological data concerning affinity relations, one of the main characteristics of which is thus to be homophilic (Degenne and Forsé 1994a, 1994b). In our model, this means that between impossible communications (Sc(t) = ø) and

not, reflexes or habitual, not consciously decided; the second was introduced by Franz Brentano in the 19th Century, then taken up by Edmund Husserl, and today it has a psycholinguistic technical meaning consisting of considering that every consciousness is conscious of something, because every experience is directed towards an object by virtue of its content; on this point, see Tiberghien (2002, p. 150). Like H. Atlan, we use this term here in its first sense, "certainly limited but non-trivial" (op. cit., p. 221, author's translation).
10 "Also nothing absolutely new to us; and this is the secret of our intelligence, for we would not understand what has no analogue in our past, what would awaken nothing in us. Plato was right to argue that knowing is half remembering, that there is always something in us that corresponds to the knowledge that is brought to us from outside" (Guyau 1998, p. 21).


certain communications (Sc(t) = Si(t)), there is a whole range of communications from least likely to most likely. There is then a term-by-term correspondence between the degrees of this range and the degrees of resemblance between actors, and this correspondence is formalized by a distribution of propensities to communicate between actors. These propensities to communicate are probabilities conditional on a situation, namely the state of the network considered. Our concept of propensity to communicate here constitutes a dual specification of the notion of informational distance defined in the context of algorithmic complexity11. Applied to two actors Ak and Al in our network, this informational distance can be expressed as the difference between the cardinal of the union and the cardinal of the intersection of their sets of elementary categories – this union and this intersection being indeed two discrete and finite sets, ∀t. Let Dkl(t) be this informational distance at date t:

Dkl(t) = |Sk(t) ∪ Sl(t)| – |Sk(t) ∩ Sl(t)|

This is an absolute informational distance between these two actors, which must be transformed into a relative distance (between 0 and 1) in order to associate it with a probability. Let dkl(t) be the relative informational distance between Ak and Al at date t, defined as follows:

0 ≤ dkl(t) = Dkl(t) / |Sk(t) ∪ Sl(t)| = [|Sk(t) ∪ Sl(t)| – |Sk(t) ∩ Sl(t)|] / |Sk(t) ∪ Sl(t)| < 1

We can then define a concept of cognitive proximity (or degree of resemblance) between the actors Ak and Al as the complement to unity of their relative informational distance. Let rkl(t) be this cognitive proximity:

rkl(t) = 1 – dkl(t) = 1 – [|Sk(t) ∪ Sl(t)| – |Sk(t) ∩ Sl(t)|] / |Sk(t) ∪ Sl(t)|

11 Let us consider two discrete objects x and y which, in this context, are two finite binary sequences of generally unequal lengths; the algorithmic informational distance between x and y is the minimum quantity of information sufficient to transform x into y, and vice versa, by means of a universal computer (Li and Vitányi 1997; Delahaye 2002).


rkl(t) = |Sk(t) ∩ Sl(t)| / |Sk(t) ∪ Sl(t)|, with 0 < rkl(t) ≤ 1

[…] in accordance with the formula discussed at the beginning of section 1 of Chapter 2. The image of the self-organization of our network would be different, being that of the addition of at least one book (organizational noise) to the existing stock (increase in Hmax), which would include references or citations to books already present in the library (increase in R).
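The informational distance and cognitive proximity defined above can be checked on the running example with the two actors of Table 4.1; exact rationals avoid floating-point surprises, and the variable names are ours (a minimal sketch, not the author's code):

```python
from fractions import Fraction

S_k = {"C1", "C3", "C5"}   # Ak's elementary categories at date t
S_l = {"C2", "C3", "C4"}   # Al's elementary categories at date t

union = S_k | S_l          # |union| = 5 distinct elementary categories
inter = S_k & S_l          # {"C3"}: the common language Sc(t)

D_kl = len(union) - len(inter)         # absolute informational distance
d_kl = Fraction(D_kl, len(union))      # relative distance, 0 <= d < 1
r_kl = 1 - d_kl                        # cognitive proximity

# The closed form r = |Sk ∩ Sl| / |Sk ∪ Sl| follows from the definitions:
assert r_kl == Fraction(len(inter), len(union))

print(D_kl, float(d_kl), float(r_kl))  # 4 0.8 0.2
```

With only one shared category out of five, the two actors are informationally distant (d = 0.8) and cognitively close to the minimum degree compatible with communication (r = 0.2 > 0).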

Table 5.3. Values of Hmax, H and qmax at dates t0 and t1. Evolution of γ from t0 to t1

Cognitive repertoires, global memory and structural complexity:
in t0: |c(Sk)|(t0) = 8; |c(Sl)|(t0) = 8; Γ(t0) = H(t0) = 14 bits; Hmax(t0) = |c(Sk)| + |c(Sl)| (t0) = 16 bits
in t1: |c(Sk)|(t1) = 16 (+8 bits); |c(Sl)|(t1) = 16 (+8 bits); Γ(t1) = H(t1) = 24 bits; Hmax(t1) = |c(Sk)| + |c(Sl)| (t1) = 32 bits

Structural complexity (H) and redundancy (R):
in t0: H(t0) = 14 bits; R(t0) = 12.5% (2 bits)
in t1: H(t1) = 24 bits; R(t1) = 25% (8 bits)

Evolution of structural complexity and redundancy from t0 to t1: information production rate between t0 and t1: +10 bits; γ = [H(t1) – H(t0)]/H(t0) = 41.67%; redundancy: +12.5% (+6 bits); +10 columns (with a reinforcement from C3 to C23 in the columns corresponding to the parts containing it)

Potential (Ps) and effective (Es) surfaces in t0 and t1: Ps(t0) = 16 m.actors.bits; Es(t0) = 14 m.actors.bits; Ps(t1) = 32 m.actors.bits; Es(t1) = 24 m.actors.bits

Potential (Pv) and effective (Ev) volumes in t0 and t1: Pv(t0) = 16 m.actors-occurrences-bits; Ev(t0) = 14 m.actors-occurrences-bits, with qmax = 1; Pv(t1) = 32 m.actors-occurrences-bits; Ev(t1) = 48 m.actors-occurrences-bits, with qmax = 2

Scope, Dimensions, Measurements and Mobilizations

We will return in the next chapter to this last phenomenon and its significant consequences for the evolution of the network's internal structure. Note for the moment that c(Sc)(t1) = c(Sk)(t1) ∩ c(Sl)(t1), corresponding to the redundancy of the network at date t1, contains some totally new parts: {C23}, {C1C2}, {C1C23}, {C2C23} and {C1C2C23} were included neither in c(Sk)(t0) nor in c(Sl)(t0). The first has replaced {C3} simultaneously in c(Sk) and c(Sl) – therefore also in c(Sc) – according to the process of strengthening the anchoring of representations mentioned in hypothesis 4.2. But the fact that the following four parts now appear in c(Sc), whereas they were previously neither in c(Sk) nor in c(Sl), means that at date t1 all actors have constructed representations that no individual memory contained at date t0. In other words, we are dealing here with a phenomenon of the emergence of representations that are fully and immediately collective. Such a phenomenon indicates the self-organized nature of our network through the appearance of an emergent quality at its own level, distinct from the qualities of its elements, which is why we call this network complex. It is thus plural subjects that emerge in the latter4. Discontinuous, probabilistic and relational, populated by events and not things, the quantum world and that of our complex socio-cognitive network have in common that they are both centers of emergence phenomena: just as interactions at the microscopic level between elementary particles of the quantum world give rise to the macroscopic structures of our daily physical world, those established between individual actors within our network give rise to the emergence, then the stabilization, and sometimes the disappearance of successive collective structures throughout its socio-historical trajectory (Ancori 2017a). In addition, inter-individual communication also impacts the intensive dimension of learning.

Indeed, categories C1 and C2 find at date t1 a deeper anchoring in the representations now shared and combined with C23 in {C1C23} and {C2C23} than

4 Unlike a trivial emergence, such as that of a water molecule, whose structure and chemical properties always emerge in the same way from two hydrogen atoms and one oxygen atom, and which can therefore easily be predicted a priori, the emergence of a quality specific to our network, distinct from the qualities of its elements, is complex because it gives rise to unexpected behaviors that may be adapted to new situations (Atlan 2011, pp. 10–12). There are multiple conceptions of, and controversies surrounding, the notion of emergence (Stengers 1997; Sève 2005; Kim 2006; Collectif 2014; Sartenaer 2018). The concept of emergence under which our model operates is that of the British emergentist school of the early 20th Century: a plural subject is a collective considered as a whole composed of parts, and it emerges from these parts if: (1) the properties of the whole depend on those of its parts and their arrangement; (2) the whole is more than the simple sum of its parts; and (3) the emergent properties have original causal powers that are exercised in the world (causal effectiveness of the emergent) and modify the basic properties from which they emerge (reflective downward causality) (Fagot-Largeault 2002).


when they were idiosyncratic and combined with C3 at date t0 – as in the combinations {C1C3} ⊂ c(Sk) and {C2C3} ⊂ c(Sl). The emergence of collective representations of this type is therefore always accompanied by an increase in the degree of anchoring of the corresponding combinations of psychological categories, compared to the degree these combinations had when they were idiosyncratic. This phenomenon is obviously due to the fact that these collective representations necessarily include the category (or categories) initially shared, acting as the common language mobilized by the communication, and whose coefficient q is ipso facto increased by one unit. Finally, it should be noted that communication between individual actors, while creating information, only does so in a weak sense, since it is limited to realizing certain combinations of elementary categories that were potentially present from the initial state of the network5: at each date considered, these combinations are the elements of the set c(Cqj) that do not appear in the subset ∪i c(Si). The difference between c(Cqj) and ∪i c(Si) thus represents a "pool" of |c(Cqj)| – |Γ| elements that are potential combinations of psychological categories ready to be realized through communication. Our network gradually depletes this pool with each dyadic communication that is informative for at least one of its actors. This process continues up to a limit such that ∪i c(Si) ≡ c(Cqj): when this limit is reached, all potential combinations of elementary categories existing in the network have been realized, so that the network reaches an informational equilibrium. This equilibrium is such that all actors' cognitive repertoires are strictly identical, and also identical to the network's global memory. We then have c(Sk) ≡ c(Sl) ≡ ∪i c(Si) ≡ ∩i c(Si), ∀k, ∀l, and this equilibrium is such that H = 2^K = 2^L = 2^C = 2^n, Hmax = m·H = m·2^n, and R = (m – 1)/m.

Let us illustrate these general results with the example already used previously, by reconsidering the state of the network at date t1, as shown in Table 5.4.

      C1   C2   C23   C4   C5
Ak     1    1    1     0    1
Al     1    1    1     1    0

Table 5.4. The network at date t1

5 We adopt here the Bergsonian distinction between the realization of potentialities, which are always already there, and the actualization of virtualities, which establishes a creative relationship between a virtual and an actual that are always dissimilar (Deleuze 1968, p. 269 sq.): inter-individual communication is limited to the realization of potentialities, contrary, as we will see, to categorization, which can actualize virtualities.


In this state of the network, we know that Hmax = 32 bits, H = 24 bits and R = 25%. While at date t0 the pool of combinations not yet realized contained 18 elements, it contained only 8 at date t1, each of them comprising a combination containing both C4 and C5, the only two elementary categories which, in this state of the network, remain idiosyncratic. The informational equilibrium described above will be achieved once these two elementary categories are shared following communications between the two actors. Depending on the precise content of the messages exchanged by the latter on these occasions, this process of convergence towards informational equilibrium may take more or less time. Let us suppose, to simplify, that it takes only one period. This would be the case, for example, if at date t1, Ak sent the message "C23C5" to Al, who simultaneously sent him the message "C23C4". At date t2, the state of the network would then be as indicated in Table 5.5.

      C1   C2   C33   C4   C5
Ak     1    1    1     1    1
Al     1    1    1     1    1

Table 5.5. The network at date t2
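The informational equilibrium reached at date t2 can be verified numerically. In this sketch we write the reinforced shared category simply as "C3", since only set membership matters for the counts; the script and its names are ours, not the author's:

```python
from itertools import combinations

def powerset(s):
    """All 2**|s| parts of s, empty set included."""
    items = sorted(s)
    return {frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)}

m = 2                                       # number of actors
cats = {"C1", "C2", "C3", "C4", "C5"}       # n = 5 elementary categories, all shared
repertoires = [powerset(cats) for _ in range(m)]   # identical repertoires at t2

gamma = set().union(*repertoires)           # global memory Γ(t2)
H = len(gamma)                              # 2**n bits
H_max = sum(len(rep) for rep in repertoires)  # m * 2**n bits
R = 1 - H / H_max                           # (m - 1)/m
pool = len(powerset(cats)) - H              # unrealized combinations: exhausted

print(H, H_max, R, pool)                    # 32 64 0.5 0
```

The output matches the equilibrium formulas of the text: H = 2^5 = 32 bits, Hmax = 2 · 32 = 64 bits, R = (m − 1)/m = 50%, and the pool c(Cqj) \ ∪i c(Si) is empty.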

On that date, we would have H = 32 bits, Hmax = 64 bits and R = 50%. All potential combinations of existing psychological categories would then be realized, so that c(Cqj) \ ∪i c(Si)(t2) ≡ ø. Unlike the death of biological individuals, however, the informational equilibrium thus analyzed and illustrated is not inevitable. Indeed, the process leading to it seems irrevocable only because the numbers m of individual actors and n of mental categories have so far been taken as given once and for all – as constants in the model. However, there is no reason to maintain these extremely strong assumptions, so avoiding the death of the network simply means transforming these constants into variables and giving them appropriate values. There are then at least two ways to indefinitely defer the informational equilibrium of the network, and each of them is synonymous with creating information in a strong sense within the network, by adding new columns to the matrix [Aij] representative of the state of the network (Ancori 2014). The first consists of transforming m into a variable by introducing successive generations of individual actors into the network. The net impact of such an introduction on the volume and distribution of idiosyncratic and shared categories is then determined by the actors' birth and death rates, as well as by the respective volumes and distributions of idiosyncratic and shared categories among incoming and outgoing actors. However, there are necessarily certain value


classes of these four variables such that the product of their combined sets results in a net increase in the number of idiosyncratic categories in the network. And each of these categories opens up new combinatorial potentialities for the latter, whose informational equilibrium is thus postponed accordingly. Under certain conditions, the same applies if we transform n into a variable activated by a categorization process, and this is a second way to delay the occurrence of an informational equilibrium of the network, or even to reverse the direction of the trajectory leading to such an equilibrium.

5.2. Categorization and learning

Like inter-individual communication, categorization potentially impacts both dimensions of learning. The intensive dimension of learning is likely to modify the effective depth of the network via the second form of forgetting identified in the previous chapter – the burial of the all-too-well-known in the unconscious. We know that categorization involves placing a representation-occurrence in a generic representation that is a psychological category, and we postulated that the latter reaches an unconscious meta-category level when the number q of its representations-occurrences reaches a limit value g. By construction, the Cqj categories present at the conscious level of the actors' individual memories are such that qmax ≤ g – 1. Any additional activation of a Cg–1j category therefore reduces the effective depth of the network if this category is then the only one located at the boundary between the conscious and non-conscious levels of individual memories, and if it appears in only one individual memory. When this is the case, the extent of the decrease in the effective depth of the network is measured by the difference between g – 1 and the coefficient q* < g – 1 which is closest to g – 1 among all remaining Cqj categories.
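The depth reduction just described can be sketched numerically. The threshold g and the occurrence counts below are hypothetical values of ours, chosen only to illustrate the computation, and the sketch deliberately ignores the two side conditions (sole boundary category, single individual memory) stated in the text:

```python
g = 5                      # hypothetical burial threshold: a category whose q
                           # reaches g sinks to the unconscious meta-category level
q_values = [4, 2, 2, 1]    # hypothetical q coefficients of the conscious categories,
                           # with a single category sitting at q = g - 1

# One more activation of the q = g - 1 category buries it...
q_values.remove(g - 1)

# ...and the effective depth falls from g - 1 to q*, the largest
# coefficient among the remaining conscious categories.
q_star = max(q_values)
depth_decrease = (g - 1) - q_star

print(depth_decrease)      # 2, i.e. (g - 1) - q* = 4 - 2
```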
In addition, such an activation simultaneously reduces the potential and effective widths of the network, as it reduces by one unit the number of columns in the matrix [aij]. With a constant number of actors, these decreases result in a decrease in the network's potential and effective volumes. But, on the other hand, categorization can also lead to an increase in the potential and effective width of the network, due to the extensive dimension of the learning achieved by the individual actor on this occasion. This increase in Hmax and H is related to the observation of the natural and social environment, or even the self-observation, that this actor achieves as a complex adaptive system of that particular type which is a self-organizing system in the sense of H. Atlan. Moreover, we know that the actors' individual memories are associative in nature, so that the creation of psychological categories and their storage by such a system is always conditioned by their semantic coherence with the combinations of categories already contained in this memory, in order to achieve the overall coherence of the latter.

Scope, Dimensions, Measurements and Mobilizations

93

It is only if this condition is met that the actor's representations-occurrences are transformed into psychological categories that are fully integrated into the individual memories concerned6. This creation of new psychological categories is that of new equivalence classes, each of which brings its members together under a common predicate. Such a gathering implies the observation of similarities between two or more objects, and is thus based on analogies or metaphors in action (Lakoff and Johnson 1980, 1999; Lakoff 1993; Gibbs 1994; Sander 2000; Hofstadter and Sander 2013). The literature on analogy and metaphor, from Aristotle to the present day, is considerable, and its quantitatively most important part – its standard paradigm, even – deals with the use of these two tropes in a problem-solving perspective, associated with an experimental framework often targeting pedagogical applications and now willingly using computer algorithms (Sander 2000, pp. 5–87). Our perspective is very different, in the sense that we will explore the role of analogy and metaphor in terms of their contribution to problematization processes. In many forms, analogy and metaphor have indeed shown their epistemological richness throughout the history of the Western sciences, whether in the natural sciences (Lichnerowicz et al. op. cit.; Bolmont 1999; Durand-Richard op. cit.), the social and human sciences (De Coster op. cit.; Walliser 1998; Babou 2006), or the fascinating area where science and art come together (Miller op. cit.). This richness testifies to problematizations that we can describe as successful: although they stem from analogies or metaphors that are always risky, they have made it possible to open up a new field of research, or have led to new questions in a given field of research. In short, it is mainly for their creative power that we will discuss analogy and metaphor here, rather than as the pedagogical tools they can also be.
In fact, as shown by the “theory of interaction” initiated by Ivor A. Richards (1936) and developed by Max Black (1962, 1993), far from identifying certain existing similarities between objects that are nevertheless different, analogy sometimes has the effect of creating similarities between objects previously considered dissimilar7.

6 We will come back to this point by discussing the phenomenon of "déjà-vu" in Chapter 8.
7 As M.-D. Gineste (op. cit., pp. 135–136) points out: "When we say that a man is a wolf, we focus attention on some of the properties of wolves, 'ferocity', 'moving in groups', 'attacking the weakest', thereby neglecting other properties, such as 'having four legs', 'having fur' or 'walking on four legs'. Then we interpret human behavior, particularly in social situations, using concepts that characterize the wolf. Only then do man and wolf appear similar. A change in the level of interpreting the traits allows the assertion of similarity. To develop a metaphor, it is therefore necessary to have a description of men and a description of wolves and to see, beyond and despite the differences, the similar properties present in the two descriptions. But these similar properties are only constructed and appear after the metaphor has been produced and interpreted […]. In doing so […], a new semantic field or at least new properties are created for the two terms present in the metaphor. Don't wolves become more human, men more like wolves? [...] The metaphor, in the clash it imposes between the argument and the predicate, creates [...] new knowledge" (author's translation).

According to A. I. Miller (op. cit.), this interactive vision: "was specifically formulated to bring out the creative dimension of metaphorical thinking.... Black's interactive vision of the metaphor can be written as follows: x acts as if it would be a {y}, where the instrument of metaphor – as if – connects the misunderstood primary subject x to the better understood secondary subject y. The braces around y indicate that it represents a set of properties or assertions, similar to a scientific theory. The connections between the set {y} and the primary subject x are generally not obvious or even necessarily valid, as may be the case in scientific research. In most cases, x can also be replaced by {x}, which refers to a set of properties or assertions. The dissemblance at first sight between the primary and secondary subjects is called the tension between them. The greater the tension, the greater the creative powers of the metaphor"8 (op. cit., p. 222, author's translation, original author's use of italics).

8 This concept of "tension" has its source in I.A. Richards (op. cit.), who proposes the terms "tenor" and "vehicle" to designate what Chaïm Perelman and Lucie Olbrechts-Tyteca (2000) call "theme" and "phore", and what A.I. Miller calls respectively the "misunderstood primary subject" and the "better understood secondary subject". Although the vocabulary of Richards is often used in the literature on metaphors, we prefer that of M.-D. Gineste (op. cit., p. 135, no. 1) and E. Sander (2000, 2002), who speak of "target concept" and "source concept".

Let us apply this notion of Black's interactive vision, reviewed by Miller, to the field of scientific practices. In this regard, it should be noted, with L. Soler (2001), that what leads a community of specialists to retain a hypothesis resulting from an analogy is never only the analogical origin of this hypothesis, but a series of procedures that eventually lead to its corroboration in the current state of knowledge. In the best of cases, analogy is therefore at the heart of knowledge in the emerging state, characterizing in advance something new, something that cannot yet be spoken of with assured relevance (Schlanger 1995). Its enunciation is therefore a risky act9, and the degree of novelty thus introduced may be more or less pronounced. When it makes it possible to realize in explicit form knowledge that is already potentially present, we can characterize this degree as "weak"; but when it consists of an actualization of virtualities and leads to the updating of a questioning, or even an entirely new field of research, we are obviously in the presence of a radical novelty: the analogy is then creative in a "strong" sense.

9 Thus, the idea of analogy was at the foundation of comparative anatomy, qualifying as analogous organs with different origins and similar functions (an insect's wing and a bird's wing). But this global richness did not prevent accidents along the way, such as the invention of imaginary animals born of the desire to "push the parallelization between animal and human societies to the limit" (Gadoffre 1980, p. 7). The men of the 16th Century thus considered the existence of the sea monk or bishop fish likely (Céard 1980, p. 82). In this respect, Jacques Bouveresse recalls the brilliant demonstration of R. Musil, ironically justifying the definition of the butterfly as the central European dwarf winged Chinese by the fact that there are lemon-yellow butterflies and that there are also lemon-yellow Chinese (1999, pp. 21–22).

5.2.1. The creative analogy of weak novelty: the example of Planck's formula

As we have seen, the creative power of analogy is an increasing function of the dissemblance (tension) between secondary subject (source concept) and primary subject (target concept), and "metaphors with maximum tension involve non-propositional, i.e. non-logical reasoning, which is often based on visual imagery" (Miller op. cit., p. 222). Analyzed in detail by Soler (2001), Max Planck's introduction, in 1900, of the famous formula ε = hν is exemplary of this type of analogy. Let us summarize this analysis, leaving aside its technical aspects.

Planck introduced this formula at the end of a reasoning using a formal analogy whose source concept consists of a demonstration developed by Ludwig Boltzmann in 1877. Boltzmann's and Planck's research programs have four points in common: (1) their objective is to demonstrate the irreversibility of macroscopic laws from reversible microscopic laws; (2) their reasoning framework is the same two-level schema; (3) they share the same theory responsible for describing the upper level, phenomenological thermodynamics; and (4) the systems they studied had as their central property the distribution of energy over the components of microsystems. On the other hand, these research programs differ in the type of system studied (gas for Boltzmann, radiation for Planck) and, correlatively, in the paradigm characterizing the microscopic level (discontinuous mechanics for one, continuous electrodynamics for the other).

Through a demonstration of the irremediable evolution of the initial state of black radiation towards a determined distribution law, Planck's ultimate objective was to show that the second principle of thermodynamics is absolutely (and not only statistically) valid, and that macroscopic thermodynamics therefore offers an absolute characterization of the world. To do so, he had to give a solid foundation to the new law of radiation that he had built by trial and error, and proposed in October 1900 in response to the recent questioning of the universal validity of an earlier version of this law, hitherto accepted by all. The version proposed by Planck adequately represented all the available experimental data, and it was in order to legitimize it on the theoretical level that he proceeded by analogy, using a combinatorial demonstration developed by Boltzmann in 1877. Boltzmann's demonstration exists in a continuous version, supposed to correspond to reality, and in a discrete version presented as an approximation of the former, suitable for facilitating the calculation of the physical process, and from which it is possible to recover the continuous version by making certain values tend towards infinity10. It is this discrete version that serves as a model for Planck. Where Boltzmann established a proportionality link between the entropy of a gas and the logarithm of the probability of obtaining any given macroscopic state – showing, more precisely, that the most likely macroscopic state is the one where the entropy of the gas is at its maximum (thermodynamic equilibrium) – Planck showed that it is his law of radiation which is at equilibrium, by determining the expression of the entropy of a group of resonators of equal frequency. To do this, he divided the energy continuum into discrete elements like Boltzmann, E = pε, and distributed integer multiples of ε over the resonators. By combining the entropy of a resonator (obtained by the combinatorial demonstration) with Wien's displacement law, itself combined with the Kirchhoff–Clausius law, Planck established that ε = hν, where h is a constant, ε the amount of energy and ν the oscillation frequency of the resonators. He finally arrived at a theoretical expression whose general form was similar to that of the law of radiation he had proposed a few weeks earlier, corroborated by all the experiments then known.

10 In the continuous version, any given molecule of the gas can at any time have any value of energy between zero and the average energy E of the gas. In the discrete version, the infinity of values of the energy continuum E is replaced by a finite number of integer values: E is divided into p small elements ε. To recover the continuous version, simply make ε tend towards zero and p towards infinity at the end of the calculation (Soler 2001, pp. 96–97).

In Black's interactive vision revisited by Miller, the source {y} of the analogy here is the proportionality link established by Boltzmann between the entropy of gas and the logarithm of the probability of obtaining any given macroscopic state, showing that the most likely macroscopic state is the one where gas entropy is maximal. The target {x} of the analogy is the demonstration of the law of radiation, verified experimentally but still lacking a true theoretical basis. To establish this demonstration, Planck's cognitive universe must contain three elements as permanent representations:

– Wien's law of displacement, demonstrated by Wien in 1893, which shows how the graph of radiant energy as a function of frequency is "displaced" when the temperature of the black cavity varies;
– the Kirchhoff–Clausius law, stipulating that the energy density of radiation of a given frequency is inversely proportional to the square of the speed of light;
– the "fundamental equation" that he himself introduced in 1897 to express the relationship between the energy density of radiation of a given frequency, the average energy of a resonator of the same frequency, and the speed of light.

As for the transitional representation used by Planck, it consists of the discrete version of Boltzmann's demonstration, which Planck mobilizes on the basis of the analogy between gas molecules and resonators – an analogy whose likelihood is undoubtedly suggested to him by the four points in common (notably the fourth) between his research program and Boltzmann's. The degree of novelty of the formula thus established by Planck seems low, because it is only a recombination of representations already potentially existing within the network. Let us identify Boltzmann and Planck with the two individual actors shown in the example proposed following hypothesis 4.4 (see note 11). At date t, i.e. at the moment in 1900 when Planck began to demonstrate his formula, the corresponding Boolean matrix is presented as shown in Table 5.6.

       C1   C2   C3   C4   C5
AB      1    0    1    0    1
AP      0    1    1    1    0

Table 5.6. The "Planck-Boltzmann" network at date t

The individual repertoires of "Boltzmann" and "Planck" then each contain three elementary categories, and are therefore respectively represented by the two sets:

c(SB)(t) = {{ø}, {C1}, {C3}, {C5}, {C1C3}, {C1C5}, {C3C5}, {C1C3C5}}

c(SP)(t) = {{ø}, {C2}, {C3}, {C4}, {C2C3}, {C2C4}, {C3C4}, {C2C3C4}}

The four common points of the Boltzmann and Planck research programs are represented by a set of type c(SC)(t) representing the intersection of c(SB)(t) and c(SP)(t):

11 See Chapter 4, section 4.1.


c(SC)(t) = {{ø}, {C3}}

Correlatively, the specificities of Boltzmann's and Planck's research programs are represented by the following respective sets:

Prg B(t) = c(SB)(t) – c(SC)(t) = {{C1}, {C5}, {C1C3}, {C1C5}, {C3C5}, {C1C3C5}}

Prg P(t) = c(SP)(t) – c(SC)(t) = {{C2}, {C4}, {C2C3}, {C2C4}, {C3C4}, {C2C3C4}}

The formal analogy put into effect by Planck is of the type "gas molecules/discontinuous mechanics ≈ resonators/continuous electrodynamics", and it suggested to him the adoption of the discrete version of Boltzmann's combinatorial demonstration. Everything then happened as if the Boltzmann actor had communicated, for example, the message "C3C5" to the Planck actor, for whom this message was both audible (because it contained C3) and informative (because it contained C5). Consequently, at date t+1, when Planck was able to publish his famous formula, the matrix representing the state of the network was that shown in Table 5.7.

       C1   C2   C3   C4   C5
AB      1    0    1    0    1
AP      0    1    1    1    1

Table 5.7. The "Planck-Boltzmann" network at date t+1

Boltzmann's cognitive repertoire did not change during this one-way communication12, but Planck's doubled in volume: at date t+1, c(SP) had 16 parts, half of which were new. These eight new parts of Planck's cognitive repertoire are the following:

{C5}, {C2C5}, {C3C5}, {C4C5}, {C2C3C5}, {C2C4C5}, {C3C4C5}, {C2C3C4C5}

Planck's reception of the elementary category C5 triggered a process of recombination between this category and those already included in his cognitive repertoire, and the product of this recombination resulted in the appearance of these eight new parts.
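The combinatorics of this example can be checked with a short sketch (a toy illustration in Python; the variable names are ours, not the book's notation):

```python
from itertools import chain, combinations

def repertoire(categories):
    """c(Si): the power set of an actor's elementary categories,
    including the empty part."""
    s = list(categories)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

ALL_CATS = {"C1", "C2", "C3", "C4", "C5"}

# Date t (Table 5.6): Boltzmann holds C1, C3, C5; Planck holds C2, C3, C4.
boltzmann = repertoire({"C1", "C3", "C5"})
planck_t = repertoire({"C2", "C3", "C4"})

# "Pool" of unexplored combinations at t: the 2^5 = 32 subsets of the five
# categories, minus those already realized in Gamma = c(SB) U c(SP).
pool_t = 2 ** len(ALL_CATS) - len(boltzmann | planck_t)      # 18

# Date t+1 (Table 5.7): Planck has received C5 via the message "C3C5".
planck_t1 = repertoire({"C2", "C3", "C4", "C5"})
new_parts = planck_t1 - planck_t                             # 8 new parts
novelty = new_parts - boltzmann                              # 6 strictly new

pool_t1 = 2 ** len(ALL_CATS) - len(boltzmann | planck_t1)    # 12
# Informational equilibrium, c(Cqj) \ U_i c(Si) = Ø, would mean pool == 0.
```

Running this reproduces the figures of the text: Planck's repertoire doubles from 8 to 16 parts, 8 of which are new; 2 of these ({C5} and {C3C5}) were already in Boltzmann's repertoire, leaving 6 parts of strict novelty; and the pool of unexplored combinations falls from 18 to 12.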

12 We neglect here the reinforcement of category C3 into C23 at date t+1 in Planck's cognitive repertoire, following its repeated activation by the reception of the message "C3C5" in this repertoire, which already contained it at date t.


Of these, two were also included in Boltzmann's repertoire, and had been since date t: C5 and C3C5. The remaining six new parts can therefore be considered as expressing the novelty contained in Planck's formula:

{C2C5}, {C4C5}, {C2C3C5}, {C2C4C5}, {C3C4C5}, {C2C3C4C5}

Since this novelty results purely and simply from a recombination of already existing representations, it seems that the analogy is here a vector of novelty in the weak sense of the term. As with any inter-individual communication, the novelty following the analogy that led Planck to state his famous formula was reduced to the discovery of combinatorial potentialities previously unexplored, but which existed from the outset in the network he formed with Boltzmann, and which the communication established with the latter was content to realize13. Nevertheless, the decrease in the number of unexplored psychological combinations was smaller here than in the case of a dyadic communication in which the two actors are both transmitters and receivers, such as the one we saw above, established from t0 to t1 between the actors Ak and Al: whereas, in that example, the content of the "pool" of such combinations went from 18 to 8 elements at the end of the communication, it went from 18 to 12 elements here, because between t and t+1 only one actor (Planck) was both sender and receiver of informative messages, the other (Boltzmann) being merely an (implicit) sender of information.

Within the framework of our complex cognitive network model, we thus broadly agree with Soler's interpretation, but add a possible quantification of the degree of novelty obtained by a cognitive approach such as Planck's. This quantification could be expressed in at least two ways. The first would measure this degree of novelty by the number of new combinations produced; it would mechanically be all the stronger as the prior knowledge of the cognitive actor producing the novelty was significant, and as the part of this knowledge shared with others was small. Indeed, any learning of a new psychological category doubles the volume of the considered actor's initial cognitive repertoire, and the number of new combinations thus produced is obtained by deducting from this repertoire the combinations already included in the cognitive repertoires of the other actors. The increasing efficiency of the individual learning process would thus be tempered by the global cognitive network's degree of redundancy. A second way to quantify this degree of novelty would be to measure it by the reduction of the difference between the sets c(Cqj) and Γ = ∪i c(Si) produced between (t) and (t+1). The higher the degree of novelty thus measured, the more quickly the subsequent possibilities of novelty production would be restricted, because the global network would more quickly come up against the limit represented by the achievement of an informational equilibrium, such that Γ = ∪i c(Si) = c(Cqj), and [aij] = 1, ∀i, ∀j. The interpretation proposed by Soler has the benefit of offering a possible explanation of the spontaneous realistic conviction found among most physicists, as well as of the existence of simultaneous discoveries. This conviction would be due to the fact that the scientist has the feeling that their thought is carried by something other than itself and inscribes in reality an order which is already there, rather than recognizing in it the play of symbolic constraints exposed above. As for simultaneous discoveries, the ideas of which are often said to be "in the air", they would already be virtually inscribed, simultaneously, in the symbolic systems of several discoverers. But apart from the fact that this interpretation makes it possible to think of the novelty created by analogy only in the weak sense of the term, the

13 L. Soler (op. cit., p. 103 sq.) interprets this weak novelty as the necessary result of the encounter between two pre-structured symbolic systems. Planck's formula would be an example of the effect of the systemic nature of the physicist's language, which could generate something new. Soler's overall argument can be broken down as follows: (1) physics presupposes a language in the broadest sense of the term (verbal language and mathematical language); (2) any language being a symbolic system such that none of its constituent signs has meaning in isolation, but only by being connected in a network with the other signs, physics is a set of closely connected statements; (3) the symbolic system constituted by all the statements made in 1900 concerning the problem of black radiation has a wide, non-empty intersection with the symbolic system formed by the constituent statements of Planck's 1900 black radiation research program; (4) this second symbolic system includes in particular a series of basic statements, each of which functions for Planck as an absolutely ineliminable constraint; (5) this network of constraints coordinated with the symbolic system formed by Planck's 1900 black-body research program encountered another symbolic system, constituted by Boltzmann's discrete combinatorial demonstration, and this other symbolic system then also functioned as a constraint in Planck's demonstration; and (6) the appearance of the formula ε = hν can then be presented as the result, necessary in a certain sense, of the encounter between these two pre-structured systems of symbolic constraints, and this result is itself a new constraint. In short, physics is conceived as a kind of organism where the function of each part depends on the internal organization of the whole; this internal organization being fixed, each given set of initial statements virtually contains a number of necessary consequences not yet made explicit by any physicist. The symbolic system can then function as an anticipatory structure, so that the physicist's progression sometimes leads them to totally unexpected consequences, although these are already virtually present. According to Soler, this is what would explain the "mysterious heuristic power traditionally recognized by analogy", which she therefore prefers to call the "inductive power of the symbolic structure" (op. cit., p. 108, author's translations). The analogy would be a cognitive operator producing a reorganization within existing representations, so that by linking fragments of previously disconnected pre-structured symbolic networks, it "is undoubtedly a vector of novelty in the weakest sense of the term, insofar as it leads to the almost automatic suggestion of combinations not yet explored (heuristic power)" (ibid., p. 115, author's translation).


latter judgment itself remains at least partially determined by an irreducible personal appreciation. Our model, on the other hand, makes it possible to explain the phenomenon of simultaneous discoveries through the emergence, following inter-individual communication, of combinations of categories known to all protagonists, whereas these combinations were previously unknown to each of them, as we have seen previously; and it also makes it possible to objectify and quantify not only the apparently weak novelty resulting from Planck's analogy, but also the radical novelty of which this type of analogy is capable.

5.2.2. The creative analogy of radical novelty: Gregory Bateson's "grass syllogism"

In addition to inter-individual communication, we know that there are at least two other types of interaction that can be carried out by the individual actor: their interaction with themself, implied by the meta-representational dimension of human cognition, and their interaction with their natural environment. These two types of interaction give rise to a language-related knowledge activity, and this language intellect, which "encompasses both the most abstract speculations and the mass of common thought" (Schlanger op. cit., p. 582), is likely to produce analogies that lead to radical innovations. Our aim now is to introduce into the model the possibility of such creations, in the form of a cognitive functioning able to produce idiosyncratic elementary categories in individual cognitive repertoires, such as categories C1 or C4 in our example. Now, it so happens that the mental operations put into effect by analogy lead precisely to such a result. After underlining the predominant role played by "analogical reasoning" in scientific activity, Bruno Latour and Steve Woolgar (Latour and Woolgar 1988, p. 175 sq.) provide us with an example:

"Bombesin sometimes behaves like neurotensin.
Neurotensin decreases the temperature.
So bombesin decreases the temperature".
Whilst logically incorrect, this argumentation, built on the basis of an analogy, "is sufficient to launch research that should lead to results hailed as an exceptional contribution" (op. cit., p. 176). Let us detail the type of categorization involved: the partial similarity ("sometimes") of the apparent behaviors of bombesin and neurotensin led the scientist (in this case, Marvin Brown, a doctor specialized in the physiology of neurotransmitters) to bring these two substances together in an equivalence class predicated by the ability to decrease temperature.


This example illustrates a particular cognitive process leading from analogy to categorization: observed similarities suggest the existence of other possible similarities, hitherto ignored but sufficiently likely for a given equivalence class – that of substances capable of decreasing temperature – to be enlarged by including a new entity among its members. It is not sufficient for our purpose, however, because the class of substances with this capability existed before M. Brown had the idea of placing bombesin in it on the basis of its behavioral similarities with the neurotensin already listed there. What we want to show is the role of analogy in the creation of totally new equivalence classes, some of which will prove to be true psychological categories. Let us then turn to another type of "analogical reasoning", one based on a very serious reasoning fault within the framework of classical logic. This fault lies at the heart of a devastating criticism of verificationism as a pre-Popperian belief in the probative value of a particular confirmation (e.g. an experimental result) for the scientific truth of a general statement (theory). This devastating criticism is therefore also that of the inductive approach as a producer of true statements that are reliably justified, i.e. that produce knowledge. The logical fault in question concerns an erroneous construction of a universal syllogism whose correct construction would be of the type:

"All men are mortal.
Socrates is a man.
So Socrates is mortal".

In the major premise, we distinguish the antecedent "all men" from the consequent "are mortal". The minor premise affirms the antecedent of the major ("Socrates is a man"), and the conclusion follows necessarily, as in a modus ponens: "Socrates is mortal".
This is the correct way to reason in the classical logic that constitutes part of the Aristotelian cognitive universe – a bivalent logic, in the sense that it knows only two truth values: the true and the false. The logical mistake consists of affirming in the minor the consequent of the major, instead of its antecedent – logicians here speak of "affirming the consequent". It takes the following form:

"All men are mortal.
Socrates is mortal.
So Socrates is a man".


The major has not changed, but the minor concerns another of its parts and infers a logically false conclusion, because it is not necessary: Socrates is mortal, yes, but he could just as easily be a mouse. In Popperian epistemology, this is one of the elements of the fundamental asymmetry between verification and invalidation14. Therefore, as Mark Blaug summarizes it, from a strictly logical point of view: "we can never affirm that a hypothesis (theory) is necessarily true because it is in agreement with the facts; by reasoning on the basis of the facts, we implicitly commit the error of reasoning which consists of affirming the consequent" (Blaug 1982, pp. 13–14, author's translation). Nevertheless, let us keep this erroneous logical structure in mind, and let us evoke the syllogism of G. Bateson (1996, p. 325 sq.):

"Grass is mortal.
Yet, men are mortal.
So, men are grass".

This syllogism is obviously wrong for the same reason as the previous one: the minor affirms the consequent of the major, "mortal", and comes to a conclusion as logically unfounded as "Socrates is a man". But its advantage over the previous syllogism is that it startles the non-logician still imbued with factual truth, who can well admit that Socrates is a man, but absolutely refuses to confuse men with grass. Let us then analyze more precisely the cognitive operations at work in both the correct and the faulty forms of the syllogism. It is immediately apparent that they consist of two different but complementary approaches, each as legitimate as the other. In the correct form of our universal syllogism, we begin by affirming the existence of the class of "all men", determined by the predicate "to be mortal" – this is the major premise. We continue by naming a member of this class, Socrates, identifying him as such – this is the minor premise. We conclude by asserting that the predicate that determines the class must necessarily also be predicated of this member of the class.
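The asymmetry between the two forms can be checked mechanically. The sketch below (ours, not Bateson's) reduces each syllogism to propositional form – p for "Socrates is a man", q for "Socrates is mortal" – and enumerates all truth assignments: an argument form is valid only if no assignment makes its premises true and its conclusion false.

```python
from itertools import product

def implies(a, b):
    """Material implication a -> b."""
    return (not a) or b

def valid(premises, conclusion):
    """A propositional argument form is valid iff no truth assignment
    makes every premise true while the conclusion is false."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: (p -> q, p), therefore q -- valid.
modus_ponens = valid([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q)                     # True

# Affirming the consequent: (p -> q, q), therefore p -- invalid:
# q can hold while p fails (Socrates could just as easily be a mouse).
affirming = valid([lambda p, q: implies(p, q), lambda p, q: q],
                  lambda p, q: p)                        # False
```

The counterexample found by the enumeration (p false, q true) is exactly the "mouse" case of the text: the consequent holds without the antecedent.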
As for the cognitive operations at work in the erroneous form of the grass syllogism, they are totally different. As Bateson notes: "the grass syllogism is interested in the relationship between predicates, and not between classes or between subjects of sentences; it is interested in the identity of predicates. Mortal–mortal: what is

14 Introduced in Popper (1973, p. 36 sq.), this central theme of Popper’s epistemology is treated much more fully in Popper (1990, p. 199 sq.).


mortal is equal to this other mortal thing" (1996, p. 327, author's translation). In other words, the grass syllogism constructs a class that did not previously exist on the basis of predicates shared by two entities: it constructs the class of "mortal things" on the basis of an observed analogy between these two entities, and this analogy is that of shared predicates. What is being constructed is therefore a relationship between a state of the world and a model, i.e. a first-order q-morphism in the sense of John H. Holland (1986), and not a second-order q-morphism characterizing a relationship between an old representation (Boltzmann's model, as source concept) and a new representation (Planck's formula, as target concept). Before the enunciation of this first-order q-morphism, the equivalence class "mortal things" did not exist as such – in other words, as located at a higher level of abstraction than the entity "mortal man" and the entity "mortal grass" which now appear as its members: this equivalence class was literally created by the observation of a state of the world containing both mortal men and mortal grass. As a result, the apparent absurdity of the conclusion disappears: men are indeed grass, and vice versa, insofar as both men and grass are mortal – as well as, possibly, a host of other things, all of which can also be classified in the newly created class of "mortal things", which constitutes an authentic psychological category15.

As we know, this newly created category must be semantically coherent with a whole series of other psychological categories already written in individual memories, in order to combine with them in a relevant way and thus integrate into these memories. The analogy here therefore goes far beyond what appeared to us as novelty in the weak sense of the term, because its essential function is not so much to apprehend a new or poorly known situation by treating it as a known situation as to create a new psychological category. In this sense, it is at this level that the analogy falls within the radical interpretation of the theory of interaction – and the
In this sense, it is at this level that the analogy falls within the radical interpretation of the theory of interaction – and the

15 Every category is a class, but the opposite is not true: all objects that weigh less than one kilogram form a class predefined by “weighs less than one kilogram”, but this is not a category. L. Quéré (1994, p. 13), from whom we take this example, points out that a category also groups things that are similar. Let us add, for our part, that a psychological category informs representations that are associated with behaviors. Thus, constructing the equivalence class of “mortal entities” was simultaneously constructing that of “non-mortal entities”, and these two subclasses became historically collective psychological categories, fully shared until the dawn of the classical period of Greek antiquity: the history of mentalities shows that this opposition “mortal versus immortal” constituted, for the ancient Greeks, the principal criterion of distinction between men and gods. This opposition was then fully effective in inciting the people of those times to commit certain actions and in dissuading them from committing others, until the 5th and 6th Centuries BCE, when Attic tragedy placed the relationship between men and the gods in renewed terms (Vernant and Vidal-Naquet 1972, 1986).

Scope, Dimensions, Measurements and Mobilizations


tension related to the dissimilarity between “men” and “grass” is obviously much stronger than that which distinguishes “bombesin” from “neurotensin”. Finally, it is clear that the grass syllogism is incorrect only in the context of classical logic, in the sense that the conclusion does not necessarily follow from the premises. It is in this sense, and only in this sense, that this conclusion “is not true” and therefore that it is “false”. In the broader context of modal logic, the “not necessarily true” is not identified with the “false”, since this logic deals with what is necessary and what is not: the contingent, what is possible and what is not – the impossible. In such a context, it is possible that Socrates is a man, and this hypothesis can form the basis of an inductive reasoning leading to a probable conclusion synonymous with successful problematization. In reality, it is indeed the logically erroneous form of the grass syllogism that characterizes the heuristic approach of any complex adaptive system that identifies regularities from the observation of its interactions with the environment and, in particular, of the scientist identifying regularities that they condense by formulating theories: these identified regularities come from observed similarities16. Certainly, the analogies thus formulated can enable us to better understand poorly known objects on the basis of better known ones, and to create a weak novelty by realizing potential combinations of representations that had until now remained in a potential state. But, beyond this play on existing resemblances between given representations, they can also be creators of radical novelty by producing entirely new representations composed of totally new categories that are brought to light by being named: beyond the realization of potentialities, they then actualize virtualities. 
To take our example, a combination {C1C3C6} appearing in Ak’s memory at date t1 could result from the combination of {C1C3}, already present at date t0 in this memory, with a new idiosyncratic category C6 that has just emerged from a grass syllogism formulated between t0 and t1. This combination would therefore be ready to be more or less widely shared by the individual actors through the combinatorial game of social communication. Thus, even if inductivism (“sophisticated” or not) must be abandoned as an immediate process of knowledge production, because it has “increasingly failed to shed new and interesting light on the nature of science” (Chalmers 1987, p. 71), it seems it must be preserved as a process of producing new representations. On the other hand: “According to the most sophisticated inductivism, creative acts, including the most

16 “Inductive mechanisms are based on the detection of similitudes between several similar situations. Faced with different examples that seem to involve common driving forces, the cognitive system compares them to detect similar and diverging points. The purpose of this comparison is to extract the abstract structure they share. A great deal of our learning is based on this principle, and we spend our time building knowledge based on inductive mechanisms” (Didierjean 2015, p. 21, author’s translation).


innovative and significant, require genius and call upon the individual psychology of the scientist, defying logical analysis” (ibid., p. 69). Setting aside the notion of “genius” and the appeal to “the individual psychology of the scientist”, the complementarity between the cognitive operations implemented respectively by the erroneous and correct forms of a universal syllogism thus appears at the center of the complementarity between what Hans Reichenbach (1938) called the “context of discovery” and the “context of justification”. Despite the empirical difficulties generated by this distinction, it must be maintained at the conceptual level, which is that of the cognitive operations respectively put into effect in one context or the other17.

17 The same Paul Feyerabend, who denied the relevance of this distinction on behalf of Galileo, who adopted the Copernican system through a misleading use of new physical interpretations of movement (1979, pp. 152, 167, 170), would undoubtedly have accepted the heuristic validity of the cognitive operations underlying the analogy operating in the context of discovery, while denying them validity in that of justification, which is governed by classical logic. In fact, the distinction between these contexts is relevant in terms of the normative principles guiding certain moments of the scientific process, although it can legitimately be questioned in descriptive terms, where “prejudices, passion, vanity, errors, simple stubbornness” (Feyerabend op. cit., p. 167, author’s translation) are inextricably linked to “laws dictated by reason” (ibid.). This is why the distinction remains valid in the eyes of philosophers of science, especially when, like K. Popper, they almost completely disregard the context of discovery, or go so far as to deny any scientific character to studies dealing with that context: the distinction then takes the radical form of an elimination. Conversely, it is also why it is often rejected by sociologists of science in the name of the inextricable intertwining of reason and passions, as well as of the social determinations of science and technology: the non-distinction between the two contexts then takes another radical form, that of their confusion.

A particularly enlightening example of a grass syllogism militates in this direction: the metaphor of the computer program for the structure and functioning of DNA. Is it the strength of the tension between the image of the computer program and that of DNA that explains the rapid progress made by molecular biology, which has become one of our major sciences? Following Miller, we noted earlier that the most creative metaphors are those for which the difference between source and target concepts seems greatest at first glance. The computer program serving as the source concept {y} here was obviously far more dissimilar from the target concept {x}, the DNA, than neurotensin was from bombesin, and we will see that this metaphor had exactly the same structure as Bateson’s grass syllogism. It was introduced in an article by Ernst Mayr (1961) whose construction contains, according to Atlan (1999, p. 23 sq.), the following sophism: 1) DNA is a quaternary sequence, easily reducible to a binary sequence; 2) any


conventional sequential computer program is reducible to a binary sequence. From these two propositions, E. Mayr deduced 3): genetic determinations work like a computer program written in the DNA of genes. According to Atlan (op. cit., p. 24), the reasoning error here consists of holding the converse of the second proposition to be true (i.e. that any binary sequence is a program)18. However, it is clear that the fact that any program can be reduced to a binary sequence does not imply that any binary sequence is necessarily a program. In the terms we have used above, Mayr clearly commits the fallacy of affirming the consequent: the first proposition says that A (DNA) is B (a binary sequence), and the second says that C (a computer program) is B (a binary sequence), from which Mayr deduced that A is C, just as Bateson’s syllogism said that A (the grass) was B (mortal), then that C (man) was B (mortal), from which it deduced that A was C (the grass is man, here equivalent to C is A: man is grass). The analogy thus activated is the following: “DNA functioning/natural machine ≈ computer program/artificial machine”, and although it is a second-order q-morphism like the analogy “gas molecules/discontinuous mechanics ≈ resonators/continuous electrodynamics” put into action by Planck, it creates, like the first-order q-morphism of the “grass syllogism”, a completely new category. On the basis of an alleged similarity between “natural machines” and “artificial machines”, both were subsumed under the category “machines”, which thus aggregated them and within which they were now included as members. Molecular biology was thus led to ask its questions in an entirely new language. 
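The fallacy of affirming the consequent described here can be made concrete with toy sets (all contents below are purely illustrative strings, not actual encodings): from “A is B” and “C is B”, nothing follows about the relationship between A and C.

```python
# Miniature illustration of Mayr's sophism: "DNA reduces to a binary
# sequence" and "any program reduces to a binary sequence" do not
# entail "DNA is a program". All three sets are illustrative toys.
binary_sequences = {"0", "1", "01", "10", "0110", "1001"}
programs = {"01", "10"}      # premise 2: every program is a binary sequence
dna = {"0110", "1001"}       # premise 1: DNA is a binary sequence

assert programs <= binary_sequences   # C is B
assert dna <= binary_sequences        # A is B
# Yet the fallacious conclusion "A is C" fails:
assert not dna <= programs
```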
As we have seen, the answers to these questions inevitably implied an inversion of the terms of the old debate around vitalism, because from then on it was no longer a question of accepting, or not, the reduction of the living to the physicochemical, but on the contrary of extending the physicochemical to a biophysics of organized systems, applicable to both artificial and natural machines; all the work on the logic of self-organization went in this direction19.

18 This wrong syllogism was later taken up by H. Atlan (2011, p. 57) in a development aimed at countering the “central dogma of molecular biology”, which implies a notion of genetic programming in cellular functioning and, by extension, in embryonic development (op. cit., pp. 48–67). Far from being the result of a transmission of information in a single direction, from DNA-programs to proteins, this functioning must be analyzed as a program distributed over the network of intracellular biochemical units, with DNA as data. In the end, between the two metaphors, “DNA-program” and “DNA-data”, “reality must probably be conceived by the idea of an evolutionary network, produced by two superposed dynamics evolving on different time scales” (ibid., p. 67, author’s translation, original author’s use of italics).
19 See Chapter 2, section 2.1.

Similar to Planck’s analogy producing a formula that marked the birth of quantum physics, the result of Mayr’s wrong syllogism therefore represented much more than an important, even exceptional, contribution such as that of Brown,


who placed bombesin in the class of substances that lowered the temperature, alongside the neurotensin already listed there: this result opened up a whole new way of posing the problem of the relationship between the living and the inanimate, by suggesting that the question was no longer whether to reduce the former to the latter, but whether to expand the latter to the former. Beyond a one-off scientific advance, a whole section of theoretical biology was thus overturned in its usual way of posing problems. The analogy formulated by Mayr has exactly the same epistemological status as that conjectured by Planck: these two second-order q-morphisms are supposed to allow a better understanding of the target concepts using source concepts, and appear in this sense as creating weak novelties; but, presenting the same logical structure as Bateson’s grass syllogism, they actually convey a radical novelty, either by opening a completely new field of research (Planck), or by upsetting the episteme of an existing field by introducing a completely new way of exploring its issues (Mayr 1961). The second outcome certainly reveals better than the first the lack of logic that is inherent in any formulation of analogy, but both alike put into action a metaphorical abstraction in the sense of Jean-Blaise Grize (1990). The radical richness of the analogy is therefore due to the fact that it brings about a new object, or sheds a different light on an existing object: under the dual condition that it never stops increasing the distance it maintains from its source concept and that the frequency of its repetitions does not render it commonplace, it allows us to see things differently. By breaking with the usual divisions, it is thus an essential step in the development of concepts (Utaker 2002). 
This step is part of a logic of discovery (where representations work) that precedes the logic of justification (supposedly transforming representations into knowledge), and the path from analogy to concept is therefore, as Arild Utaker brilliantly writes (op. cit., p. 218), that of the eclipse of the former in favor of the emergence of the latter20.

20 For an interesting interpretation of the links between analogy, abduction and creativity, see De Brabandere (2012).

As the rest of the history of Brown’s establishment of the similarity between bombesin and neurotensin shows, this path is that of a double transformation: “On the one hand, the analogical approach often gives way to a logical link. On the other hand, the complex series of local contingencies that helped to temporarily establish a weak link is replaced by flashes of intuition. The form: ‘an idea has come to someone’ sums up this process in a highly condensed way. This is also the way to overcome the essential contradiction contained in the procedures used by scientists: if they are logical, they are pointless; if they are successful,


they are logically incorrect” (Latour and Woolgar op. cit., p. 177, author’s translation). Finally, like this outright creation of categories, the refinement and aggregation of existing categories is synonymous with creation in a strong sense, as it results in the addition of further columns to the matrices [aij](t), and consequently [Aij](t). These columns correspond to new psychological categories that are thereby offered to the weak creation of novelty resulting from possible subsequent communications between the actors. Corresponding to an increased power of discrimination in a given individual actor, the refinement of a category Cqj thus leads to the creation of two new columns in the matrix [aij]: next to a category Cq+1j – which is no longer elementary, having been so refined – two new elementary categories Cqj.1 and Cqj.2 appear. Let us take our example again, and suppose that at date t0 the actor Ak refines category C1 into C1.1 and C1.2 instead of communicating with Al. The state of the network at date t1 is then that of Table 5.8.

        C1²   C1.1   C1.2   C2   C3   C4   C5
  Ak     1     1      1     0    1    0    1
  Al     0     0      0     1    1    1    0

Table 5.8. Another state of the network at date t1

This state of the network corresponds to Table 5.9.

  Cognitive repertoires, global memory, structural complexity (H) and redundancy (R):

                        in t0                 in t1                 Gains between t0 and t1
  |c(Sk)|               8                     32                    + 24 bits
  |c(Sl)|               8                     8                     –
  Hmax; Γ               16 bits; Γ(t0) = 14   40 bits; Γ(t1) = 38   + 24 columns in Γ (with a reinforcement
                                                                      from C1 to C1² in the columns corresponding
                                                                      to the parts containing it)
  H                     14 bits               38 bits               + 24 bits
  R                     12.5%                 5%                    – 7.5%

Table 5.9. Evolutions of Hmax, H and R following a refinement of a category
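The values in Table 5.9 can be cross-checked with a short sketch (function names and the two-actor toy state are illustrative, not from the book), reading the chapter’s conventions as follows: each repertoire c(Si) is the power set of Si, Hmax sums the individual repertoire volumes, H counts the distinct combinations in the global memory, and R = 1 − H/Hmax.

```python
from fractions import Fraction
from itertools import chain, combinations

def powerset(s):
    """All combinations of elementary categories in a repertoire c(Si)."""
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

def network_measures(repertoires):
    """(Hmax, H, R) under the reading stated in the lead-in:
    Hmax = sum of |c(Si)|, H = |union of the c(Si)|, R = 1 - H/Hmax."""
    combos = [powerset(S) for S in repertoires]
    h_max = sum(len(c) for c in combos)
    h = len(set().union(*combos))
    return h_max, h, Fraction(h_max - h, h_max)

# t0: Ak knows {C1, C3, C5}, Al knows {C2, C3, C4}
t0 = [{"C1", "C3", "C5"}, {"C2", "C3", "C4"}]
# t1: Ak has refined C1 into C1.1 and C1.2 (C1 kept, reinforced as C1^2)
t1 = [{"C1", "C1.1", "C1.2", "C3", "C5"}, {"C2", "C3", "C4"}]

assert network_measures(t0) == (16, 14, Fraction(1, 8))    # R(t0) = 12.5%
assert network_measures(t1) == (40, 38, Fraction(1, 20))   # R(t1) = 5%
```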


Table 5.9 shows that substituting a refinement of a psychological category for inter-individual communication has resulted in:

1) a higher increase in the volume of the network’s memory (+ 24 bits instead of + 10 bits), as well as in the network’s potential surface area (+ 48 actor-bits instead of + 32 actor-bits);

2) a significantly reduced network redundancy instead of an increase (from 12.5% to 5% instead of to 25%);

3) an unequal modification of the distribution of the volume of the network’s effective surface area between the actors: as in the example of Planck’s formula, Ak’s cognitive repertoire quadrupled while Al’s remained unchanged, instead of both repertoires identically doubling in volume;

4) a considerable increase in the pool of potential combinations of elementary categories not yet realized in the network’s global memory. We have seen that the communication established at t0 between Ak and Al under the conditions described above led to a pool containing eight elements at date t1. If the refinement described above had taken place instead of this communication during the same period, we would have |c(C)|(t1) = 2^7 = 128 and |∪i c(Si)(t1)| = 38, so that this pool would contain 90 elements on the same date: where the communication reduced the volume of this pool by 10 units between t0 and t1, the refinement would increase it by 72 units during this same period. The refinement of existing categories therefore considerably modified all of the network’s spatial boundaries: it further extended the network’s external and internal boundaries, and thus increased its potential and effective surfaces, and, within the latter, it shifted the boundary delimiting the parts respectively allocated to the two actors in favor of the one who refined a previously elementary category and thus quadrupled their cognitive repertoire;

5) a reduction in the propensity to communicate between actors, where their communication led on the contrary to its increase: from t0 to t1, far from increasing from 1/5 to 3/5 as was then the case, this propensity decreased here from 1/5 to 1/7. The reason is simple: while communication increased the numerator and left unchanged the denominator of the ratio measuring the propensity to communicate between actors, unilateral categorization did the opposite – it increased the denominator of that ratio and left its numerator unchanged. Like the refinement of existing elementary categories, the aggregation of such categories amounts to transforming the number n into a variable, and it produces the same type of results. Nevertheless, where a refinement of existing elementary categories by an actor Ai introduced at least two additional elementary categories in


Si, thus quadrupling at least its cognitive repertoire21, the aggregation of categories by this same actor can very well be reduced to the addition of only one additional category in Si, and thus merely double its cognitive repertoire: creating the “flower” category from the elementary categories “tulip” and “reseda” is not necessarily accompanied by the simultaneous creation of the “vegetable” category, but may simply suggest the existence of a negative “non-flower” category, which cannot enter the list of categories currently appearing in the network in the same way as genuinely new categories. From a formal point of view, the impact of the aggregation of existing elementary categories on the network space may therefore be smaller than that of a refinement of such categories, all the more so as the volumes of the individual cognitive repertoires considered were large in the previous state of the network. The creation of psychological categories finally appears to be linked to the formation of representations-occurrences on the basis of random or non-random stimuli from the individual actor’s natural or social environment: as we know, the actor immediately classifies these representations-occurrences under a generic representation that constitutes a category, and it sometimes happens that they create the latter on this occasion. However, without such a creation, there would simply be no elementary category to refine, and we would still be faced with the question posed earlier by Raymond Ruyer (1954) about C. Shannon’s theory of information: where does information come from? 
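The pool and propensity figures in points 4) and 5) above can be reproduced in a few lines (all names are illustrative; the value 38 for the realized combinations is the one given in Table 5.9).

```python
from fractions import Fraction

def propensity(sk, sl):
    """p_kl = |Sk ∩ Sl| / (|Sk| + |Sl| - |Sk ∩ Sl|), per Hypothesis 4.9."""
    common = len(sk & sl)
    return Fraction(common, len(sk) + len(sl) - common)

ak_t0, al_t0 = {"C1", "C3", "C5"}, {"C2", "C3", "C4"}
ak_t1 = {"C1", "C1.1", "C1.2", "C3", "C5"}   # after refining C1 into C1.1, C1.2

assert propensity(ak_t0, al_t0) == Fraction(1, 5)   # initial propensity
assert propensity(ak_t1, al_t0) == Fraction(1, 7)   # refinement: denominator grows

# One realization of the communication scenario (each actor learns the
# category sent by the other): the numerator grows instead, 1/5 -> 3/5.
assert propensity({"C1", "C2", "C3", "C5"}, {"C1", "C2", "C3", "C4"}) == Fraction(3, 5)

# Pool of potential, not-yet-realized combinations after the refinement:
n_categories = len(ak_t1 | al_t0)   # 7 distinct elementary categories
realized = 38                        # |∪i c(Si)(t1)|, from Table 5.9
assert 2 ** n_categories - realized == 90
```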
In the end, whether or not the network achieves its informational equilibrium thus depends on the comparison between the rate of socialization, via inter-individual communication, of the idiosyncratic categories present in the current state of the network, and the rate of creation of new idiosyncratic categories in this state of the network, linked to the refinement or aggregation of existing categories, but also to the creation of new categories, or even the introduction of new generations of actors within the network. As long as the first of these two rates is less than or equal to the second, the informational equilibrium of the network is indefinitely deferred, but when it is higher, the network converges towards its informational equilibrium, this convergence being only delayed by the introduction of a number of new idiosyncratic categories insufficient to reverse the direction of its path.

21 More generally, a refinement of Cqj into Cqj.1, Cqj.2, …, Cqj.v corresponds to a multiplication of the cognitive repertoire considered by 2^v.

6. Provisional Regionalization and Final Homogenization

It now seems necessary to define the initial state of our network much more precisely than before. Indeed, we have so far implied: (1) that communication between actors presupposed, as a necessary (but not sufficient) condition, the existence of a common language between them; (2) that this communication was indeed taking place between the two actors represented in our example in the previous chapter. The transition from proposition 1 to proposition 2 leaves a theoretical void here, since there is no reason to infer the actual occurrence of an event solely from the necessary condition for such an occurrence. Let us fill this gap by using the concept of propensity to communicate introduced by Hypothesis 4.9. To this end, let us define the initial state of the network in the most equitable way possible by imposing three constraints on it: (1) the individual volumes of information must be as small as possible, and q = 1, ∀j (the initial state obliges); (2) each actor must have the same opportunities to communicate (social equity); (3) each individual memory must contain the same amount of information and have as many shared categories as idiosyncratic categories (cognitive equity). On the basis of this initial state, we will analyze the most probable evolution of the space of our complex socio-cognitive network of individual actors in three steps. We will first show that, when this evolution is driven solely by inter-individual communication, it leads to the cumulative formation of clusters of individual actors. This process is synonymous with a division of the network’s effective surface area into regions that are increasingly distant from each other, and thus seems to lead to a partition of the network’s space in the form of a juxtaposition of local informational equilibria, apparently irrevocable with regard to each cluster of actors thus formed.

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.


We will see that this cluster formation and the phenomenon of regionalization of space thus generated are in reality not at all irrevocable. Not only is no local informational equilibrium ever immune from being destroyed and replaced by another when the number of actors is odd, but certain psychological categories present in the network, and therefore in all these clusters, also tend to be shared by all the actors. Even when the number of the latter is even, the link thus established between all of them allows each of them to escape from their cluster and communicate with any individual member of any other cluster. The network therefore moves towards a global informational equilibrium that is irrevocable – if inter-individual communication persists as the only driving force behind the network’s evolution. Finally, on the basis of an example, we will compare the respective changes in the values of the main characteristic variables at the level of the network as a whole and at that of each cluster considered separately. We will see that these developments are of the same nature, but not of the same degree: they appear to be more pronounced at the level of each cluster than at the level of the global network.

6.1. Formation of clusters of actors and regionalization of the network space

The first two constraints indicated in the introduction imply that the network of possible inter-individual communications must take the form of a circuit, such that each individual actor can communicate with their two immediate neighbors through one, and only one, elementary category common to their cognitive repertoires. Each of these repertoires therefore contains two shared categories. The third constraint implies that each set of elementary categories contains exactly four elements: two shared categories and two idiosyncratic categories. 
Under these conditions, n = 3m, and the initial state of the network can be represented using the matrix [aij](t0) as shown in Table 6.1. The first m columns of this matrix are those of categories shared by two successive actors, and the next 2m columns are those of idiosyncratic categories. The sets Si(t0) of elementary categories of the actors are such that:

∀i ≠ m: Si(t0) = {Ci, Ci+1, Cm+2i–1, Cm+2i}

for i = m: Sm(t0) = {C1, Cm, C3m–1, C3m}

[Table 6.1 is an m × 3m incidence matrix that cannot be reproduced legibly here: row Ai contains a 1 in columns Ci, Ci+1, Cm+2i–1 and Cm+2i (row Am in columns C1, Cm, C3m–1 and C3m), and 0 everywhere else; shared categories are reinforced (exponent 2).]

Table 6.1. State of the network in t0
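A minimal sketch of this initial state (the function name and the choice m = 5 are illustrative) confirms that each actor shares exactly one category with each immediate neighbor and none with anyone else.

```python
def initial_network(m):
    """Initial circuit of Table 6.1: m actors, n = 3m categories, with
    S_i = {C_i, C_(i+1), C_(m+2i-1), C_(m+2i)} and A_m sharing C_1 with A_1."""
    repertoires = []
    for i in range(1, m + 1):
        shared = {f"C{i}", f"C{i % m + 1}"}                 # two shared categories
        idiosyncratic = {f"C{m + 2 * i - 1}", f"C{m + 2 * i}"}
        repertoires.append(shared | idiosyncratic)
    return repertoires

m = 5
S = initial_network(m)
assert all(len(s) == 4 for s in S)          # cognitive equity: |S_i| = 4
for i in range(m):
    # exactly one common category with each immediate neighbor...
    assert len(S[i] & S[(i + 1) % m]) == 1
    # ...and none with non-neighbors: the graph is connected, not complete
    assert len(S[i] & S[(i + 2) % m]) == 0
```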


Each individual cognitive repertoire, c(Si)(t0), contains 2^4 = 16 combinations of elementary categories. The volume and structure of the global memory being distributed in a perfectly equitable way, structural complexity and global redundancy are distributed equally among the actors. Each of the latter has a common language with two other actors, with whom they can therefore communicate directly, but it is only through them that they can communicate with the other actors. In other words, the circular graph of possible communications is connected, but not complete. Throughout this chapter, we will assume that the evolution of the network is driven solely by inter-individual communication, and that the latter generally has the same structure: unless otherwise noted, each message exchanged on this occasion contains exactly two psychological categories, one shared by the actors, thus making their communication possible, and the other idiosyncratic, allowing its receiver to learn, which is why we will describe it as “informative”. Since the most likely form of communication is dyadic, the most likely evolution of the network is determined by the dyadic communications that are established within it in this way, and we know that it is exclusively this type of evolution that interests us here. In the initial state of our network, the number of pairs of actors is equal to C(m,2) = m!/[2!(m – 2)!], hence as many dyadic communications a priori possible. 
Among these C(m,2) pairs, the structure that we have imposed on the initial state of the network implies that only m pairs verify pkl(t0) ≠ 0, and these m pairs cannot all establish dyadic communications simultaneously in each state of the network. Indeed, two consecutive edges of the non-oriented graph of possible communications between actors cannot be simultaneously activated (such that the two corresponding possible communications are actually established), because this would ipso facto imply at least a triadic communication, and we would then leave our analytical framework. Thus, with m = 3, only one dyadic communication can actually be established among the three possible ones; if m = 4 or m = 5, two can be established among the six and ten respectively possible; with m = 6 or m = 7, three among the 15 and 21 possible, etc. It appears here that the addition of one actor to the network leaves the number of dyadic communications that can be established unchanged when m is even, and increases this number by one unit when m is odd. Among the m pairs for which the propensity to communicate is not zero, there are therefore at most m/2 (when m is even) or 1 + (m – 3)/2 (when m is odd) that effectively support a communication in each state of the network.
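These counts (one dyad for m = 3, two for m = 4 or 5, three for m = 6 or 7) can be checked by brute force; the helper below (illustrative, not from the book) searches for the largest set of simultaneous dyads in which no actor appears twice.

```python
from itertools import combinations

def max_simultaneous_dyads(m):
    """Largest set of pairwise disjoint edges (a maximum matching) in the
    cycle of m actors: two consecutive edges would imply a triadic
    communication and are therefore forbidden."""
    edges = [(i, (i + 1) % m) for i in range(m)]
    for k in range(m, 0, -1):
        for subset in combinations(edges, k):
            actors = [v for e in subset for v in e]
            if len(actors) == len(set(actors)):   # no actor in two dyads at once
                return k
    return 0

# Matches the text: m/2 dyads when m is even, 1 + (m - 3)/2 when m is odd.
assert [max_simultaneous_dyads(m) for m in range(3, 8)] == [1, 2, 2, 3, 3]
```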

Let us agree that m is odd. A roll of the dice indicates that the pairs of actors entering communication in this state of the network are as follows: (A1 and A2), (A3 and A4), (A5 and A6), …, (Am-2 and Am-1). The mth actor is therefore provisionally excluded from any communication. In the initial state of the network, the non-zero propensities to communicate between the actors are as follows:


p1,2(t0) = p2,3(t0) = p3,4(t0) = … = pm–2,m–1(t0) = pm–1,m(t0) = pm,1(t0) = 1/7

They are therefore equiprobable in the classical sense of this term and, since m is odd, the 1 + (m – 3)/2 communications actually established between dates t0 and t1 initially have the following propensities to communicate:

p1,2(t0) = p3,4(t0) = … = pm–2,m–1(t0) = 1/7

For any pair (Ak, Al), let Lk(t) = |Sk(t)|, Ll(t) = |Sl(t)| and Lkl(t) = |Sk(t) ∩ Sl(t)|, ∀k, ∀l, ∀t. Consider the two pairs (A1, A2) and (A3, A4), it being understood that the following analysis applies to any pair of pairs of actors establishing a communication. Let D = E + F and G = W + Q be the numbers of informative psychological categories composing the messages exchanged between t0 and t1 by each of these two pairs during their communications, E and F being the numbers of these categories appearing in the messages respectively transmitted by A2 and A1 (therefore received respectively by A1 and A2), and W and Q being the numbers of categories appearing in the messages respectively transmitted by A4 and A3 (therefore received respectively by A3 and A4) during this period1. At date t1, the propensity to communicate of the pair (A1, A2) is defined by:

p1,2(t1) = L12(t1) / [L1(t1) + L2(t1) – L12(t1)]

with:

L12(t1) = L12(t0) + D
L1(t1) = L1(t0) + E
L2(t1) = L2(t0) + F

from which it follows that:

p1,2(t1) = [L12(t0) + D] / [L1(t0) + L2(t0) + E + F – L12(t0) – D]

and, since E + F = D:

p1,2(t1) = [L12(t0) + D] / [L1(t0) + L2(t0) – L12(t0)] = (1 + D)/7

It thus finally appears that:

p1,2(t1) > p1,2(t0) if and only if D > 0

1 As mentioned above, in general E = F = W = Q = 1, so D = G = 2.
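The update rule just derived can be sketched as follows (the function is an illustrative reading of the equations above, with L1 = L2 = 4 and L12 = 1 as in the initial circuit; names are not from the book).

```python
from fractions import Fraction

def updated_propensity(l1, l2, l12, e, f):
    """p_1,2(t1) after a communication in which A2's message brings E new
    categories to A1 and A1's brings F to A2: every exchanged informative
    category enters both the shared stock L12 and one individual stock."""
    d = e + f
    return Fraction(l12 + d, (l1 + e) + (l2 + f) - (l12 + d))

# Initial circuit values: L1 = L2 = 4, L12 = 1, so p(t0) = 1/7.
p_t0 = Fraction(1, 4 + 4 - 1)
for d in range(0, 4):
    e, f = d // 2, d - d // 2
    p_t1 = updated_propensity(4, 4, 1, e, f)
    assert p_t1 == Fraction(1 + d, 7)        # denominator unchanged: E + F - D = 0
    assert (p_t1 > p_t0) == (d > 0)          # increases exactly when D > 0
```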


The same is obviously true for the pair (A3 and A4), as well as for each of the other [(m – 3)/2 – 1] pairs establishing a communication of the same type in the initial state of the network. It thus appears that any established communication increases the propensity to communicate between its participants as soon as it is informative for at least one of them: as soon as E or F is not zero, D is not zero, and the same applies to W or Q, as well as to G, and again with regard to the other communicating pairs. Moreover, as we have seen with Hypothesis 4.14², for the network to change state (and therefore date), it is sufficient that at least one inter-individual communication established within it is not totally void of information for at least one of its actors. However, because of the structure we have given to established communications, this is always the case³. Let us now look at the evolution of the propensity to communicate of the pair (A2 and A3), who could have communicated between t0 and t1, but did not do so because each of its members was busy communicating with a different third party. In the initial state of the network:

p2,3(t0) = L23(t0) / [L2(t0) + L3(t0) – L23(t0)]

2 See Chapter 4, section 4.2.2.

3 More generally, in the initial state of the network as specified above, each communicating actor sends a message containing at least one combination of elementary categories among the 16 that their individual memory then contains. For each pair of actors between whom communication is possible, there are therefore 256 possible message contents, of which only four are not strictly informative. On the one hand, the three messages containing the combination already simultaneously known by the two actors concerned, in the sense that their possible reception would leave their propensity to communicate unchanged – for example, the message “C2” transmitted by one of the two actors A1 and A2 in communication while the other actor simultaneously sends an empty message, or the message “C2” transmitted by these two actors simultaneously. Non-informative in the strict sense given here to this term, these three messages are not, however, totally void of information, because they imply the realization of an intensive learning process in their receiver(s). On the other hand, the “communication” consisting of a message whose content is solely the singleton corresponding to the empty set, and whose quantity of information is in this sense null. Any communication established in the initial state of the network therefore has a probability of 252/256, i.e. more than 98%, of being informative in the strict sense, and thus of increasing the propensity of its protagonists to communicate. Thus, in the initial state of the network, the individual singularities of the actors are such that almost any communication established among them is informative for at least one of them. It follows that the mere fact that two or more given actors are currently entering into communication almost certainly increases their propensity to communicate at the beginning of the next period.


and:

p2,3(t0) = p1,2(t0) = 1/7

where:

p2,3(t1) = L23(t1) / [L2(t1) + L3(t1) – L23(t1)]

with:

L23(t1) = L23(t0)
L2(t1) = L2(t0) + F
L3(t1) = L3(t0) + W

hence:

p2,3(t1) = L23(t0) / [L2(t0) + L3(t0) + F + W – L23(t0)] = 1/(7 + F + W)

Finally:

p2,3(t1) < p2,3(t0), if and only if F > 0 or W > 0

As a result of the same process, any communication that is informative for at least one of its participants increases, by being established, the propensity to communicate between the individual actors concerned, and simultaneously reduces the propensity to communicate between actors whose communication was initially equally likely, but who each communicated with a different third party (such as A2 and A3, who could have communicated at date t0, but then communicated respectively with A1 and A4)⁴.

Finally, what about the evolution of the propensity to communicate of the actor Am, excluded from any communication between t0 and t1? We know that initially:

pm-1,m(t0) = pm,1(t0) = 1/7

4 It should be noted that the propensity to communicate of A2 and A3 decreases between t0 and t1 only if A2 or A3 receives an informative message when communicating with A1 or A4 during the same period, and it then decreases all the more as the messages received are more informative.
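The third-party effect just described can be checked with the same set-based sketch (hypothetical labels, consistent with the initial condition of four categories per actor with one shared):

```python
def propensity(S_k, S_l):
    # Jaccard index of the two memories
    return len(S_k & S_l) / len(S_k | S_l)

S2 = {"C2", "C3", "C8", "C9"}    # A2's memory: shares C3 with A3
S3 = {"C3", "C4", "C10", "C11"}  # A3's memory
print(propensity(S2, S3))        # 1/7 before any exchange

S2.add("C1")  # F = 1: A2 received one informative category from A1
S3.add("C5")  # W = 1: A3 received one informative category from A4
print(propensity(S2, S3))        # 1/(7 + F + W) = 1/9
```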


So:

pm-1,m(t1) = Lm-1,m(t1) / [Lm-1(t1) + Lm(t1) – Lm-1,m(t1)]

with:

Lm-1,m(t1) = Lm-1,m(t0)
Lm(t1) = Lm(t0)
Lm-1(t1) = Lm-1(t0) + M

where M is the number of elementary informative categories received between t0 and t1 by the actor Am-1 during their communication with Am-2. Thus:

pm-1,m(t1) = Lm-1,m(t0) / [Lm-1(t0) + Lm(t0) + M – Lm-1,m(t0)] = 1/(7 + M)

so that:

pm-1,m(t1) < pm-1,m(t0), if and only if M > 0

The same is obviously true for the evolution of the propensity of actor Am and actor A1 to communicate from t0 to t1. According to the above, indeed:

pm,1(t1) = 1/(7 + E)

hence:

pm,1(t1) < pm,1(t0), if and only if E > 0

We assumed that all actors behaved in the same way when communicating, so that E = F = W = Q = M, and therefore D = G. Under these conditions, M < D and M < G: while it is true that each informative communication increases the propensity to communicate between its participants – and thus the probability of its own repetition in the eyes of the observer – and simultaneously reduces these propensities and probabilities with regard to possible communications that have not been established, it is also true that the latter effect is all the more pronounced when the actors concerned have established other communications. In other words, from date t0 to date t1, the propensity to communicate of the actors Am and Am-1, as well as that of the actors Am and A1, has certainly decreased, but to a lesser extent than that of the actors A2 and A3.


The most likely evolution of the network thus leads it towards a local aggregation/global disaggregation of the set of individual actors considered from the point of view of their propensity to communicate: clusters of actors appear in the network, such that communication tends to be more and more likely within each cluster, and less and less likely between actors belonging to different clusters. Starting from a situation in which all the actors are separated by the same informational distance, the establishment of at least one informative communication in the network thus brings some actors closer to some others and simultaneously distances them from all the others, all the more so as some of the latter have come closer to each other. The existence of this process of local aggregation/global disaggregation can have several concrete meanings⁵. As for its intensity, it depends on the informative value of each dyadic communication. Let us consider the pair (A1 and A2):

Δp1,2(t0, t1) = p1,2(t1) – p1,2(t0) = (1 + D)/7 – 1/7 = D/7
0 < Δp1,2(t0, t1) ≤ Δp1,2(t0, t1)max

where:

Δp1,2(t0, t1)max = Dmax/7 = [Emax + Fmax]/7 = 6/7

so that eventually:

0 < Δp1,2(t0, t1) ≤ 6/7

The same applies to the pair (A3 and A4) and, more generally, to any pair of actors establishing one of the [1 + (m – 3)/2] communications made within the network between dates t0 and t1. As this double inequality shows, there are a priori two extreme cases. The first is that all the communications established by the pairs of actors are completely empty of information, so that no actor learns in the extensive dimension of the latter: E = F = W = Q = … = M = N = 0 (where N is the number of psychological categories received between t0 and t1 by the actor Am-2 during their communication with Am-1). These communications then take the form either of exchanges of totally specular messages (A1 sending the message “C2” to A2, which simultaneously sends the same message, A3 exchanging the message “C4” with A4 under the same conditions, etc.), or of messages empty of any content simultaneously sent by all the actors, i.e. composed of the single singleton corresponding to the empty set. In this case, we would have:

Δp1,2(t0, t1) = Δp3,4(t0, t1) = … = Δpm-2,m-1(t0, t1) = 0

such that the state of the network at date t1 would be represented by the same matrix as before, except that all the elementary categories Cj appearing in the network at date t0, such that j is even and j ≠ 3m – 1, would now be categories C2j. This first extreme case is excluded by hypothesis from our analysis, because of the structure given to inter-individual communications. Hence, we mentioned a strict inequality for the variation of the propensities to communicate between t0 and t1, such as p1,2: 0 < Δp1,2(t0, t1). In the other extreme case, all actors would be driven by a concern for total transparency in their communications: D = Dmax, G = Gmax, …, T = Tmax (where T = M + N). Under these conditions:

Δp1,2(t0, t1) = Δp3,4(t0, t1) = … = Δpm-2,m-1(t0, t1) = 6/7

As in the previous case, all the elementary categories Cj included in the network at date t0, such that j is even and j ≠ 3m – 1, have become C2j categories and are now better anchored than before in the memories of the actors concerned. In addition, we find here a form of informational equilibrium similar to the one we observed in our previous chapter with our example of a network reduced to two actors.

5 The empirical contents of these clusters are extremely diverse, because they are linked to the epistemic domains to which the psychological categories that compose them refer in our network model, and therefore ultimately to the nature of the space envisaged. If it is a political space, the clusters refer, for example, to the public institutions of the tiny Mycenaean royalty of the 16th Century or to the dust of Greek city-states of the 8th–4th Centuries BC (Ancori 1990, 1997, 2009); if it is a social space, they refer to conventions that may be mutually exclusive and therefore competing, such as driving on the right or left side of the highway, or more broadly to a phenomenon of tribalization of the world (Maffesoli 1988, 1992; Maffesoli and Perrier 2012); if this space is of a linguistic nature, the clusters are organized around as many local dialects; finally, if it is of an economic nature, there are groups of actors who exchange by using local currencies, such as Merovingian currencies, or those that circulate today in our modern local exchange systems.
But whereas in this previous example this informational equilibrium was global, because it brought together all the actors, the form of equilibrium reached here is only local: it concerns in isolation each pair of actors (A1 and A2), (A3 and A4), …, (Am-2 and Am-1), and not all the actors of the network considered globally. In reality, each of these pairs now represents a cluster formed by the duplication of a single actor. Indeed:

p1,2(t1) = p3,4(t1) = … = pm-2,m-1(t1) = 1

The state of the network at date t1 is then represented by the matrix in Table 6.2.
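In the total-transparency limit, each partner absorbs the other's full memory, which yields both the maximum variation Δp = 6/7 and the local equilibrium p = 1 just described. A minimal sketch with hypothetical labels:

```python
S1 = {"C1", "C2", "C6", "C7"}            # hypothetical memory of A1
S2 = {"C2", "C3", "C8", "C9"}            # hypothetical memory of A2
p_before = len(S1 & S2) / len(S1 | S2)   # 1/7

S1, S2 = S1 | S2, S2 | S1                # each absorbs the other's full memory
p_after = len(S1 & S2) / len(S1 | S2)    # 1: the pair is now a duplicated actor
print(p_after - p_before)                # Δp = Dmax/7 = 6/7
```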

Table 6.2. State of the network at date t1 (rows: actors A1, A2, …, Am; columns: categories C1, C22, C3, C4, C5, …, Cm-2, Cm-1, Cm, C2m+1, Cm+2, …, C23m-5, C3m-4, C23m-3, C3m-2, C3m-1, C3m)

The final topography of our network thus seems to identify the effective space of the latter with a juxtaposition of local informational equilibria, correlative of a division of this space into regions perfectly distinct from each other. Such a division would thus partition our network into as many complementary and mutually exclusive parts as there would be regional equilibria. We will see that this is not the case, due to a double phenomenon: when m is odd, as the network evolves, the actor excluded from any communication is more and more likely to destroy the informational equilibria achieved and to build others that may in turn be destroyed later, etc.; moreover, at least one psychological category tends to be shared by all the actors, thus breaking their communicative isolation within their respective clusters. However, if the number m of actors is even, only this second process is likely to strip local informational equilibria of their irrevocable character.

6.2. Instability and erasure of regions within the network

Sticking to the above, we would be tempted to denote the duplicated actors within each cluster respectively by A12, A34, …, Am-2,m-1, and thus to consider each of these duplicated actors as one and the same actor. However, such a notation would be incorrect because, unlike the global informational equilibrium analyzed in the previous chapter with our example of a network reduced to two actors, these local informational equilibria are not irrevocable. Indeed, in this hypothetical final state of the network, if there were any possible informative communications, they would be those established between actors belonging to successive clusters of the looping cascade that the current state of the network presents to us: between A1 (or A2) and A3 (or A4), between A3 (or A4) and A5 (or A6), …, between Am-4 (or Am-3) and Am-2 (or Am-1), between Am-2 (or Am-1) and Am, between Am and A1 (or A2).
However, among all these informative communications that remain possible at date t1, the most likely ones all involve Am. In fact, in this state of the network:

p1,3 = p2,3 = p1,4 = p2,4 = p3,5 = p4,5 = p3,6 = p4,6 = … = pm-4,m-2 = pm-3,m-2 = pm-4,m-1 = pm-3,m-1 = 1/13

while:

pm-2,m = pm-1,m = pm,1 = pm,2 = 1/10

Indeed, we have seen that each informative communication established at date t0 within the [1 + (m – 3)/2] pairs formed by the first m – 1 actors of the network decreased the probabilities of establishing the possible dyadic communications that could have been established at that date but were not, and that this decrease was all the more important as the actors concerned had established other communications with different third parties. Previously excluded from any communication, Am has now


become, for this same reason, the most likely interlocutor for each of the other actors. In reality, everything happens here as if we were to add, at date t1, an additional member to a population previously composed of m – 1 actors, by providing this mth actor with a memory comprising four elementary categories C1, Cm, C3m-1 and C3m, the first two of which would be shared (respectively with A1 and A2, and with Am-2 and Am-1), and the last two of which would be idiosyncratic. And it is precisely because of such an introduction that the local informational equilibrium formed by (A1 and A2) is not irrevocable.


Let us look in detail at the demonstration of this last proposition. Of the four possible and equiprobable dyadic communications involving Am, only one can be established at date t1. Let (A1 and Am) be the pair thus selected at random. Whatever the message then transmitted by A1, it is sufficient that the one transmitted by Am has a non-zero informative value (e.g. that Cm, C3m-1 or C3m appears in its message) for the local informational equilibrium established at t1 between A1 and A2 to be broken: the strengthening of category C1 in the cognitive repertoire of A1 (and not in that of A2) certainly has no impact on the structure of the propensities to communicate, but the introduction of one or another of the psychological categories Cm, C3m-1 or C3m into this same cognitive repertoire (and not into that of A2) modifies the structure of the propensities to communicate within the network. Thus, even if Am is the only one to send information at date t1 (if the message sent simultaneously by A1 and received by Am has no informative value), and if it sends, for example, the message “C1C3m” to A1, the state of the network at date t2 is represented as indicated in Table 6.3.

Table 6.3. State of the network at date t2 (rows: actors A1, A2, …, Am; columns: categories C1, C21, C22, C3, …, C3m)

In this state of the network, p12 (t2) = 7/8 < 1, which makes it clear that the local informational equilibrium previously established by A1 and A2 has been disrupted. All other things being equal, this break is all the more marked as the informative value of the message sent by Am to A1 at date t1 is greater: if this message were, for



example, “C1C3m-1C3m” instead of “C1C3m”, the state of the network at date t2 would be represented by the matrix [aij] in Table 6.4.

Table 6.4. First alternative state of the network at date t2 (rows: actors A1, A2, …, Am; columns: categories C1, C21, C22, C3, …, C3m)


We will therefore have p12 (t2) = 7/9 < 7/8 < 1. And at the limit, if Am were driven by a concern for total transparency when communicating with A1 at t1, the message they would then send to A1 would be “C1CmC3m-1C3m”. All other things being equal, the state of the network at t2 would be that of Table 6.5.

Table 6.5. Second alternative state of the network at date t2 (rows: actors A1, A2, …, Am; columns: categories C1, C21, C22, C3, …, C3m)
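The successive disruptions of the (A1 and A2) equilibrium can be replayed with the same set-based sketch: at t1 the two memories are identical (seven categories), and each additional category that Am passes to A1 alone lowers p12 a little further, to 7/8, 7/9, then 7/10. The labels standing in for Cm, C3m-1 and C3m are hypothetical:

```python
def propensity(S_k, S_l):
    # Jaccard index of the two memories
    return len(S_k & S_l) / len(S_k | S_l)

# A1 and A2 at date t1: identical seven-category memories (p12 = 1).
S2 = {"C1", "C2", "C3", "C6", "C7", "C8", "C9"}

for msg in ({"C3m"}, {"C3m-1", "C3m"}, {"Cm", "C3m-1", "C3m"}):
    S1 = S2 | msg               # A1 alone received msg from Am
    print(propensity(S1, S2))   # 7/8, then 7/9, then 7/10
```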

In this state, p12 (t2) = 7/10 < 7/9 < 7/8 < 1. The propensity to communicate between A1 and A2 would thus have decreased by almost a third between t1 and t2, and the local informational equilibrium previously established between these two actors would be disrupted even more clearly. Simultaneously, a tendency to establish another local informational equilibrium, this time between A1 and Am, would begin to appear, because in the latter case p1,m (t2) = 2/5. Finally, this trend would be all the more clear-cut as the message sent by A1 to Am at t1 was rich in information rather than void of it. Ultimately, if A1 and Am were identically driven by a concern for total transparency when communicating,



A1 sending the message “C21C22C3C2m+1Cm+2C2m+3Cm+4” to Am at date t1, and the latter simultaneously sending them the message “C1CmC3m-1C3m”, then the state of the network at date t2 would be represented as indicated in Table 6.6.

Table 6.6. Third alternative state of the network at date t2 (rows: actors A1, A2, …, Am; columns: categories C1, C21, C22, C3, …, C3m)

We would then have p12 (t2) = 7/10 < 1 and p1,m (t2) = 1: the local informational equilibrium established at date t2 by A1 and Am would thus have destroyed and replaced the local informational equilibrium that had been established at date t1 by A1 and A2. And this informational equilibrium could in turn be destroyed and replaced by others during the most likely subsequent evolution of the network, etc. All other things being equal, the same type of phenomenon occurs, but for another reason, when m is even. Suppose that the m/2 possible informative dyadic communications are established as before between actors belonging to successive clusters of a looping cascade: between A1 (or A2) and A3 (or A4), A3 (or A4) and A5 (or A6), and so on until Am-4 (or Am-3) and Am-2 (or Am-1), Am-2 (or Am-1) and Am, Am and A1 (or A2). Let (A1 and A2), (A3 and A4), …, (Am-3 and Am-2), (Am-1 and Am) be the m/2 pairs. Suppose, moreover, that the messages sent at date t0 by each member of each of these pairs contain, as usual, only two psychological categories, one of which is shared and the other idiosyncratic. In other words, E = F = W = Q = 1, so that D = G = 2, etc. Under these conditions:

p1,2(t1) = p3,4(t1) = p5,6(t1) = … = pm-5,m-4(t1) = pm-3,m-2(t1) = pm-1,m(t1) = 3/7

and:

p2,3(t1) = p4,5(t1) = p6,7(t1) = … = pm-6,m-5(t1) = pm-4,m-3(t1) = pm-2,m-1(t1) = pm,1(t1) = 1/9

As before, the propensity to communicate of the members of a pair who entered into communication from t0 to t1 increased, and those of actors who could have


communicated, but did not because they were busy communicating with a third party, decreased. However, there is no supernumerary actor here, such as Am in the previous case where m was odd, to come and crack and then finally fracture the m/2 informational equilibria clearly under construction. Between dates t1 and t2, it will therefore most probably be the same pairs as before who communicate. If the messages exchanged then maintain the same structure as between t0 and t1, the propensities to communicate at date t2 will be as follows:

p1,2(t2) = p3,4(t2) = p5,6(t2) = … = pm-5,m-4(t2) = pm-3,m-2(t2) = pm-1,m(t2) = 5/7

and:

p2,3(t2) = p4,5(t2) = p6,7(t2) = … = pm-6,m-5(t2) = pm-4,m-3(t2) = pm-2,m-1(t2) = pm,1(t2) = 1/11

The cumulative process of cluster formation has therefore intensified, and is leading the network more and more clearly towards a juxtaposition of m/2 apparently irrevocable informational equilibria. Indeed, if the dyadic communications established between dates t2 and t3 have the same characteristics as in the two previous periods, such a juxtaposition will be achieved from the latter date, since the propensities to communicate will then be as follows:

p1,2(t3) = p3,4(t3) = p5,6(t3) = … = pm-5,m-4(t3) = pm-3,m-2(t3) = pm-1,m(t3) = 1

and:

p2,3(t3) = p4,5(t3) = p6,7(t3) = … = pm-6,m-5(t3) = pm-4,m-3(t3) = pm-2,m-1(t3) = pm,1(t3) = 1/13

The m/2 informational equilibria thus achieved seem irrevocable, since each actor is then part of a cluster composed of two perfectly identical actors that nothing seems able to crack, still less to fracture: the two actors in question are certain to communicate with each other during all the periods to come, and no third actor can disturb this perfect communication.
The product of the latter is then limited to level 2 Batesonian learning, and the actors thus confined to the sole intensive dimension of learning see its contents disappear from their level when the number of representation-occurrences of the psychological categories concerned reaches the value g.
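For even m, the trajectory above follows from two closed forms, inferred here from the derivation under the assumption that every message keeps the stated structure (E = F = W = Q = 1, hence two informative categories per pair and per step):

```python
def p_pair(t):
    """Propensity of a pair that communicates at every step: (1 + 2t)/7, capped at 1."""
    return min((1 + 2 * t) / 7, 1.0)

def p_neighbours(t):
    """Propensity of neighbours always busy with a different partner: 1/(7 + 2t)."""
    return 1 / (7 + 2 * t)

for t in (1, 2, 3):
    print(t, p_pair(t), p_neighbours(t))
# t = 1: 3/7 and 1/9; t = 2: 5/7 and 1/11; t = 3: 1 and 1/13
```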


However, such a development may be only apparent. For, notwithstanding the absence of a supernumerary actor capable of breaking such a juxtaposition of clusters, this evolution includes another phenomenon leading to the same type of disruption of local informational equilibria, even though m is even: the appearance of one or more columns of the matrix [aij] containing only “1”s. A column of this type corresponds to a psychological category simultaneously present in the individual memories of all the actors, and its appearance is part of the very continuity of the cluster-formation process analyzed above. Let us review the state of the network at date t1, as shown in Table 6.7, and consider the categories C3, C5, C7, …, Cm-1. Each of these categories is shared by two successive clusters along the cascade of the latter. Less deeply rooted in the memories of the actors than C22, C24, C26, …, C23m-5, C23m-3, which most deeply link the actors within each pair (A1 and A2), …, (Am-2 and Am-1), each of them therefore establishes a link between two successive pairs – between (A1 and A2) and (A3 and A4), …, between (Am-4 and Am-3) and (Am-2 and Am-1). Since each is the only category in this position (it is from it that the “1” appearing at date t1 in the numerator of the propensities to communicate between actors of two successive pairs comes), it is necessarily involved in informative communications that hinder the formation of the juxtaposition of clusters, or even destroy the local informational equilibrium to which the formation of each of the latter leads. Depending on the precise content of the messages exchanged during communications established in the network after date t1, it is possible to see one or more of these categories gradually linking three, four, …, and finally all the pairs of actors in the network (i.e. [1 + (m – 3)/2] pairs if m is odd, and m/2 pairs if m is even).
Ultimately, the most likely evolution of the topography of the distribution of the combinations of elementary categories within the network is first of all that of a regionalization of the effective area bounded by the border corresponding to the structural complexity of the network. Each of these regions, gathering a cluster of actors in formation, sees its border move and its population of actors change according to the communications established among them. These regions sometimes compete with each other, and the resulting dynamics may see the disappearance of one of them in favor of another. However, the latter's victory is never more than temporary, not only because of the potential arrival of other competitors much more formidable than the one it has just eliminated (Ancori 2017a), but also and above all because of a process of homogenization that affects the entire cognitive space of the network, since inter-individual communication is the only driving force of its dynamics.

Table 6.7. State of the network at date t1 (rows: actors A1, A2, …, Am; columns: categories C1, C22, C3, C4, C5, …, Cm-2, Cm-1, Cm, C2m+1, Cm+2, …, C3m-4, C23m-3, C3m-2, C3m-1, C3m)


This homogenization process is inevitable and fast. It is inevitable because the dynamics of the network pushes it to realize all the potential combinations of psychological categories existing within it, and thus necessarily leads to the emergence of one or more of these categories simultaneously present in all individual memories⁶. And the homogenization process thus initiated is very fast. Indeed, with m actors and 3m psychological categories, and with the communications between actors including exactly the two categories provided for at each time step of the network, a series of simulations has shown that it takes only a little less than 3m time steps for a category shared by all actors to appear within the network, thus establishing a link among all of them, and a little more than 3m time steps for the network to reach its global informational equilibrium: T = t3m + ε (Belarte 2010)⁷. This speed is explained by the fact that at each time step, the informative categories appearing in the exchanged messages are selected, with equal probability, among the idiosyncratic categories of the actors as well as among the categories ignored by the message receivers. In the initial state of the network, there are 2m categories of the first type and m categories of the second type, i.e. 3m categories in total, so that it takes a little less than 3m time steps for one of these categories to be known by all actors and – since this category is quickly followed by all the others – a little more than 3m time steps for all these categories to be known by all. Any local informational equilibrium achieved by a cluster is therefore fundamentally unstable, as it is constantly threatened by such a phenomenon.
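The speed of homogenization can be illustrated with a minimal Monte Carlo sketch. This is not the simulation reported by Belarte (2010): pairing here is uniform rather than weighted by propensities, and each partner passes at most one category unknown to its receiver per step; it is only meant to show that, from the ring initial condition, all memories coincide after a modest number of steps.

```python
import random

def steps_to_homogeneity(m=5, seed=0):
    """Count steps until all m memories coincide, from the ring initial condition
    (m shared + 2m idiosyncratic categories = 3m categories in total)."""
    rng = random.Random(seed)
    mem = [{f"S{i}", f"S{(i + 1) % m}", f"I{i}a", f"I{i}b"} for i in range(m)]
    steps = 0
    while any(s != mem[0] for s in mem):
        steps += 1
        order = list(range(m))
        rng.shuffle(order)  # random matching into pairs; odd m leaves one actor out
        for a, b in zip(order[::2], order[1::2]):
            for src, dst in ((a, b), (b, a)):
                novel = sorted(mem[src] - mem[dst])
                if novel:   # send one category still unknown to the receiver
                    mem[dst].add(rng.choice(novel))
    return steps

print(steps_to_homogeneity(m=5, seed=1))
```

Because memories only grow and any two actors are eventually matched, the loop always terminates; the step count varies with the seed.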
It follows that the cumulative formation of local clusters of actors, corresponding to a progressive regionalization of the effective space of the network, is erased as soon as at least one category shared by all actors allows each of them to escape from the cluster in which they would otherwise remain enclosed. Because of the homogenization process thus involved in the distribution of representations among the actors, the final topography of this distribution has exactly the same structure as that which we

6 With the advent of such a psychological category, the various public institutions governing several previously relatively autonomous groups of actors tend to merge into a small number of institutions governing a single group, such as the Carolingian Empire. The struggle between mutually competing conventions turns to the final victory of one over all the others (Ancori 2017a) and, more generally, the tribalization of the world gradually fades before a national, then supranational, space populated by atomized individuals; local dialects then give way to a lingua franca and, finally, local currencies disappear in favor of a single currency.

7 Such speed must be compared with the enormous potential volume of information that inter-individual communication within the network must realize between the date when the unstable local informational equilibria are reached and the date when the final stable global informational equilibrium is achieved. In an example with m = 5, the two unstable local informational equilibria are realized at date t3 with Hmax = 524 bits and H = 266 bits, and the stable global informational equilibrium at date t15 + ε with H = 2¹⁵ bits = 32,768 bits and Hmax = m.H = 5 × 2¹⁵ bits = 163,840 bits.


saw in our previous chapter. It can be represented by a matrix [aij](T) whose coefficients are all equal to 1: at a certain date T, aij = 1, ∀i, ∀j. Contrary to what the contemporary vulgate claims, social communication is therefore synonymous neither with a global and immediate erasure of the differences between all its protagonists nor with a convergence of their points of view: it is only locally (at the level of each cluster of actors) that it first produces such an effect, and this effect is simultaneously accompanied, at the global level, by an increase in the differences between the points of view of the clusters of actors. And this is fortunate, because the novelty (in the weak sense) generated within the network by inter-individual communication is precisely due to the ambiguity inherent in it. It is this ambiguity that produces new information by combining psychological categories that potentially exist in the network from its initial state: it is because the actors give different meanings to representations that are nevertheless partially shared that the epidemiology of these representations is reflected in their transformations. However, the maintenance of separations between the regions dividing the network space before the homogenization process ends does not only have advantages. It certainly implies a certain production of new information, but this production decreases continuously throughout the network's trajectory, and this phenomenon, which affects the network as a whole, is even more pronounced when we consider each cluster of actors taken in isolation.

6.3. Evolution of information production at the level of the global network and at the level of each cluster of actors

Let us study this evolution with the simple example of a network of only five actors, so that the number n of psychological categories then present in the network is equal to 15. The state of the network at date t0 is then that of Table 6.8. In this state of the network, Hmax = 80 bits, H = 70 bits and R = 12.5%.
When they are not zero, all the propensities to communicate between actors are initially equal to 1/7. A roll of the dice indicates that the two pairs of actors entering into communication in this state of the network are A1 and A2, on the one hand, and A3 and A4, on the other hand. Actor A5 then does not communicate with anyone, and, as in the general case, we will assume here that they do not think either. In accordance with our previous analyses, the communication established within each of the two pairs (A1 and A2) and (A3 and A4) is such that the messages exchanged on this occasion contain exactly two categories. Thus, A1 communicates the message “C1C2” to A2, who simultaneously communicates the message “C2C3”, and, always simultaneously, A3 communicates the message “C3C4” to A4 who communicates the

Provisional Regionalization and Final Homogenization

133

message “C4C5”. The state of the network at date t1 can then be represented as shown in Table 6.9.

      C1  C2  C3  C4  C5  C6  C7  C8  C9  C10 C11 C12 C13 C14 C15
A1     1   1   0   0   0   1   1   0   0   0   0   0   0   0   0
A2     0   1   1   0   0   0   0   1   1   0   0   0   0   0   0
A3     0   0   1   1   0   0   0   0   0   1   1   0   0   0   0
A4     0   0   0   1   1   0   0   0   0   0   0   1   1   0   0
A5     1   0   0   0   1   0   0   0   0   0   0   0   0   1   1

Table 6.8. Status of a network of five actors at date t0

      C1  C2² C3  C4² C5  C6  C7  C8  C9  C10 C11 C12 C13 C14 C15
A1     1   1   1   0   0   1   1   0   0   0   0   0   0   0   0
A2     1   1   1   0   0   0   0   1   1   0   0   0   0   0   0
A3     0   0   1   1   1   0   0   0   0   1   1   0   0   0   0
A4     0   0   1   1   1   0   0   0   0   0   0   1   1   0   0
A5     1   0   0   0   1   0   0   0   0   0   0   0   0   1   1

Table 6.9. Status of a network of five actors at date t1

In this state, Hmax = 144 bits, H = 122 bits and R = 15.28%. The rate of information production from t0 to t1 is:

γ(t0, t1) = [H(t1) – H(t0)] / H(t0) = 74.28%

and the distribution of propensities to communicate among the five individual actors is as follows:

p1,2(t1) = p3,4(t1) = 3/7
p1,5(t1) = p4,5(t1) = 1/8
p2,3(t1) = 1/9

In other words:

p1,2(t1) > p1,2(t0)
p3,4(t1) > p3,4(t0)
p2,3(t1) < p2,3(t0)


p5,1(t1) = p5,4(t1) < p5,1(t0) = p5,4(t0)
p2,3(t1) < p5,1(t1) = p5,4(t1)

We know that the process of forming clusters of actors thus initiated is cumulative, and that it is all the more accentuated as the messages exchanged are rich in new categories for their receivers. Nevertheless, let us preserve the habitual structure of these communications by limiting the content of their messages to two psychological categories. Let us consider the messages “C2²C6” and “C2²C8” sent respectively at date t1 by A1 and A2 and received respectively by A2 and A1 when they were communicated, and the same applies to “C4²C10” and “C4²C12” for the pair (A3 and A4). At date t2, the state of the network is represented by the matrix in Table 6.10.

      C1  C2³ C3  C4³ C5  C6  C7  C8  C9  C10 C11 C12 C13 C14 C15
A1     1   1   1   0   0   1   1   1   0   0   0   0   0   0   0
A2     1   1   1   0   0   1   0   1   1   0   0   0   0   0   0
A3     0   0   1   1   1   0   0   0   0   1   1   1   0   0   0
A4     0   0   1   1   1   0   0   0   0   1   0   1   1   0   0
A5     1   0   0   0   1   0   0   0   0   0   0   0   0   1   1

Table 6.10. State of the network of five actors at date t2
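The propensity values quoted at each date are consistent with a simple set-overlap measure, pk,l = C / (K + L – C), where K and L are the sizes of the two repertoires and C the number of categories they share. This reading is inferred from the reported fractions, not a formula stated in this chapter; a quick check on the states of Tables 6.8–6.10:

```python
from fractions import Fraction

def propensity(sk, sl):
    """p_{k,l} = C / (K + L - C): shared categories over the union of the
    two repertoires (a Jaccard-style measure, inferred from the text)."""
    return Fraction(len(sk & sl), len(sk | sl))

# Repertoires read off Tables 6.8 (t0), 6.9 (t1) and 6.10 (t2).
t0 = {"A1": {1, 2, 6, 7}, "A2": {2, 3, 8, 9}, "A3": {3, 4, 10, 11},
      "A4": {4, 5, 12, 13}, "A5": {1, 5, 14, 15}}
t1 = {"A1": {1, 2, 3, 6, 7}, "A2": {1, 2, 3, 8, 9}, "A3": {3, 4, 5, 10, 11},
      "A4": {3, 4, 5, 12, 13}, "A5": {1, 5, 14, 15}}
t2 = {"A1": {1, 2, 3, 6, 7, 8}, "A2": {1, 2, 3, 6, 8, 9},
      "A3": {3, 4, 5, 10, 11, 12}, "A4": {3, 4, 5, 10, 12, 13},
      "A5": {1, 5, 14, 15}}

assert propensity(t0["A1"], t0["A2"]) == Fraction(1, 7)   # all pairs at t0
assert propensity(t1["A1"], t1["A2"]) == Fraction(3, 7)
assert propensity(t1["A1"], t1["A5"]) == Fraction(1, 8)
assert propensity(t1["A2"], t1["A3"]) == Fraction(1, 9)
assert propensity(t2["A1"], t2["A2"]) == Fraction(5, 7)
assert propensity(t2["A1"], t2["A5"]) == Fraction(1, 9)
assert propensity(t2["A2"], t2["A3"]) == Fraction(1, 11)
```

Under this reading, the cumulative clustering is immediate: every exchange within a pair raises its overlap (numerator) while enlarging the unions with outsiders (denominators of the other pairs).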

In this state of the network, Hmax = 272 bits, H = 202 bits and R = 25.73%. The rate of information production from t1 to t2 is γ(t1, t2) = 65.57%. As for the propensities to communicate between actors, they are distributed as follows:

p1,2(t2) = p3,4(t2) = 5/7
p1,5(t2) = p4,5(t2) = 1/9
p2,3(t2) = 1/11

In other words:

p1,2(t2) > p1,2(t1)
p3,4(t2) > p3,4(t1)


p2,3(t2) < p2,3(t1)
p5,1(t2) = p5,4(t2) < p5,1(t1) = p5,4(t1)
p2,3(t2) < p5,1(t2) = p5,4(t2)

Finally, “C2³C7” and “C2³C9” are the messages sent on this date by A1 and A2 respectively and received by A2 and A1 respectively when they were communicated, and the same applies to “C4³C11” and “C4³C13” for the pair (A3 and A4). At date t3, the state of the network can be represented from the matrix in Table 6.11.

      C1  C2⁴ C3  C4⁴ C5  C6  C7  C8  C9  C10 C11 C12 C13 C14 C15
A1     1   1   1   0   0   1   1   1   1   0   0   0   0   0   0
A2     1   1   1   0   0   1   1   1   1   0   0   0   0   0   0
A3     0   0   1   1   1   0   0   0   0   1   1   1   1   0   0
A4     0   0   1   1   1   0   0   0   0   1   1   1   1   0   0
A5     1   0   0   0   1   0   0   0   0   0   0   0   0   1   1

Table 6.11. State of the network of five actors at date t3

In this state of the network, Hmax = 524 bits, H = 266 bits and R = 49.24%. The rate of information production from t2 to t3 is then γ(t2, t3) = 31.68%, and the propensities to communicate are distributed as follows:

p1,2(t3) = p3,4(t3) = 1
p1,5(t3) = p4,5(t3) = 1/10
p2,3(t3) = 1/13

In other words:

p1,2(t3) > p1,2(t2) > p1,2(t1)
p3,4(t3) > p3,4(t2) > p3,4(t1)
p2,3(t3) < p2,3(t2) < p2,3(t1)
p1,5(t3) = p4,5(t3) < p5,1(t2) = p5,4(t2) < p5,1(t1) = p5,4(t1)


p2,3(t3) < p5,1(t3) = p5,4(t3)

We find here the juxtaposition of two local informational equilibria – one being reached by the cluster composed of A1 and A2, and the other by the cluster composed of A3 and A4 – as analyzed above in the general case. From the initial date t0 to the final date t3, the evolution of the values of the main variables of the network considered as a whole is represented in Tables 6.12 and 6.13.

Dates            t0     t1     t2     t3
Hmax (in bits)   80     144    272    524
H (in bits)      70     122    202    266
R (in %)         12.5   15.28  25.73  49.24
p1,2 = p3,4      1/7    3/7    5/7    1
p1,5 = p4,5      1/7    1/8    1/9    1/10
p2,3             1/7    1/9    1/11   1/13

Table 6.12. Complexities and redundancies of the global network for m = 5

Periods    From t0 to t1   From t1 to t2   From t2 to t3
γ (in %)   74.28           65.57           31.68

Table 6.13. Global network information production rate for m = 5

These tables show that the maximum (Hmax) and structural (H) complexities, as well as the redundancy (R) of the network, increase continuously between t0 and t3. At the same time, the rate of information production steadily decreases. As we noted in the previous chapter with our simple example of a network with only two actors, the structural complexity and the redundancy of the network vary in the same direction throughout its trajectory when the only driving force behind its evolution is inter-individual communication. In fact, H measures the volume of the union of the cognitive repertoires and R measures that of their intersection, and this union and this intersection grow simultaneously. At the same time, γ decreases precisely because of the continuous growth of R: everything happens as if the network sees its informative potentialities increasingly stifled under the increasing weight of its redundancy.
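The figures of Tables 6.12 and 6.13 follow mechanically from R = 1 – H/Hmax and γ(t, t′) = [H(t′) – H(t)] / H(t); a minimal check (variable names are ours, and the comparison allows for the two-decimal rounding used in the tables):

```python
# Redundancy and information-production rate for the global network,
# recomputed from the Hmax and H columns of Table 6.12.
Hmax = [80, 144, 272, 524]   # bits, at t0..t3
H    = [70, 122, 202, 266]   # bits, at t0..t3

R     = [100 * (1 - h / hm) for h, hm in zip(H, Hmax)]      # R = 1 - H/Hmax
gamma = [100 * (H[i + 1] - H[i]) / H[i] for i in range(3)]  # growth rate of H

table_R     = [12.5, 15.28, 25.73, 49.24]   # Table 6.12
table_gamma = [74.28, 65.57, 31.68]         # Table 6.13

assert all(abs(a - b) <= 0.01 for a, b in zip(R, table_R))
assert all(abs(a - b) <= 0.01 for a, b in zip(gamma, table_gamma))
```

The same two formulas applied to Hmax = [32, 64, 128, 256] and H = [30, 56, 96, 128] reproduce the per-cluster values of Tables 6.14 and 6.15.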


Finally, this example very clearly illustrates the cumulative nature of this process through the evolution of the individual actors’ propensities to communicate. Initially distributed in a perfectly equitable way among the population of actors, these propensities evolve in such a way that, as the network most likely evolves, clusters of individual actors are formed, such that the actors contained in any one of these clusters grow ever closer from a cognitive point of view, and simultaneously ever more distant from the actors contained in the other clusters. It follows that the most likely communications are always poorer in potential information creativity, while the least likely communications are always richer in it. This means that the continuous drying up of the pool of potential combinations of elementary categories from which communication draws its creativity, as described in the previous chapter, is a process that is accelerating: the rate of information production can only decrease from period to period until it becomes strictly zero. In this final informational equilibrium of the global network, the values of the main variables characterizing the network are as follows in the particular case where m = 5: Hmax = m · 2^15 bits, H = 2^15 bits, R = 80% and γ = 0⁸. All these developments are even more pronounced in each of the two clusters considered separately, as shown in Tables 6.14 and 6.15.

Dates            t0     t1     t2     t3
Hmax (in bits)   32     64     128    256
H (in bits)      30     56     96     128
R (in %)         6.25   12.5   25     50

Table 6.14. Complexities and redundancies of each cluster for m = 5

Periods    From t0 to t1   From t1 to t2   From t2 to t3
γ (in %)   86.67           71.43           33.33

Table 6.15. Information production rate of each cluster for m = 5

According to Table 6.14, the redundancy of each cluster doubles from each date to the next. With an initial value half that of the global network, it catches up with the latter at date t2 and then exceeds it from date t3, when each cluster reaches its local

8 With any number m of actors, Hmax = m · 2^(3m) bits, H = 2^(3m) bits, R = 1 – (1/m) and γ = 0.


informational equilibrium. Table 6.15 shows that the rate of information production remains higher within each cluster than that of the overall network during the first three observation periods⁹, and that it decreases from period to period more quickly than that of the overall network – from date t3 onwards, it becomes strictly zero. To conclude, let us underline a very important point in order to erase from the reader’s mind a possible impression of the ad hoc nature of the construction of our model, an impression that could arise from the particular distribution of the propensities to communicate that we have imposed on the initial state of the network, from the restriction of possible inter-individual communications to dyadic communications alone, or from the constraint imposing on the messages exchanged on this occasion to include only two psychological categories. What might seem like so many restrictions on the degree of generality of the model is in fact, on the contrary, a guarantee of its greater generality. It is indeed obvious that all the processes thus analyzed would be accentuated if we removed these apparent restrictions. The distribution of the propensities to communicate between individual actors that we have imposed on the initial state of the network is as neutral as possible towards the process of the formation of clusters of actors, because this process is in itself unidirectional and cumulative: any other distribution of these propensities, for example the one that prevails from date t1 in the last example that we have just used, would lead even more quickly to an informational equilibrium similar to the one we have associated with a date T in the network’s history. Moreover, the introduction of communications bringing together more than two individual actors would clearly lead to exactly the same type of results as those produced by dyadic communications, i.e. to the formation of clusters of actors whose respective populations would simply be more numerous than those composing the clusters highlighted here by the model. Finally, it is obvious that allowing messages, dyadic or not, to contain more than two psychological categories would, like the alternative distributions of propensities to communicate mentioned above, only accelerate the processes in question – cluster formation and/or informational equilibrium. It should also be noted that our concept of propensity to communicate makes it possible to introduce the counterfactual notion into the modeling proposed here, and simultaneously makes it possible to distinguish the related dimensions of the

9 This difference is due to the fact that in each cluster, the two actors are constantly producing information between t0 and t3, while at the global network level, the actor A5 is consistently unproductive during these three periods. It would be less marked if we considered a global network with only four actors (i.e. without actor A5) instead of five.


being and the becoming of the individual actor. Let us look again at the case of a network with an odd number of individual actors: something could have happened in the initial state of the network (Am, which remained isolated at date t0, could have established a communication at that date), but did not happen (it did nothing of the sort), and this absence of an event is still an event, since it produces an effect (the deformation of probabilities) that would not have existed without it. An actor who remains away from any communication (and does not carry out any categorization) thus remains identical to himself in the dimension of being, but not in that of becoming: in the state of the network considered, as soon as at least one other actor communicates or categorizes, this actor’s propensities to communicate, although the actor is totally inert, are modified, so that the range of openness towards their possible future is changed. It is this counterfactual notion linked to our concept of propensity to communicate that most clearly expresses the importance of social cohesion in our network, by highlighting the fundamental solidarity that exists between all the actors, whose individual actions (including the absence of action) produce effects that reflect on all of them. Our next chapter demonstrates that this concept is not only a marker of the network space, whose internal structure it configures, but also plays this role in the network’s temporal dimension.
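At the final equilibrium described above (footnote 8), every actor holds all 3m categories, so the redundancy R = 1 – H/Hmax = 1 – 1/m depends only on the number of actors, not on the size of the repertoires. A one-line check (function name is ours):

```python
from fractions import Fraction

def equilibrium_R(m):
    """Redundancy at the final informational equilibrium (footnote 8):
    H = 2^(3m) bits, Hmax = m * 2^(3m) bits, hence R = 1 - 1/m."""
    H = 2 ** (3 * m)
    Hmax = m * H
    return 1 - Fraction(H, Hmax)

assert equilibrium_R(5) == Fraction(4, 5)    # R = 80% for the five-actor example
assert equilibrium_R(10) == Fraction(9, 10)  # R grows with the number of actors
```

Exact rational arithmetic (`Fraction`) is used so that the 80% value is checked exactly rather than up to floating-point error.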

PART 3

Time

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.

7 Propensities to Communicate, the Specious Present and Time as Such, the Point of View from Everywhere and the Ancestrality’s Paradox

In this chapter, we will introduce some aspects of the concept of time under which our model operates. We will first show how these temporal aspects are knotted together with those that our previous chapter analyzed as relating to space, this distinction having only an analytical reality. An adequate interpretation of our concept of the propensity to communicate will thus lead us to see in this concept a possible formalization of the notion of the specious present, introduced into the field of psychology by William James at the end of the 19th Century, then taken up by various schools of thought and explored today by the modern cognitive sciences. We will then propose to link this specious present, which constitutes the subjective time of our daily experience, to objective time within the framework of time as such, a concept necessary so that we can speak of these two categories of time as time. Usually, so-called “objective” time is measured on the basis of a definition of the second as the duration of 9,192,631,770 radiation periods corresponding to the transition between the two hyperfine levels of the fundamental state of the cesium 133 atom. We will see here that establishing the link needed to think of time as time, beyond all the possible registers and divisions of this concept, requires the introduction of another notion of objective time. Indeed, linking such a notion to that of subjective time implies that we must consider the difference between subjective and objective as a matter of degree, and not of nature. Time as such, linking these two notions, will then appear to us in the topological form of a Möbius strip.


This discussion of the link between subjective and objective time will then lead to a fundamental philosophical and epistemological question: how is it possible to combine the point of view of a particular person within the world with an objective view of that same world, a view that includes the person and their point of view? This is the central object of a famous work by Thomas Nagel (1986), entitled The View from Nowhere, which conceives objectivity in a structural way. While fundamentally agreeing with the author’s solution to the question, we will see that a vocabulary better adapted to this subject would rather invoke a “point of view from everywhere”, as it could be adopted by each actor at the cost of a tension on their part towards the asymptotic elimination of all their singular and subjective determinations. Finally, the fact that such a point of view is never more than asymptotic for an observer of our network, always imbued with singularity, leads us to confer a primacy on subjective time over objective time. We will then conclude this chapter with a discussion of a supposed “ancestrality’s paradox”, according to which such a primacy would amount to ignoring events located in a temporality prior to the appearance of any life on Earth, and well before that of Homo habilis. To close this chapter, we will show that this paradox is actually based on a double confusion.

7.1. Propensities to communicate and the specious present

While situating the actors in relation to each other in the network space, their propensities to communicate also index the future opportunities currently opened to them by their past trajectories. The previous chapter showed that the concept of the propensity to communicate is a marker of the structure of the network’s inner space, measuring the articulated series of links that both unite and separate individual actors. This concept crystallizes each actor’s current situation at each moment of the network’s global trajectory. The fact that this measure is always associated with the present moment in the evolution of this space implies that the current distribution of the propensities to communicate among all the actors is what ties together the network’s space and time into an articulation where this space is never more than a moment of this time. The exact nature of this notion of the present can be deduced from this concept of the propensity to communicate between individual actors. In fact, the term propensity refers to a probability conditional on a situation. This situation is that of the current state of the network, and each individual actor appears to us to be installed in a form of specious present very close to the one whose concept was


popularized by W. James in The Principles of Psychology (James 1890)1. With this term, W. James referred to the extension of time that we experience as present, and this extension represented in his eyes the unity of composition of our perception of time: according to him, this duration would present a “rear guard end” and an “avant-garde end”, these ends being part of a block of duration, a time interval felt as a whole with its two ends embedded in it. Although he seemed quick to change his mind as he reflected on this2, almost 20 years later, W. James’ position had not changed an inch on this point. In his Textbook of Psychology (1892), the 1908 edition of which was translated into French the following year, he wrote: “The present itself [this boundary moment that separates the past from the future] is only perceptible when it is integrated into a much broader organic time than it is, whose life and movement it shares. In fact, this present is only a pure abstraction; not only do we not have the perception of it, but we do not even have the conception of it, unless we are initiated into philosophical meditations. It is reflection that convinces us that it must exist; experience does not give us the experimental intuition of it. The only immediate datum provided by our experience here is what has rightly been called [as opposed to this ‘real present’] the ‘specious present’. This specious present has a certain scope. It is, one could say, like a humpbacked bridge thrown over time, from the top of which our gaze can go down to the future or to the past at will. Our perception of time is therefore united by a duration between two limits, one forward and the other backward; these limits are not perceived in themselves, but in the block of time they end.
For if we perceive a succession, it is not that we perceive first a before and then an after, and that we are led by this to infer the existence of a time interval between this before and this after; but we perceive the interval itself as a whole, with its two limits that are part of it. The experience of duration is therefore always, even the first time it occurs, a synthetic data and by no means a simple data; its 1 According to P. Buser and C. Debru, the term specious present “was created by a series of ‘amateur’ philosophers, the most recent of whom are Robert Kelly, better known under the pseudonym Clay and who were close to William James, as well as Shadworth Hodgson who would have influenced E. Husserl” (2011, p. 45, n. 30). 2 According to P. Buser (2008, pp. 34–37), W. James was successively marked by the old tradition of moral philosophy, then a convinced Darwinian, before trying experimental physiology in contact with the German school of associationists and mechanists, whose laboratory psychology he ended up rejecting in 1875, when he began teaching psychology. For an intellectual biography of W. James and the scientific context of his psychological work, considered in the light of his constant and intense interest in phenomena he considered “irregular”, and in particular his involvement in psychical research in the early 1880s, which aimed to study objectively and impartially phenomena resulting from “mesmerism”, “induced somnambulism” or even “modern spiritism”, see Trochu (2018, pp. 25–263).


elements are inseparable from sensitive perception; and it is after the fact that attention distinguishes them, very easily, and discerns a separate beginning and end. After a few seconds, the consciousness of duration ceases to be an immediate perception and becomes a more or less symbolic construction. To obtain the one-hour representation, for example, we must divide our experience into successive ‘presents’, as we would do by repeating indefinitely ‘now, now, now, now, now’, etc. Each ‘now’ is the perception of an isolated segment of time; but the exact sum of all these segments can never produce a clear impression on the consciousness” (James 1909, pp. 332–333, original author’s use of italics, author’s translation)3. The same idea was evoked with other words in 1896 by Henri Bergson in the first edition of Matter and Memory, emphasizing that the present is always both below and beyond the mathematical point that we ideally determine when we think in the present moment, because it encroaches both on our past and on our future (Bergson 1990, pp. 152–153)4. It was then developed by Edmund Husserl’s phenomenology and his conception of a present “structured by the three moments that are now, the immediate past (the ‘retention’) and the future in the process of coming (the ‘protention’)” (Buser and Debru op. cit., p. 65, author’s translation), then by that of Maurice Merleau-Ponty (1945, 1964) or by the philosophy of Alfred N. Whitehead (Stengers 2002, pp. 74–90). Finally, the notion of the specious present is nowadays taken up by the philosophy of mind and by cognitive sciences (Debru 1992; Varela 2002; Debru et al. 2008; Buser and Debru op. cit., pp. 130–139)5.

3 According to some experiences of the time, later cited by W. James, this specious present would last between 1/500th of a second and 12 seconds. Some estimates have even given it a maximum of one minute, but according to current neuroscience, its average duration is 3 seconds (P. Buser and C. Debru op. cit., p. 130, n. 14). The maximum duration of 12 seconds corresponds to Clay’s “specious present”, and the minimum duration of 2 milliseconds is the one Sigmund Exner measured (Debru 2008, p. 70). 4 At the same time, it is exactly the same idea that is expressed by J.-M. Guyau in a pioneering work on the genesis of the idea of time, the first edition of which dates back to 1902, when he opposed the rational present, “conceived as infinitely small, dying and being born at the same time”, which constitutes a “result of mathematical and metaphysical analysis”, to the empirical present, “a piece of duration actually having past, present and future, a piece divisible into an infinity of present mathematics [...]” (op. cit., p. 30, author’s use of italics). 5 In fact, this conception could be traced back to Saint Augustine, who wrote in his Confessions (XX, 26): “There are three times: a time present of things past, a time present of things present and a time present of things future [...] The time present of things past is memory; the time present of things present is direct intuition; the time present of things future is expectation”.


The “avant-garde end” of this specious present reflects this very general characteristic of memory, which is that it not only stores the past, but also allows individual actors to make predictions based on the acquired experience it contains, so that it provides the basis on which they can build possible scenarios for the future. It is precisely this notion of the specious present, with its double opening towards the past and the future, that our concept of propensity to communicate formalizes. For the measurement of such a propensity, the cardinals of the sets Si of psychological categories known to the actors constitute quantitative measures of the traces left by their respective past6. They contribute to measuring the volumes of the cognitive repertoires Si built by them from their most distant past to the present moment. However, we know that the individual memories of these actors consist of certain parts of their cognitive repertoires, so that the combinations of psychological categories contained therein delimit the “rear guard end” of the specious present according to W. James. And it is on the basis of the experience thus contained in their individual memories that these same actors cogitate in order to construct the scenarios that allow them to imagine the future. From this perspective, it is from their memories that these actors create and combine the categories that contribute to their dynamic adaptation to a changing environment. On the other hand, some of the psychological categories in these memories are then shared by them – such as C3, for example, which appears in Sk(t0) and Sl(t0) – and we have seen that their number is then measured by C(t). However, each category thus shared represents the minimum of common language that opens up the possibility for each individual actor to communicate with at least one other actor.
In each state of the network, the number of shared categories appearing in the representations of a given individual actor therefore quantifies the openness of the range of possible communications with at least one other actor. Thus, while the value C measures the current volume of shared representations at the current moment of Ak and Al’s respective past trajectories, and as such contributes to measuring the number of their combinations that delimit the “rear guard end” of the specious present, it also measures the propensity of these actors to currently establish a communication that will partly shape the future state of the network. In this sense, this value also contributes to measuring the extent of the “avant-garde end” of the specious present as defined by W. James. Beyond the sole example of the actors Ak and Al and the number C of their shared psychological categories, the same is obviously true for any psychological category simultaneously present in the cognitive repertoire of at least two individual actors in any state of the network: for each of these actors, any category of this type

6 For example, in the measurement of pk,l (t) for actors Ak and Al, the values K(t) = card Sk(t) and L(t) = card Sl(t) are quantitative measures of the traces left by their respective past up to date t.
In each state of the network, the number of shared categories appearing in the representations of a given individual actor therefore quantifies the openness of the range of possible communications with at least one other actor. Thus, while the C value measures the current volume of shared representations at the current moment of Ak and Al’s respective past trajectories, and as such contributes to measuring the number of their combinations that limit the “rear” of the specious present, it also measures the propensity of these actors to currently establish a communication that will partly shape the future state of the network. In this sense, this value also contributes to measuring the extent of the “avant-garde end” of the specious present as defined by W. James. Beyond the sole example of the Ak and Al actors and the number C of their shared psychological categories, the same is obviously true for any psychological category simultaneously present in the cognitive repertoire of at least two individual actors in any state of the network: for each of these actors, any category of this type 6 For example, in the measurement of pk,l (t) for actors Ak and Al, the values K(t) = Sk (t) and L(t) = Sl (t) are quantitative measures of the traces left by their respective past up to date t.


is a guarantee of the possibility of establishing communication with at least one other individual actor. Far from giving the present of the network the sharp edge of a knife’s blade, our concept of propensity to communicate thus formalizes the notion of a specious present looking like “a saddle, endowed with a certain width, on which we are perched, and from which we look into the two directions of time” (James 1890, p. 609, author’s translation), or to “a speed bump over time, from the top of which our gaze can go to the future or to the past at will” (James 1909, p. 332). More generally, each psychological category included in an individual cognitive repertoire, whether shared or idiosyncratic, is intended to participate in the specious present of the actor concerned in one state or another of the network. The realization of this vocation nevertheless presupposes that the category/categories considered are part of the actors’ individual memories, which represent the semantically relevant part of their cognitive repertoires. To fully understand this point, it is now necessary to specify the relationship established between memory, cognition, consciousness and time of the individual actor at the moment of each perception of a representation-occurrence by the latter. With regard to the relationship between memory and cognition, we argue with G. Tieberghien (op. cit.) that, far from being a simple form of cognition of the past, memory represents the very form of cognition: it is simply impossible to study cognition without making assumptions about the structure and functioning of the underlying memory, so we must consider the concept of memory as more fundamental than cognition and even define cognition as an emerging property of a memory system (1997, p. 13). 
According to Hypotheses 4.2 and 4.3 relating to the structure of the network, this emerging property takes the form of a categorization process that transforms each representation-occurrence into a generic representation identified with a psychological category, and Hypotheses 4.4–4.6 immediately allowed us to propose a formalization and a quantitative measurement of the concepts of cognitive repertoire, individual memory and global “memory” of the network7. Finally, Hypotheses 4.10–4.13, relating to the evolution of the network, have clarified the links between this central form of cognition, which includes psychological categories in the cognitive repertoires (or even in the individual memories) of the actors, and the concept of consciousness as it is now understood by cognitive psychology. Indeed, we have seen that this discipline considers consciousness as a property of the momentary states of representations rather than as an entity or instance, a property that we know can be graduated on a scale ranging from “very conscious” to “unconscious” to “conscious on a small scale”, and which includes a threshold notion8.

7 See Chapter 4, section 4.1. 8 See Chapter 4, section 4.2.

Propensities to Communicate, the Specious Present and Time as Such

149

We used this threshold notion to define a number g of representations-occurrences from which a given psychological category jumps from the “conscious” level to the “unconscious” level of the meta-categories governing the selection and organization of the categories composing the individual actor’s memory. It is now the scalable nature of consciousness as a property, and more precisely the distinction between “very conscious” and “conscious on a small scale”, that we will use in our theoretical construction, because this distinction seems to us to be at the heart of the relationships between consciousness and time on which the notion of the specious present, as it works in our model, is based. Indeed, in a recent book dealing with a form of memory without the memories, A. Lejeune and M. Delage classify in what they call the “preconscious” – a notion very close to J.-F. Le Ny’s “very little conscious” – those elements that are not conscious but directly accessible to consciousness without having to undergo a transformation process: “These are memories that we have stored away, but are not updated until we need them. However, we can freely use them to talk about them if we wish, or if it is necessary. They are waiting to be useful to the consciousness. In short, they are implicit, not expressed, but are capable of being so, in particular through what the psychoanalyst calls an associative process. This is similar to what is called in neuropsychology the phenomenon of priming” (op. cit., p. 22, author’s translation)9. As A. Didierjean argues, this difficulty that is sometimes encountered when recovering such memories could have an adaptive value by preventing an excess of information in memory: “Our cognitive system seems […] to have found an equilibrium between a search in memory that is too difficult and would lead to

9 Lejeune and Delage give an example of these “memories of experiences that seem forgotten, but whose traces are preserved and which can reappear in the field of consciousness, when they are reactivated by certain experiences of the present” (op. cit., p. 23, author’s translation). Let us imagine that “we meet a former fellow student from university who we have not seen for years. What comes to mind is a whole set of memories concerning the teachers, other comrades, the amphitheater, the atmosphere of the time, which were put aside because they were not relevant to our present life, but were always in reserve” (ibid., p. 22). In the same way, A. Didierjean insists on the extreme importance of context for recollection: if an external element can serve as a trigger for the recovery of a very old memory, it is because it shares with the latter many elements of context – as in the example proposed here by Lejeune and Delage (Didierjean op. cit., pp. 76–79).

the need for constant efforts to remember, and one that is too easy and automatic which would lead us to be overwhelmed by a multitude of memories” (Didierjean op. cit., pp. 72–73, author’s translation). From our point of view, the psychological categories corresponding to such memories do indeed appear in the cognitive repertoire of the individual actor concerned, but as long as these memories are not recalled during some experience in the present, they do not really appear in their memory, and consequently do not enter into the composition of their specious present. Indeed, as Nayla Farouki writes, when consciousness operates, it goes through at least two stages: on the one hand, through attention, it identifies an object that it “illuminates” like a projector; on the other hand, it simultaneously inscribes in its knowledge of the world the very act of identifying the object (Farouki 2000, pp. 65–66). In our opinion, for the representation-occurrence of this object to be transformed into a psychological category that is inscribed in an individual memory – one of whose central characteristics, as we know, is to be associative – the individual actor must also activate the necessary reflexive dimension so that they can associate, in a semantically relevant way, the category in question with certain combinations of psychological categories already present in their memory. It is only on this imperative condition that “each individual possesses a knowledge of their own consciousness that is continuously situated in the present, including when they recover elements of information contained in their memory”10 (Farouki op. cit., p. 66). For the individual actor, the reflexive dimension consisting of taking as an object of attention the potential inscription of a memory which would otherwise remain, until further notice, in implicit memory thus appears constitutive of the construction of the individual’s specious present. As J.-M. Guyau once said: “Not only is time related to representations […] but it can only be perceived if representations are recognized as representations, not as immediate sensations. It is therefore necessary to have the apperception of the representation of a presentation”11 (op. cit., p. 12, original author’s use of italics, author’s translation).

10 As Véronique Le Ru writes, “time is the necessary passage: I am unable to prevent the future from not being, the present from no longer being, but I can retain some meshes of the past. Meshes and not complete things or whole moments because, to rebuild the mesh I want and pull by a thread, I had to eliminate and forget many things and moments that surround the meaningful mesh I am aiming for” (Le Ru 2012, pp. 15–16, author’s translation).
11 For V. Le Ru, “as a human being, I am aware of myself and the time in me that makes me myself. And my consciousness itself is a process, a spider’s web in the making made up of internal and external interactions” (op. cit., pp. 65–66, author’s translation).
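
The threshold mechanism discussed above – a psychological category sliding from the “conscious” to the “unconscious” level of the meta-categories once its representation-occurrences reach the number g – can be sketched as a toy computation. The counting rule, the value of g and the category labels below are illustrative assumptions of ours, not part of the formal model of Chapter 4:

```python
# Toy sketch of the threshold notion g (illustrative assumptions only):
# a category accumulates representation-occurrences; once their count
# reaches g, the category drops below the conscious level and joins the
# "unconscious" meta-categories governing selection and organization.
from collections import Counter

G = 3  # hypothetical threshold g

def consciousness_level(occurrences, g=G):
    """Grade a category on the scale discussed in the text."""
    if occurrences == 0:
        return "absent"
    return "conscious" if occurrences < g else "unconscious"

# Count representation-occurrences per category, then grade each category.
counts = Counter(["dog", "dog", "run", "dog", "run"])
levels = {cat: consciousness_level(n) for cat, n in counts.items()}
print(levels)  # {'dog': 'unconscious', 'run': 'conscious'}
```

The point of the sketch is only the shape of the rule: consciousness is treated here, as in the text, as a graded property of momentary states of representations – with a threshold – rather than as an entity or instance.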

Finally, the notion of the specious present introduced by W. James, and taken up since by many other authors, and our concept of propensity to communicate, which proposes a formalization of it, share a central characteristic: they link together the three components of the notion of time that are familiar to us. In this assimilation of past, present and future, the specious present and the propensity to communicate each mark, in its own way, the space–time that corresponds to it: that of the individual subject of psychology and the cognitive sciences, and that of our complex socio-cognitive network of individual actors – our cognitive space–time. It remains for us to analyze the notion of simultaneity, which allows us to establish the existence of a space–time that would be common to all the individual actors gathered within the network, and thus to evoke a notion of territory common to the subjects envisaged by psychology and the cognitive sciences. This analysis involves the introduction of a concept that encompasses both subjective and objective time. This is the subject of our next section.

7.2. Subjective time, objective time and time as such

The concept of time usually refers to two dimensions considered as irreducible: the subjective time of each of our consciousnesses, whose elasticity we all know – waiting times can feel endless, whilst times of pleasure often feel fleeting – and the objective time of day and night, or the time measured by our watches and clocks. Apparently irreducible, these two dimensions are nevertheless inseparable, since the elasticity of the first is only assessed in light of the second, and the so-called objectivity of the latter itself appears socially constructed (Nowotny 1992; Elias 1996; Adde 1998; Farouki 2000; Le Ru 2012). Both irreducible and inseparable, these two dimensions of time therefore seem to us to be complementary. We thus encounter C. Castoriadis’ position affirming the need for a third term in order to be able to consider these two dimensions simultaneously: just as the complementarity of two subsets can only be assessed in relation to the reference system offered by the whole of which they constitute an exhaustive partition, the subjective and objective dimensions of historical time have meaning only in relation to this third term. These dimensions then require the construction of a concept of: “time as such [with a] third term that makes it possible to speak of subjective time and objective time. Time as such would thus appear to overlook not only the various subjective times [...] but also all particular times of any kind, including objective time and its possible

fragmentations [...] and make it possible, through innumerable articulations and interlockings, their mutual adjustment, or at least their accommodation, and their correspondence” (Castoriadis 1990, p. 247, original author’s use of italics, author’s translation). How then should we conceive and represent these three facets of the concept of time within the framework of our complex socio-cognitive network model? Let us begin with the subjective dimension of this concept, this “primitive current of consciousness” in which Nicholas Georgescu-Roegen saw the only basis of time (1969, p. 70 sq; 1971, p. 132 sq). Let us thus borrow from Aristotle his famous formula according to which “[time is a] number of change with respect to the before and after” (Aristotle 2008, 219b 1–2)12. Although Aristotle stated that “time is the same everywhere at once” (Aristotle 2008, 218b 13–14), and thus seems to have given an objective character to his concept, this formula implies the existence of a being (of a soul, psuche, says Aristotle) numbering (measuring) a change (understood in a very general sense) perceived by it (Piettre 1994, pp. 11–15): in a soulless world, there would be no perception (and a fortiori no measure) of change, so there could well be movement as the “substrate” of time, but not time in the full sense. Commenting on the last pages of Temps retrouvé, Arnaud Gonord says nothing different: “Psychic time precedes objective time, and the discovery of the latter cannot be made without consciousness of the former […]. It is […] the awareness of change that reveals time […] for time to manifest itself […] at least two things are necessary: the first is that there is a change; the second is that there is a soul or consciousness to determine that change” (Gonord 2001, pp. 14–15, author’s translation). And as Castoriadis also pointed out: “If nothing changes in our minds, or if the change escapes our attention, it seems to us that time has not passed” (op. cit., pp. 257–258, author’s translation). A contrario, as soon as something changes in our minds and this change attracts our attention, we feel that time has passed: “When consciousness determines a change, this change must be very particular: if this change is total, then consciousness can no longer

12 On Aristotelian thought of time, see Catherine Collobert’s (1995) very detailed commentary.

recognize it as a change [...] The change must therefore not be total, and must allow consciousness to tell itself that time has passed”13 (Gonord op. cit., p. 16, author’s translation). In our model, this means that the flow of subjective time is experienced by any actor whose cognitive repertoire changes, and who measures this change through their meta-representational ability (Sperber op. cit.). If the event that this perceived change constitutes can be categorized in such a way as to be inscribed in their individual memory, then a new specious present occurs and replaces the previous one without completely erasing it: in our example of a network with two actors, this is the case for Ak and Al, whose respective cognitive repertoires doubled in volume between dates t0 and t1, and some of whose new combinations of psychological categories are destined to be added to their individual memories. Such a formulation immediately implies the objective dimension of time, without which the notion of simultaneity of subjective times, inherent in the expression “dates t0 and t1” common to these two actors, would be meaningless. The clock that we have implicitly used in this way is not extrinsic to the network as we have represented it. Indeed, the objective time thus involved is none other than that of the observer of this network14, and the latter is not (cannot be) totally outside the network itself: as we have already noted in Chapter 2, in the theory of natural complexity, “this observer, outside the system, is in fact in a hierarchical system, the (encompassing) higher level of organization compared to the element systems that constitute it” (Atlan 1979, p. 70, original author’s use of italics, author’s translation). It follows that an objective time elapses for the network as soon as

13 J.-M. Guyau noted in the same sense that without a perception of differences there is no time, but that this difference must not be absolute, so that one can compare past and present states: “in a dream, attention being asleep, each image disappears entirely: then, the comparison between the past state and the present state becomes impossible” (op. cit., p. 19, original author’s use of italics, author’s translation). In short, “in an absolutely heterogeneous mass nothing could give rise to the idea of time: duration only begins with a certain variety of effects. On the other hand, too absolute a heterogeneity, if possible, would also exclude the idea of time, which has among its main characteristics continuity, that is, unity in variety” (ibid., p. 20, author’s translation).
14 We must credit this observer with a constant power of discrimination in order to qualify the time measured by them as objective, because the clock measuring objective time must ideally be (almost) timeless (Georgescu-Roegen 1969, p. 73). This is both a theoretical requirement and a recurrent practical concern among physicists, from Eddington, who said that “the better the clock is, the less it feels the passage of time” (quoted by Georgescu-Roegen ibid., author’s translation), to Jean-Marc Lévy-Leblond, who affirms that “any reflection on the measurement of time leads to the unavoidable but difficult dialectic of linear and cyclical time: there is no time unless there is change, but it can only be measured if there is repetition without change” (Lévy-Leblond 1996, p. 277, original author’s use of italics, author’s translation).

subjective time elapses in the network for at least one observed actor, and that this flow, which carries all the specious presents of individual actors from one given date to the next, is perceived by the network observer. Considering both the observer’s objective time and the observed actor’s subjective time thus initiates a possible solution to the problem of articulating these two dimensions with time as such, this third term being essential for us to be able to speak of these two types of time as time. Our rapid discussion of the relationship between observer and observed suggests the possibility of a permutation between these two hierarchical levels in the organization of our complex network. In fact, apart from this difference in hierarchical levels, the observer and the observed are in no way different in the eyes of the third party that we are, located at a third hierarchical level of our network: that of an observer observing the observer observing the network15. From the point of view of this third level, the actors located at the two lower levels are perfectly interchangeable: not only do the observer and the observed engage in the same types of cognitive operations (observing, perceiving, categorizing, memorizing, remembering, forgetting), but each individual actor observed “inside” the network is simultaneously an observer located “at the edge” of the network, and vice versa. The representation of representations that our model constitutes therefore easily incorporates the permutation of the observer and the observed actor. However, it is exactly this type of permutation that individual actors accomplish when they adopt the reflective posture that allows them to integrate into their individual memory one or more psychological categories that have recently been introduced into their cognitive repertoire, following inter-individual communication or an intra-individual categorization process.
In doing so, the individual becomes an observer of themselves in order to build a new specious present, ipso facto completing one more step in their subjective temporality and simultaneously driving the passage from the current state of the network to the next. On a strictly epistemological, and no longer psychological, level, it is the outdated character of the classical distinction between subject (observer) and object

15 We take up here again the fundamental epistemological contribution made by H. Atlan by explicitly introducing the observer into the theoretical space of complex systems that the paradigm of self-organization envisages: by transposing into a world of organized systems the noisy-channel coding theorem established by C. Shannon in a world of messages, H. Atlan was only able to demonstrate the contradictory effects of ambiguity by encompassing the level of the elementary transmission path between subsystems within a higher hierarchical level, which is the level of the transmission path of information from the system as a whole and the observer of the system (see Chapter 2, section 2.1). In doing so, H. Atlan placed himself in the position of a third party observing the observer observing the system, and it is exactly this posture that we are using here in our own model.

(observed) that is thus underlined again16. However, another distinction is simultaneously introduced at another level: the object here becomes the observer/observed couple, and the subject, the third party mentioned above. The traditional ontological distinction between subject and object is thus replaced by an operational distinction between two hierarchical levels of the same object. This distinction is essential to distinguish subjective time (of the observed actor-object) from objective time (of the observing observer-subject), and simultaneously to allow these two dimensions of time to be tied together. For it is correlatively a third hierarchical level that is thus introduced: the one at which is located a subject observing the relationship between the observing observer-subject and the observed actor-object, and it is thus the enunciation (of the observing observer-subject) that is projected into the space of the statement whose subject is the third party mentioned above. In the latter’s view, the distinction between subjective time (flowing “inside” the network) and objective time making simultaneous (from “outside” the network) the plurality of events that occur there therefore does not exclude a link that its third position gives it the task of weaving. This link is that of time as such, and one of its possible figures is that of a Möbius strip. Indeed, we know that the topology of the latter is characterized by the fact that it has only one face, in the sense that it is possible to pass from its “interior” to its “exterior” (and vice versa) without any break in continuity (without crossing an edge). However, it is precisely in this way that the subjective time of the observed actor “inside” the network and the objective time of the observer “outside” this network can be exchanged.
Time as such is thus formed at the moment when the observer of the network perceives themselves as an object of observation, and it is with the course thus completed along a Möbius strip, and its memorization, that the specious present of this observed observer finishes building itself while preserving their past and announcing their future. Moreover, it has been shown that the Klein bottle, the topological shape obtained by connecting two Möbius strips, most faithfully represents

16 Again, because it was of course I. Kant (or even Bishop Berkeley) who first showed the untenable nature of a distinction between the properties of the world that belong to our relationship with it and the properties of a world in itself, independent of our relationship with it: “an untenable thesis because thought cannot come out of itself to compare the world ‘in itself’ to the world ‘for us’, and thus discriminate between what is due to our relationship with the world and what belongs only to the world. Such an undertaking is indeed self-contradictory: at the moment when we think that such a property belongs to the world itself – we think so precisely, and such a property is therefore itself essentially linked to the thought that we can have of it” (Meillassoux 2006, pp. 16–17, author’s translation). As we will see in the last section of this chapter, it is this theory that Q. Meillassoux tried to refute in order to philosophically construct a “paradox of ancestrality”, a paradox also identified by some physicists (op. cit., pp. 13–38).

the type of spatialization of time that occurs during inter-subjectivity (Porge 1989). Finally, time as such would therefore take the form of a Möbius strip by linking the subjective times of the observed actors to the objective time of the observer, while it would take the form of a Klein bottle in the eyes of the third party observing the inter-subjectivity that is thus knotted between the two previous organizational levels – observing the observing observer. And we can thus imagine an infinity of potential hierarchical levels of observation – at a fourth level would be an observer observing the observer observing the observer observing the observed actors, and so on – giving time as such the multi-layered structure of an infinity of Möbius strips ordered according to an increasing hierarchy of connections. Although its measurement has been the subject of a long process of standardization in the West, as we will see later, historical time is therefore not unique. On the contrary, there are as many subjective times as there are observed actors, and as many objective times as there are observers and observation levels in the previous hierarchy, the number of these levels being potentially infinite. To suppose the opposite would be to postulate the existence of a kind of last-resort observer, that imaginary absolute point of reference at which Newton placed himself, making time a one-dimensional continuum in which events were placed according to a universal and predictable causal determinism – in other words, a time that could have existed even if the universe had been empty.
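
The one-sidedness invoked in the preceding paragraphs can even be checked numerically. The sketch below uses the classical textbook parametrization of the Möbius strip – nothing in it is specific to our model – and verifies that transporting a surface normal once around the strip brings it back reversed, which is exactly the sense in which “interior” and “exterior” are continuously exchanged:

```python
import math

def point(u, v):
    """Standard Möbius strip parametrization (classical textbook construction)."""
    r = 1.0 + (v / 2.0) * math.cos(u / 2.0)
    return (r * math.cos(u), r * math.sin(u), (v / 2.0) * math.sin(u / 2.0))

def normal(u, v=0.0, h=1e-6):
    """Unit surface normal via finite-difference tangents and their cross product."""
    pu = [(a - b) / (2 * h) for a, b in zip(point(u + h, v), point(u - h, v))]
    pv = [(a - b) / (2 * h) for a, b in zip(point(u, v + h), point(u, v - h))]
    n = (pu[1] * pv[2] - pu[2] * pv[1],
         pu[2] * pv[0] - pu[0] * pv[2],
         pu[0] * pv[1] - pu[1] * pv[0])
    norm = math.sqrt(sum(c * c for c in n))
    return tuple(c / norm for c in n)

# Same point of the center circle, before and after one full turn in u:
n_start, n_end = normal(0.0), normal(2 * math.pi)
flip = sum(a * b for a, b in zip(n_start, n_end))
print(round(flip))  # -1: the normal comes back reversed, so the surface is one-sided
```

After one loop the dot product of the two normals is −1: the same point of the strip now presents its “other” face, with no edge having been crossed.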
To assume epistemologically the place of the third party just mentioned also has an ethical dimension: it marks the posture of an enunciator who expresses themselves from a limit of the world but is simultaneously inside it, and who recognizes with humility that they thus produce statements from a point of view that is always socially and historically situated, and as such imbued with an ineliminable subjectivity. Moreover, there are as many limits of the world as there are individual actors who are also observers of it, and there is therefore no observer who would be able to observe the entire socio-cognitive network bringing together all these actors: each of them has access to only one region of this network, and this access always coincides with the subjective point of view of the actor concerned.

7.3. A point of view from nowhere or a point of view from everywhere?

We thus come up against the problem posed in 1986 by T. Nagel, and known by the expression that gives his work its title: is it possible to reach “the point of view from nowhere”? From the very first lines of this book, Nagel clarifies the meaning he intends to give to this expression: “This book is about one problem: how to combine the point of view of a particular person within the world with an objective view of the

same world that may include the person and his or her point of view”17 (1993, p. 7). As Nagel immediately adds, this problem has many aspects, and he devotes almost 300 pages to dealing with them, from one concerning the mind to one concerning birth, death and the meaning of life, including those concerning the relationships between body and mind, the objective self, knowledge, the relationship between thought and reality, freedom, value, ethics and, finally, the good life and the right life. It is mainly the aspects concerning the mind, the objective self, knowledge and the relationships between thought and reality that interest us here. What is the central argument put forward by Nagel in these respects? In his view, the distinction between the internal and external, or subjective and objective, points of view is a matter of degree: “A point of view or a form of thought is all the more objective the less it relies on the specificities of the individual’s constitution and place in the world, or on the characteristics of the particular type of creature that they are. The broader the domain of subjective types that can be accessed through a form of understanding, and the less it depends on specific subjective capacities, the more objective it is. A point of view that is objective compared to an individual’s personal vision may be more subjective when compared to an even more distant theoretical point of view. The moral point of view is more objective than that of private life, but less objective than that of physics. One could consider reality as a set of concentric spheres that gradually reveal themselves as we detach ourselves from the contingencies of the self” (op. cit., p. 9, author’s translation).
In order to juxtapose and, if possible, to unify the internal and external, or subjective and objective, points of view – an effort that he nevertheless declares from the outset is destined to remain unfinished – it is advisable, according to Nagel, to detach ourselves from the contingencies of the self. This detachment is progressive, and always follows the same logic: for the knowing subject, it is at each

17 In their history of objectivity, Lorraine Daston and Peter Galison (2012) place T. Nagel’s conceptions in the category of “structural objectivity” (op. cit., pp. 293–355). This form of objectivity brings together various authors in a current of thought, born between the end of the 19th Century and the beginning of the 20th Century among logicians, mathematicians, physicists and philosophers, which based objectivity not on images but on structures, “the only ones capable of breaking with the private mental world of individual subjectivity. Science, according to them, was only worthwhile if it was communicable to all; yet only structures – unlike images, intuitions and other mental representations – could be shared by all minds independently of space and time” (op. cit., p. 294, author’s translation). The reference to Nagel is found on p. 354.

step to accumulate information about the world at a given level from his or her own point of view, then to move to a higher level to examine the relationship between the world and the self responsible for this previous understanding, then to move to an even higher level by analyzing this previous examination, and so on. So much so that: “Each move towards greater objectivity creates a new conception of the world that includes us within its limits, together with our previous conception” (ibid., p. 10, author’s translation). We thus recognize our multi-layered structure of an infinity of Möbius strips ordered according to an increasing hierarchy of connections, and with it apparently the same limit in principle placed on the objectivity of any vision of the world: the nonexistence of the position of a last-resort observer, in the sense of an observer who would be free of any subjectivity and would consider the entire complex socio-cognitive network of individual actors of which we propose a model. In fact, as Nagel writes: “The subjectivity of consciousness is an irreducible feature of reality, without which we could not do physics or anything else; and in any credible vision of the world, it must occupy a place as fundamental as matter, energy, space, time and numbers”18 (ibid., p. 12, author’s translation). This irreducible character of the subjectivity of consciousness comes from the fact that, while we gradually detach ourselves from the contingencies of the self at each passage from a given level to the higher level during our exploration of the concentric spheres of reality, this detachment is never absolute: whatever level we stand at to consider reality thus conceived, it is still from a point of view tinged with subjectivity that we consider it. Nagel sets out the position of skepticism in this regard: “The idea of objectivity seems to destroy itself.
The goal is to form a conception of reality that includes us and our point of view among its objects, but it seems that whoever forms this conception, whatever it may be, will not be included in it. It seems to follow that the most objective conception we can form will necessarily rest on a subjective basis that it will not examine, and that, since we can never abandon our own point of view, but simply modify it, the idea that we approach reality

18 In a section significantly entitled “The incompleteness of objective reality” (op. cit., pp. 33–36), T. Nagel returns several times to this point: “no objective conception of the mental world is likely to contain it in its entirety” (op. cit., p. 33), “any objective conception of reality must include a recognition of its own incompleteness” (ibid., p. 34), etc. (author’s translations).

from the outside, through each successive step, has no basis” (ibid., p. 83, author’s translation). Nevertheless, Nagel argues in favor of moving beyond such a position: “Even if an objective understanding can only be partial, it is in our best interest to try to extend it, for a simple reason. The search for an objective understanding of reality is the only way to develop our knowledge of what is, beyond the way it appears to us. Even if we must admit the reality of a certain number of things that we cannot grasp objectively, as well as the irreducible subjectivity of certain aspects of our experience, the search for an objective concept of the mind is quite simply part of the general search for understanding” (ibid., pp. 34–35, author’s translation). Hence, he devotes a short chapter of his work to what he calls “the objective self” (op. cit., pp. 67–81). This concept is quite difficult to grasp because of its paradoxical nature: it seems to contradict the irreducible character of the subjectivity of consciousness on which Nagel has insisted so much in the previous pages of his book. Indeed, this objective self would accede to an a-centered conception of the world that would include all the innumerable subjects of consciousness, placing them on a roughly equal footing because none would occupy a privileged metaphysical position: this self should be able to process experiences from any point of view and understand the world from an outer rather than an inner point of view: “The fundamental step that gives birth to it is not complicated, and we do not need very refined scientific theories to explain it: it is simply the step of conceiving the world as a place that includes within itself the person that I am, as simply another of its contents – in other words, of conceiving myself from the outside. I can therefore step back from the unreflective perspective of the person I thought I was.
Then comes the step of conceiving from the outside all the points of view and experiences of this person and others of their species, and considering the world as a place where these phenomena are produced by the interaction between these beings and other things” (op. cit., p. 78, author’s translation). This concept of objective self brings together subjective and objective points of view in that it places us both in and outside the world, making each of us “both the logical center of an objective conception of the world and a particular being in this world that occupies no central position whatsoever” (ibid., p. 79, author’s translation).

According to Nagel, each of us would be both an ordinary person and a particular objective self, the subject of a perspectiveless conception of reality. It is therefore necessary to refer to these two aspects from the sole point of view of the objective self, conceiving a world that contains the ordinary person that it also is: “It is this subject without a perspective that constructs an a-centered conception of the world by projecting all perspectives within the content of this world” (ibid., p. 77, author’s translation). The position of the third party that we occupy in building a model of a complex socio-cognitive network of individual actors in which everyone is both observer and observed seems to fit within this conception of a subject without perspective: our network is a-centered, and all perspectives are projected into the content of the world thus formalized. Certainly, our position as an observer observing the observer of the observed actors only gives us access – as does each of theirs as observers of a region of the network and of themselves contained therein – to a more or less extensive part of the network. However, precisely, the sufficient condition for the objectivity of our observation is that the statements to which it leads us can ideally be taken up by all actors, wherever in the world their point of view is situated. In this regard, we would like to qualify Nagel’s conception of the objective self: by cultivating an objective self – “rather austere” because of the effort to eliminate any subjectivity that must be accomplished in order to achieve it through this search for objectivity (op. cit., p. 78) – we do not claim to be devoid of any perspective, but only that the perspective that is ours, because of the irreducible subjectivity of our consciousness, can perfectly well be appropriated by all.
We try to detach ourselves from a singular perspective in order to tend towards a universal one, and thus aim to produce a statement whose asymptotic universality would rest on the fact that it could be produced, without significant change, from all points of view. As we saw from the outset with Nagel, with whom we fully agree on this point, the distinction between subjective and objective points of view is a matter of degree. However, rather than speaking, as he does, of a point of view from nowhere, we prefer to describe our universal point of view as a point of view from everywhere, in the sense that everyone can strive to move away from their singular perspective and try to reach it by tending towards an objectivity and universality that remain the asymptotic horizon of any discourse. Finally, let us underline with Nagel that an a-centered conception of the world is a conception towards which different people can converge, so that there is a close link between objectivity and inter-subjectivity (op. cit., p. 78).


7.4. On an alleged “ancestrality’s paradox”

It is now important to examine more precisely the nature of this link between objectivity and inter-subjectivity, as it occurs in the context of a challenge to the notion that the subjective dimension of time is first in relation to its objective conception. This challenge rests on the “ancestrality’s paradox” mentioned by some authors, such as E. Klein, who invokes it on several occasions, notably in one of his best-known works (2007, pp. 77–81) and up to his most recent (2018, pp. 52–53), to affirm the priority of objective time over subjective time: since “ancestral events” (let us note this noun) are given (let us note this qualifier), i.e. prior to the appearance of consciousness, or even life, we would be led to “think of a time in which consciousness itself could emerge. How then can we conceive the emergence of consciousness in time if time needs consciousness?” (2007, p. 80, original author’s use of italics, author’s translation). Klein states in this regard that “philosophical responses to this paradox exist, and are certainly multiple. Nevertheless, in a recent book, Quentin Meillassoux showed that all end up coming up against an aporia” (2007, pp. 80–81, author’s translation). Let us examine in detail the argument by which Q. Meillassoux (2006) is said to have shown the aporia mentioned by Klein. This argument is subtle, and it is necessary to follow its progress step by step to see the double confusion on which it is based. It is contained in the first chapter of a book whose purpose is to show that only one thing is absolutely necessary: that the laws of nature are contingent. This chapter, appropriately entitled “L’ancestralité” (ancestrality) (op. cit., pp. 13–38), begins by recalling the distinction between “first qualities” and “second qualities”, the terms of which, Meillassoux tells us, come from Locke, but whose principle is already present in Descartes’ work.
The term “second” refers to the sensible qualities that are not in the things themselves, but in our subjective relationship to them (the burn of the fire in my finger is not in the candle), while the first qualities are supposed to be inseparable from the object: they belong to it, whether I fully grasp this object or not (ibid., pp. 13–16). Meillassoux argues that the first qualities are those that can be mathematized: “all that of the object that can be formulated in mathematical terms, it makes sense to think of as a property of the object itself. Anything that, from the object, can give rise to mathematical thought (a formula, a digitization), and not to a perception or a sensation, makes sense as a property of the thing without me, as well as with me” (Meillassoux 2006, p. 16, original author’s use of italics, author’s translation).


This thesis, which he describes as “pre-critical”, consists of assuming the possibility of discriminating between the properties of the world falling into these two categories of qualities, and it has apparently been untenable at least since Kant. It is indeed naive to believe that one can think anything while ignoring the fact that it is always us who think it. And since the Kantian “in itself” is inaccessible, the notion of truth-correspondence, which makes the conformity of a statement to the object – defined as a similarity or adequacy – the condition of truth, must give way to that of truth-consensus. The difference between objective and subjective representation therefore lies in the difference between two types of subjective representations, those that are universalizable and those that are not, and intersubjectivity, when it builds consensus within a community, replaces the adequacy of the representations of a solitary subject to the thing itself. This is particularly true of scientific discourse, in that it is composed of universalizable statements: “Scientific truth is no longer that which conforms to an in-itself supposedly indifferent to its givenness, but that which is likely to be shared by a scholarly community” (ibid., p. 18, author’s translation). Meillassoux then refers to correlationism as the position according to which we have access only to the correlation of thought and being, and never to either of these terms in isolation, which makes it impossible to consider subjectivity and objectivity independently of each other. Even more so: “not only must we say that we never grasp an object ‘in itself’ [...], but we must also argue that we never grasp a subject that is not always already related to an object”19 (ibid., p. 19, author’s translation).
He then describes the figure of reasoning corresponding to such a position as the “correlational dance step”, which ultimately asserts the primacy of the relationship over the related terms, and which would represent the dominant particle of modern philosophy, its true “chemical formula” (ibid.). In the 20th Century, the two main “environments” of the correlation were, according to Meillassoux, consciousness and language, which he describes, following F. Wolff (1997), as “world objects”, in the sense that they “make the world” because “everything is within”: to think anything, we must be able to be aware of it and be able to say it; we cannot therefore leave consciousness or language, and we are locked within them; but also because “everything is outside”: having consciousness is always having consciousness of something, just as speaking is necessarily speaking about something, and consciousness, like language, is thus a window opening onto the world (op. cit., pp. 20–22). It is this circle of correlation that Meillassoux proposes to break out of by supporting the existence of first qualities, and it is precisely objective time that encourages him to do so, because he ranks it among the latter by invoking several dates in this respect: those of the origin of the universe (-13.5 billion years), the formation of the Earth (-4.45 billion years), the origin of terrestrial life (-3.5 billion years) and, last but not least, the origin of Man (Homo habilis, -2 million years). We thus find the “paradox of ancestrality” as stated by Klein, because these dates clearly show that some statements produced by experimental science concern “events” prior to the advent of life and consciousness. How can we grasp the meaning of scientific statements that explicitly relate to world data that are posed as predating any human form of relationship with the world (ibid., pp. 24–25)? Meillassoux then distinguishes two main modalities of correlationism (op. cit., pp. 26–27): its transcendental version poses the correlation as unsurpassable because it is eternal, and leads to a metaphysics of an eternalized mind facing the perennial givenness of being, for which the ancestral statement poses no problem, because any event would be a given-to, even the formation of the universe. On the contrary, strict correlationism cannot consider the hypothesis of such an ancestral witness to be legitimate, since it maintains that no correlation exists independently of its incarnation in individuals.

19 The first part of this proposal refers to I. Kant, and the second part refers to the concept of intentionality introduced at the beginning of the 20th Century by F. Brentano and then taken up in a critical way by E. Husserl and phenomenology. Nevertheless, for the latter as for F. Brentano, this concept means that mental states (perceiving, believing, desiring, fearing), as well as having an intention in the common sense of this term, are always directed towards some object under some description, whether or not this object exists outside these mental states.
It is to this correlationism, which poses the correlation as unsurpassable but from a speculative and non-transcendental point of view, that Meillassoux addresses the following question: what interpretation is correlationism likely to give of statements about an ancestral reality – ancestral statements (ibid., p. 26)? To answer this question, he takes a detour, showing first that the meaning of ancestral statements poses no problem for a dogmatic philosophy such as Cartesianism. With regard to an event prior to the emergence of terrestrial life, such as the period of accumulation of matter that gave rise to the formation of the Earth, it would make no sense in this type of philosophy to say, for example, “it was very hot”, since no observer was able to experience it directly. However, it would be meaningful to say, for example, “the temperature was very high”: whereas a quality such as heat, inherent in the presence of a living being (therefore subjective and second), would be incongruous here, a quality such as temperature, formulated in mathematical terms and designating an objective property of the event (therefore first), would be perfectly admissible, even though no observer was present to experience it directly. In this perspective, the referents of ancestral statements, although in the past, can be considered real as soon as they are validated by experimental science (op. cit., pp. 27–30). According to Meillassoux, from a correlational point of view, such an interpretation would be impossible to accept literally. A supporter of this view would probably accept that such an event occurred so many years before humanity’s appearance, but he would immediately add that it occurred for humanity. An ancestral statement would thus have an immediate, realistic meaning, and a more original, correlational meaning: what the correlationist cannot accept would ultimately be that the realistic meaning of such a statement constitutes its ultimate meaning, which would deny the very existence of the correlationist meaning (ibid., pp. 30–31). Meillassoux then proposes to show that this position conceals nonsense, because according to him, “an ancestral statement has no meaning unless its literal meaning is also its ultimate meaning” (ibid., p. 35, original author’s use of italics, author’s translation). To this end, he uses the notion of the archi-fossil, which he previously defined as: “the material support from which the experiment is carried out, leading to the estimation of an ancestral phenomenon – for example, an isotope whose rate of radioactive decay is known, or the light emission of a star likely to provide information on the date of its formation” (Meillassoux 2006, p. 26, author’s translation). According to him, the correlationist would of course point to the self-contradiction of such a notion, which consists of the fact that the archi-fossil thus appears as the givenness of a being prior to givenness: “‘Givenness of a being’ – the whole point is there: the being is not prior to the givenness, it gives itself as prior to the givenness. This is sufficient to demonstrate that it is absurd to consider an existence prior – chronologically, moreover – to the givenness itself. For givenness is first, and time itself only makes sense as always already engaged in Man’s relationship with the world” (ibid., p. 32, original author’s use of italics, author’s translation).
For the correlationist, the two levels of the approach to ancestrality would refer to the doubling of the term “givenness”, according to which the being gives itself as pre-given: at the immediate level, forgetting the original character of the givenness leads to naturalizing it by making it a property of the physical world, so that the being presents itself as pre-given; at a deeper level, the being-thought correlation logically precedes any empirical statement concerning the world, and the being gives itself as pre-dating the relationship. Thus, there is indeed an immediate, realistic and derived level of meaning that deals with a chronological anteriority of what is over what appears, but there is also, at an original and deeper level, a logical anteriority of the givenness over what thus gives itself in it, of which the previous chronological anteriority is a part. Accepting this articulation between these two levels of meaning leads to abandoning the belief in an accretion of the Earth preceding the appearance of humanity in time, in favor of another statement according to which the scientific community has objective reasons to consider that this accretion preceded this appearance by so many years (Meillassoux 2006, pp. 32–33). Thus, the question of objectivity is explicitly raised, and since I. Kant we have known that objectivity can no longer be defined in reference to the object itself. According to Meillassoux, it should now be considered in reference to the possible universality of a statement that would be objective: it would thus be the intersubjectivity leading to a consensus within the scholarly community concerned that would guarantee the objectivity of the ancestral statement. However, from the perspective of this truth-consensus: “An ancestral statement is true, according to the correlationist, in that it is based on a present experimentation – based on a given fossil material – and universalizable (verifiable de jure by everyone). It can therefore be said that the statement is true, in that it is based on an experimentation that can de jure be reproduced by all (universality of the statement), without naively believing that its truth would come from an adequacy with the actual reality of its referent (a world without givenness of the world). To put it another way: to grasp the profound meaning of fossil data, we must not, according to the correlationist, start from the ancestral past, but from the correlational present. In other words, we need to make a retro-projection of the past from the present” (ibid., pp. 32–33, original author’s use of italics, author’s translation).
According to Meillassoux, claiming that one must go from the present to the past in a logical order, and not from the past to the present in a chronological order, in order to understand the fossil would obviously be an untenable position, and to see this, it would be sufficient to ask the correlationist whether or not the accretion of the Earth has taken place (ibid., p. 34). According to him, the latter would answer yes in one sense, because the scientific statements indicating this event are objective, i.e. verified intersubjectively, but no in another sense, since the referent of these statements cannot have existed in the way of its naive description – as uncorrelated to a consciousness. We would thus arrive at this statement that Meillassoux describes as “quite extraordinary”, and which would reveal the aporia against which the correlationist position would finally come up: “The ancestral statement is a true statement, in that it is objective, but it is impossible that its referent could actually have existed as this truth describes it. It is a true statement describing an impossible event as being real, an ‘objective’ statement without a conceivable object. In short, to put it simply: it makes no sense” (Meillassoux 2006, pp. 34–35, original author’s use of italics, author’s translation). The countersense that the correlationist imposes on the ancestral statement by retrojection would result from the duplication of meaning that we have seen above, which, according to Meillassoux, would suppress meaning while believing to deepen it. There would therefore be an “irremediable realism of the ancestral statement: this statement has a realistic meaning, and only a realistic meaning, or it has none” (ibid., p. 35, original author’s use of italics, author’s translation). To accept the archi-fossil would be to disqualify the correlate, and vice versa, so that no compromise would be possible between them. Faced with the archi-fossil, all idealisms would converge and become equally extraordinary, and correlationism, which is one of them, would thus come dangerously close to creationism (ibid., p. 36). Finally, the real issue of what Meillassoux proposes to name henceforth “the problem of ancestrality” would consist of knowing “how to think about the capacity of experimental sciences to produce a knowledge of the ancestral” (ibid., p. 37, original author’s use of italics, author’s translation). And since the scientific discourse thus put into play is characterized by its mathematical form, this question raises that of “the capacity of mathematics to speak of the Great Outside, to speak of a past deserted by Man as by life” (ibid., original author’s use of italics, author’s translation). The new title of the “problem” of ancestrality could suggest that this author’s brilliant demonstration had stripped it of its paradoxical dimension, but this is not the case, because this problem now conceals “the paradox of the archi-fossil”, which consists of asking “how can a being manifest the anteriority of being over its manifestation”?
This “pure contradiction”, rather than paradox, would in turn be an illusion, says Meillassoux in fine, who recommends in this regard that we keep an equal distance from the two ways of not seeing ancestrality as a problem represented by naive realism and correlationist subtlety (ibid., pp. 37–38). A laudable intention, but one that seems to us to remain a pious wish, because if Meillassoux is clearly very distant from correlationist subtlety, he is certainly not “at equal distance” from it and from naive realism. Indeed, the aporia that he considers to have brought to light is in our view only apparent, and this for at least two reasons. The first, and the less serious, is the epistemologically naive identification that he makes between the notions of intersubjectivity and objectivity. As we argued above following Nagel, the distinction between the two points of view to which these two notions refer is certainly only a question of degree, but that does not mean that it disappears, and above all it does not imply that the truth of a statement (scientific or not) would ipso facto be guaranteed by the existence of a consensus within the community concerned (scholarly or not). First, because the notion of consensus is itself extremely vague (is it a partial or general consensus? Beyond what threshold in the number of its members is it legitimate to guarantee the truth of a statement on its basis?) and because the empirical distinction between consensus and controversy always involves an element of arbitrariness, any consensus containing the seeds of controversy and any controversy eventually turning into consensus (Ancori 2008a). Secondly, because the intersubjectivity that produces universalizable statements is only a necessary condition of the objectivity that designates its asymptotic horizon. The sufficient condition of objectivity, which Meillassoux associates with an irremediable realism of the ancestral statement, would, in our opinion, require the possibility of a detachment from any particular subjectivity that is both total and simultaneous among all members of the community in question, in such a way that the intersubjectivity of the latter would be identified with an exact superposition of subjectivities thus become unspecified. Finally, because Meillassoux’s repeated appeal to an “experimental science” supposed to constitute a homogeneous whole ignores at least four decades of research that has never ceased to think about and illustrate the extreme diversity of the very notion of experimentation, depending on whether we are talking about physics, chemistry, engineering or the life sciences – not to mention the “present universality of the verification of ancestral statements”, also naively invoked as a guarantee of their objective value (op. cit., p. 35).
But the second reason to doubt the relevance of the notion of “paradox” in relation to ancestrality is even more serious, because it is based on a permanent confusion between the existence of a change (in this case the accretion of the Earth) and the joint definition of a unit and a measurement protocol concerning this change, which dates it and gives it the status of an event (determining the period during which this accretion occurred, i.e. the 4.56 billion years measured according to a precise protocol using as archi-fossil an isotope whose rate of radioactive decay is known). In our opinion, neither Klein nor Meillassoux sees that, far from being given, these so-called ancestral events are in reality changes that are constructed as events by human consciousness, which takes their measure by dating them. The fact that this construction concerns changes affecting objects that exist, or have existed, independently of any observer is in no way incompatible with the fact that it remains entirely carried out by a human observer. In reality, there is therefore no paradox here, and ontological realism is in no way contradictory to epistemic constructivism. As F. Grison writes about a science that identifies and analyzes many such objects: “It is said that geology is the science of rocks, a science that exists independently of the people who learn from it, the science of the object. Yes, but it is a science shaped by a school of thought, according to a certain world view. Geology is therefore both a discourse on rocks, related to rocks, an established discourse that does not depend on the speaker, and the discourse of a group of people, geologists, who have gradually shaped it and are making it evolve; it is thus also a discourse situated (socially and historically) in a framework of thought”20 (2011, p. 21, author’s translation). In short, change may well have existed in the environment before it became the environment of living beings, and then of humans themselves, but in the absence of any human measurement of that change, it cannot be said that there was time – back to Aristotle. This human measurement of time is very old, and like the shared recognition of the event thus measured, it has always been based on a form of consensus. This consensus has long been local, such as the one that brings together, on the one hand, actors A1 and A2 sharing categories C1, C2 and C3, and on the other hand, actors A3 and A4 sharing categories C3, C4 and C5, in our example of a network with five actors. This was the case during most of the history of the Western Middle Ages, when these local times were punctuated by the ringing of church bells, before merchants, concerned with regularity and coordination in their affairs, replaced them with the noise of the clock – from 1284 in Florence, where the marking of the beginning and end of the working day by the liturgical hours of Terce and None was gradually abandoned, the clock finally replacing them for this purpose in 1354 (Le Goff 1977, pp. 46–79). This concern for regularity and coordination found its intellectual answer in the mathematization of time, in the same way as Galileo’s mathematization of space, which thus established time, like space, as a variable in the study of movement (Le Ru op. cit., pp. 31–39). Now cut off “from the physical process of which it was the symbol in order to be of value only to itself” (ibid., p. 39, author’s translation), time became an absolute symbol in 1687 with Newton’s Philosophiae naturalis principia mathematica. The path was therefore paved for the emergence of a psychological category gradually known to more actors (such as category C3 in our example), broadening the consensus by encompassing and going beyond previous partial consensuses. This emergence then materialized in the equally gradual adoption of Greenwich Mean Time (or mean solar time) from 1847 onwards. Finally, a general consensus on the punctuation of time, centered on one or more categories (such as C3) now shared by all actors, was established in 1972 with the adoption of Coordinated Universal Time (UTC).

20 As N. Farouki says in a more pithy way, it is important “to never lose sight of the fact that the human psyche is itself at the origin of scientific explanations” (op. cit., p. 4, author’s use of italics, author’s translation).
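The passage from local to general consensus described above can be sketched in a few lines of code. This is purely illustrative: the actor names A1–A4 and categories C1–C5 follow the book’s five-actor network example (only the four actors explicitly listed are modeled), and the set-intersection logic is our own simplifying assumption, not the author’s formal model.

```python
# Illustrative sketch only: actor names A1-A4 and categories C1-C5 follow the
# book's five-actor network example; the set-intersection logic is our own
# simplifying assumption, not the author's formal model.

# Two local consensuses: A1 and A2 share C1-C3, A3 and A4 share C3-C5.
local_1 = {"A1": {"C1", "C2", "C3"}, "A2": {"C1", "C2", "C3"}}
local_2 = {"A3": {"C3", "C4", "C5"}, "A4": {"C3", "C4", "C5"}}

def shared_categories(group):
    """Return the categories held in common by every actor in the group."""
    return set.intersection(*group.values())

# Each local consensus punctuates time with its own shared categories,
# while a general (UTC-like) consensus emerges around the single category
# shared by all actors once the two groups are brought together.
everyone = {**local_1, **local_2}
general_consensus = shared_categories(everyone)  # the singleton {"C3"}
```

On this toy reading, the adoption of UTC in 1972 corresponds to the moment when the intersection over all actors becomes non-empty and is recognized as such by everyone.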


As we have seen previously21, this adoption can result from a communication that leads, from a given date (say 1972), to the emergence of full representations, and it also has an impact on the intensive dimension of learning. We know that the emergence of these collective representations is always accompanied by an increase in the degree of anchoring of the corresponding combinations of psychological categories, compared with that which these combinations had when they were idiosyncratic, and the formation of such representations thus constitutes the cognitive support of the genesis of conventions or consensuses as collective structures generated by the composition of individual actions – compositions that lead to the emergence and stabilization of plural subjects. And as suggested by G. Bateson, the degree of anchoring of these representations can be high enough that they disappear from the conscious level of the actors’ memory, to the point of giving them the illusion of an exogenous origin, whereas they are in fact socially constructed. This, in our opinion, is the socio-psychological illusion that leads to giving primacy to objective time over subjective time. The measurement of this objective time was for a long time of a non-mechanical nature (night, day, use of the globe, intervals, normalization of hours), before becoming mechanical, from clocks to atoms. In this respect, it should be noted that, always and everywhere, this measurement has involved comparing the speeds of two more or less precise and rigorous physical processes22, which leads us to think, with F. Kaplan (2004), C. Rovelli (2012, pp. 91–111; 2015a, pp. 59–73; 2015b, pp. 161–179; 2018, pp. 140–150), B. Guy (2011, 2015, 2018) and many others, that “objective” time does not exist as a primary concept, but only as a concept derived from the more fundamental concept of movement.
We only measure what we perceive of relative movements: the movement of object A is taken as a standard for the movement of object B, or the opposite. As Rovelli writes: “This means that we never measure time itself. We always measure physical variables A, B, C... (oscillations, beats, and many other things), and we always compare one variable with another. So we measure functions A(B), B(C), C(A), etc.”23 (2012, p. 100, original author’s use of italics). However, whether for a physical model or for a network of actors, it is sometimes useful to introduce a variable t: in physics, all equations for the variables are written as functions of this unobservable “real time”, so these equations reveal how things change as a function of t, and from there we calculate how these variables change relative to each other, which makes it possible to make predictions that are then compared with observations (ibid.). In the following chapters, we will see that such an introduction is necessary within the framework of our model to analyze certain phenomena, such as the “slowing down” of time or, on the contrary, its “acceleration”, which the flesh and blood actors who populate our societies say they feel, but that it is useless for the analysis of another phenomenon often reported by these same actors: “déjà-vu”.

21 See Chapter 5, section 5.1.

22 On all this, see Lippincott (2000) and Savoie (2018a, 2018b). It should be noted (Koyré 1948; Clavelin 1996, pp. 423–425; Le Ru op. cit., pp. 33–36) that Galileo’s experiments in time measurement, particularly when he wanted to verify the accuracy of the law of the square of time in uniformly accelerated motion (e = (1/2)gt², where e is the distance traveled, g is the acceleration of gravity and t is the time taken to travel the distance e), were extremely rudimentary. In fact, he used the beats of his pulse (or the water flow of a clepsydra) as a unit for measuring the movement of a ball on an inclined plane – hardly a model of rigor and precision in the experimental measurement of a movement! On the most recent gains in accuracy in time measurement, see Pajot (2018a, 2018b).

23 As B. Guy, whom we thank here, pointed out, this idea had already been expressed by E. Mach in his book La Mécanique, exposé historique et critique de son développement, published in 1904 by Hermann and reprinted in 1987 by Éditions J. Gabay. See p. 216 et seq., in particular p. 219: “We choose an arbitrarily chosen movement to measure time”. And p. 220: “It is simply a question of establishing the mutual dependence of phenomena” (author’s translations).
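Rovelli’s remark that t is never itself measured can be made concrete with a toy simulation. The sketch below is purely our own illustration, under the assumption of two idealized processes (a pendulum and a free fall): a hidden parameter t generates the data, but the observer keeps only one variable expressed against the other, so t drops out of what is actually recorded.

```python
# A toy illustration (ours, not Rovelli's): a hidden parameter t drives two
# physical processes, but the "observer" only records one variable against
# the other, so t never appears in the data that are kept.

def swings(t, period=2.0):
    """Process A: number of completed swings of a pendulum (clock-like)."""
    return int(t // period)

def fall_distance(t, g=9.81):
    """Process B: distance fallen from rest, e = (1/2) g t^2."""
    return 0.5 * g * t ** 2

# The hidden "real time" t generates the observations, then disappears:
observations = [(swings(t), fall_distance(t))
                for t in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]

# What remains is only a relation B(A): for each swing count, the fall
# distances recorded "meanwhile" - one physical variable against another.
relation = {}
for a, b in observations:
    relation.setdefault(a, []).append(b)
```

In the terms of the chapter, `relation` is a function B(A) in Rovelli’s sense: the standard (the pendulum) and the measured process (the fall) are interchangeable, and t survives only as a bookkeeping device used to generate the model, exactly as in the physical equations discussed above.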

8 Déjà-vu and the Specious Present

The previous chapter showed us that our concept of the propensity to communicate between actors offered an adequate formalization of the concept of the specious present introduced earlier by W. James, then developed by the phenomenological tradition and finally taken up by our modern cognitive sciences. Following C. Castoriadis, we then recognized the need to have a concept of time as such in order to think of subjective and objective time as ‘time’. We then proposed to build this concept within the framework of the self-organizational paradigm to which H. Atlan has devoted almost half a century of research, stressing following the possibility of a perfect permutation between the level of organization from the network actors as they are observed, and the level of organization immediately above which the observer of a more or less extensive region of the same network is located. It then appeared that one of the possible formalizations of the concept of time as such consisted of the topological form known as the Möbius strip. However, in principle, there is nothing to prevent further multiplication of the network’s organizational levels, so that the network’s moderator – located at a third hierarchical level – is linked to the two previous ones by a Klein bottle. And we can finally imagine the existence of an infinite number of potential hierarchical levels of organization conferring on time as such the layered structure of an infinite number of Möbius strips or given according to an increasing hierarchy of connections. This perspective naturally led us to consider T. Nagel’s problem, namely how to combine the point of view of a particular person within the world (the point of view of an actor observed in the network) with an objective view of the same world (the point of view of an observer of the network) that could include the particular person and his or her point of view. 
Instead of the expression “point of view from nowhere”, we prefer the expression “point of view from everywhere” which seems more faithful to the thought of Nagel himself: a point of view which strives ever more to strip itself of the singularities that anchor it in the subjective order, and thus aims at an asymptotic form of objectivity. With the evolution towards this form of

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.


objectivity, the point of view can be described as “everywhere” to the extent that it increasingly tends to be legitimately adopted by everyone. Finally, we closed that chapter with a discussion of the “ancestral paradox” invoked by some to deny the primacy of subjective time over objective time, on the grounds that the formation of the universe, of the solar system and of the Earth itself occurred well before the appearance of Homo habilis. We believe we have convincingly shown that this alleged paradox in reality rests on a double confusion: on the one hand, a confusion that purely and simply identifies intersubjectivity with objectivity, whereas the latter is only the asymptotic limit of the former; on the other hand, one that does not distinguish the occurrence of a change from its perception, and a fortiori from its measurement, which transforms this change into an event. The present chapter continues directly from these reflections. We begin with a brief history of the interpretations that have been given – from the middle of the 19th Century to the present day – to the phenomenon of déjà-vu, that psychological disorder which leads us to believe that we have already experienced, in an undetermined past, the situation we are currently living. The data collected on this subject by several authors allow us to list 11 main characteristics of this phenomenon. We first draw on the outline of an explanation suggested by these characteristics to identify the psychological cause of déjà-vu, as well as of the phenomena of depersonalization and sudden strangeness that result from this same cause in more serious and more benign forms respectively. We then propose to identify, within the framework of our model, the common cause of these three forms of mental disorder.
In our opinion, this cause consists of a temporary difficulty experienced by the individual actor when it comes to assigning a current representation-occurrence to a psychological category that has relevant semantic links with those already in his memory. This interpretation makes it possible to explain all 11 characteristics identified with regard to déjà-vu, whose neurological aspect consists of a momentary neuronal or synaptic failure at the level of the circuit that leads from the processing of the memory trace by the cortical areas to its integration into the hippocampus.

8.1. A history of interpretations of the déjà-vu phenomenon

The phenomenon of déjà-vu, also known in the past as “false recognition”, is an extremely old psychological experience, but the expression that refers to it was only truly established in 1896, when the French psychiatrist François-Léon Arnaud (1858–1927) considered, in a communication to the Société médico-psychologique de
Paris1 that, until an explanation of this phenomenon was provided, it was appropriate to opt for this neutral formulation2. Reflections on this theme were frequent in the 19th Century in different types of texts – books on memory, descriptions of psychiatric cases, specialized neurological journals, medical manuals, poems and novels – and in 1898, Eugène Bernard-Leroy, a doctor trained at the Salpêtrière Hospital in Paris, sought to systematize them. To this end, he prepared a questionnaire, distributed in a thousand copies, which gave rise to 67 answers, of which

1 This communication was the subject of an article (Arnaud 1896), of which Alfred Binet wrote a report (Binet 1896). As Remo Bodei (2007, p. 16) points out, the expression “déjà-vu” was introduced by Émile Boirac in his book (Boirac 1876) and then explicitly developed by Ludovic Dugas (1894).
2 For a history of the discovery of the phenomenon (if not the term) of déjà-vu, from the Pythagoreans to the time when it became central in the medical, then literary and finally philosophical fields, from the first half of the 19th Century to the beginning of the 20th Century, see Bodei (2007, pp. 16 sq.). From Alphonse de Lamartine (1790–1869) to Dante Gabriel Rossetti (1828–1882) and Paul Verlaine (1844–1896), 19th Century poetry abounds in testimonies on this subject (ibid., pp. 25–40), also treated in other literary genres – such as L’Éternité par les astres, published by Auguste Blanqui in 1872, or Walter Benjamin’s Paris, Capital of the 19th Century. From the 1880s onwards, the debate on the question of déjà-vu left the culture of the elites and the debates of specialists (in literature, cosmological speculation, psychology, philosophy) to reach a wider public, until the Great War (ibid., pp. 59 sq.). According to D. Draaisma, Saint Augustine already spoke in the 5th Century of falsae memoriae, and he referred precisely to the Pythagoreans, who had considered a millennium earlier that this phenomenon constituted proof of metempsychosis (2008, p. 215). According to R. Bodei, Saint Augustine was perhaps the first to take this phenomenon seriously, in that he saw in it a diabolical sign denying, in the image of the eternal return of the Stoics, what for him made the heart of Christianity: the novum, which emancipated us from destiny and opposed the return of the identical (op. cit., p. 18). As late as the middle of the 19th Century, and until the end of it, déjà-vu was interpreted by several authors (poets, novelists) as the manifestation of an “earlier existence of the soul”.
This interpretation had several variations. One of them stated that déjà-vu appeared at the points of intersection with a previous life, the latent memories of the latter resurfacing during a sudden coincidence with the present experience; another considered déjà-vu as proof of a much more radical hypothesis, according to which all our life would be repeated infinitely and in an identical form: déjà-vu would be a crack in time that would suddenly make us grasp the eternal return within our own existence. The English doctor Arthur Ladbroke Wigan, who had defined déjà-vu in 1844 as a “feeling of pre-existence”, radically rejected these explanations in terms of eternal return, seeing in the phenomenon rather a disorder of the brain (ibid., p. 222). H. Bergson excluded from false recognition the case mentioned by F. L. Arnaud, as well as other cases that all referred to a notion of eternal return: unlike déjà-vu – a sudden and short impression that surprises by its strangeness and localizes the illusory memory in an undetermined past – the subjects here found what they felt normal and continuous, and they often assigned this “hallucination of memory” to a specific date (1976, p. 112).


he took up about 50 in the thesis he defended at the Faculty of Medicine in Paris3. As the database thus compiled was too small to allow a proper statistical analysis, E. Bernard-Leroy limited himself to a summary overview of the trends that appeared there. In particular, he noted that the phenomenon seemed to occur quite frequently in adolescence (Draaisma op. cit., pp. 218–219). Let us retain this characteristic, well documented today by Alan S. Brown (2003), which is as important as Arthur Ladbroke Wigan’s observation that déjà-vu evokes a second, but never a third, time (ibid., p. 222). This phenomenon is particularly frequent in adolescence and evokes a notion of a second (and never third) time: beyond these two important traits, other characteristics concerning déjà-vu are derived from interpretations in terms of dreamlike visions of sleep (ibid., pp. 222–232). One of them in particular joins a whole family of hypotheses that have in common the idea that “the déjà-vu would be the memory – not of a previous life, but of something that, in some way, was once present in our minds” (ibid., p. 223, author’s translation). Thus, according to the psychologist James Sully (1842–1923)4, some dreams would be consistent with daytime experiences, and when this agreement was sufficiently great, these experiences would give rise to associations with elements of our dreams that would provoke the impression of familiarity inherent in déjà-vu. In his eyes, déjà-vu was a kind of negative of what Freud would later call day residues, integrated in the form of a fleeting fragment into the story of the dream. Sometimes buried for a very long time in memory, the dream would provoke a furtive feeling of repetition in the awake subject’s daily life, and the impossibility of dating this dream would be at the origin of the impression of a déjà-vu belonging to an undetermined past (Draaisma op. cit., p. 224) – a third characteristic to be retained about this phenomenon.

3 Entitled Étude sur l’illusion de fausse reconnaissance [identificirende Erinnerungstäuschung de Kraepelin] chez les aliénés et les sujets normaux, this thesis was published in book form in 1898 by Félix Alcan under the title L’Illusion de fausse reconnaissance. Contribution à l’étude des conditions psychologiques. As with the 1896 article, A. Binet wrote a review of it, also published in L’Année psychologique (1898, pp. 729–742), and H. Bergson considered this book “essential for anyone who wants to have a clear idea of false recognition” (op. cit., p. 120, note 1, author’s translation). This edition is available online (in French): www.histoiredelafolie.fr/psychiatrie-neurologie/eugene-bernard-etude-surlillusion-de-fausse-reconnaissance.
4 Illusions: A Psychological Study, Kegan Paul & Co., 1881. The original edition is available online: archive.org/details/illusionsapsych02sullgoog.


In the opinion of many psychologists, this hypothesis of latent dream memories was hardly convincing5, and W. James himself thought that these speculations about déjà-vu were exaggerated: he claimed that the expression “déjà-vu” should be taken literally, because he had himself experienced this phenomenon several times and had always managed to link the feeling of familiarity to a real memory, so that it was really a matter of seeing something that had already been seen. According to him, and for many authors: “We first become aware only of analogies with the previous situation, […] hence this confusing impression of a ‘before’. And as we focus, we discover an increasing number of differences, and as the original memory is completed, the sense of familiarity fades” (ibid., p. 224, author’s translation). This hypothesis of a déjà-vu caused by a partial analogy was given a psychoanalytical variant in 1930 by a pastor and psychoanalyst named Oskar Pfister (1873–1956). Based on the account of a young officer wounded in World War I, who linked his feeling of déjà-vu to a memory of his near drowning at the age of nine, O. Pfister, who was a Freudian, interpreted the role of the unconscious in the phenomenon as that of a defense mechanism: by searching the memory for a parallel event, the unconscious would remind us that the first time was not fatal, and thus calm us. Like W. James, he believed in a link between déjà-vu and the present content of memory, but for him the experience of this phenomenon had a function, and the similarity with the past would be its active component, whereas W. James held déjà-vu to be a fortuitous experience with no particular functional dimension (ibid., pp. 224–226). Let us keep in mind this fourth characteristic of déjà-vu, which consists of its connection with a previous experience of a real event.
A fifth characteristic of this phenomenon consists of a notion of a double image, resulting from totally outdated anatomical conceptions6 but which had the merit of
5 Pierre Janet thus considered in a 1905 article that false recognition is a clearly pathological state, “relatively rare, at least vague and indistinct, where one would have been too hasty to see a specific illusion of memory” (Bergson 1976, p. 113, author’s translation). A contrario, H. Bergson excludes psychasthenia, studied and documented by P. Janet in a 1903 book, from the field of false recognition, although the latter has a psychasthenic character, mainly because it occurs in people who have no other anomalies (ibid., pp. 113–114).
6 Thus Arthur Ladbroke Wigan, who considered déjà-vu to be the result of a brain disorder, argued that we actually have two brains, just as we have two eyes, each of them having an autonomous conscious life: when what we wrongly call our two “hemispheres”, separated by a corpus callosum that would be a barrier rather than a bridge, orient themselves towards the same object, our concentration would increase; conversely, one of these two organs could be left to rest while the other worked – often only one of them would be active. Just


suggesting that during the déjà-vu a clear image would form – that of the present – along with an almost identical image, but vague and belonging to an undetermined “before”. In this perspective, the psychiatrist J. Anjel suggested in 1878 the idea of a déjà-vu caused by an extremely brief disruption in the processing of sensory stimuli: “Perception requires us to integrate our sensory information into a coherent representation at the level of consciousness. Under normal circumstances, these two phases – Anjel called them perception and apperception – correspond so closely that we do not pay attention to the fact that they are actually two stages. But if, precisely between these two stages, a slowdown occurs, the sensory information is blurred when it must be transformed into perception. Consequently, we will have the illusion that what we are seeing has happened exactly as it did in the distant past” (Draaisma op. cit., p. 230, author’s translation). We will see later that this interpretation is very close to ours, all the more so since one of its variants supposes that the conscious self is accompanied by a “subliminal self”, an unconscious instance sensitive to stimuli that are too brief or too weak to be perceived and grasped by consciousness7: “So it can happen that we become aware of something with a fraction of a delay, when it has already entered our minds. This disrupts our sense of time […] and can lead to confusion: we then have the impression that what we are witnessing belongs to the present as well as to the past” (ibid., p. 231, author’s translation). Let us recap. The déjà-vu phenomenon at this point presents five main characteristics, the last of which leads to an interpretation that we will discuss later.
before a déjà-vu experience, this would be the case: a relatively weak sensory image would be formed; however, an unexpected disturbance of the environment, such as a scream, would activate the second hemisphere, and a much sharper image would then be formed. Other researchers then adopted this theory, in variants that D. Draaisma describes as even more “picturesque” (op. cit., pp. 226–230). After categorically rejecting any theory based on a hypothesis of cerebral duality – which, he said as early as 1908, was completely abandoned – H. Bergson declared that he believed “that false recognition implies the real existence, in consciousness, of two images, one of which is the reproduction of the other”, and he referred to J. Anjel’s interpretation in this respect (1976, pp. 118–119, author’s translation).
7 This interpretation was introduced by Frederic W. H. Myers (1843–1901), in a series of chapters published from 1892 to 1895 in the journal Proceedings of the Society for Psychical Research. His contributions to the psychology of his time, and his influence on the conceptions developed by W. James himself, were recognized by the latter in his eulogy (James 1901, pp. 380–389).
This is a
phenomenon that: 1) would be particularly frequent in adolescence; 2) would evoke a notion of a second (and never third) time; 3) would give the impression of belonging to an undetermined past; 4) would be linked to a previous experience of a real event; and 5) would consist of a double image, clear when it is that of the present, and almost identical but vague when it belongs to an undetermined “before”. This last characteristic leads to the idea of a déjà-vu caused by the slowing down of the process that leads from sensory information about an event to its representation through its integration in a coherent form in our consciousness. That said, the phenomenon of déjà-vu was later linked to two other disturbing psychological experiences (Draaisma op. cit., pp. 232–238). In 1904 and 1906, the psychologist Gerard Heymans wrote the reports of two investigations linking the déjà-vu phenomenon to another equally fleeting psychic experience: “depersonalization”, a sudden state that disappears just as suddenly, in which everything we perceive seems foreign to us, including ourselves – we then have the feeling of not speaking or acting, but of perceiving our own actions and words as inactive spectators8. Like Bernard-Leroy before him, G. Heymans based his reports on a questionnaire. He distributed it to his psychology students, of two different age groups, and then sent it, translated into German, to colleagues in Heidelberg, Bonn and Berlin. He finally collected 130 completed questionnaires, and the responses thus obtained revealed three additional characteristics with regard to experiences of déjà-vu and depersonalization: greater emotionality, very pronounced mood swings and an irregular work rhythm were all more frequent among subjects who responded “positively” than among those who did not.
In most cases, these experiences had taken place in the evening, when the subject was in the company of others but was not speaking at the time, was in a state of fatigue, and had just been studying tedious or unrelated topics – subjects whose concentration had slackened and whose psychic energy had diminished (ibid., pp. 232–235). Like greater emotionality, very pronounced mood swings and irregular work patterns, these three circumstances were also found in subjects experiencing the sudden strangeness of a word. The same circumstances thus held for all three phenomena: the plausibility of the suspected correlation between depersonalization, déjà-vu and the sudden strangeness of a word made the hypotheses centered on déjà-vu alone less credible, and Heymans saw in these phenomena the manifestation of one and the same process.

8 H. Bergson (1908) linked déjà-vu and depersonalization from his first pages, and he attributed the creation of the word to Ludovic Dugas, author of an article entitled “Un cas de dépersonnalisation” (Dugas 1898). The same L. Dugas had already explicitly developed the expression “déjà-vu”, introduced in 1876 by É. Boirac, in an article published in 1894 (see above, note 1).


He thus assumed that the feeling of familiarity of a perception was determined by associations between the current perception and past experiences, associations that make it possible to estimate the duration separating the two: the vaguer and fewer they appear, the longer the time elapsed between the current perception and the past memory seems. This is precisely what happens when there is a temporary decrease in psychic energy and concentration. The sudden strangeness of a word would thus be due to the lack of associative links between that word and semantic memory: the meaning of the word disappearing, only its sound would remain. As for depersonalization, it would correspond to a total lack of associations, so that all aspects of the situation, not just words, would cease to be familiar. Finally, déjà-vu would occur when, without being totally lacking, associations are weak and few in number – hence the illusion that a present experience is the memory of a very distant past (ibid., pp. 235–236). Ultimately, according to Heymans, the sudden strangeness of a word, déjà-vu and depersonalization could be placed on a graduated scale according to the importance of the corresponding psychological disturbance relative to the quantity of psychic energy available9. Thus, the first of these three phenomena would constitute the lightest form, because here only the association between the sound of the word and its meaning disappears. At the other end of the scale, depersonalization would be due to the disappearance of all associations with the known. Déjà-vu, finally, would occupy an intermediate position. Which of these three phenomena occurs depends, according to Heymans, on the amount of
9 The link between déjà-vu and psychic energy has been clarified on the basis of G. Heymans’ work: this phenomenon can occur when the level of concentration is normal or even high, provided that the available psychic energy is then reduced in quantity.
Thus, when we concentrate most of our psychic energy on a future task, for example a speech to be given, the present experience generates only weak associations: when we enter the crowded room waiting for our speech, putting our hand on the door can then cause a déjà-vu (Draaisma op. cit., p. 236). For his part, H. Bergson evokes the notion of a mental tone that would be graduated between two extremes: maximal when consciousness is stretched towards action, and minimal when it is relaxed in a dream space. In support of this notion of mental tone, he quotes the work of other authors, including G. Heymans, and suggests that the origin of false recognition should be sought in the sphere of action rather than in that of representation: “it is therefore in a lowering of the mental tone that the origin of false recognition should be sought” (1976, p. 122, author’s translation). While accepting the idea of a decrease in the general tone of psychological life, whether it is the psychic energy of G. Heymans or the “attentional tone” mentioned by Gabriel Dromard and Antoine Albès (1904), H. Bergson then hastened to point out the inadequacies of these authors’ theories, and proposed to find the origin of false recognition in a combination of a reduction in psychological tension and a doubling of the image, without artificially bringing these two theories closer together – the rapprochement having to occur on its own, he said, when these two directions are taken further (op. cit., pp. 123–124).


psychic energy available. On this basis, Heymans issued two predictions: 1) déjà-vu should be more frequent than depersonalization, which is indeed the case; and 2) the psychological profile of persons subject to depersonalization should be much better defined than that of persons subject only to déjà-vu, which was verified in the 1906 survey (ibid., pp. 236–237). The merits of Heymans’ work – verifiable conclusions, explanations less speculative than other interpretations – prompted D. Draaisma and one of his colleagues to resume the analysis of his data. Practically the same significant correlations were highlighted, but only Heymans’ first prediction was corroborated by this recent work (ibid., p. 238).

8.2. Déjà-vu and the specious present: an interpretation

As we have pointed out on several occasions, one of the fundamental consequences of the associative nature of human memory is the following: for a representation-occurrence to be integrated into the memory of the individual actor while that memory maintains its overall coherence, the actor must succeed in arranging it in at least one generic representation (at least one psychological category) that has relevant semantic links with those already in his memory. If this necessary condition is met, we can speak of a successful inscription of the representation-occurrence in his memory, whose representation in our model is then that of the specious present of the individual actor concerned. Our central hypothesis is therefore that the three problems under examination – depersonalization, déjà-vu and the sudden strangeness of a word – have as their common origin a temporary difficulty, experienced more or less intensely by the actor in question, with regard either to the construction of this link, if the category concerned is new in his memory, or to its recall, if the category in which to store the representation-occurrence already appears in it10.
Let us take up again the 11 main characteristics of the phenomenon of déjà-vu, which is intermediate between depersonalization and the sudden strangeness of a word, since it testifies to a decrease in the quantity of available psychic energy smaller than the former but greater than the latter in the actor who experiences it. Let us begin with the fifth characteristic, which assimilates this
10 In this sense, Bodei evokes the image of “a grain of sand that for a moment stops the tested functioning of a gear […] From this angle, reality itself manifests not as something given that would simply precede the subjective experience, but as a subjectively open work site, an unfinished construction which, through the small crack opened by the déjà-vu, shows the effort, not always successful, that everyone makes to preserve in time the global meaning of their own life, to maintain their entire personal identity by placing it on the horizon of a world with sufficient coherence” (op. cit., p. 16, original author’s use of italics, author’s translation).


phenomenon to a double image, clear when it is that of the present, and almost identical but vague when it belongs to an indeterminate “before” – a double image whose cause Anjel attributed to a slowing down of the process leading from sensory information about an event to its representation through its integration in a coherent form in our consciousness. Let us replace the terms “perception” and “apperception”, used by this author to distinguish the phase of reception of sensory information from the next phase, which is for him that of representation, by “representation-occurrence” and “successful recording in memory”, and this characteristic fits perfectly with our hypothesis. The first phase discerned by Anjel reports the current receipt of sensory information (the occurrence of a representation-occurrence), and the second the successful registration, in the memory of the individual actor concerned, of the category constructed or recalled on this basis. Whatever its cause – the relatively small quantity of psychic energy then available to this actor (G. Heymans), the decrease in his attentional tone (G. Dromard and A. Albès), or the weakness of the general tone of his psychological life (H. Bergson) – this state causes a difficulty in constructing or recalling categories linked in a semantically coherent way to the combinations of those already appearing in his memory. The déjà-vu he then experiences lasts as long as this difficulty persists, which ultimately makes this phenomenon the manifestation of a brief delay in successful registration compared to a normal situation where this registration is almost instantaneous. While it analyzes the phenomenon of déjà-vu from a psychological perspective, such an interpretation finds a remarkable echo in current neuroscience, which belongs to a long tradition born in the 19th Century11. Indeed, J.
Hughlings Jackson (1835–1911) had already drawn attention to the importance of context for brain activity in 1879, and he had shown in particular the place of organizational capacity in the production of memories, as well as the important role of affectivity in this matter (Rosenfield 1989, pp. 72–73)12. It was then up to S. Freud (1856–1939) to give this last aspect,
11 Our interpretation of déjà-vu thus reinforces the radical monist position we adopted, following H. Atlan, with regard to the Mind–Body problem (see Chapter 3, section 3.2). It should be noted that Antonio Damasio (2003) is also very close to such a position. In a very different register, the same is true, in a sense, of D. Abram (2013).
12 D. Draaisma (2008, pp. 243–244) also relates the experiments of J. Hughlings Jackson and W. Penfield, and then refers to the 1994 publication in the journal Brain – in which J. Hughlings Jackson had published many articles a century earlier – of a report on a long series of measurements on 16 epileptic patients, intended to accurately locate the epileptic focus in preparation for an intervention. These measurements involved the introduction of electrodes into the brain, and the electrical stimuli thus produced reproduced in patients the same phenomena as those already observed by J. Hughlings Jackson – sometimes in terms of déjà-vu, sometimes in terms of strangeness. Comparing these measurements with the patients’ accounts, it appeared that it was the amygdala and hippocampus, two structures belonging to the limbic system, that very


which is at the center of the emotional investment linked to “transfer”, its full dimension. I. Rosenfield himself agrees with this: “If affectivity is absent, the interlocutor does not identify or recognize the simple statement of the episode experienced. There is no memory without affect. Emotions are essential to the constitution of memory, because they link it to an organized whole and place it in a sequence of events, in the same way as notions of time and order, which condition the recognition of memory as such, and not as the thought or vision of a single moment unrelated to the past13” (1989, pp. 73–74, original author’s use of italics, author’s translation). Rosenfield also relies here on the indirect confirmation of Freud’s views by the experiments carried out in 1933 by the neurosurgeon Wilder Penfield (1891–1976), and completed half a century later by those of P. Gloor. All show that the electrical stimulation of the limbic centers, considered essential for emotional experiences, makes it possible to bring to a conscious level the perceptions developed in the neocortex (ibid., pp. 153–154). In short, even after elaboration in the temporal neocortex, what is experienced by the senses seems to have to be transmitted to the limbic structures for the experience to have an immediate character. Rosenfield quotes on this subject the following passage, emphasized in the text published by Gloor about his experiments in 1982: “The limbic contribution specific to this process could be to assign an affective or motivational role to a percept. This limbic continuity is probably the prerequisite for a percept to be consciously experienced
ancient part of the brain that has direct connections with the part of the brain responsible for vigilance and the regulation of emotions, which were then stimulated (op. cit., pp. 244–248).
13 Bodei also points out the extreme importance of a troubled emotional context for the occurrence of déjà-vu, and infers from it a hypothesis centered on this crucial feature. According to this hypothesis, “déjà-vu is the result of an unconscious reparation to compensate for something that troubles us, a reparation acting in two, partially complementary ways: by reviving the desire for life in the face of death and the loss of everything that is dear to us, or by provoking an effort of self-immunization in the face of a painful past” (op. cit., p. 36, original author’s use of italics, author’s translation). Developed further, the cause of the trouble surrounding the déjà-vu would be “the instantaneous and involuntary collision of two incompatible opposites, each of which would like to abolish the other, while at the same time being unable to do without the other: transience and eternity, ‘never again’ and ‘always again’, past and present, nothing and everything, loss and fullness of life, pain and joy, nostalgia and return home” (ibid., p. 37, author’s translation). Déjà-vu would thus reflect a conflict, arising “in moments of abandonment of the vital force, between the desire for a full life where nothing would be lost and the perception of its impracticability” (ibid., p. 38, author’s translation).

182

The Carousel of Time

or evoked; and this could imply that any consciously perceived event has an emotional dimension, however small it may be” (ibid., p. 154, author’s translation). In the wake of the more recent work of A. Damasio (1995, 1999, 2003, 2017) and many others, cognitive sciences now confirm the fundamental role of the hippocampus, prefrontal cortex and amygdala in the coordination of emotions and cognition. The long-term storage of information is thus based on the reorganization of the functional properties of neural networks involved in these brain areas: the memory trace, first treated by the cortical areas, is quickly integrated into the hippocampus, and the recent memory thus formed only lasts over time if it is gradually stabilized through a dialogue between the hippocampus and the neocortex, particularly the prefrontal cortex. During this dialogue, the hippocampus emerges, at least in part, in favor of the neocortex, which then becomes the center of our stabilized memories (Pereira de Vasconcelos 2018)14. We can therefore reasonably assume that a momentary neuronal or synaptic failure at the circuit level that leads from the treatment of the memory trace by the cortical areas to its integration into the hippocampus constitutes the neurological aspect of the cognitive disorder that déjà-vu manifests15. We will then admit, following Haymans, that the same process is at work in a more pronounced way during the phenomenon of depersonalization, except that the latter, in accordance with its prediction, is less frequent than déjà-vu, because it is linked to a more significant and probably less frequent decrease in psychic energy, the opposite being true for the sudden strangeness of a word. And this explains also the observation that these problems occur most of the time in the evening, when the actor is in the company of other people, but then does not speak and is in a state of fatigue, or has just studied tedious topics or those unrelated to each other. 
Indeed, not having the floor gives him every opportunity to try to categorize satisfactorily the information he perceives in his environment, but his state of fatigue at this late hour of the day reduces his psychic energy, so that he has difficulty putting his own representations-occurrences into psychological categories in a way that ensures their successful recording in his memory.

14 On the registers of very short-term memory, also called sensory registers, which store information for about a second, and of short-term memory, which stores information for about 10 seconds, see A. Petitjean (op. cit., pp. 47–61). 15 According to R. Bodei, an explanation of déjà-vu in terms of a temporary dysfunction of brain mechanisms would not be sufficient to clarify the qualitative aspects of personal experience (op. cit., p. 40). Taken up again later by this author (pp. 116–118), this anti-reductionist position is quite common, if not consensual, in the debates surrounding the Mind–Body problem. For our part, we reject both anti-reductionism and reductionism on this subject, in the name of the radical monism we share with H. Atlan and A. Damasio (see above, note 11 and Chapter 3, section 3.2).

Déjà-vu and the Specious Present


And it is hardly necessary to stress that the study of topics without any link between them is unlikely to put the individual actor in a psychological disposition favorable to instantly succeeding in such an inscription. Finally, subjects who exhibit high emotionality and very pronounced mood swings, or who adopt an irregular work rhythm, see their concentration slacken and, above all, their psychic energy reduced, so that they find themselves in psychological conditions conducive to a sudden feeling of strangeness of a word, to déjà-vu and, in the worst case, to a feeling of depersonalization. Silent and tired, or cognitively dispersed: these conditions of the situation experienced by the actor are likely to delay the integration of one or more categories into his individual memory, and this delay results in the occurrence of one or other of the three cognitive disorders in question, which is all the more likely as the actor’s state of fatigue is marked. Indeed, stress – and any fatigue is a form of stress – specifically targets the brain areas involved in the coordination of emotions and cognition, and thus potentially radicalizes the cognitive effects that can result from a neuronal or synaptic failure between the processing of the memory trace by the cortical areas and its integration into the hippocampus. However, this state of fatigue is not necessary for these phenomena to occur; it merely aggravates them. Our hypothesis is therefore consistent, for the moment, with seven of the eleven characteristics that we had noted with regard to the three disorders concerned, whose degree of severity seems to be due to a greater or lesser decrease in psychic energy: maximal during a depersonalization synonymous with the disappearance of all associations with the known, minimal when it is the meaning of a single word that evades registration in memory, and intermediate between the two in the case of déjà-vu16. 
The degree of weakness of the actor’s psychic energy probably 16 Such an interpretation is close to that proposed by A. Didierjean (op. cit., pp. 73–75), according to which the phenomenon of déjà-vu is due to a discrepancy between one cognitive mechanism, familiarity, which informs us of the presence or absence of information in our memory, and another cognitive mechanism, recall, which leads to the evocation of the memory. In general, these two mechanisms work together, in parallel, but sometimes they are out of step: the feeling of familiarity is strong, but recall fails to carry out an effective search, and the phenomenon of déjà-vu occurs. As for H. Bergson, the starting point of his 1908 interpretation of what he called “false recognition” is an inversion of perspective: “The important question is not […] why it arises at certain times, in certain people, but why it does not occur at all times” (1976, p. 129, author’s translation). It is therefore important to begin by examining how memory is formed under what we may call normal conditions. According to him, memory would be “something that our consciousness possesses or that it can always catch up with, so to speak, by pulling at the thread it holds: memory comes and goes, indeed, from consciousness to unconsciousness, and the transition between the two states is so continuous, the limit so little marked, that we have no right to suppose between them a radical


corresponds to the degree of difficulty he experiences in carrying out what we have called a successful inscription, and therefore to the degree of severity of the psychic disorder that then afflicts him. The sudden strangeness of a word is its most benign form, because only a single category then fails to be constructed or recalled in such a way as to give rise to a successful inscription. With déjà-vu, there are several categories that the actor temporarily fails to construct or recall in this satisfactory way. Finally, depersonalization seems to testify to a complete reversal of perspective, in the sense that it is then the individual actor’s entire memory that temporarily seems

difference of nature” (ibid., pp. 129–130). We thus rediscover the notion of levels of consciousness that we had adopted following J.-F. Le Ny, but the almost instantaneous character that our interpretation attributes to the successful inscription of a psychological category in memory is erased by Bergson in favor of a perfect contemporaneity of the perception and the formation of memory. Indeed, he writes: “Let us agree […] to give the name of perception to any consciousness of something present, both to internal perception and to external perception. We claim that the formation of memory is never later than that of perception; it is contemporaneous with it” (ibid., p. 130, original author’s use of italics, author’s translation). Perception would be to memory what the real object in front of the mirror is to its image seen behind the mirror (ibid., p. 136), and the present moment would be precisely the moving mirror that constantly reflects perception as a memory, so that the latter would first be a memory of the present (ibid., p. 137). However, perception would always be polarized by an intention to act, which would put us in a posture of attention to life that would make us grasp the present in the future on which it encroaches rather than in itself: for Bergson, as for the physiologists of his time, memory only selects and makes conscious what is useful for action and the future. Perception would therefore always be ahead of the memory of the present, but when the momentum that drives it stops, the memory catches up with it and the present is known at the same time as it is recognized. False recognition would therefore be “the most innocuous form of inattention to life” (ibid., p. 151, author’s translation). 
This telescoping of perception and memory, to which he finally attributes the phenomenon of false recognition, is at the opposite end of the spectrum from the occurrence of a delay in successful memory inscription, in which we see for our part the psychological cause of the déjà-vu phenomenon. This difference in interpretation is essentially due to the fact that Bergson considers memory and consciousness as two separate entities – the present “splits at any moment, in its very gushing forth, into two symmetrical jets, one of which falls back to the past [memory] while the other leaps forward [perception]” (ibid., pp. 131–132, author’s translation) – whereas we consider consciousness to be only a property of a memory system. For two interesting criticisms of the Bergsonian interpretation, see Bodei (op. cit., pp. 67–76) and Virno (1996, 1999). For a critique of P. Virno’s criticism – “an intelligent interpreter”, according to R. Bodei (op. cit., p. 77), but whose identification of memory with perception, on the one hand, and of power with action, on the other, on the basis of a comparison of Bergson (1930) with Bergson (1976, 1908), would lead to bold and complicated constructions – see Bodei (op. cit., pp. 77–79). On some of Bergson’s views of memory, and the provocative character of some of his other conceptions for contemporary neuroscience, see Gallois and Forzy (1997).


to escape it, thus making it impossible to successfully integrate any category whatsoever. We thus have four characteristics left to examine, starting with the particular frequency of the phenomenon of déjà-vu in adolescence (Brown op. cit.). How else can this be explained if not by invoking the relatively small volume of the adolescent’s memory compared to that of the adult? As a result, adolescents encounter more categories that seem completely new to them than adults do: the extensive dimension of their learning matters more than its intensive dimension, while the opposite gradually takes hold as they age. It seems plausible that the higher frequency of information-rich messages received in these early years makes it more difficult to transform a representation-occurrence into a category semantically coherent with the current state of the adolescent’s memory than with that of an adult. Indeed, in adolescence, such categories are by definition more often to be constructed than to be recalled, whereas the opposite gradually becomes true with age, and such a construction is probably more demanding in terms of psychic energy than a simple recall17. In addition, during adolescence, the anchoring of routines synonymous with savings in cognitive resources18 is obviously less frequent and less pronounced than in adults.

Constructing new categories rather than recalling known ones, and doing so in a cognitive universe relatively less well provided with diverse routines: the conjunction of these two characteristics implies that the process leading to a successful inscription of the representation-occurrence in memory consumes more cognitive resources in adolescents than in adults, even though adolescents do not benefit from the abundance of such resources that adults derive from the anchoring of routines. It is therefore not surprising that the brief delay in successful inscription, which we believe is the cause of déjà-vu, occurs more frequently in adolescents, who draw on more limited resources than adults, who are better equipped in this area. Another characteristic that we have recognized in this phenomenon is that those who have experienced it evoke in this respect a notion of a second (and never a third) time. As we will show in more detail below, this characteristic is explained very simply in our model. In the absence of déjà-vu, as is normally the case, or when this cognitive impairment is overcome by a successful inscription in the individual actor’s current memory, the latter’s state is different from what it was before this inscription. And this new state of individual memory means that the actor concerned has no reason to experience this disorder in the same form and with the same content as before, so that the very notion of a third time is by nature foreign to the phenomenon of déjà-vu. Finally, the last characteristic that remains to be examined is interpreted even more immediately in the context of our model: if the subjects of a déjà-vu situate the fuzzy image that they then perceive in an indeterminate “before”, unlike that of the present which seems clear to them, it is quite simply because the former has precisely the characteristic of not succeeding in integrating itself into the individual memory, unlike the latter.

The fuzziness and indeterminate “before” of the one are due to the impossibility of its successful inscription, and therefore of its dating, whereas the sharpness of the other attests to its inscription in this memory, and this marks the passage of the actor’s specious present from a given date to the following date, thereby making it possible to take a time step in the entire network.

17 Initiated in 1945 by the work of Adriaan de Groot (1965), research on expert cognition has shown that the difference between the expert’s and the novice’s performance, for example in chess, is due to a difference in the organization of their knowledge in memory. Thus, William G. Chase and Herbert A. Simon (1973) devised an experiment revealing that, after observing and memorizing for five seconds a board with 25 pieces arranged in a configuration borrowed from a real game, the novice was able to correctly replace about four pieces, while the expert (a master) correctly replaced 16. This better performance was due to the fact that the expert grouped these 25 pieces into a much smaller number of categories (chunks), thus reducing the complexity of the perceived scene and speeding up the analysis of the situation. This mode of operation was then highlighted in most fields of expertise (music, sport, etc.), and Fernand Gobet and Herbert A. Simon thus insisted, in a series of articles, on the importance of the history of the game scene, beyond its instantaneous segmentation, and on the existence of organization plans, or models (templates), in the experts’ memory (1996a, 1996b, 1996c, 1996d, 1996e, 1998). “Older” actors have developed more expertise in various fields than “younger” actors, such as adolescents, but as Merim Bilalic, Peter McLeod and Fernand Gobet (2008a, 2008b) have shown, the automatic nature of this mode of operation – the templates being operated in a routine mode – can be to the detriment of a more creative functioning (op. cit., pp. 102–110). 18 See Chapter 2, section 2.2, Hypothesis 4.13. In analyzing the phenomenon of reminiscence, D. Draaisma notes that all subjects asked to relate memories of events that mattered to them mention memories relating to the period of their adolescence. One of the explanations he considers justified in this regard is that “between the fifteenth and twenty-fifth years, we generally experience more things that are worth remembering” (2008, p. 284, author’s translation).

9 The Acceleration of Time, Presentism and Entropy

We will first show that our model is that of a network whose time is historically constructed by the events that occur within it – inter-individual communications, intra-individual categorizations, forgetting by erasure or by reinforcement – and not the other way around: far from taking place within a temporal framework that is always already there, these events produce time. It follows that the network is always at the end of time: each of its specious presents is a “now”, and only this now exists, as in the models of some physicists who reject the Einsteinian conception of a block universe. Because of the finite discriminatory power of individual actors in their interpretation of the world, this historically constructed time is irreversible rather than irrevocable. Historically constructed as an irreversible succession of “nows”, the time of our network is not always felt by individual actors as flowing in a uniform way. More specifically, it is mainly an acceleration of time that is spoken of today. Nevertheless, some authors do not hesitate to combine this phenomenon with a slowdown of time in order to explain the form of fading of history that we are said to experience today (Baudrillard 1992), and still others see this slowdown leading to a particular regime of historicity: a presentism, in which the disappearance of the past and the future would give way to the perfect immobility of an eternal present (Hartog op. cit.). How can we interpret this feeling of acceleration that many of our contemporaries say they feel? Is it contradictory, on a purely psychological level, with a slowdown of perceived time leading to presentism? On the contrary, we believe that a relationship of complementarity links the latter to the acceleration or deceleration of perceived time, and we will thus show that our model also leads to a socio-historical interpretation – beyond the psychological explanation that it over-determines – of the form that all these phenomena take in our current societies.

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.


Finally, in relation to the irreversibility of the historical time thus constructed, we will propose an interpretation of the content that the concept of entropy could have in our model. After a brief presentation of the genesis of the entropy concept, we will first identify, in this context, an expression of the law that the macroscopic version of this concept constitutes, and then we will show, in this same context, that the microscopic version of entropy takes the form of a theory.

9.1. Historical time, irreversibility and the end of time

The temporality whose three facets were linked in the previous chapter gives time as such a historically constructed and irreversible character. The time of our network is historically constructed, in the sense that it is the changes observed in individual memories that move the network from one state to the next, and not the reverse. Here, the book of time is not written in advance, but is written as and when cognitive changes affect at least one individual actor as a result of communication, categorization or forgetting1. Any observed change of this type irreversibly adds one more step to the network’s course, via the additional registration or deletion of categories (or combinations of categories) in the actors’ memories2. We thus encounter a problem of circularity raised by many authors, such as A. Gonord, who notes that “it is a very singular character of time to lead almost systematically to circular reflections” (op. cit., p. 26, author’s translation), this circularity being “at work as soon as one tries to clarify time by the relationship that the subject, object or language can have with it” (ibid., p. 27, author’s translation). As E. Husserl had already pointed out in paragraph 7

1 We thus refute the possibility of the existence of a time without change and affirm a strong relationalist position about time: contrary to substantialism, which has asserted since Newton that time is a substance independent of the objects and events that take place in it, for relationalism “time is nothing else than a system of relations between things and events that are then not conceived as being in time, but as being constitutive of time – time is thus reduced to objects and events” (Benovsky 2018a, pp. 8–9, original author’s use of italics, author’s translation). Our relationalism can be said to be strong in the sense that, more than a time reduced to events, it envisages a time actively constructed by them. The controversy surrounding the idea of time without change was opened by Sydney Shoemaker (1969), extended by Graeme Forbes (1993) and then taken up in a new light by Robin Le Poidevin (2010). On the relationship between a relationalist theory and a substantialist theory of time, which would be less deadly enemies than twin brothers, see Benovsky (2011). 2 Let us recall again that we do not distinguish, in this part of the model, the individual memory of an actor from his cognitive repertoire. Useless here, this distinction will however be fundamental for the analysis of a scientific revolution, in the sense of T. S. Kuhn (op. cit.), as we will develop it in the following chapter.


of his Leçons pour une phénoménologie de la conscience intime du temps (cited by Gonord, ibid.): “The perception of time itself presupposes a time of perception […]. Time is not an object like any other for a subject who thinks about it. Because to think about it takes time, and the snake bites its tail again” (ibid., author’s translation). This is true, but we have just seen precisely that our model identifies a time step in the network with a movement that erases a given specious present while simultaneously recording the immediately subsequent specious present – a movement generated by a change in the individual memory of at least one observed individual actor. However, we have seen, following J.-M. Guyau (1902, p. 12) and N. Farouki (2000, pp. 65–66), that such a change is necessarily accompanied by a reflexive movement to grasp the perceived event through which the specious present is accomplished. The consciousness of duration then ceases to be an immediate perception and becomes a more or less symbolic construction, and the observed becomes an observer of itself. It is this reflexive movement that modifies his memory, whose state is therefore no longer identical to the previous one. Our representation of the actor’s specious present at the moment when he takes himself as the object of representation at date t is not identical to that of the specious present of the actor who has just integrated his representation as an object into his individual memory and thus makes the date t + 1 happen; and since they are those of two successive states of his specious present, these two representations are also those of two successive states of the network3. As M. 
Merleau-Ponty had sensed with the notion of field of presence that he introduced in his Phénoménologie de la perception (Merleau-Ponty 1945), and that he took further in Le Visible et l’invisible (Merleau-Ponty 1964) by proposing in this posthumous work the image of a whirlpool or spiral to represent time, time as such is apprehended here as a Möbius strip to which a helical movement is applied – which can indeed give the illusion of a circle. However, this pseudo-circle is in reality a coil, and if each turn drawn by this movement preserves the ontological circularity, which is that of the actor considered as a being, it nevertheless activates the vertical irreversibility, which is that of the temporality of the specious present of this being, always indexed to a precise date in the network. Ultimately, time as such can therefore be represented as a Möbius strip representing a specious present in the network, and whose connections observed during each new turn leading 3 See on this point B. Ancori (2010). Our position is therefore clearly perdurantist, like that of W.V. Quine (1950), David Lewis (1986) and Mark Heller (1984): just as he is spatially extended by his cognitive repertoire and semantic memory, our individual actor is temporally extended (Benovsky 2018a, p. 7). On the question of temporal parts and the maintenance of identity over time, see Peter Van Inwagen (2000).


from this specious present to its immediate successor would each time be at a level immediately higher than the previous one, thus drawing the temporal sequence of the constellations of specious presents constituting the trajectory of our network. These consecutive Möbius strips would therefore not connect in a usual three-dimensional space, but in a space irreversibly tied to the fourth dimension, which is the vertical temporality of the network. Because of this helical movement impressed on the Möbius strips representing the individual actor’s time as such at each time step of the network, the self of each of us is indeed the strange loop of which D. Hofstadter (2008) speaks. We know that each state of the network thus incorporates, as it happens, the trace of all the previous states, of which it is then the only mode of subsistence, and we saw in the previous chapter that this “rear-guard end” that limits the beginning of the block of duration constituted by the specious present of each individual actor is accompanied by an “avant-garde end” that defines its end. This constructivist conception of duration therefore means that the network is always at the end of time: its irreversibly gone past is only preserved in the traces retained by the multiple specious presents of today, and there is absolutely nothing beyond these specious presents once they have become such4. It is quite remarkable that this conception closely meets that of some physicists who, in search of a bridge between quantum mechanics and relativity theory, reject the conception of a “block universe” in which past, present and future would coexist from all eternity and have exactly the same status. On the contrary, according to the conception of “dynamic space–time” proposed by these physicists, the course of time would make space evolve, permanently producing “nows”.
These events would constitute the only reality, which would remove any objective existence from the future: the appearance of the “now” would be local and would form the edge of time, its present end, so that the future would never exist as if it were already there, but would be permanently created as the “now” advances (Elitzur and Dolev 2005a, 2005b, 2007)5. Finally, for the latter, as in our model, and in those of many other physicists, such as C. Rovelli (2012, 2015a, 2015b, 2018) or B. Guy (2011, 2015, 2018), change is at a more fundamental conceptual level than space and time. 4 In D. Abram’s highly original monism, time and space are not separated, and these two conceptual dimensions are linked to places. Thus, the future is associated with the horizon and the past with the interior of the Earth: the former postpones its presence by constantly moving away as we approach it, and the latter refuses its presence by modifying itself with each of our attempts to discover it (2013, pp. 235–286). 5 In the same passage of his Confessions (XI, 20) where Saint Augustine anticipated the notion of the specious present, he wrote: “It is manifest and clear that there are neither times future nor times past”, and that it is improperly said that “there are three times: past, present and future”.


Crystallized in our concept of the propensity to communicate, the indissoluble link between the space and the time of the network, constructed simultaneously by the perception of events resulting from inter-individual communication, intra-individual categorization or forgetting, implies that space is no more reversible than time. For if it were, so would time be, and for that to happen, the changes made to a given state of the network and leading to the next state (or states) would have to be exactly those that would then be erased by a forgetting of the first type leading the network to a subsequent state strictly identical to the initial given state. The probability of such a sequence occurring is obviously extremely low, especially since this possible reversibility of space–time would have, by construction, to be produced across the whole population of individual actors: all other things being equal, the larger this population, the lower this probability6. This irreversibility of historical time must be understood in the sense of N. Georgescu-Roegen (1969, pp. 85–86), who distinguishes irreversibility from irrevocability within the class of “non-reversible” processes. Unlike an irreversible process (such as the growth and fall of a tree’s leaves), an irrevocable process cannot pass more than once through a given state (such as the entropic degradation of the universe, as conceived by classical thermodynamics). However, the choice between these two qualifiers appears to be linked to the power of discrimination of the observer of the process in question: the lower this power, the more the term irreversibility must prevail to the detriment of that of irrevocability, and vice versa. Indeed, let us imagine that this power is infinite and that the map thus becomes the territory – to use

6 Such reversibility may be possible at the individual level and in a very short-term perspective, where it would be another possible explanation for the déjà-vu phenomenon (Draaisma 2008, pp. 213–252). Still at the individual level, but in a possibly longer temporal perspective, the theme has been used extensively in the literature since H.G. Wells, particularly in the specific register of science fiction, which exploits at leisure all the temporal paradoxes thus raised, and where the reversibility of time generally seems undesirable. Indeed, like the hero of Stephen King’s 22/11/1963 (2013), science fiction characters are generally reluctant to change the past to which they return, because they quickly realize that the future world thus created would be even worse than the one from which they came – in this sense, they are Leibnizian characters who always inhabit the best of all possible worlds. By contrast, on the current reconstruction of a material and cognitive context identical to that of a past in which we would like to see ourselves projected again, see Time and Again, written in 1970 by Jack Finney, which Stephen King considers “THE greatest time travel story of all time” (op. cit., p. 937), as well as the interesting account by Martin Suter in Le Temps, le temps (2013). For a philosophical interpretation of the paradoxes inherent in time travel, see the series of articles in the third part of Benovsky (2018b, pp. 371–487); for a scientific interpretation of the paradoxes of time in physics, see the series of articles in the second part of Christophe Bouton and Philippe Huneman (2018, pp. 115–235).


Alfred Korzybski’s famous formula (2007). There would then be strictly no leaf similar to any other in the eyes of the observer, and the process of growth and fall evoked by Georgescu-Roegen would be perfectly irrevocable, and not irreversible. This process can therefore only appear irreversible to the extent that the observer’s power of discrimination is finite: two leaves that might seem different to him if he had a higher power of discrimination, and that he would certainly hold to be different if that power were infinite, appear to him sufficiently similar to be placed in the same equivalence class, or even in the same psychological category. In this sense, the process of growth and fall always concerns the “same” leaf in his eyes, and hence it seems irreversible. On the other hand, if the observer had an infinite power of discrimination, he would not feel the passage of time. Questioning what meaning the time we perceive as passing could have when time does not exist at the fundamental level, Rovelli writes: “The idea that allows us to find macroscopic time from a fundamental atemporal theory is that time appears only in this thermodynamic statistical context. Another way of saying it is that time is an effect of our ignorance of the details of the world. If we knew all the details of the world perfectly, we would not have the feeling of time passing” (2012, pp. 104–105, original author’s use of italics, author’s translation). The irrevocability of time and the absence of a sensation of its flow thus seem to constitute the two possible consequences of an infinite power of discrimination with which the observer of a process would be endowed. 
In our model, such a power would mean the erasure of the granular character that we know to be that of the network space, due to the disappearance of the division of the world into distinct categories, in favor of a continuum synonymous with total blurring in the observer’s interpretation of the world. And to this infinite power of discrimination would correspond an infinite memory, condemning its owner – like the unfortunate patient whom A. Luria called Veniamin and whom he followed for nearly 30 years – to be totally incapable of abstracting and categorizing anything about the experiences he was living (Luria 1995, pp. 193–305); or in the image of the imaginary character named Funes, whom Jorge Luis Borges described as perceiving everything yet being incapable of general ideas – this blatant incapacity constituting, as for Veniamin, the counterpart of his exhaustive perception (Borges 1974, pp. 114–117).

The Acceleration of Time, Presentism and Entropy

193

9.2. On the sensation of acceleration of time and presentism

It may seem paradoxical to speak of the acceleration of time, since acceleration is precisely defined as an increase in the speed of a moving object, and this speed is itself defined as a relationship between a distance and a unit of time. Hence a physicist like E. Klein has mentioned this paradox on many occasions, which it would be tedious to list here, and stressed the absolute nonsense that such a notion represents in his eyes. That said, it is clear that this paradox only exists if the concept of time considered covers only the time described as “objective”. However, our discussion of an alleged “ancestral paradox” has precisely dispelled it by affirming the primacy of “subjective” time over this objective time. The same is true for the phenomenon of time acceleration, whose apparently paradoxical character disappears as soon as what is designated is an increase in the subjectively felt speed of time, measured in calendar time units. We will see that this acceleration is in no way contradictory with a slowdown of time leading to presentism: everything here is a matter of point of view. We will successively propose two complementary interpretations of this phenomenon, both of which follow logically from our model of the structure and evolution of our network of individual actors. The first is purely psychological, and extends some of the concepts developed above about déjà-vu. The second is in a historical-sociological register, and shows that the psychological and ahistorical determinants of the perceived acceleration of time thus interpreted have recently been over-determined by the primacy given to communication over cognition in our developed societies.

9.2.1. A psychological interpretation of the acceleration of time

In a chapter on the phenomena of reminiscence, D. Draaisma gives pride of place to the story of a man named Willem Van den Hull. 
Born in Haarlem in 1778 and deceased in 1858, Van den Hull wrote a voluminous autobiography, begun in 1841 and completed in 1854. This autobiography records memories from 74 years of Van den Hull’s life in separate notebooks, some 800 pages of about 400 words each (2008, pp. 253–291). Draaisma points out in particular that the themes highlighted in it have the effect of expanding or contracting time: in his text, Van den Hull’s love for a certain Lina dilates time to the point of transforming a short decade into “eleven long years”; and during the years 1841–1848, when his existence seemed particularly monotonous to him, time seemed to narrow to the point that he gave the summary of these seven years of his life in a single sentence (ibid., p. 288). It thus appears that the subjectively felt changes in the speed of time flow are rooted in the event richness that marks a period whose length is measured in


calendar time units. Looking back, a period relatively rich in events seems longer than it was objectively, and it is thus a slowdown of time that is then felt. The opposite occurs for a period relatively poor in events, where it is an acceleration of time that the actor concerned subjectively feels. Such observations seem counter-intuitive, because our daily experience of duration generally goes in the opposite direction: when nothing happens, time seems endless to us, while periods rich in various events seem to flash by like lightning7. Where does this inversion of the felt speeds of time come from, depending on whether we consider it from a retrospective point of view or whether we are immersed in it8? The next chapter of Draaisma’s book, which gives its title to the book as a whole, provides some answers to this last question (2008, pp. 293–329). A first element results from an observation already formulated in an interrogative way in 1890 by W. James in his Principles of Psychology, and which Draaisma finds again in Ernst Jünger’s Traité du sablier, published in 1954 – the latter was then approaching 60, with the feeling that life accelerates with age. How is this possible, wondered James, if hours and days always remain identical to themselves (ibid., p. 294)? P. Janet’s 1877 explanation of this phenomenon is notoriously wrong: if, as he says, the apparent length of a certain period of life were really proportional to the total length of that life – a year representing one-tenth of the life of a 10-year-old child, and one-fiftieth of that of a 50-year-old man – the time felt could never slow as age progressed, which is contradicted by Van den Hull’s autobiography. As for
7 D. Draaisma notes in the literature multiple references to this link between the perceived speed of time flow and the richness in events of the period whose length is measured in terms of calendar duration: both Marcel Proust’s À la recherche du temps perdu and Thomas Mann’s La Montagne magique abound with examples of what Draaisma calls J.-M. Guyau’s “inner perspective” (2008, pp. 303–307). In 1890, the latter already drew attention to the diversity of events as a product of our perception of differences: “Remove [the] perception of differences and you will remove time. One remarkable thing in dreams is the perpetual metamorphosis of images, which, when it is continuous and without sharp contrasts, abolishes the feeling of duration” (op. cit., pp. 18–19, original author’s use of italics, author’s translation). This observation is in line with the one reported at the end of the previous section by C. Rovelli, who links the infinite power of discrimination that the observer of a process would have to the absence, in that observer, of a sense of the passage of time. Indeed, we indicated on that occasion that such a power would mean the disappearance of the granularity of space-time in favor of a continuum that now seems to us similar to the one mentioned here by Guyau. Would our power of discrimination become infinite when it concerns the imaginary world of our dreams?
8 “We’re going on a week’s vacation. The days go by at full speed. And the last day is here, even before we realize it. Once back home, we have the impression of having been gone much longer than a week. So did time go faster or slower? If each day went faster than usual, how is it that, once added up, these ultra-fast days make a week that seems so much longer than seven days?” (Draaisma op. cit., p. 312, author’s translation).
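Janet’s proportionality hypothesis, as paraphrased above, can be written compactly (the notation is ours, not the book’s): the apparent length ℓ of one year of life at age a would be

```latex
\ell(a) \propto \frac{1}{a},
\qquad \text{e.g.}\quad \ell(10) = \tfrac{1}{10}\ \text{of the life lived},
\quad \ell(50) = \tfrac{1}{50}.
```

Since ℓ(a) is strictly decreasing in a, felt time could only ever accelerate with age under this hypothesis – precisely the monotonicity that Van den Hull’s autobiography contradicts.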


James, who shows more perceptiveness here, sees in this phenomenon a description, not an explanation, of the subjective acceleration of time and the apparent shortening of the years, which he attributes to a monotony of memory contents and to the simplification inherent in retrospective vision (ibid., p. 295). Let us retain from this last interpretation the role of memory in the construction of duration and rhythm, because it aligns with our point of view, which links the experience of time to what happens in consciousness, and thus reinforces our affirmation of the primacy of subjective time over objective time. And let us also keep in mind the notion of the simplification of retrospective vision mentioned by James, because we will see that it is one of the keys to answering our question about the inversion of the felt speeds of time according to the point of view, retrospective or current, at which we are placed. Guyau lists a whole series of factors influencing our perception of time and highlights the estimation errors to which they can lead. And above all, he follows in James’ footsteps with regard to the apparent shortening of event-poor years when we look at them from a retrospective point of view: “In our opinion, the apparent length of time appreciated at a distance increases with the number of sharp and intense differences seen in the recalled events. A year full of significant and diverse events seems longer. An empty and monotonous year seems shorter: impressions overlap and time intervals, merging into each other, seem to contract” (1998, p. 99, original author’s use of italics, author’s translation). 
The parallel with space that he draws immediately after this passage – he had said from the outset that the sense of space preceded that of time in our evolution9 – is enlightening on this point: just as an object appears to us farther away in space when we are separated from it by a certain number of objects that interpose themselves between it and ourselves as landmarks, while very clear objects seem closer to us, a memory seems to us all the older when other memories interpose themselves between it and ourselves, while those that are very clear seem more recent to us. 9 From the very first pages of his book, J.-M. Guyau affirms this primacy of space over time: “If, even in humans and especially in children, the idea of time remains very obscure compared to that of space, this is a natural consequence of the order of evolution that has developed the sense of space before that of time. We easily imagine space; we have a real inner vision of it, an intuition. Try to imagine time as such: you can only do so by representing spaces. You will have to align the successive phenomena, putting one on a point of the line, the other on a second point. In short, you will evoke a line of spatial images to represent time” (op. cit., p. 11, original author’s use of italics, author’s translation).


Hence, the years seem so long in youth and so short in old age: “The young are impatient in their desires; they would like to devour time, and time drags on. Moreover, the impressions of youth are lively, new and numerous; the years are therefore full, differentiated in a thousand ways, and the young man looks back at the past year as a long series of scenes in space [...] We classify them according to their degree of intensity and their order of appearance. The machinist here is memory. Thus, for the child, the past first of January recedes indefinitely behind all the events that followed it, and the future first of January seems so far away, so eager is the child to grow up; on the contrary, old age is the setting of classical theater, always the same, a banal place [...] The weeks are similar, the months are similar; it is the monotonous train of life. All these images overlap and become one. The imagination sees time foreshortened” (ibid., pp. 100–101, original author’s use of italics, author’s translation). James’ intuition is thus confirmed and clarified by Guyau, and we are all the more willing to adopt such an interpretation because it is perfectly consistent with the one we proposed above concerning the greater frequency of déjà-vu in adolescents than in adults: the years considered from a retrospective point of view seem longer to the adolescent than to the adult because they have been richer in events for him, and we know that it is also this greater richness in representations-occurrences that explains the greater frequency of the phenomenon of déjà-vu in the first years of life than in those which follow. This perception of the relative speeds of subjectively felt time is reversed when one abandons the retrospective point of view to adopt the current point of view10. 
10 What we call here the “retrospective point of view” and the “current point of view” are respectively called “perspective over time” and “perspective at a given time” by Jenann Ismael (2018), who uses these two types of perspective to analyze the relationship between the elapsed time of lived experience and the time of the block universe of contemporary physics, and to propose a conception of John McTaggart’s (1908) “moving now” that allows us to refute his argument, made in that famous article published in Mind, that time is not a reality. On the impasses of McTaggart’s position, see also Jean Seidengart (2018), and for a more complete presentation of the work of this British idealist philosopher (including the Mind article), see Sacha Bourgeois-Gironde (2000). More generally, this discussion refers to the opposition between the position of “man on the edge of the world”, which is that of science and classical philosophy of science (up to and including Popper), and that of “man inside the world” of current science and epistemology. On the latter point, see, for example, M. Bitbol (2010) and B. Ancori (2010), as well as our discussion here, which began with the introduction of our concept of time as such (see Chapter 7, section 7.2), and was further developed in our analysis of the notion of the point of view from everywhere (see Chapter 7, section 7.3).


For the one who is living through it, the period that will seem so short in retrospect – because it was so monotonous that its images have been superimposed on each other and finally become one – now seems to stretch endlessly. Conversely, the period whose length will retrospectively be associated with the richness of the distinct, numerous and intense events that filled it seems, in the eyes of the one who is living it now, to fly by at full speed. From the current point of view, the flow velocity of subjective time relative to a period whose length is measured in calendar time units is therefore an increasing function of the event richness of this period, whereas from a retrospective point of view it is the reverse: this flow velocity appears to us as a decreasing function of the event richness of this same period. In the end, there is no paradox here, nor is there any in the feeling of accelerated time that many of our contemporaries say they are experiencing. The assertion that (subjective) time passes faster and faster is necessarily formulated from a retrospective point of view, since its statement implies comparing several periods of (calendar) time, which implies placing them in relation to each other in what constitutes the past of the present moment. This assertion of an acceleration of time therefore amounts to saying that these successive periods currently prove to have been less and less rich in distinct, numerous and intense events. This is our psychological explanation of this phenomenon, and it simultaneously sheds light on the acceleration and deceleration of time in terms of the difference in the points of view from which the event richness (or poverty) of a given period, whose length is objectively measured, is memorized. 
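The inversion described above can be caricatured in a few lines of code. This is purely an illustrative sketch of ours, not part of the book’s model: we simply posit that the felt speed of a period being lived grows with its event richness, while its remembered length grows with the number of distinct events retained as landmarks, so that its retrospective felt speed shrinks.

```python
# Toy sketch (ours, for illustration only): the inversion of felt time-speed
# between the current and the retrospective points of view.

def current_felt_speed(events_per_day: float) -> float:
    """While immersed in a period: event-rich days seem to fly by."""
    return 1.0 + events_per_day  # increasing in event richness

def retrospective_felt_speed(events_per_day: float, days: float = 7.0) -> float:
    """Looking back: remembered length grows with the distinct events
    retained as landmarks, so the period seems to have passed slowly."""
    remembered_length = 1.0 + events_per_day * days
    return days / remembered_length  # decreasing in event richness

rich, poor = 10.0, 0.1  # events per day in a full vs. a monotonous week

# Lived now, the rich week flies by faster than the poor one...
assert current_felt_speed(rich) > current_felt_speed(poor)
# ...but remembered later, it seems to have flowed more slowly (i.e. longer).
assert retrospective_felt_speed(rich) < retrospective_felt_speed(poor)
```

The two functions are deliberately arbitrary monotone choices; any pair with these opposite monotonicities reproduces the inversion of points of view.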
We will now see that this individual and timeless phenomenon takes a particular form in our current societies, and that its socio-historical interpretation thus shows it to be overdetermined today at this collective level.

9.2.2. A socio-historical interpretation of the acceleration of time

As it currently seems to be felt by our contemporaries, the acceleration of time is not contradictory, but rather complementary on a psychological level, with its slowing down, or even its total cessation with the advent of the presentism mentioned above. Beyond the psychological interpretation we have just given of this apparently paradoxical phenomenon, can we explain the fact that it takes this precise form in our societies? In this perspective, let us remember that the form of the historical evolution of our socio-cognitive network depends on the comparison between the rate of socialization of the idiosyncratic categories present in the current state of the network and the rate of creation of new categories: as long as the first of these rates is less than or equal to the second, the tendency of the network’s trajectory towards a global informational equilibrium is at best warded off and at worst indefinitely postponed; but when it is higher, the network converges towards its informational


equilibrium along a trajectory that is merely hindered by the introduction of new categories in numbers that remain insufficient to reverse its direction. In each state of the network, the individual actor is thus confronted with a dilemma between communicating and engaging in categorization, the very essence of cognition, and according to the option then most frequently chosen by all actors, the network will or will not converge towards its informational equilibrium (Ancori 2014)11. We have also seen that, in the event of such convergence, the network would experience, period after period, an accelerated decrease in the rate of information production throughout its trajectory. This decrease is obviously synonymous with an accelerated depletion of event richness within the successive periods of this trajectory, so that, if it were shown to hold in very concrete terms for our developed societies, it would be likely to explain the acceleration of time subjectively felt by our contemporaries. It seems that this is the case. Communicate or categorize12: here we can distinguish two sub-periods in our societies, from the feudal transformation of the 11th Century to the present day. The first, by far the longest, goes from this transformation until the mid-1970s, and the second, from the mid-1970s to the present day. 11 In 1967, Albert Einstein’s second son, Hans-Albert, reported on the BBC a memory from his mother: the loudest screams of his baby did not bother Einstein, who continued to work “as if the noises had made him deaf” (cited by Klein 2016, pp. 23–24). This ability to emancipate oneself from any surrounding agitation, even the loudest, would be “undoubtedly one of the conditions of freedom of mind. It allows us to embark on the solitary path of what will be signalled from the outside as an ‘originality’. It is also, undoubtedly, the cost of engendering creation. But for Einstein, this cost seemed almost nothing. 
Some minds are constructed in this way: for them, there seems to be no correlation, no contradiction, between tumult and concentration” (op. cit., p. 24). In terms of our dilemma, this means that such noise corresponds to a multiplicity of different pieces of information perceived simultaneously by Einstein – as was the case for others, such as Montaigne and Seneca (ibid., n. 1). Such simultaneity removes all meaning from each particular piece of information, and thus assimilates their multiplicity to a total silence, propitious, as we know, to the most extreme concentration. Hence silence, which is not only the absence of noise but an invitation to reflexivity, has been cultivated for centuries as a form of wealth, even a treasure, by thinkers, writers and scholars (Corbin 2016). 12 The choice between the two terms of this alternative is clearly decided at each date of the network, because at any given moment the actor either communicates or cogitates. Nevertheless, over time these two activities, although distinct, are largely interdependent and jointly produce novelty. See the example of Claude Lévi-Strauss, who took advantage of his temporary exile in New York at the beginning of World War II to profoundly renew anthropological reflection thanks to his encounter with the linguistics of Jakobson and Saussure, his proximity to surrealism and the familiarity he then acquired with American anthropology. Like Max Planck, who inaugurated quantum physics thanks to his “communication” with Boltzmann, and unlike Georges Gurvitch, who did not exploit the same situation to leave his disciplinary sector, so that the continuity of his scientific trajectory was in no way disrupted, Claude Lévi-Strauss used the product of the weak creativity of the communications he then established with other disciplines to feed the strong creativity associated with his cognitive processes (Jeanpierre 2004; Ancori 2014).
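The communicate-or-categorize dilemma and its consequences for convergence can be caricatured numerically. The sketch below is our own toy rendering of the idea, under assumptions of our choosing (random pairwise copying, a fixed probability `p_comm` of communicating rather than creating), not the book’s formal model: when communication dominates, actors’ category repertoires overlap more and more, approaching informational equilibrium; when creation dominates, they stay disjoint and idiosyncratic.

```python
import random
from itertools import combinations

# Toy sketch (our assumptions, not the book's formal model).
# Each actor holds a set of categories. At every step one actor either
# "communicates" (copies a category from a random partner, probability
# p_comm: socialization of existing categories) or "categorizes"
# (creates a brand-new idiosyncratic category, probability 1 - p_comm).

def simulate(p_comm: float, n_actors: int = 20, steps: int = 2000,
             seed: int = 0) -> float:
    """Return the mean pairwise Jaccard overlap of the final repertoires."""
    rng = random.Random(seed)
    actors = [{i} for i in range(n_actors)]  # one idiosyncratic category each
    next_cat = n_actors
    for _ in range(steps):
        a = rng.randrange(n_actors)
        if rng.random() < p_comm:            # communicate: socialize a category
            b = rng.randrange(n_actors)
            actors[a].add(rng.choice(sorted(actors[b])))
        else:                                # categorize: create a new one
            actors[a].add(next_cat)
            next_cat += 1
    overlaps = [len(x & y) / len(x | y) for x, y in combinations(actors, 2)]
    return sum(overlaps) / len(overlaps)

# Pure communication drives the network towards identical actors
# (informational equilibrium); pure categorization keeps them disjoint.
assert simulate(p_comm=1.0) > simulate(p_comm=0.0) == 0.0
```

The extremes `p_comm = 1.0` and `p_comm = 0.0` make the contrast deterministic; intermediate values interpolate between the two regimes, which is the comparison of rates the text invokes.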


During the nine centuries following the feudal transformation, it seems that the network was able constantly to postpone the state of informational equilibrium that would otherwise have signed its final immobilization. For although the feudal transformation certainly led to far more social communication than in the previous era, particularly because of the rise of the urban phenomenon, these communications remained relatively rare compared to the flows that mark our times. The people of those times probably cogitated more than they communicated, and the productive richness of their categorization processes could for a long time have prevailed over the socialization of existing categories through communication. The resulting proliferation of mental categories marked the social and economic evolution of the 12th to 20th Centuries in Europe, characterized by an uninterrupted process of innovation. The main actors were first of all the craftsmen, whose know-how gradually transformed into the knowledge of experts and engineers, at the very rate at which their tacit knowledge was transformed into formalized and codified knowledge13. Combined with a proactive State policy in favor of technical training, this evolution led, between 1760 and 1830, to the development of the first industrial revolution in Great Britain (coal, steam engines, mechanization of textiles and steel). In the society thus established, the corpus of technical knowledge that had been built up since the beginning of the 19th Century was organized into sectors (industrial mechanics, hydraulics, etc.), alongside a spectacular expansion of scientific institutions devoted to research, in companies and universities, and it led in the years 1880–1914 to the second industrial revolution (oil and electricity as new sources of energy, the development of organic chemistry, the automobile, etc.)14. 
Much shorter than the previous one, the second period began in the early 1970s with the third industrial revolution – that of information and communication technologies, born of the meeting of theoretical computing and microelectronics. Just as Cro-Magnon man could not travel by bicycle for lack of an adequate cultural environment15, this revolution could not have taken place without the slow process of acculturation to the scientific or technological object linked to 13 On the artisanal origins of the scientific revolution, see Halleux (2009, p. 101 sq.). For the contribution of “popular” knowledge to scientific knowledge from prehistoric times to the present day, see Conner (2011). 14 We summarize here, undoubtedly in an outrageously condensed way, the very detailed analyses of François Caron (2010). 15 The second chapter of T. Ingold (2013) begins with a rather facetious question: “Why didn’t Cro-Magnon man travel by bicycle?” (op. cit., p. 55). The answer lies in the fact that, notwithstanding the “fundamental anatomical prerequisites that would have enabled him to accomplish such a feat […] he lived long before an object as ingenious and complex as a bicycle was developed” (ibid.). Above all, “the cultural conditions necessary for cycling were not met” (ibid.).

200

The Carousel of Time

the establishment of a mass education system, as well as the emergence of a specialized press, throughout the 19th Century and the beginning of the 20th Century. However, this revolution very early on played a central role in all our societies by facilitating communication between individual and collective actors to an unprecedented degree. As early as 1969, the Arpanet prefigured the Internet; two years later, the microprocessor (Intel) appeared, and Apple marketed its first desktop computers in 1977. But it is since 1995 that this movement has intensified, with the exponential expansion of the Internet in our societies, which have since been reputed to be knowledge-based. In 1996, there were 40 million Internet users worldwide; in mid-2011, there were 2.2 billion, with an explosion in wireless mobile platforms (16 million subscribers in 1991, and 5.3 billion in 2011); and in April 2017, there were 3.81 billion, or 51% of the world population. In July 2009, the number of social network users surpassed the number of e-mail users: in 2014, there were 1.4 billion, including 40 million in France (Castells 2012)16, and 2.91 billion, or 39% of the world population, in April 2017. At the same time, global Internet penetration rates clearly show that Western, North American and European societies are the most affected: 88% in North America and 84% in Western Europe, compared to only 33% in South Asia and 29% in Africa. Our networked societies are now seeing an ever-increasing number of individual actors enter into communication (Castells 1998), and they thus seem to be on the way to realizing the “man without interiority”, born with cybernetics in the mid-20th Century, a man who would be entirely “a communicating being” (Breton 1997)17. 
Finally, after the productive richness of analogy and metaphor was long able to prevail in our societies over the socialization of existing categories through communication, it seems that we are now in the opposite situation because

16 In 2017, on average, mailbox owners checked their messages every 90 seconds and replied to a received message, in 50% of cases, in less than 47 minutes (with a peak at 2 minutes), with a message that was less than 100 words long in 70% of cases. On the same date, the average daily volume of emails exchanged was 269 billion, 59.56% of which were spam. As early as 2015, in France, the United Kingdom, Germany and the United States, managers spent five and a half hours a day checking their email, and 30% of them sent professional emails full of various emotions to their superiors (Ortoli 2018, pp. 110–112). 17 Cybernetics was born with the classic article by Arturo Rosenblueth, Norbert Wiener and Julian Bigelow (1943), one of whose objectives was to promote the notion of behavior. Five years later, N. Wiener (1948) replaced this term with that of communication, claiming that cybernetics was its science. On the history of cybernetics, see Segal (2003) and Triclot (2008). For an interpretation of the impact of cybernetics, and of the communication paradigm it conveys, on the social sciences of the 20th Century, see Lafontaine (2004).


of technical conditions that increasingly favor communication over cognition18. And increasingly so, for we know that this process is cumulative, so that the network should move at an accelerated pace towards a final state consisting of a single cluster of actors in which all would be strictly identical. All weak creativity related to communication would then have disappeared, and no strong creativity associated with cognition could persist in this eternal present. We are certainly not there yet, but in the phase of tribalization of the world that we are experiencing today (Maffesoli op. cit.; Maffesoli and Perrier op. cit.), we know that within each tribe – the concretization in our historical societies of our theoretical concept of clusters of individual actors – the decrease in the rate of information production from period to period is even more pronounced than at the level of the entire network, due to an ever closer link between actors at this local level than at the level of the network considered globally. According to Vincent Larivière (Larivière et al. 2008), this seems to be the case for some tribes in the academic world, as the productivity of some “hard” sciences has been declining since 1975. Indeed, the analysis of a very large corpus of scientific publications over a long period (1900–2004) shows that the production systems of the natural sciences, engineering and the medical sciences are now in a steady state of equilibrium. Contrary to the widespread opinion that scientific literature becomes more obsolete the older it is, the useful life of this literature has in fact increased overall since the 1970s, i.e. since the beginning of the third industrial revolution mentioned above. Science as a whole currently rests on an increasingly old body of literature: after a golden age (1945–1975), when scientists solved many of the problems they faced, no major “scientific revolution” has emerged. 
However, we have seen that the mid-1970s marked precisely the beginning of the primacy given to communication over cognition in our societies. Is it by chance that the fascinating series of meta-analogies presented by D. Hofstadter and E. Sander (2013, pp. 527–604), in support of their theory placing analogy at the heart of thought, culminates in a series of conceptual revolutions in the physical sciences, all of which significantly predate World War II? In our opinion, this situation has a twofold cause: on the one hand, an already long-standing cause consisting of the primacy conferred on technological applications over the fundamental sciences in the wake of the resounding success of the Manhattan project (Lévy-Leblond 2013); on the other hand, a more recent cause,

18 F. Caron defines innovation as “a recomposition process of existing knowledge” (2011, p. 9). In our view, this means that the historian of innovation currently considers innovation to be the result of the weak creativity related to communication between actors, rather than of the strong creativity associated with the productive cognition of new analogies and metaphors.


which stems from the very broad privilege that our technological societies have conferred on communication over cognition for a little more than 40 years, and this at an accelerated rate for a little under a quarter of a century. Would this promise us an entropic evolution?

9.3. Irreversibility of time and entropy of the network

The notion of entropy usually covers two different expressions. At the macroscopic level, it is a law according to which, in any closed system, free energy (usable for work) spontaneously tends to degrade into bound energy (unusable for work); this unidirectional and irreversible process, which thus transforms low entropy into high entropy, has as its limit a state, called thermodynamic death, where all the energy existing in the system is bound energy. At the microscopic level, entropy is a theory of statistical thermodynamics, and this theory can be summarized as follows: the order structure of matter identifies the particles of which matter is composed in a six-dimensional space (three dimensions of position, each paired with a dimension of velocity), and the instantaneous position of this set of particles defines a complexion of matter, so that each macrostate represents the expression of a large number of different complexions. The transition from a low entropy characterizing the initial state of a closed material system to a higher entropy characterizing a later state of that system would reflect a shift towards an increasingly probable state: the most probable state would be that of a maximum dispersion of the particles defining it, obtained from the greatest number of complexions. In this state, the concentration of the particles would be the same everywhere, regardless of the observed part of the material system. This equidistribution of particles within the latter would then represent the most likely state, because it is the one most frequently observed in nature. 
By combining the idea of disorder with the number of complexions necessary to define the state of the system, we can link the macroscopic and microscopic expressions of the concept of entropy by saying that any closed system spontaneously tends towards a macroscopic equilibrium corresponding to the maximum degree of entropy (disorder) of its microscopic components. In this section, we will first present very briefly the genesis of the successive contents of the notion of entropy. We will then examine the meaning that the macroscopic expression of entropy can have in our model of a complex socio-cognitive network of individual actors. Finally, we will conduct the same type of examination with regard to the microscopic interpretation of this concept.
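To make the idea of complexions concrete, here is a minimal numerical sketch (ours, not the author's): distribute N distinguishable particles between the two halves of a box. A macrostate fixes only the number k of particles in the left half; its multiplicity, i.e. its number of complexions, is the binomial coefficient C(N, k), which is maximal at the equidistribution k = N/2.

```python
from math import comb, log2

N = 10  # toy "gas" of 10 distinguishable particles, two half-volumes

# Multiplicity of each macrostate k = number of complexions realizing it.
multiplicity = {k: comb(N, k) for k in range(N + 1)}

# The most probable macrostate is the equidistribution k = N / 2 ...
most_probable = max(multiplicity, key=multiplicity.get)
print(most_probable)  # 5

# ... where entropy (here in bits, S = log2 W) is maximal; the fully
# ordered macrostate (all particles on one side) has W = 1, hence S = 0.
entropy_bits = {k: log2(w) for k, w in multiplicity.items()}
print(round(entropy_bits[most_probable], 2))  # 7.98
print(entropy_bits[0])  # 0.0
```

The equidistribution k = 5 is realized by 252 complexions, against a single complexion for either fully ordered macrostate: this is the statistical content of the claim that equidistribution is "the most frequently observed in nature".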

The Acceleration of Time, Presentism and Entropy

9.3.1. A brief presentation of the genesis of the entropy concept

The notion of entropy was first introduced by Sadi Carnot (1796–1832) in his Réflexions sur la puissance motrice du feu, where he showed in 1824 that steam engines work because heat passes from hot to cold. It was then taken up by many authors, in particular by Rudolf Clausius (1822–1888), who in 1865 transformed this notion into a law according to which heat cannot pass from a cold body to a warm body, and who introduced on this occasion the term entropy, from a Greek word meaning transformation, thus designating a measurable and calculable quantity (Clausius 1865)19. Known as the "second principle of thermodynamics", this law stipulates that the entropy of an isolated system can increase or remain constant, but can never decrease20. The theory of entropy represents the microscopic version of the concept of entropy, initiated by the same Clausius, later clarified by Ludwig Boltzmann (1844–1906), and reinterpreted by L. Brillouin (1889–1969) in 1959. While S. Carnot thought that heat was a fluid, and therefore a substance, L. Boltzmann – convinced of the existence of atoms and molecules at the end of the 19th Century, when many still were not – put forward the idea that heat is the manifestation of the agitation of molecules, and increases with this agitation: "This agitation mixes everything up. If some part of the molecules are moving, they are quickly carried away by the frenzy of others and also set themselves moving: the agitation spreads, the molecules collide and push each other. This is the reason why cold things heat up in contact with hot things: their molecules are hit by hot molecules and, drawn into agitation, they heat up" (Rovelli 2018, p. 43, original author's use of italics, author's translation).
Just as the shuffling of cards initially arranged in order gradually causes confusion, the mixing of hot and cold, and not the other way around, confuses the molecules: initially arranged in two distinct and contrasting categories, they end up in a single and uniform category. However, as Rovelli indicates, if it is easy to understand the increase in entropy of a system through a mixing that gradually destroys its initial order, the following question is more difficult: why does this system, like all phenomena observable in the cosmos, begin in a state of low entropy (ibid., p. 44)? The answer to this question involves a notion that has become familiar to us: the explicit introduction of the observer into the statement of the theory.

19 We take this reference from C. Rovelli (2018, pp. 33–48), who is, with H. Atlan (1972, pp. 171–213), our main guide for presenting the different aspects of entropy.

20 The first principle of thermodynamics is that of energy conservation. During a transformation, energy is neither created nor destroyed: it can be converted from one form to another (work, heat), but the total amount of energy remains constant.

Indeed, let us take up Rovelli's deck-of-cards metaphor again (ibid., pp. 44–48), and imagine that the first half of the deck consists of red cards, and the second half of black cards: this particular configuration is ordered, therefore synonymous with low entropy, and this order will gradually be lost when the deck is shuffled. The crucial point is then the following: this configuration is particular if our criterion for classifying cards is their color, and it is particular because this is our criterion for classifying them. Another configuration would be particular if we classified the cards into hearts or spades within the deck, the hearts in the first half of the deck and the spades in the second, or if we adopted any other classification criterion. Thus: "If you think about it, any configuration is particular: each configuration is unique, if I look at all the details, because each configuration always has something that uniquely characterizes it" (ibid., p. 45, original author's use of italics, author's translation). In other words, the notion that certain configurations are more particular than others only makes sense for a given criterion for classifying their elements, so that if one merely asserts the existence of a classification without specifying the criterion, i.e. if one characterizes each card in all its details, then no configuration is more or less particular than another, and all are therefore equivalent. According to Rovelli, the notion of particularity arises only from our approximate way of looking at the universe, and this is what Boltzmann would have shown: "Boltzmann showed that entropy exists because we describe the world in a fuzzy way. He demonstrated that entropy is precisely the quantity that measures the number of different configurations that our fuzzy vision does not distinguish" (ibid., original author's use of italics, author's translation).
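Rovelli's point, that entropy counts the configurations a coarse description fails to distinguish, can be sketched numerically (our toy example, not the author's): deal four cards, two red and two black, and describe each deal only by its color pattern.

```python
from collections import Counter
from itertools import permutations
from math import log2

deck = ["heart", "diamond", "spade", "club"]  # two red cards, two black cards
red = {"heart", "diamond"}

def colour_pattern(order):
    # The "fuzzy" description keeps only the colour of each position.
    return tuple("R" if card in red else "B" for card in order)

# Group the 4! = 24 exact configurations by their coarse description.
multiplicity = Counter(colour_pattern(p) for p in permutations(deck))

# The "sorted" macrostate (reds first) hides 2! x 2! = 4 exact deals,
# so its entropy is log2(4) = 2 bits of indistinguishability.
print(multiplicity[("R", "R", "B", "B")])        # 4
print(log2(multiplicity[("R", "R", "B", "B")]))  # 2.0

# Under the exact description, every configuration is unique: W = 1, S = 0.
```

Change the classification criterion (suits instead of colors) and a different set of deals becomes "particular"; refine the description down to every detail and all multiplicities collapse to 1, i.e. entropy vanishes with the fuzziness that defined it.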
We thus find three familiar and closely related concepts: the first is the importance of the observer’s point of view for the description he gives of the observed object, hence our emphasis on the true object of epistemology, which is the link between observer and observed; the second is the significance of the amount of information in a hierarchical system, which is a measure of the information that the observer lacks to reconstruct the system in its details by proceeding at random; the third is that the difference between past and future, and more precisely the unidirectional and irreversible flow of time from the first to the second, is related to our fuzzy view of the world.

9.3.2. The entropy law21 and network trajectory

The analogy between the second principle of thermodynamics, as expressed as a law by R. Clausius at the macroscopic level, and the overall form of the evolution of our network is relatively obvious. As we have seen, this law stipulates that any isolated system is the site of transformations that irreversibly tend to degrade its energy available for work into bound energy (heat, which can no longer be totally transformed into work), until it reaches an equilibrium characterized by the fact that the available energy reaches its minimum value: the entropy of an isolated system can only grow, and thus tends towards a maximum that it reaches when the system is in thermodynamic equilibrium. However, the physico-chemical activity that produces entropy was later considered from a more general point of view (Prigogine and Stengers 1988, p. 49 sq.). Thus, the state of equilibrium that we have just mentioned is today only a particular example of a stationary state, i.e. a state whose entropy does not change over time22. Such a state is achieved when entropy production – which measures irreversible processes within the system envisaged by Clausius – is continuously compensated by entropy input, which measures the system's exchanges with its environment. While entropy production is positive or zero, the sign of the entropy input depends on the nature of the system's interactions with its environment. When these exchanges are such that this sign is negative, entropy production and input can compensate each other so that the system is in a stationary state. Under these conditions, the Clausius equilibrium corresponds to the particular case where exchanges with the environment do not vary the entropy, and where the production of entropy is therefore also zero. The same is formally true for our socio-cognitive network.
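In the standard notation of Prigogine's school (our notation, not the author's), the balance just described can be written:

```latex
dS = d_i S + d_e S, \qquad d_i S \ge 0
```

where $d_i S$ is the entropy produced by irreversible processes inside the system and $d_e S$ is the entropy exchanged with the environment. A stationary state corresponds to $dS = 0$, i.e. $d_e S = -\,d_i S \le 0$, while the Clausius equilibrium is the particular case where $d_i S = d_e S = 0$.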
In each of its states, it gathers a finite number of individual actors whose cognitive repertoires and semantic memories contain finite lists of combinations of psychological categories. When the only driving force for change involves inter-individual communication among these actors, we know that there inevitably comes a time when the set of recombinations constituting level 1 Batesonian learning stops: this is the time when the union of the sets representing individual cognitive repertoires coincides with the set of all the parts of the set of psychological categories itself. In this informational equilibrium, there is no longer any difference between the cognitive repertoires of individual actors, even though it was the encounter of such differences during their communications that produced both the extension of individual memories and the increase in the volume of the global memory of the network. In terms of the example discussed earlier in this book23, this informational equilibrium was described by the matrix [aij](t2) in Table 9.1. On this date, Hmax = 64 bits, H = 32 bits and R = 50%. All potential combinations of existing psychological categories were realized, so that the set of realized combinations Cqj coincided with the union of the individual cognitive repertoires, ∪i c(Si)(t2). The network thus settled into an informational equilibrium where it degenerated into a simple duplication of an individual cognitive repertoire.

      C1   C2   C3   C4   C5
Ak     1    1    1    1    1
Al     1    1    1    1    1

Table 9.1. The network in a state of maximum entropy

21 The term law suggests a universal scope of application, i.e. without any exceptions. Today, however, there are real "time reversal mirrors" in the field of ultrasound and electromagnetic waves. It is possible to revive a wave's past life, and the development of these mirrors has made it possible to experimentally test this reversal of wave time in the most varied propagation environments. These mirrors reflect a wave "backwards" to its initial source, regardless of the complexity of the propagation medium, and thus shed new light on the problem of the irreversibility of time in physics (Fink 2009).

22 This type of state is also called "steady-state non-equilibrium" to clearly indicate the permanence of physico-chemical activities with regard to the system under consideration.
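The figures quoted for this equilibrium state follow from the Shannonian definition of redundancy, R = 1 - H/Hmax; a one-line check (illustrative code, not the author's):

```python
def redundancy(H, Hmax):
    """Shannon redundancy: the share of maximal entropy absorbed by constraints."""
    return 1 - H / Hmax

# State [aij](t2): Hmax = 64 bits, H = 32 bits.
print(f"{redundancy(32, 64):.0%}")  # 50%
```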

More generally, when this type of equilibrium is reached, any inter-individual cognitive difference is erased, so that extensive learning is no longer possible within the network, where only intensive learning opportunities (level 2 or level 3) remain. However, the latter are themselves limited, because the specular nature of Batesonian level 2 learning puts an end to this type of learning when all the combinations of elementary categories in the global memory of the network are such that q = g for all actors. The [aij] matrix of individual cognitive repertoires, which represents the state of the network in the eyes of the observer, is then no longer observable by the latter: not only have the individual actors become perfect clones, but they are also pure automatons tirelessly repeating the same message, identical for all and identically received by all, because messages are constructed, emitted and received according to exactly the same modalities by everyone, and this without their knowing it. Nor can Batesonian level 3 learning be achieved, because given the postulated absence of internal cognitive processes, this type of learning could only result from the confrontation of different individual behaviors during inter-individual communications among the actors, whereas at the informational equilibrium of the network all these behaviors are identical. In the previous example, we thus assumed that these behaviors were invariant from one period to another: from t1 to t2, as from t0 to t1, each individual actor

23 See Chapter 5, section 5.1.1.

issued a message consisting of a combination of two elementary categories. We know that one, already shared, made communication possible, while the other, idiosyncratic, was informative for each communication partner. The transmission behaviors (made manifest by the composition of the messages sent) and the reception modalities (revealed by the modes of recording the information received in memory) were therefore strictly identical for the two actors between t0 and t1, and they therefore had no reason to change between t1 and t2. The same is true for subsequent periods, where, as we have just seen, the actors could only experience level 2 learning, which comes up against the limit corresponding to the disappearance of the [aij] matrix from the observer's point of view under the conditions we have mentioned. The informational equilibrium thus described corresponds very precisely to the thermodynamic equilibrium as conceived by Clausius. And it is easy to conceive of this equilibrium as a particular case of a steady-state equilibrium of our network when we consider it to be open, i.e. when we assume that inter-individual communication is not the only driving force behind its evolution. Indeed, as we have seen previously24, opening up our socio-cognitive network consists of transforming the constants m and n into variables, either by introducing successive generations of actors (m becomes variable), or by opening up the network towards its interior (n varies because of refinements or aggregations of existing categories), or towards its exterior (n varies because of the creation of new psychological categories resulting from certain analogies or metaphors formulated by the actors, in particular when observing their natural and social environment).
The entropy production of the network thus appears to result from the irreversible processes of inter-individual communication within it, while the external entropy input comes from the three mechanisms just mentioned. As we noted, our socio-cognitive network can then find itself, even with a constant number of given actors, in a steady-state equilibrium: for this, it suffices that the socialization rate of existing idiosyncratic categories, via inter-individual communication within the network, be exactly equal to the rate of creation of new idiosyncratic categories via the formulation of adequate analogies or metaphors. Let us now go beyond the simple example mentioned above, to reinterpret the most probable evolution of our socio-cognitive network, represented in its most general form, in relation to entropy and the corresponding historical temporalities. When its only driving force is inter-individual communication, when information is initially distributed equally among more than two actors and when all these actors observe the same behaviors of transmitting and receiving information during their

24 See Chapter 5, section 5.1.2.

communications, Chapter 5 highlighted five central characteristics of the evolution of the network:

1) it leads to the formation of differentiated clusters of pairs of actors;

2) the cumulative nature of this process leads each pair of actors towards a local informational equilibrium;

3) all other things being equal, when two actors now members of different clusters could have communicated with each other in the previous period but instead communicated with the actor whose cluster they now share, the information distance between these two clusters of actors increases more than when one of these two actors did not communicate with anyone;

4) the local equilibria towards which the clusters of actors tend are not irrevocable, unlike a global informational equilibrium;

5) one or more combinations of elementary categories tend to be shared by all the actors, thus gradually establishing a social link at the level of the entire network considered globally, which leads the latter towards an irrevocable global informational equilibrium, and not towards a juxtaposition of local equilibria, each of which would correspond to a cluster composed of two strictly identical actors.

Concerning the interpretation of the macroscopic expression of entropy in our model, we can then repeat, while radicalizing them, all the statements previously made about our example with two actors. As we have seen25, if individual behaviors are invariant in successive communications between actors, the rate of information production decreases in an accelerated way within the global network during its trajectory, as well as at the level of each cluster considered separately, where this type of evolution takes an even more pronounced form.
Indeed, the propensities to communicate between individual actors are by construction higher within each of these clusters than the average propensity to communicate at the level of the entire network, so that the cumulative process that tends to make individual actors ever more similar takes place more quickly locally than at the global level. When the only driving force behind the evolution of the network is inter-individual communication, the network can be considered closed, and this process continues until the interplay of recombinations of psychological categories produced by communication comes up against the exhaustion of the pool of potential combinations not yet realized. At this point, the network has reached an irrevocable global informational equilibrium and the rate of information production becomes zero: its entropy is then maximal. And when categorization is a second driving force behind the evolution of the network, which must therefore be considered open, each of the three possible network paths corresponds to an evolution of its entropy.

25 See Chapter 6, section 6.2.
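These three paths depend only on how the socialization rate of existing categories compares with the creation rate of new ones; a qualitative sketch (illustrative names, not the author's notation):

```python
def entropy_trend(socialization_rate, creation_rate):
    """Qualitative entropy path of the open network.

    socialization_rate: pace at which idiosyncratic categories spread through
                        inter-individual communication (raises redundancy).
    creation_rate:      pace at which new idiosyncratic categories appear
                        through analogies and metaphors (lowers redundancy).
    """
    if socialization_rate > creation_rate:
        return "entropy rises: slower march towards informational equilibrium"
    if socialization_rate == creation_rate:
        return "entropy constant: steady-state non-equilibrium"
    return "entropy falls: an open-ended history"

print(entropy_trend(2.0, 1.0))
print(entropy_trend(1.0, 1.0))
print(entropy_trend(1.0, 2.0))
```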

If the socialization rate of existing categories is higher than the rate of creation of new categories, this trajectory takes the same form as before, but at a slower pace, so that the network moves more slowly towards maximum entropy; if these two rates are equal, the network settles into a stationary state of non-equilibrium where its level of entropy remains constant; and finally, if the rate of creation of new categories exceeds that of the socialization of existing categories, entropy decreases continuously within the network.

9.3.3. Entropy theory and trajectory of the complex socio-cognitive network of individual actors

What about a possible interpretation of the microscopic expression of entropy with respect to our complex socio-cognitive network? Initially expressed by Boltzmann's formula, which makes entropy a function of the number of microstates in which a material system in a given macroscopic state can be found, the microscopic approach to the entropy law was then enriched by the contribution of Brillouin (1959). In accordance with Rovelli's interpretation of Boltzmann's conception of entropy, Brillouin explicitly introduced – through his conception of information as negentropy – the role of measurement and the observer into the statistical definition of entropy. More than a measure of the microscopic order of the system under consideration, entropy was then conceived as a measure of the degree of knowledge that an observer can have, through experimental measurements, of the microscopic state of this system. More precisely, by rewriting Boltzmann's formula as a Shannon formula, it appears that the entropy of a system is proportional to the amount of information the observer would gain if he knew in which microstate this system was located, so that it measures the observer's lack of information about the true structure of the system.
Subsequently, to say that an isolated system inevitably tends to evolve towards the state of maximum disorder is equivalent to saying that the information available to the observer of this system can only decrease (Atlan 1972, pp. 171–185). The most likely historical evolution of our network is then easily read. The observer's lack of information about the true structure of this network can be measured both by the structural complexity of the system involved and by its functional complexity. From a comparative static point of view, it is the concept of structural complexity that must be used, so that the entropy of each state of the network is proportional to the amount of information it contains in that state. This amount measures the information the observer is missing in order to reconstruct the system in its smallest details by proceeding at random. However, when the evolution of the network is driven solely by inter-individual communication, this amount of missing information continuously increases until it reaches an end state characterized by an irrevocable informational equilibrium. Such an evolution reflects a continuous

increase in the entropy of the network from its initial state to its final state: the observer of each of the successive states of this network needs ever more information to reconstruct a statistically identical state by proceeding at random. This theoretical interpretation of the concept of entropy at the microscopic level, within the framework of our socio-cognitive network model, is therefore perfectly in line with those proposed by Boltzmann and Brillouin in thermodynamic theory. Nevertheless, the concept of structural complexity is not sufficient to provide a truly dynamic analysis of the evolution of our network in relation to the microscopic expression of entropy, because it tells us nothing about the functional relationships between the nodes of this network. It is here that we need a concept of functional complexity. Let us try to clarify the meaning of such a concept in our model. By construction, each realized state of the network is here the result of the application of a set of inter-individual communications to the previous state, these communications being specified in terms of the identity of their protagonists, the structure of the messages they exchange and the informative values of these messages. We can therefore measure the functional complexity of any state of the network by the amount of information that the observer of that state would need in order to be able to accurately predict the next state among all its possible states. The crucial point here is that the network observer no longer observes the network in the same way as in a comparative static perspective, where he considered the network at each of its successive dates solely from the point of view of its current state.
In a truly dynamic perspective, his point of view is, at each date, both retrospective and prospective, because he must incorporate to some extent the memory of past states in an attempt to predict the future state: at each date marking the trajectory of the network, its observer is installed in the dynamics of its specious present. However, the fact that the rate of information production in the network tends to decrease over time implies that, as the observation period lengthens, he lacks less and less additional information in order to be able to accurately predict the next state of the network among all its possible states. And this until the final moment, when he no longer lacks any: the set of possible states is then reduced to the one that is realized, whose prediction is therefore perfect. Interpreted at the microscopic level and from a dynamic point of view, entropy thus continuously decreases in the network, until it reaches a zero level. Another way of expressing this result is to point out that, provided that the only driving force behind the dynamics of the network is inter-individual communication, and that no erasure of information due to forgetting occurs there, the redundancy of

this network increases with each of its time steps. This redundancy expresses the strength of the links between the actors, whose formal translation in the Shannonian world is that of conditional probabilities, and which are expressed in our model by propensities to communicate. Its continuous growth and that of the conditional probabilities or propensities to communicate which thus measure it are synonymous with ever-increasing constraints on the evolution of the network, and we know that the increasing weight of these constraints mechanically implies a decrease in the quantity of Shannonian information contained in the network. Everything is therefore happening as if the state of the network – initially characterized by a quasi-independence of the actors, corresponding to a high quantity of Shannonian information – were evolving irreversibly, and this in a continuously accelerated mode, towards a final state characterized by maximum redundancy. In such a state, the constraints measured by conditional probabilities or propensities to communicate are equally maximal. Ultimately, from the point of view of the network observer, since the rate of information production is continuously decreasing throughout the network trajectory, time seems to flow more and more quickly when its successive steps are considered retrospectively, while each experienced step seems longer. This point of view thus coincides with that of our contemporaries who say they feel an acceleration of a time that seems simultaneously motionless to them. In our model, this means that the distance between two successive turns of the helix representing the temporality of our network is decreasing, and this at the same pace at which the size of the Möbius strip representing the spatio-temporal structure linking the observer to the actors observed within the network is gradually reduced. 
In the eyes of this observer and of those observed, the number of observable changes in the matrix representing the state of the network continues to decrease from period to period, so that everything happens as if the course from inside to outside the network, and vice versa, were shorter and shorter – as if the link between the observed actors and the observer of these actors were ever closer. As a result of this process of an acceleration of time subjectively felt by the observed actors and "objectively" recorded by the network observer, each shorter and shorter turn is connected more and more quickly to the next turn, one notch higher along the vertical dimension of the time helix. These phenomena increase up to a final informational equilibrium in which the length of the path linking the inside and outside of the network becomes zero, so that the Möbius strip and the Klein bottle degenerate into a point. Observer and observed are then no longer distinguished at all, and their common power of discrimination becomes infinite. As for the last turn of the time helix, it finally closes in on itself in a perfect circle, and time then stops flowing.
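The mechanism invoked in this section, in which tightening constraints (rising conditional probabilities, i.e. propensities to communicate) mechanically reduce the quantity of Shannonian information, can be illustrated with a toy pair of actors (our sketch, not the author's model):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def joint_entropy(link):
    """Joint entropy of two binary message sources X and Y.

    X is uniform; link = p(Y copies X) plays the role of the conditional
    probability (propensity to communicate) constraining the pair:
    link = 0.5 means independence, link = 1.0 perfect cloning.
    """
    joint = [0.5 * link, 0.5 * (1 - link), 0.5 * (1 - link), 0.5 * link]
    return shannon_entropy(joint)

print(joint_entropy(0.5))  # 2.0 bits: independent actors, maximal information
print(joint_entropy(1.0))  # 1.0 bit: perfect clones, maximal redundancy
print(round(joint_entropy(0.9), 3))  # intermediate constraint
```

As the link tightens from quasi-independence to perfect cloning, the information contained in the pair falls monotonically from two bits to one: the final state of maximal redundancy is the informational analogue of the network's "thermodynamic death".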

This would be the case if the only driving force behind the network were communication, as we have assumed here, or, at a slower pace, if the socialization rate of existing psychological categories were higher than the rate of creation of new categories representing a second driving force of the network. And finally, if the latter rate were higher than the former, the observer could take comfort in the decline in the predictability of the network's evolution by seeing it as a happy sign of a never-ending story.

10 Temporal Disruptions

We have previously considered different aspects of the evolution of the network within the framework of a historical temporality which, although discrete in nature, is not exempt from any notion of continuity. Discussing this point leads us to examine the tricky question of the theoretical articulation of a succession of discrete states, each of which gathers specious presents quite distinct from the previous and following ones, with a stream of thought whose continuity W. James consistently proclaimed (Debru 2008, pp. 69–77). How can we reconcile this position, so firmly repeated, with our rather Aristotelian conception of thought as a discrete succession of experiences? Within a constantly evolving consciousness, James proposes to distinguish "substantive mental states", where the sensory imagination (linked to a perception or feeling) would dominate, from "transitional mental states", where the mind would seek relationships between what has been perceived or felt. This distinction sketches a possible answer to our question (Buser 2008, p. 37 sq.). Indeed, one of the main characteristics of the flow of consciousness would be, according to James, its speed of change, like a bird that can sometimes fly, sometimes perch. The "transitional" periods (those of flight) would be there to move from one "substantive" episode (that of perched rest) to another, and thus weave a continuous link between the discrete succession of the latter – although James often repeated that a succession of sensations is not equivalent to a sensation of succession. For other authors, some of whom were contemporaries of James, this continuity of consciousness was not self-evident, or even seemed totally illusory, being only the effect of the rapid succession of discrete parts. C. Debru (2008, pp. 71–77) cites in this regard, among other names drawn from the work of a biographer of James, that of Charles Sanders Peirce, who argued for a distinction between

The Carousel of Time: Theory of Knowledge and Acceleration of Time, First Edition. Bernard Ancori. © ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.

appearance and reality of the continuity of consciousness, and that of Wilhelm Wundt, who evoked segments of consciousness linked together by connections beyond discontinuities. Debru reports above all the views of our contemporary Ernst Pöppel, a strong supporter of the universality of the phenomenon of the specious present. According to the latter, there would indeed exist a universal mechanism dividing time into three-second segments, constituting as many islets of mental activity distinctly separated from their temporal neighbors. The illusion of continuity would be created by a semantic link between the anterior and the posterior state – a link that could be, as he later stated, emotional, but could just as easily operate at an automatic pre-semantic level (Debru op. cit., pp. 73–74). Like the beginning of an explanation we discerned in James' work, the one proposed much more recently by E. Pöppel is based on the existence of a link ("transitional states" or a potentially pre-semantic automatic link) making it possible to bridge a discontinuity (between specious presents), either by establishing the reality of the temporal continuity of the flow of thoughts (James) or by accounting for the illusion of continuity in the experience of consciousness (Pöppel). Let us keep this idea of link and automatism put forward by Pöppel, as well as the reality of the temporal continuity of the flow of thoughts so often affirmed by James. It then seems possible to find an articulation between the inscription of a discrete succession of experiences in the memories of individual actors and the proclaimed continuity of the flow of their thoughts, by reiterating our distinction between the level of the psychological categories combined in these memories and the meta-category level where the parts of the cognitive repertoires considered semantically relevant by the actors, who thus inscribe them in their individual memories, are selected and organized.
Indeed, as long as the set of these meta-categories is not modified, there is no reason to change the way in which the parts of the individual cognitive repertoires are selected and organized. The continuity of the actors' flow of thought is then ensured by the permanence of this mode of selection and organization, even though this flow is simultaneously divided, at the level of the contents of the thoughts thus selected and organized, into a succession of discrete states. Until now, it is such individual actors that we have considered – actors who observed unchanging behaviors in terms of sending and receiving messages during communication, as well as during their internal cognitive processes. In particular, these actors have always interpreted in the same way the events whose meanings they recorded in their memories. They constantly read the other in the context of the same, so that they only carried out level 1 and level 2 learning within the meaning of the hierarchical categorization proposed by G. Bateson. On the contrary, actors who performed level 3 Batesonian learning would ipso facto interpret given events in a different way than before, so that they would now interpret the same event

Temporal Disruptions

215

in a context that has become different. Such learning would introduce a temporal disruption in individual actors’ flow of thoughts, and consequently in the evolution of their network. The occurrence of such a disruption would imply that the temporality of this network would be subject to radical uncertainty, in the sense given almost simultaneously to this expression by Frank Knight (1921) and John Maynard Keynes (1921). This type of uncertainty is distinct from risk for which probabilities can be calculated: whereas the latter characterizes a universe whose list of all possible states is known, as well as their probability distribution, radical uncertainty concerns a universe whose list of all possible states is unknown, and the same is true a fortiori for their probability distribution. Until now, we have only considered the evolution of our network in a risky universe, because the combinatorics of possible future states at each date of the network trajectory was finite. Hence, we have seen the network observer being able to draw up the list and probability distribution of these possible states, based on the combinatorics associating the current state of the network and the invariant behaviors of the observed constellation of actors that represent its nodes. This is no longer the case when certain forms of temporal disruptions are introduced into the historical evolution of our network and it is this type of disruption, correlative to the radical uncertainty, that now marks this evolution, which we will now analyze. In order to clearly identify the characteristics of these temporal disruptions compared with those of forms of evolution that draw continuous trajectories of the network, we will contrast them with the latter by successively presenting three distinct processes of revision of individual and collective beliefs. 
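The distinction can be given a toy illustration: under risk, an observer who knows the current state and the actors' invariant behaviors can enumerate every possible next state together with its probability. Everything in the sketch below – the candidate exchanges and their probabilities – is an invented illustration, not part of the book's model.

```python
from itertools import product

# Illustrative only: two possible, independent message exchanges, each
# occurring or not with an assumed probability. Under risk, the full list
# of next states and their probability distribution is computable.
possible_exchanges = [
    ("A1->A2: C1C2", 0.5),   # assumed probability of this exchange occurring
    ("A3->A4: C3C4", 0.5),
]

distribution = {}
for outcome in product([True, False], repeat=len(possible_exchanges)):
    # A "state" is identified here by which exchanges actually took place.
    happened = tuple(name for (name, _), h in zip(possible_exchanges, outcome) if h)
    p = 1.0
    for (_, q), h in zip(possible_exchanges, outcome):
        p *= q if h else 1 - q
    distribution[happened] = p

assert abs(sum(distribution.values()) - 1.0) < 1e-9  # a proper distribution
# Under radical uncertainty, this enumeration is impossible by definition:
# the list of possible states is itself unknown.
```

Radical uncertainty is precisely the situation in which no such `possible_exchanges` list can be written down in advance.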
The first is a minimal revision of the beliefs of individual actors, which we will call a translation: it transforms each set of beliefs into a new set that strictly contains the previous one. Each set of beliefs is modified here only by a new representation-occurrence that in no way contradicts it, and the operator of this translation is therefore simply the "and" of classical logic. This process of minimal revision of the actors' beliefs does not imply any temporal disruption in the flow of their thoughts, and we can thus continue to identify their individual memories with the whole of their cognitive repertoires. When this is not the case, i.e. if the new representation-occurrence contradicts the actors' current set of beliefs, we must leave the framework of classical logic and enter the possible worlds semantics. Indeed, the process of restoring the coherence of their beliefs implies here that the actors abandon certain old beliefs and replace them with certain new ones. The "and" operator of classical logic is not an adequate tool for dealing with this type of process, because the process involves not only a gain but also a loss, so that the new set of beliefs does not strictly contain the previous one. This gain and this loss can have a more or less radical impact. They can result in particular from learning at levels 1 and 2, in which case we will speak of a weak transformation of the set of the actors' beliefs – an example of which is provided by the socio-cognitive processes at work within the framework of what T. S. Kuhn (op. cit.) called "normal science". Certainly, some combinations of psychological categories disappear here from the actors' memories to the benefit of others, and we must therefore distinguish the cognitive repertoires of the actors concerned from their semantic memories. But this recomposition of beliefs is carried out without major disruption to the mode of selection and organization of these combinations within the latter. Finally, the process of revising the beliefs of actors on receiving new information in the form of a representation-occurrence – new and contradictory to their current beliefs – can lead to a radical transformation of the latter. It is then a level 3 learning that is thus provoked, and it implies a disruption of the mode of selection and organization of the combinations of psychological categories composing the individual actors' memories. On the basis of the distinction between the cognitive repertoires and the semantic memories of individual actors, such a transformation can be illustrated by the socio-cognitive processes at work during a Kuhnian period of "extraordinary science"1.

10.1. The translation of beliefs

The notion of translation covers here a process of revising beliefs such that the set of old beliefs is strictly contained in that of the new beliefs. Let us call a paradigm

1 The Kuhnian model thus looks like an evolution in terms of punctuated equilibria, according to the theory and terminology introduced by Niles Eldredge and Stephen Jay Gould (1972), but it is not the only one in this respect: the child’s cognitive development would present the same evolutionary profile, made of sudden changes and longer periods of calm. Thus, until about five years of age, the child seems to conceive the functioning of the mind in the passive mode of an identical recording for all of the information received from the world; then, at that age, there would be a rapid change during which he would abandon this conception for an active vision of the functioning of the mind, the latter now appearing to him to construct in an idiosyncratic mode the information concerning the world. He would then acquire a “theory of mind” that he had previously totally lacked (Didierjean op. cit., pp. 116–122). More generally, this alternation of continuity and disruptions is that of any learning subject, as suggested by the hierarchical categorization of learning proposed by G. Bateson (see Chapter 1, section 1.2).


a set of beliefs shared by a set of actors. The grip of this paradigm can be partial (the set of beliefs considered is shared by a part of the population of actors) or total (this set of beliefs is shared by the entire population of actors). Let us take again the example of our network reduced to five actors2, whose initial state was represented on the basis of the matrix [aij]t0, as shown in Table 10.1.

      C1  C2  C3  C4  C5  C6  C7  C8  C9  C10 C11 C12 C13 C14 C15
A1     1   1   0   0   0   1   1   0   0   0   0   0   0   0   0
A2     0   1   1   0   0   0   0   1   1   0   0   0   0   0   0
A3     0   0   1   1   0   0   0   0   0   1   1   0   0   0   0
A4     0   0   0   1   1   0   0   0   0   0   0   1   1   0   0
A5     1   0   0   0   1   0   0   0   0   0   0   0   0   1   1

Table 10.1. The network, the place of five different potential paradigms in t0

We can identify here five different potential paradigms, whose respective partial grips each bring together two members: A1 and A2, A2 and A3, A3 and A4, A4 and A5, A5 and A1. Let us examine in particular the two paradigms to which A1 and A2 adhere, on the one hand, and A3 and A4, on the other hand. The first (noted P1) is centered on C2, and its different interpretations by A1 and A2 are formalized by the remainder of the respective cognitive repertoires of these two actors. The same is true for the second (noted P2), centered on C4, whose different interpretations are formalized by the remainder of the cognitive repertoires of actors A3 and A4. The cognitive base of each of these two paradigms thus consists of the union of the cognitive repertoires of its members, which includes a shared category (marking that it is indeed a paradigm gathering several actors) and idiosyncratic categories and combinations of categories (marking each member's individual freedom of interpretation of the paradigm concerned). By establishing the above-mentioned inter-individual communications on date t03, we know that the state of the network on date t1 is representable on the basis of the matrix shown in Table 10.2.

2 See Chapter 6, section 6.3.
3 It should be recalled that on this date, actor A1 transmits the message "C1C2" to A2, who simultaneously replies with the message "C2C3", and actor A3 transmits the message "C3C4" to A4, who simultaneously replies with the message "C4C5".
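Identifying such potential paradigms amounts to finding the pairs of actors whose rows of the matrix [aij]t0 share at least one category. A minimal sketch, with the matrix of Table 10.1 hard-coded as binary lists (the representation is ours, not the book's formalism):

```python
# The five-actor network at t0 as a binary actor-by-category matrix
# (rows A1..A5, columns C1..C15), transcribed from Table 10.1.
T0 = [
    [1,1,0,0,0,1,1,0,0,0,0,0,0,0,0],  # A1
    [0,1,1,0,0,0,0,1,1,0,0,0,0,0,0],  # A2
    [0,0,1,1,0,0,0,0,0,1,1,0,0,0,0],  # A3
    [0,0,0,1,1,0,0,0,0,0,0,1,1,0,0],  # A4
    [1,0,0,0,1,0,0,0,0,0,0,0,0,1,1],  # A5
]

def shared_categories(matrix):
    """Return, for each pair of actors, the categories both hold.

    A pair sharing at least one category is a candidate paradigm
    with a partial grip of two members."""
    n = len(matrix)
    pairs = {}
    for i in range(n):
        for j in range(i + 1, n):
            shared = [k + 1 for k in range(len(matrix[i]))
                      if matrix[i][k] and matrix[j][k]]
            if shared:
                pairs[(f"A{i+1}", f"A{j+1}")] = [f"C{k}" for k in shared]
    return pairs

print(shared_categories(T0))
# Five pairs, each sharing exactly one category:
# (A1,A2): C2, (A1,A5): C1, (A2,A3): C3, (A3,A4): C4, (A4,A5): C5
```

Running this recovers exactly the five partial grips of two members listed in the text.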


      C1  C2² C3  C4² C5  C6  C7  C8  C9  C10 C11 C12 C13 C14 C15
A1     1   1   1   0   0   1   1   0   0   0   0   0   0   0   0
A2     1   1   1   0   0   0   0   1   1   0   0   0   0   0   0
A3     0   0   1   1   1   0   0   0   0   1   1   0   0   0   0
A4     0   0   1   1   1   0   0   0   0   0   0   1   1   0   0
A5     1   0   0   0   1   0   0   0   0   0   0   0   0   1   1

Table 10.2. The network, the place of two different paradigms in t1
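The transition from Table 10.1 to Table 10.2 can be replayed as a small simulation. The sketch below (ours, with repertoires reduced to sets of category labels and the reinforcement phenomenon ignored) applies the exchanges of footnote 3 by simple set union, i.e. the classical conjunction.

```python
# Repertoires at t0 as sets of category labels (from Table 10.1);
# the C2²/C4² reinforcement of Table 10.2 is deliberately not modeled.
t0 = {
    "A1": {"C1", "C2", "C6", "C7"},
    "A2": {"C2", "C3", "C8", "C9"},
    "A3": {"C3", "C4", "C10", "C11"},
    "A4": {"C4", "C5", "C12", "C13"},
    "A5": {"C1", "C5", "C14", "C15"},
}

# The (sender, receiver, message) exchanges of footnote 3, at date t0.
exchanges = [
    ("A1", "A2", {"C1", "C2"}),
    ("A2", "A1", {"C2", "C3"}),
    ("A3", "A4", {"C3", "C4"}),
    ("A4", "A3", {"C4", "C5"}),
]

t1 = {actor: set(rep) for actor, rep in t0.items()}
for sender, receiver, msg in exchanges:
    t1[receiver] |= msg  # non-contradictory message: simple conjunction

print(sorted(t1["A1"]))  # ['C1', 'C2', 'C3', 'C6', 'C7']
```

Because no message contradicts any initial belief, every repertoire at t1 strictly contains its t0 counterpart, while A5, who exchanges nothing, is unchanged.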

Each of our two previous paradigms has thus been enriched with new beliefs, and it is clear that each new set of beliefs thus formed strictly contains the old one. The old paradigms are therefore translatable into the terms of the new ones (noted P'1 and P'2), in accordance with our definition of translation. The corresponding dynamic is, for example, that of scientific progress through the accumulation of knowledge. It is reflected here by the fact that the new beliefs are formalized by cognitive repertoires whose overall volume has doubled and whose shared central core has more than doubled, while being locally strengthened (categories C2 and C4 have been replaced by C2² and C4² respectively), compared with the initial cognitive repertoires. This dynamic formalizes the vision characteristic of the "classical" (pre-Kuhnian) philosophy of science that dominated epistemology until the "historicist turn" of the 1960s. Indeed, the Carnapian and Popperian visions converge on the notion of the reduction of one theory to another, which stipulates that the basic concepts of the former can be defined in terms of the basic concepts of the latter, and that the fundamental principles of the former are logically deducible from the fundamental principles of the latter. The latter is more general than the former, in the sense that all the knowledge contained in the former is also contained in the latter (Moulines 2006). In this translation/reduction dynamic, none of the messages received by the four actors between t0 and t1 contradicts their initial beliefs, so that these messages are integrated with these beliefs through the classical conjunction operation (the "and" operator) to form new beliefs. This type of evolution is therefore easily formalizable within the framework of classical logic applied to our model. On the contrary, this logic is unable to model changes in beliefs following the receipt of messages that contradict the initial beliefs.
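The translation property can be stated as a simple set-theoretic check: each actor's new repertoire strictly contains the old one, and the shared core of the paradigm grows. The sketch below verifies this for P1 (actors A1 and A2), using the repertoires read off Tables 10.1 and 10.2; the function name is ours.

```python
# Repertoires of A1 and A2 before (t0) and after (t1) the exchanges.
A1_t0, A2_t0 = {"C1", "C2", "C6", "C7"}, {"C2", "C3", "C8", "C9"}
A1_t1, A2_t1 = {"C1", "C2", "C3", "C6", "C7"}, {"C1", "C2", "C3", "C8", "C9"}

def is_translation(old, new):
    """A translation in the chapter's sense: every old belief is kept
    and something new is added (strict containment)."""
    return old < new  # Python's strict-subset comparison on sets

assert is_translation(A1_t0, A1_t1) and is_translation(A2_t0, A2_t1)
assert A1_t0 & A2_t0 == {"C2"}               # shared core of P1 at t0
assert A1_t1 & A2_t1 == {"C1", "C2", "C3"}   # core more than doubled at t1
```

Note that the strict containment holds at the level of each individual repertoire; this is what makes P1 translatable into P'1.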
In order to deal formally with this type of situation, it is necessary to move beyond the framework of classical logic to that of the possible worlds semantics, which allows us to formalize some fundamental principles of the revision of beliefs that may result from receiving messages contradicting the initial beliefs (Alchourron et al. 1985; Zwirn and Zwirn 2003).

10.2. Revisions of beliefs and the possible worlds semantics

The possible worlds semantics sets out several principles, two of which are of particular interest to us: (1) the message received – on the occasion of a social communication, or of the individual observation of an object – always takes precedence over the initial belief, and this message is not called into question at the end of the revision process; (2) the rational actor follows a principle of minimal change and seeks to preserve his most "rooted" beliefs. Introduced by W. Quine, the maxim of "minimal mutilation" stipulates that, in the presence of contradictory information, one modifies one's beliefs so as to restore their coherence by sacrificing the beliefs to which one is least attached. The rationale for the first principle is obvious: if the message received did not take precedence over the initial belief, the latter would have no reason to change, and the message would appear in the cognitive repertoire of the actor concerned, but not in his individual memory. As for the second principle, it reflects a psychological posture constantly confirmed by empirical observation of actors' behaviors.

The revision process in which beliefs are changed about an object that has not itself changed, but is now better known, is called rectification by logicians4. Its axiomatic part is based on six propositions, the first of which sets out the objective of belief revision theory, while the others express in various ways the notion of minimal change mentioned above. Let K be the initial belief, A the message and K*A the revised belief. The six axioms are then as follows:

(1) the axiom of "success" stipulates that the coherence of beliefs must be restored in the event of receiving a message contradicting the initial beliefs: K*A implies A, i.e. the revised belief must contain the message;
(2) the axiom of "conservation" says that K remains unchanged if A can be deduced from K;
(3) the axiom of "inclusion" says that K*A keeps the part of K that is compatible with A;
(4) the axiom of "preservation", attached to the axiom of "inclusion", says that if K and A are compatible, K*A is the conjunction of the two;
(5) the axiom of "sub-expansion" and (6) the axiom of "super-expansion" deal with two successive messages, extending the intuitions of inclusion and preservation.
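To fix intuitions, the first four axioms can be checked on a deliberately simple sketch in which beliefs are sets of signed literals and a message is a single literal. This literal-level reading of rectification is our illustration, not the formalism of Alchourron et al. (1985).

```python
# Toy belief revision: beliefs are signed literals such as "p" or "-p";
# a contradiction means the negation of the message is currently believed.
def neg(lit):
    """Negation of a signed literal: p <-> -p."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def revise(K, A):
    """K*A with minimal mutilation: drop only the belief that
    contradicts the message A, then add A itself."""
    return (K - {neg(A)}) | {A}

K = {"p", "q", "-r"}
assert "r" in revise(K, "r")            # success: K*A implies A
assert revise(K, "q") == K              # conservation: A already deducible
assert revise(K, "r") <= K | {"r"}      # inclusion: only compatible parts kept
assert revise(K, "s") == K | {"s"}      # preservation: compatible A, plain "and"
```

When the message is compatible with K, `revise` collapses to the classical conjunction, which is exactly the point made in section 10.1 about translations.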

4 For us, it is not a question of affirming that scientific objects have no history at all: they have at least one, which is that of our successive views of them. And it happens that the sometimes slow, hesitant and entangled process of transformation of these views responds to the slow, hesitant and entangled movement of their object. This is the case for the evolution of horse breeds as shown by the successive fossil ancestors of the modern horse, and for the history of our interpretations of this fossil succession (Latour 2007).


These last two axioms do not directly concern our purpose, which is to model revisions of beliefs following the reception of one message (or the observation of one object) at a time, the network correlatively changing its state. On the contrary, it is easy to verify that our modeling of the translation satisfies the first four axioms: (1) the cognitive bases of the paradigms P'1 and P'2 contain the messages exchanged between t0 and t1; (2) these bases remain unchanged (with the exception of the phenomenon of reinforcement, to which we will return) when the received messages are already contained in them; (3) these bases keep their parts compatible with the received messages – in fact, they keep all their parts here, since the initial beliefs in no way contradict the received messages; (4) these bases consist of the conjunction of the initial cognitive repertoires and the messages received. This rather trivial statement shows that the theory of rectification coincides with classical logic when the new information does not contradict the initial beliefs: as we pointed out above, it is because none of the messages received by the four actors between t0 and t1 contradicts their initial beliefs that these messages are integrated into those beliefs through the classical operation of conjunction to form new beliefs. Our notion of translation therefore fits perfectly with the axiomatic part of the theory of rectification. That said, some aspects of the semantic part of this same theory allow us to go beyond this particular type of evolution. This semantic part is based on the definition of a preference relationship, noted