Günther Palm
Neural Assemblies
An Alternative Approach to Classical Artificial Intelligence
Second Edition
Günther Palm Ulm University Ulm, Germany
ISBN 978-3-031-00310-3    ISBN 978-3-031-00311-0 (eBook)
https://doi.org/10.1007/978-3-031-00311-0
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 1982, 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Preface to the First Edition
You can’t tell how deep a puddle is until you step in it.
When I am asked about my profession, I have two ways of answering. If I want a short discussion, I say that I am a mathematician; if I want a long discussion, I say that I try to understand how the human brain works. A long discussion often leads to further questions: What does it mean to understand "how the brain works"? Does it help to be trained in mathematics when you try to understand the brain, and what kind of mathematics can help? What makes a mathematician turn into a neuroscientist? This may lead into a metascientific discussion which I do not particularly like because it is usually too far off the ground.

In this book, I take quite a different approach. I just start explaining how I think the brain works. In the course of this explanation, my answers to the above questions will become clear to the reader, who will perhaps learn some facts about the brain and get some insight into the constructions of artificial intelligence. This book is not a systematic treatment of the anatomy and physiology of the brain, or of artificial intelligence, or of the mathematical tools of theoretical biology; these subjects are only discussed when they turn up in the course of the argument.

After a brief introduction (Sect. 1.2), in Sect. 1.3 the course of the argument is laid in the direction of artificial intelligence. In Chaps. 2 and 3, I discuss the construction of a machine that "behaves well" or shows "intelligent behavior." An algorithm for such a machine is developed; it is called the "survival algorithm"; a possible embodiment of it in terms of neurons is used to stimulate specific questions about real brains. In this way, the course of the argument is led through neuroanatomy and neurophysiology (Chap. 4) to the modeling of neuronal networks as an attempt to get an idea of the flow of activity in the brain (Sect. 5.2). These considerations are finally combined (Sect. 5.3) with the constructive approach in the first part of the book to create a language in which one can talk of thoughts as states of activity in the brain, or as events that occur in the operation of a goal-oriented computer (the survival robot). The central term of this language is the term "cell assembly," which was introduced by Hebb (1949). This is followed by more extensive speculations based on this language (Sect. 5.4) and my personal view on its anthropological and philosophical implications (Chap. 7).
For those readers who believe that they will never understand mathematics, I should add the consolation that I have tried to keep the mathematical formalism out of the main argument, so that it should be readable and intelligible to anyone who has enough patience and interest in the subject. Section 2.2 is perhaps the most mathematical chapter of the book, and it can be skipped by those who accept the summary at its end. The digressions and appendices which follow the main text (still available online in the backmatter of the first edition) are meant to serve as an introduction to those branches of mathematics that I believe to be most useful for this attempt to understand the human brain. In the digressions, I have tried to present the mathematics in such a way that it can be understood also by readers with limited mathematical experience. The appendices are short self-contained monographs on a few theoretical issues that turn up in the main text.

Let me now answer the last of the introductory questions, "What makes a mathematician turn into a neuroscientist?", in a personal way, by just explaining how I came to write this book. Having finished my Ph.D. thesis at the mathematics department of the University of Tübingen in 1975, I started working at the Max Planck Institut für biologische Kybernetik. I worked on the theory of nonlinear systems, but at the same time I started to read about the brain. I learned several facts about brains and many different opinions or viewpoints on the interpretation of these facts. Quite soon I had to stop reading because I could not decide what to read next. This is not only because the literature on brains is tremendously large, but also because so many different problems opened up.

At the beginning, I had learned that the most important things in the brain are the neurons. They transmit and process the information. I also had an idea of how neurons work: how, for example, the electrical signal (called a "spike") is transmitted along the long output fiber(s) of a neuron. And I had learned that in our brain there are billions of these neurons, which are highly interconnected. But was that enough? What else in the brain might influence the information flow? Do all the neurons work roughly in the same way, i.e., like those few neurons that had been investigated, or could it be that they are more complicated or specialized?

I also looked at Golgi preparations of some mouse brains, where one can see different shapes and arrangements of neurons. It is important to realize that this is possible only because in Golgi preparations just a small percentage of all the neurons is stained, for otherwise you could not distinguish any one neuron, since they are fibrous, highly branched, and so closely and complicatedly interlaced that they fill nearly the whole space. Such observations again open up more questions than they answer. For example, questions pertaining to the different staining techniques: How does the Golgi technique select the few cells it stains? Does it stain the whole cell? These and many other technical problems even raised some doubts about the basic facts that I had learned up to that point, not to mention the enormous amount of much more detailed experimental results that I had not yet read (and that cannot all be read by any one person anyway).
The whole set of problems seemed to be far too serious for only a few years of reading and thinking about the brain. First, I had to find out what I really wanted to know. Was my basic knowledge already sufficient to explain this, and if not, where should I go into the details? I started to "think about thinking" again and I read some early works on artificial intelligence, for example, the little book by von Neumann (1958). From a simple argument based on pure logic, I understood that for any well-defined information processing, it is possible in principle to produce a device that performs it. And such a device can be built from neurons that all work in essentially the same way as those neurons that had been investigated experimentally (see Sect. 2.2). Moreover, if the brain is just a network of interconnected neurons, it is possible in principle to predict the total behavior from the dynamics of single neurons, if one knows how they are interconnected. So one has to investigate the dynamics of the single neurons in the brain and the connectivity between them, and one needs some mathematics to convert this kind of knowledge into predictions on the global flow of activity.

I started to read papers on "brain modeling," where the dynamics of large (usually randomly) interconnected networks of neurons were analyzed. Here I realized that probably the hardest part was the interpretation of the results (as statements concerning "behavior"). At the same time, I thought I should perhaps concentrate on a small part of the brain, and I chose the visual cortex, because there were so many data available on that region of the brain. But this led to several discouraging experiences. First of all, you cannot really understand the visual cortex unless you have a precise idea of how it is integrated in the working of the brain. Therefore, even for the visual cortex, it is hard to separate understanding of this part of the brain from understanding of the whole brain. This is what makes "knowledge" of the functioning of isolated parts of the brain so questionable. For example, it can be quite frustrating to contemplate what knowledge might really be expressed in the following figure (Gall's phrenological chart, left out for unresolved copyright issues).

Furthermore, if you start getting interested in details, you usually find that just the experiment you are interested in has not been performed, or at least those data that you really want to know are not reported (often for technical reasons that are hard to grasp if you do not work experimentally yourself). And you learn that data are not just given (as the word suggests); they are produced, selected, and usually contain a little bit of interpretation. Therefore, it is often not enough to read experimental papers; you also have to talk to the authors and try to find out which theory they have at the back of their minds. Often these ideas are not mentioned in the papers, because they are too hard to express or too easy to falsify. In personal communication, however, I could sometimes manage to discuss these ideas with the authors. In these discussions, it often happened that we agreed on a kind of "private language," which was invented in the course of the discussion, and which made it possible to discuss these ideas at all.
It may well be that my mathematical training was helpful for these discussions, although perhaps in a rather unexpected way. When you study mathematics, you learn several mathematical formalisms, like topology, vector spaces, groups, and Boolean algebras. But, more importantly, you quickly learn to adapt to new formalisms. This means that you learn to handle different notations, and you see how an adequate notation can make understanding much easier and often opens up a whole new field of (mathematical) research. In other words, in mathematics you acquire a high flexibility in the invention and use of new notation. And you learn that the only way of explaining a new idea is often to invent a new language, or at least to define new terms and use new notations. It may be a lack of this flexibility in the use of language and notation that often makes it impossible to explain the theories an experimenter has at the back of his mind when performing an experiment. Without this unwritten theoretical background, even apparently plain descriptive papers often turn out not to be really intelligible.

Let me use this remark to correct the common image of a mathematician as somebody who manipulates complicated formulas for several pages and finally arrives at some result that is of relevance to nobody except himself and his few fellow mathematicians. As I said above, in mathematics you learn to handle several formalisms. Such a formalism usually makes it possible to deduce logical consequences from given assumptions (axioms) by doing formal manipulations in a special notation. If you stick to one formalism, you can become an expert in the corresponding formal manipulations. This was a common situation historically, since the main field of application of mathematics was physics, and most problems in classical physics could be dealt with in the mathematical formalism of differential equations. But today, I think, the situation is different, and the main point in studying mathematics is not to become an expert in handling one formalism, but to learn quickly to adapt to and even to invent new formalisms. A mathematician should worry much more about the "translation" between the actual problem and its representation in the mathematical formalism, although today this is often not yet regarded as part of mathematics.

For a few years, I was engaged in several parallel activities: reading the experimental literature on the visual cortex, reading neuron network theories, and still working on the theory of nonlinear systems. During that time, my ideas on the brain were strongly influenced by V. Braitenberg's conception of the cerebral cortex in connection with Hebb's idea of "cell assemblies." In the autumn of 1978, I decided that I should try to fix my own ideas on the brain in terms of a model that shows "intelligent" behavior on the basis of simple neuron-like elements; I even dared to test these ideas by giving a series of lectures at the University of Tübingen. When I prepared these lectures, I found a didactical vehicle that greatly facilitated the explanation (at least from my point of view): this was the matchbox algorithm, a simple algorithm that clearly learns to play (in principle) unbeatable chess (it is explained in Sect. 2.3 and is, in fact, rather primitive). This book is based on these lectures. I regard it as a starting point for further reading and thinking about the brain.
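The matchbox idea itself is easy to convey in code. The sketch below is not the algorithm of Sect. 2.3, only a minimal illustration of the same principle (in the spirit of Michie's matchbox learner), applied here to a toy game of Nim: a pile of five stones, each player removes one or two, and whoever takes the last stone wins. One "matchbox" of beads per game state, moves drawn in proportion to bead counts, beads added after a win and removed after a loss. The game, the function names, and the parameters are mine, chosen only for illustration.

```python
import random

TAKE = (1, 2)   # legal moves: remove one or two stones
START = 5       # stones on the pile at the start of each game

def legal(state):
    return [m for m in TAKE if m <= state]

def train(boxes, games=5000, rng=None):
    """Train a matchbox learner against a random opponent.

    `boxes` maps each game state (stones left) to a dict of bead counts
    per move. Moves are drawn with probability proportional to beads;
    after the game, every bead that was drawn gains a twin bead for a
    win and loses one (never below 1) for a loss.
    """
    rng = rng or random.Random(0)
    for _ in range(games):
        state, history = START, []
        while True:
            beads = boxes.setdefault(state, {m: 1 for m in legal(state)})
            move = rng.choices(list(beads), weights=list(beads.values()))[0]
            history.append((state, move))
            state -= move
            if state == 0:                      # learner took the last stone
                reward = 1
                break
            state -= rng.choice(legal(state))   # random opponent replies
            if state == 0:                      # opponent took the last stone
                reward = -1
                break
        for s, m in history:                    # reinforce the moves played
            boxes[s][m] = max(1, boxes[s][m] + reward)
    return boxes
```

After a few thousand games, the box for the state with two stones left is dominated by the winning move (take both stones); the learner has "discovered" this without any model of the game, which is exactly the primitiveness and the charm of the matchbox approach.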
I am grateful to Valentino Braitenberg, who prompted me to begin this book, and to my mother, who helped me get it finished. She typed and retyped most of the manuscript. I am obliged to many colleagues at the Max Planck Institut für biologische Kybernetik, especially to Almut Schüz and Christian Wehrhahn, who critically read the manuscript, and to Ladina Ribi and Claudia Martin-Schubert, who prepared the figures.
Preface to the Second Edition
When I was 25 and had almost finished my Ph.D. in mathematics on the entropy of dynamical systems, I started to look for something more practical and concrete to work on, where I could hopefully use some of the things I had learned in mathematics. It was in this period that Valentino Braitenberg convinced me that brain research, or neuroscience, was one of the most interesting subjects to study and definitely in need of a mathematical theory. And clearly the idea of developing a theory of thinking (for example, thinking about a theory of thinking...) was very attractive. So I quickly finished my Ph.D. and took his offer to work at the MPI for Biological Cybernetics in Tübingen.

There I started by discussing the projects of some of the other researchers, both in anatomy and physiology of brains, trying to help them with data analysis and statistics, and slowly developing my own topic and first approach to some kind of brain theory. It turned out to be a "computational" approach, although this word was not used at that time. For me such an approach seemed completely natural, because at the MPI I was surrounded by people who had a similar attitude, more or less because this is what cybernetics stands for. I was working in the group of Valentino Braitenberg, who was mostly concerned with quantitative anatomy of the mouse cortex and speculations on potential functional consequences or interpretations, which we discussed frequently at lunch with Almut Schüz and later also with Manfred Fahle and Ad Aertsen. Only a few meters away was the group of Werner Reichardt, working on the fly brain, in particular on visual flight control, which was directly amenable to behavioral experiments. There I cooperated intensely with Tomaso Poggio on system identification and later also with David Marr (who often visited the MPI when he was in Europe) on the visual system.

All this created a unique atmosphere in the institute, which also attracted a number of bright students, mostly from physics or biology, among them Christoph Koch, Tobias Bonhoeffer, and Michael Erb. My heroes were the heroes of cybernetics, and some others that I probably inherited from Valentino and from my interest in probability, information theory, and visual psychophysics, namely Norbert Wiener, Warren McCulloch, Ramón y Cajal, Donald Hebb, Claude Shannon, Andrei Kolmogorov, and Bela Julesz. I was lucky to be able to work in this entertaining atmosphere, but at the same time I reserved one day a week (Thursdays) to "visit the mathematicians,"
i.e., the meetings and seminars of my thesis adviser Rainer Nagel, where I could discuss problems of pure mathematics which had to do with a very abstract treatment of deterministic and stochastic dynamical systems in terms of functional analysis (Banach spaces and semigroups of operators).

By 1981, I had worked quite successfully on various, often biologically motivated problems that could still be considered mathematical, and I had started some cooperations with people working in more experimental neuroscience at the MPI. But somehow I still had to make up my mind which subject would be most fruitful for me, offering a promising long-term perspective to stay in the research community, to get paid, and to find my own theoretical approach to a better understanding of the brain. Valentino's suggestion to write a book about our ideas and my own achievements on the further development of Hebb's cell assemblies helped me to make these thoughts more concrete and eventually to arrive at two particular sets of ideas, one more mathematical and one more in the spirit of computational brain research, which became the center of most of my research and teaching until today.

Looking back at that time, I can now probably see the formation of these ideas more clearly. The mathematical idea was concerned with a generalization of Shannon's information theory which might be useful in neuroscience (first formulated and published in Biological Cybernetics in 1981, and completed in a book only in 2012). The basic idea for my own approach to brain theory, which was first explained in the 1982 book (the first edition), was to use the mathematical theory of associative memory (which is summarized and freely available as an appendix of the first edition) to understand the neural representations formed in various cortical areas in terms of cell assemblies, and their use in learning concepts and in solving interesting computational problems that may be involved in surviving.

There were essentially two central ideas in my approach that distinguished it from others: (1) the efficiency of sparse representations and (2) the use of pattern mapping (established by hetero-association) as a universal computational primitive in "associative computers". Both assume, of course, that the cortex can be described as a network of associative memories, and that there are Hebb synapses everywhere in the cortex, which was a rather strong hypothesis at that time. Based on this "associative" view of the cerebral cortex, I could develop a picture of the process of thinking as a sequence (or rather sequences) of hetero- and auto-associations running (in parallel) in several associatively connected cortical areas. This picture corresponded well with Valentino Braitenberg's ideas on the organization of cortico-cortical connections and with our common introspective exercises at lunchtime.

In order to realize my computational ideas in the early 1980s, I had to get involved with the development of a special computer hardware architecture. So I conceived and actually built my own fully parallel associative computer, which was called PAN (parallel associative network); it could also have been called PALM (parallel associative learning matrix), acknowledging the pioneering work of Steinbuch. We built three prototypes: PAN 1 was built by Tobias Bonhoeffer, PAN 2 by Michael Erb, and PAN 3 in a joint project with Prof. Goser and
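These two central ideas can be made concrete in a few lines of code. The following is only an illustrative sketch, not the book's formal treatment (for that, see the appendix of the first edition); the function names, sizes, and sparseness parameters are my own choices. It shows a binary associative memory in the Willshaw/Steinbuch style: sparse binary pattern pairs are stored by clipped Hebbian learning, and hetero-association retrieves the output pattern from the input pattern by a thresholded matrix-vector product.

```python
import numpy as np

def store(pairs, n_in, n_out):
    """Clipped Hebbian learning (Willshaw/Steinbuch style): the synapse
    W[i, j] is switched on iff input unit j and output unit i were ever
    active together in one of the stored pattern pairs."""
    W = np.zeros((n_out, n_in), dtype=int)
    for x, y in pairs:
        W |= np.outer(y, x)
    return W

def recall(W, x):
    """Hetero-association: count the switched-on synapses of the active
    input units and fire every output unit that reaches the number of
    active inputs (the simplest threshold-control rule)."""
    return (W @ x >= x.sum()).astype(int)

# Illustration: store five pairs of sparse binary patterns and
# retrieve each output pattern from its input pattern.
rng = np.random.default_rng(0)

def sparse_pattern(n=64, k=4):
    """A random binary pattern with only k of n units active."""
    v = np.zeros(n, dtype=int)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

pairs = [(sparse_pattern(), sparse_pattern()) for _ in range(5)]
W = store(pairs, 64, 64)
```

The sparseness is what makes this efficient: with only 4 of 64 units active per pattern, the matrix stays far from saturation, so many such pairs can be stored before retrieval errors appear, and a sequence of such mappings can be chained from area to area.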
Dr. Rückert (see also Chap. 5). Unfortunately, these computers have not been used by other people except ourselves. This is because our enterprise, and most other enterprises towards massively parallel computation, were frustrated during the 1990s by Moore's law, and only now are these ideas starting to flourish again.

In the neuroscience community, the computational ideas presented in my book were actually far from the mainstream, as I realized when visiting some of the neuroscience conferences during the 1980s. To some degree, this was even the case in theoretical neuroscience, which was mostly biophysics. Essentially it was (and still is) an open question what the various communities or individuals expect from a theory in brain research or neuroscience, and what it could mean for such a theory to be "computational." One can even perceive a fine difference between these two areas: brain research ultimately aims at the understanding of human cognitive functions, where animals, in particular "higher" mammals, are considered as more or less appropriate "models" for humans, whereas neuroscience is driven by the much broader, typically biological interest in understanding the working of many different creatures or neural subsystems for their own sake, e.g. studying flight control in flies, which is of no direct relevance for human behavior, not even for pilots.

When I first went to the German neurobiology conferences, usually with a poster, I found myself in some corner with a small group of theoreticians who were considered as mostly harmless freaks displaying their crazy ideas on the posters. In front of our theoretical posters, we could talk to other theoretically interested people, but only to very few of the real neuroscientists. Visiting the real neuroscience posters, however, I could talk directly to the pre- and post-docs, who were eager to explain their work and findings. The atmosphere appeared to be much more open than what I had experienced at mathematics conferences. Most of the younger people shared the motivation of achieving some understanding of how the brain might work; it could be that of a fly, a mouse, a cat, or a monkey, it could be concerned with vision, audition, or olfaction, but there was a common motivation. However, very often they were preoccupied with particular details, problems, or advances of the technique needed for experimentation and measurement, which could only be fully understood by those specialists who worked on the same (or a very similar) animal species and sensory or motor system with the same type of recording. Often they were slightly surprised by questions about the next stages of the brain, where the signals of their "pet" neurons were presumably sent, and about the ultimate use for the behavior of the whole animal, but they were open to discussing such things.

Over the years, this spirit of combining highly specialized experiences from different backgrounds and communities (maybe not in everyday work, but at least in some conferences and communications) has become more of the mainstream in neurobiology, neuroscience, and brain research. It has resulted in the establishment of new combined research topics such as computational or cognitive neuroscience, where ideas from computer science, psychology, and neuroscience can be combined to eventually understand the working of brains. In my impression, many of these new fields were initiated in the 1980s, and my book in 1982 was perhaps one of the first attempts to lay out a theoretical research program heading in this general direction. It mostly contains speculations about perception, learning, and thinking
cast in the language of Hebb's cell assemblies, and questions aimed at the neurosciences, all based on a compact collection of the basic facts about neurons and brains that were known at that time and fortunately have not changed significantly. Meanwhile, a lot of detailed knowledge has been added to this in the neurosciences in general, and also in the new disciplines of neurocomputing, computational neuroscience, and cognitive neuroscience, which are aiming in slightly different directions but roughly overlap with each other and with the ideas outlined in my book.

While many of my ideas about perception and learning have been deeply elaborated in recent years and are now textbook knowledge, not much has been added to my speculations about the process of thinking. One reason may be that I tried to understand the process of problem-solving at the level of really hard cognitive problems, which are normally not considered in neuroscience, definitely require the interaction of many cortical areas, and may even ask for "symbolic reasoning". Today, motivated by recent advances in "deep learning", many young scientists, both from computer science and from neuroscience, are again working in this direction, and perhaps some of the ideas developed in my old book can further this motivation.

Ulm, Germany
Günther Palm
About This Book
The first five chapters and Chap. 7 contain the basic ideas developed in the first edition in 1982. Each chapter contains two or three of the original chapters framed between an introduction and perspectives from today's point of view. In some places, comments or corrections based on today's knowledge are inserted, when necessary.

    They are indicated like this.
In Chap. 6, these ideas are further elaborated in a computational model for language understanding that was developed in our institute in Ulm during the early years of this century. The last chapters (Chaps. 7–11) put the model and the computational ideas developed in the first 6 chapters into a broader context of philosophy, computational theory, neuroscience, cognitive science, and society: Chap. 7 from the perspective of 1982, the following chapters from today's perspective.

Like the first edition, this book is aimed at a broader non-expert readership. However, the style of presentation and the amount of elaborations and illustrations have changed a bit due to stricter copyright regulations and the contemporary common use of the Internet, which was non-existent in the early 1980s. In the new edition, I have left out some figures which have become "classical" and some brief introductory explanations of special topics, like mathematical notation concerning sets and functions (mappings), basic information theory, and neural ionic channels, which can today be looked up easily on the Internet. The interested reader can also find those explanations in the so-called Digressions in the freely available "back matter" of the first edition, which also has some "Appendices" containing, in particular, brief expositions of my early mathematical work on associative memory and cell assemblies.
Contents

Part I  Basic Facts and Ideas for a Brain

1  The Brain Is an Organ for Information Processing
   1.1  Introduction
   1.2  The Flow of Information
   1.3  Thinking Seen from Within and from Without
   1.4  Further Developments
   References

2  The Organization and Improvement of Behavior
   2.1  Introduction
   2.2  How to Build Well-Behaving Machines
   2.3  Organizations, Algorithms, and Flow Diagrams
   2.4  The Improved Matchbox Algorithm
   2.5  Further Comments
   References

3  A Neuronal Realization of the Survival Algorithm
   3.1  Introduction
   3.2  The Survival Algorithm as a Model of an Animal
   3.3  Specifying the Survival Algorithm
   3.4  Further Developments
   Appendix: Local Synaptic Rules
   References

4  On the Structure and Function of Cortical Areas
   4.1  Introduction
   4.2  The Anatomy of the Cortical Connectivity
   4.3  The Visual Input to the Cortex
   4.4  Changes in the Cortex with Learning
   4.5  Further Developments
   References

Part II  Thinking in Cell Assemblies

5  Cognitive Mental Processes Realized by Cell Assemblies in the Cortex
   5.1  Introduction
   5.2  From Neural Dynamics to Cell Assemblies
   5.3  Introspection and the Rules of Threshold Control
        5.3.1  Thinking in Terms of Cell Assemblies (Hebb 1949, 1958) and Threshold Control (Braitenberg 1978)
   5.4  Further Speculations
   5.5  Further Developments
   Appendix: Cell Assemblies: The Basic Ideas
   References

6  A Model of Language Understanding by Interacting Cortical Areas
   6.1  Introduction
   6.2  The Cortical Machinery for Language Understanding
   6.3  Sentence Understanding by the "Language Modules"
   6.4  Disambiguation
   6.5  Discussion and Perspectives
   Appendix: Global Computation in an Associative Network of Cortical Areas
   References

7  Can We Accept a Mechanistic Description of Our Cognitive Mental Processes?
   7.1  Introduction
   7.2  Men, Monkeys, and Machines
   7.3  Why All These Speculations?
   References

Part III  Further Developments Until Today

8  Developments in Computer Science and Technical Applications
   8.1  Learning in Artificial Neural Networks
   8.2  Applications and Architectures
   8.3  How Is All This Related to This Book?
   References

9  New Results from Brain Research and Neuroscience
   9.1  Experimental Advances
   9.2  How Has All This Affected the Picture I Developed in 1982?
   9.3  Towards a Bigger Picture
   Appendix: Computational Cognitive Neuroscience
   References

10  The Development of Brain Theory
    10.1  Introduction
    10.2  Theory and Observation
    10.3  Particular Issues with Modeling in Biology and Neuroscience
    10.4  Large-Scale Brain Modeling and Simulation
          10.4.1  Statistical Approach
          10.4.2  Technological Approach
          10.4.3  Computational Approach
    10.5  Large Computational Models in Cognitive Neuroscience
    References

11  Do We Need Cognitive Neuroscience?
    11.1  Cognitive Neuroscience and Humanity
    11.2  Human–Computer Interaction and Artificial Companions
    11.3  Artificial Autonomous Agents
    References
. . . . .
251 251 253 255 258
About the Author
Günther Palm began his studies of mathematics at the University of Hamburg and received his Ph.D. from the Eberhard Karls University of Tübingen in 1975 with a thesis on “Entropie und Generatoren in dynamischen Verbänden”, supervised by Prof. Dr. Rainer Nagel. From 1975 to 1988 he worked as a research assistant at the Max Planck Institute for Biological Cybernetics, Tübingen, on topics of quantitative neuroanatomy, information theory, nonlinear systems theory, associative memory, and brain theory. During that time, he spent one year (1983/1984) in Berlin as a fellow of the Wissenschaftskolleg. In 1988, he became professor for theoretical brain research at the University of Düsseldorf, and from 1991 he was the director of the Institute of Neural Information Processing at Ulm University. He retired in 2016 and has been working part-time on neural data analysis at the Forschungszentrum Jülich since 2017. Professor Palm’s research focuses on information theory, neural networks, associative memory, and specifically on Hebbian cell assemblies. By 2015, he had published more than 300 peer-reviewed articles in international journals and 60 invited contributions, and had (co-)edited 8 books. He is the author of the monographs “Neural Assemblies: An Alternative Approach to Artificial Intelligence” (1982) and “Novelty, Information and Surprise” (2012).
Part I Basic Facts and Ideas for a Brain
1  The Brain Is an Organ for Information Processing
Abstract
This chapter describes the flow of information from the senses through the brain to the motor system. It introduces the information-processing approach to the brain, which aims to combine methods from information theory, information processing, and computer science with experimental neuroscience. This approach is commonplace today, but in 1982 interdisciplinary research on neural information processing, artificial neural networks, or neural computation was only just forming.
1.1  Introduction
The brain is an enormously complex structure that cannot simply be understood from first principles, because it is the result of a long biological evolution. So I took the constructive approach, i.e., I set out to construct a brain as an organ for information processing that enables an organism to survive in a complex world. This approach is motivated in the first section, where I describe the flow of information through the organism that has to be processed or transformed in the brain. Despite its apparent simplicity, Fig. 1.1 (below) already predetermines to a large degree the approach I am taking: today it is called the computational approach; in 1982, it was still in its infancy. At that time, I would have said that my approach is motivated by information-theoretical questions concerning the flow, storage, and processing of information. The essential point is that by focusing on the middle box in Fig. 1.1 we can disregard the detailed construction of the body, including much of the intricate machinery of motor control and sensory processing that provides useful representations of the input and output, and concentrate on the central problem of mapping the current input to the most appropriate output.
# The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 G. Palm, Neural Assemblies, https://doi.org/10.1007/978-3-031-00311-0_1
1.2  The Flow of Information
For sheer complexity the cortex probably exceeds any other known structure.—D.H. Hubel and T.N. Wiesel (1977)
In the brain the information upon which we act comes together: visual, acoustical, olfactory, and tactile information about the outside world, as well as information on our own state of motion (proprioception from our muscles and joints) and emotion (e.g., the hormonal concentrations in our blood, the condition of our inner organs and glands). On the basis of this information our reaction for this moment is programmed, or sometimes a sequence of reactions is planned for the future. Thus, our brain may be regarded as the middle box in the following simple scheme of the information flow in an organism (Fig. 1.1).

Two caveats should be kept in mind with respect to Fig. 1.1:

(a) Carrying out movements is itself a fairly complicated task; therefore, there are sensory-motor servo mechanisms built into the output interface. Some of them are well known as reflexes, like the pupil reflex or the (knee) tendon reflex.

(b) It is important to notice that the middle box, the brain, provides ample possibilities for internal loops in the information flow, which are not mediated through interaction with the outside world.

Within the nervous system, information is transmitted by the nerve cells or neurons. Let me give a rough description of the shape and functioning of a single neuron (see Fig. 1.2). It has an input tree (dendrites) and an output tree (axon and its collaterals). Incoming signals in the input tree (postsynaptic potentials) are weighted and added
[Figure: three boxes labeled input interface, brain, and output interface, embedded in the environment]
Fig. 1.1 Information flow between an organism and its environment
Fig. 1.2 (a) A neuron (Golgi preparation) (Braitenberg 1978). (b) A synapse at about 200-fold higher magnification; a: presynaptic axon, d: postsynaptic dendrite (Courtesy Dr. A. Schüz)
en route to the origin of the axon, where an output signal is generated: a spike (or a burst of spikes). Its occurrence (or intensity) is therefore a function of a weighted sum of the incoming signals. The output signal runs through all axonal branches, reaching the synapses which connect the axon to the dendritic trees of other neurons. Then it passes the synapses and is transformed into the new input signal of the adjacent neurons. This new input signal can be positive or negative (excitatory or inhibitory).

In the human brain there are about 20 billion neurons, and on average one neuron has about 10,000 synapses distributed over its dendritic tree; of course, the average number of synapses on the axon of one neuron must then again be about 10,000. Thus, a human brain contains on the order of 10^14 synapses.^1

The neurons whose axons provide the input to the brain in some cases themselves transform their physical input signal into neuronal excitation in their axon (muscle-stretch receptors, pain receptors, heat or pressure receptors); in other cases they are connected to specialized receptor cells (transforming optical signals in the retina of the eye, acoustical ones in the cochlea of the ear, or olfactory ones in the chemoreceptors of the nose or mouth). At the end of the output (efferent) axons there are special synaptic junctions to the executing organs, e.g., the motor endplates, where neuronal excitation is transformed into contraction of the muscle. The afferent (or efferent) axons usually enter (or leave) the brain in bundles, called nerves, which consist of up to 10^6 fibers.

The following diagram (Fig. 1.3) contains a crude subdivision of the brain, distinguishing the more “central” parts from those mainly concerned with preprocessing of the input or postprocessing (organizing) of the output. From the numbers given in Table 1.1 we can roughly estimate the information-transmitting capacity of the various channels by multiplying the number of axons in such a channel by the amount of information one neuron can maximally transmit. In fact, the estimation of this last number is itself a highly debated subject, and the concrete estimates given in the literature (cf. Abeles and Lass 1975; Wall et al. 1956; Holden 1976) vary widely. Just to fix ideas, let us say it is 100 bits/s. The information-transmitting capacity only gives an upper bound on the actual information flow passing through these channels under natural conditions, since this depends on the input to the channel (compare the estimates of the actual information flow through a single axon given in Eckhorn et al. 1976).
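The weighted-sum-and-threshold picture of a single neuron described above can be sketched in a few lines of code. This is a minimal illustrative model, not a physiological one: the weights, threshold, and input values below are made up for the example, and the binary output stands in for the spike (or burst of spikes).

```python
def neuron_output(inputs, weights, threshold):
    """Return 1 (spike) if the weighted sum of inputs reaches the threshold.

    Excitatory synapses correspond to positive weights,
    inhibitory synapses to negative weights.
    """
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Two excitatory inputs active: 0.6 + 0.5 = 1.1 >= 1.0, so the neuron fires.
print(neuron_output([1, 1, 0], [0.6, 0.5, -0.4], 1.0))  # 1

# One excitatory and one inhibitory input: 0.6 - 0.4 = 0.2 < 1.0, no spike.
print(neuron_output([1, 0, 1], [0.6, 0.5, -0.4], 1.0))  # 0
```

This is essentially the classical McCulloch–Pitts abstraction of the neuron; the rest of the book's argument needs no more detail than this.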
By psychophysical experiments one could try to determine the information flow through a whole input channel under natural conditions, but here one encounters yet another problem. Let us take the visual input channel as an example. If the capacity of the visual channel is estimated on the basis of spatial and temporal resolution in perceptual experiments, a value can be obtained which comes close to the capacity estimated from Table 1.1 (on the order of 10^8 bits/s). However, if a picture is briefly presented to a person who is afterwards given time to describe it, we see that the information retained about the picture, even under optimal conditions, is much less than the capacity would predict. Therefore, the bottleneck for the information flow in this experiment is neither in the visual input channel nor in the speech output channel (since the person was given enough time to describe the picture), but in between. Of course, the visual information has to be recoded before it reaches the speech output channel. It also has to be stored for a short time in order to be expressed verbally a little later. In addition, one can assume that in order to fit into the memory,

^1 Today one would say almost 10^15.
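The order-of-magnitude arithmetic behind these estimates can be made explicit. The sketch below is only illustrative: the function name is mine, and the 100 bits/s per-fiber rate is the text's "just to fix ideas" value, since the estimates in the literature vary widely.

```python
def channel_capacity(num_fibers, bits_per_fiber_per_s=100):
    """Upper bound on the information flow through a nerve, in bits/s."""
    return num_fibers * bits_per_fiber_per_s

# A large input channel of ~10^6 fibers gives an upper bound of ~10^8 bits/s,
# matching the visual-channel estimate in the text.
print(channel_capacity(10**6))  # 100000000

# The synapse count mentioned earlier follows the same kind of arithmetic:
# ~2 x 10^10 neurons, each with ~10^4 synapses.
synapses = 2 * 10**10 * 10**4
print(f"{synapses:.0e}")  # 2e+14
```

The point of the picture-description experiment above is precisely that the actually retained information is many orders of magnitude below this 10^8 bits/s upper bound.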
[Figure: boxes labeled cortex, thalamus, striatum, tectum, cerebellum, and output organization, with their input, output, and internal connections]
Fig. 1.3 Inputs (from the input interface of Fig. 1.1) and outputs (to the output interface of Fig. 1.1) are shown by distinct symbols. The connections in this figure consist of at least several thousand and at most 10^7 = 10 million fibers, except for the following connections:
1. The number of cortico-cortical fibers is about 10^10 (since there are about 10^10 pyramidal cells that have cortico-cortical axons).
2. The number of fibers between thalamus and striatum (in both directions) is less than 10^8 (this estimate is based on usual fiber densities in fiber tracts and the total surface of the thalamus, which is 20 cm²).
3. The number of fibers between thalamus and cortex (in both directions) is less than 10^8 (same argument as in 2).
4. The number of fibers between striatum and cortex (in both directions) is less than 3 × 10^8 (analogous argument; the total surface of the striatum is 100 cm²).
5. The number of fibers between cerebellum and the output organization centers is less than 3 × 10^7 (from the thickness of the six cerebellar peduncles, which is 60 cm²).
6. The direct (olfactory) input to the cortex has almost 10^8 fibers (e.g., Noback and Demarest 1967).
Table 1.1 Estimated order of magnitude of the number of fibers in various connections^a
Connection                           Number of fibers
Cortex → cortex                      10^10
Input → cortex                       10^7
Thalamus → cortex                    10^6–10^7
Cortex → striatum                    10^6–10^7
Thalamus → striatum                  10^6–10^7
Striatum → thalamus                  10^6–10^7
Cerebellum → output organization     10^6–10^7
Output organization → cerebellum     10^6–10^7
Input → cerebellum                   10^6
Cortex → cerebellum                  10^6
Cerebellum → thalamus                10^6
Input → thalamus                     10^6
Cortex → output                      10^6
All other connections in Fig. 1.3