No Matter, Never Mind: Proceedings of Toward a Science of Consciousness: Fundamental Approaches, Tokyo 1999 [1 ed.]
ISBN 9789027297914, 9789027251534
English, 407 pages, 2002

No Matter, Never Mind

Advances in Consciousness Research

Advances in Consciousness Research provides a forum for scholars from different scientific disciplines and fields of knowledge who study consciousness in its multifaceted aspects. Thus the Series will include (but not be limited to) the various areas of cognitive science, including cognitive psychology, linguistics, brain science and philosophy. The orientation of the Series is toward developing new interdisciplinary and integrative approaches for the investigation, description and theory of consciousness, as well as the practical consequences of this research for the individual and society.

Series B: Research in progress. Experimental, descriptive and clinical research in consciousness.

Editor
Maxim I. Stamenov, Bulgarian Academy of Sciences

Editorial Board
David Chalmers, University of Arizona
Gordon G. Globus, University of California at Irvine
Ray Jackendoff, Brandeis University
Christof Koch, California Institute of Technology
Stephen Kosslyn, Harvard University
Earl Mac Cormac, Duke University
George Mandler, University of California at San Diego
John R. Searle, University of California at Berkeley
Petra Stoerig, Universität Düsseldorf
† Francisco Varela, C.R.E.A., Ecole Polytechnique, Paris

Volume 33

No Matter, Never Mind: Proceedings of Toward a Science of Consciousness: Fundamental Approaches (Tokyo ’99)
Edited by Kunio Yasue, Mari Jibu and Tarcisio Della Senta

No Matter, Never Mind
Proceedings of Toward a Science of Consciousness: Fundamental Approaches (Tokyo ’99)

Edited by
Kunio Yasue
Mari Jibu
Tarcisio Della Senta
Notre Dame Seishin University / The United Nations University

John Benjamins Publishing Company
Amsterdam/Philadelphia


The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ansi z39.48-1984.

Library of Congress Cataloging-in-Publication Data

No Matter, Never Mind: Proceedings of Toward a Science of Consciousness: Fundamental Approaches (Tokyo ’99) / edited by Kunio Yasue, Mari Jibu, Tarcisio Della Senta.
p. cm. (Advances in Consciousness Research, ISSN 1381-589X; v. 33)
Includes bibliographical references and index.
1. Consciousness--Congresses. I. Yasue, Kunio. II. Jibu, Mari. III. Della Senta, Tarcisio. IV. Series.
QP411.N598 2001
612.8’2-dc21
ISBN 90 272 51533 (Eur.) / 1 58811 0958 (US) (Pb; alk. paper)

2001043856

© 2002 – John Benjamins B.V.
No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.
John Benjamins Publishing Co. · P.O. Box 36224 · 1020 ME Amsterdam · The Netherlands
John Benjamins North America · P.O. Box 27519 · Philadelphia PA 19118-0519 · USA




Table of contents

Sponsors and Supporters of Tokyo ’99 (p. ix)
Preface (p. xi)
Tokyo ’99 Declaration (p. xv)
Brain and quantum holography: Recent ruminations (Karl H. Pribram) (p. 1)
The mind-body and the light-matter (Mari Jibu) (p. 13)
Dissipative quantum brain dynamics (Giuseppe Vitiello) (p. 43)
What do neural nets and quantum theory tell us about mind and reality? (Paul J. Werbos) (p. 63)
Mathematics and the mind (Edward Nelson) (p. 89)
Upwards and downwards causation in the brain: Case studies on the emergence and efficacy of consciousness (Francisco J. Varela) (p. 95)
The importance of experience: Where for the future? (Brian D. Josephson) (p. 109)
Cascade hypothesis of brain functions and consciousness (Gyo Takeda) (p. 113)
Neural correlates of visual working memory for motion (Naoyuki Osaka) (p. 127)
Ontological implications of quantum brain dynamics (Gordon G. Globus) (p. 137)
On focus and fringe in explicit mental processing (Maxim I. Stamenov) (p. 145)
Will: A vague idea or a testable event? (C. Taddei-Ferretti, C. Musio, S. Santillo and A. Cotugno) (p. 155)
Binding and dysbinding: Ideas concerning the binding problem and a theory on motion sickness (Arne Tribukait) (p. 167)
Intrinsic contextuality as the crux of consciousness (D. Aerts, J. Broekaert and L. Gabora) (p. 173)
Perspective changes affect attentional access to conscious experience (Ruediger Oehlmann) (p. 183)
Constructing pain: How pain hurts (Yoshio Nakamura and C. Richard Chapman) (p. 193)
Neuronoid as the coincidence detector: A new model of neuron with ‘ongoingness’ property (Hiroaki Inayoshi, Toshio Tanaka, Kenji Nishida and Tohru Nitta) (p. 207)
Accumulation of rapid and small synaptic increase as a basis for implicit memory (Osamu Hoshino, Satoru Inoue, Yoshiki Kashimori and Takeshi Kambara) (p. 217)
What is the self of a robot? On a consciousness architecture for a mobile robot as a model of human consciousness (Tadashi Kitamura) (p. 231)
Apparent “free will” caused by representation of module control (Natsuki Oka) (p. 243)
S2 axiomatic system: A new geometrical system to maintain the qualia of words (Koichiro Hajiri) (p. 251)
Reactivity of human cortical oscillations reflecting conscious perception in binocular rivalry (Tetsuo Kobayashi and Kazuo Kato) (p. 261)
Experimentally induced verbal slips in Japanese: Evidence from a phonological bias technique (Akie Saito and Satoru Saito) (p. 273)
A basic neural mechanism for acoustic imaging (Satoru Inoue, Manabu Kimyou, Yoshiki Kashimori, Osamu Hoshino and Takeshi Kambara) (p. 281)
A role of attention in formation of brain map for accomplishing spatial tasks (Yoshiki Kashimori, Minoru Uchiyama, Satoru Inoue, Osamu Hoshino, Takafumi Yoshizawa and Takeshi Kambara) (p. 289)
Consciousness and the intercortical correlation function of electroencephalograms (Kimiaki Konno, Yoichi Katayama and Takamitsu Yamamoto) (p. 301)
The unconscious information processing appeared on the visual ERPs during pattern matching task of masked target (Tsuyoshi Hirata, Shio Murakami and Shinya Ito) (p. 307)
A computational model of personality (Tohru Nitta) (p. 315)
A hypothesis concerning a relationship between pleasantness and unpleasantness (Yasuhiro Sagawa, Hidefumi Sawai and Nobuyuki Sakai) (p. 325)
Automaticity of visual attention: Effect of practice with rapid serial visual presentation (Kazuki Nishiura) (p. 333)
Working memory and the peak alpha frequency shift on magnetoencephalography (MEG) (Mariko Osaka) (p. 341)
Modularity and hierarchy: A theory of consciousness based on the fractal neural network (Takeshi Ieshima and Akifumi Tokosumi) (p. 349)
Category theory and consciousness (Goro Kato and Daniele C. Struppa) (p. 357)
Psychological information processing in a complex Hilbert space: Fourier transformation by reciprocal matrix on ratio scale (Eiichi Okamoto) (p. 363)
Tokyo ’99 Memorial Pictures (Ryouichi Kasahara) (p. 377)
Name index (p. 379)
Subject index (p. 381)



Sponsors and Supporters of Tokyo ’99

Academic Sponsors
Inoue Foundation for Science
The Fetzer Institute
Nishina Foundation for Science
The Asahi Glass Foundation
The Kao Foundation for Arts and Sciences

General Sponsors
Kodansha Publishing, Inc.
Rohm, Inc.
Kinokuniya Books, Inc.
Ibiden, Inc.
Hamamatsu Photonics, Inc.
Tosco, Inc.
Chudenkou, Inc.
Fusou Dentsuu, Inc.
Arakigumi, Inc.

Academic Supporters
The Physical Society of Japan
The Japan Society of Applied Physics
The Japanese Association of Medical Sciences
The Japanese Society of Psychiatry and Neurology
The Japan Society of Pain Clinicians
The Pharmaceutical Society of Japan
The Japanese Society of Pharmacognosy
The Molecular Biology Society of Japan
The Japanese Cognitive Science Society
The Japanese Psychological Association
The Japanese Psychonomic Society
The Information Processing Society of Japan
The Institute of Systems, Control and Information Engineers
The Biophysical Society of Japan



Preface

Every two years since 1994, the international conference “Toward a Science of Consciousness” has been held in Tucson, Arizona. Thanks to its enthusiastic hosts, Stuart Hameroff and company, the conference came together each time and gave us a sense of what a science of consciousness might be like in the coming century. Last year I was scheduled to give a talk there (at the third one, called Tucson Three) and was astonished to find so many people from so many different fields of research in the vast Tucson Convention Center. It was my immediate reaction to change my plan — not to give a purely technical talk but to present an entertaining talk of a scientific nature to the very interdisciplinary audience in the large auditorium in Tucson. My talk may have been well-received, but I myself felt uneasy. I was like a man watching the old vineyard estate burning. To put out the fire, I had only tons of wine to pour on. If I used the wine to extinguish the fire, then the old chateau would be saved but no wine would be left. If not, then the old chateau would burn to the ground and all the wine would evaporate. If I gave a technical talk outlining a scientific approach from quantum physics to neural processes and consciousness, then most of the audience would fall asleep since they would not understand the relevance of quantum physics. If I gave only an entertaining talk to show the deep relevance of quantum physics to daily life, then they might enjoy the talk but would not understand why quantum physics should be required in understanding fundamental neural processes and consciousness. In the old story, they decided in the end to pour wine on the fire, aiming to save the old chateau by losing the wine. The next morning however, they discovered, hidden beneath a hall partially destroyed by fire, many old casks filled with good wine. 
I like this ending to the old story, and thinking back to the lively atmosphere of the first conference, Tucson One held in 1994, I recall my colleague Mari Jibu giving a very impressive technical talk on a quantum physical approach to neural processes and consciousness. Though she delved into the details, the audience was nevertheless focused on trying to understand, to figure out the essence of the approach she was vividly explaining. Yes, she was right and I was wrong: scientists are always required to do their best, not only in research but also in presentation and in listening. However, the ever-growing Tucson conference may be getting too big and too broad for purely scientific activity, and this was the reason we decided to hold Tokyo ’99 in a very compact way, gathering only those interested in fundamental approaches.


It is my sincere hope that all participants will play the part of the quintessential scientist: you among the speakers and presenters do your best to explain your fundamental approach to a science of consciousness, and you in the audience do your best to listen to each speaker without prejudice. For me as a Japanese physicist, this conference, “Toward a Science of Consciousness — Fundamental Approaches — Tokyo ’99”, has a special meaning. It is an occasion to revisit the pioneering approach to quantum brain dynamics taken by two famous elementary particle physicists, Hiroomi Umezawa and Yasushi Takahashi, back in the 1960s and 70s. Umezawa passed away a few years ago, but Takahashi, to whom I express my sincerest respect, kindly participated in Tokyo ’99, coming from Edmonton, Canada. Last but not least I would like to express my sincere thanks to Tarcisio Della Senta, Director of the Institute of Advanced Studies at the United Nations University, without whose broad-mindedness we should never have been able to hold Tokyo ’99. Moreover, he reminded us to turn our attention to the ethical ramifications of a future science of consciousness; our mindfulness of these concerns has crystallized in the form of the Tokyo ’99 Declaration. In the coming four days of the conference, please open your mind to the different fundamental approaches to a future science of consciousness, and keep it open thereafter. If you find the atmosphere of Tokyo ’99 congenial and conducive to fundamental scientific explorations, it will not be a result of my contribution but that of the many others who participated in the preparations for Tokyo ’99, to whom I would like to express my sincerest gratitude, among them Mari Jibu, her family members and mine.

Kunio Yasue, Ph.D.
Chairman




The late Professor Hiroomi Umezawa, Pioneer of Quantum Brain Dynamics



Tokyo ’99 Declaration

Good afternoon, our fellow scientists and philosophers. We speak to you, and on your behalf, in a spirit of hope. In the coming years, studies of the brain and the mind will advance our understanding of consciousness. In this quest for knowledge, the hope is for improving human wellbeing and the conditions of life on Earth. Since early ages, with the power of their brain, human beings have developed knowledge and tools for doing both good and bad. Today, we have the intellectual, physical and financial resources to master the power of the brain itself, and to develop devices to touch the mind and even control or erase consciousness. We wish to profess our hope that such a pursuit of knowledge serves peace and welfare. But remember: twice in recent years we virtually failed to use brilliant scientific discoveries to serve such ends. The competition for mastering nuclear power has not made the world safer, nor have the analysis and synthesis of DNA relieved the concerns raised by genetic engineering. The question of ethics is before us once again, at the dawn of new discoveries about the brain and consciousness. This time, though, we are armed with the lessons of past failures, lessons that help us to meet the imperatives of hope. Colleagues, let us take the first step. Let us turn towards the brilliant scientific discoveries of the brain and consciousness, and seek a way towards peace and welfare, along which scientists and philosophers of the world may contribute to a good conscience of humanity and ethics. Let us work for the wonders of science, instead of serving its dark powers. Together, let us explore our brain and move towards a science of consciousness that will encourage arts, ethics and thinking. This is, and will be, an endless quest, which will not be completed in a hundred days, nor in a thousand years, nor even perhaps in our lifetime on this planet.




But, let us begin. Then, our fellow scientists and philosophers of the world: do not ask what you can establish with purely scientific interest only, but rather what you can do to serve human peace and welfare. Let us erase the egocentric, discipline-confined approach and join the collective effort to develop a science of consciousness. And let us develop it for fundamental discoveries and for serving the hope of human welfare, never warfare. Will you join in such a historic effort towards hope?

May 28, 1999, at the United Nations University, in Tokyo.

Mari Jibu, Ph.D. and Tarcisio Della Senta, Ph.D.



Brain and quantum holography
Recent ruminations

Karl H. Pribram
Radford University, USA

1. Introduction

Several pivotal ideas have enriched our understanding of brain processes over the past half century: the idea of servocontrol by way of setpoints, the idea of communication and computation by way of programming, and the idea of image processing by way of holography are examples. These ideas have converged onto the concept “information” as this term is understood in popular usage, for instance in the statement that we are now entering the information age. A precise reading of the concept information was proposed by Shannon and Weaver in their measure based on Boolean algebra and the fact that computer programs are based on binary codes — on/off switches and combinations thereof. For brain science, the binary code is embodied in the all-or-none characteristic of nerve impulses by which one part of the brain communicates with another. However, when it comes to processing, the combinations that need to be made among impulses, the computer analogy fails. This is because impulse-carrying nerves branch at both their endings and the combinations take place among these branchings. The diameter of the branches is so small that they cannot sustain a propagating impulse. Rather, processing comes about by way of passive field-like transactions. This essay reviews the evidence and current speculations as to what occurs in these transactions to make selective learning and remembering, as well as conscious experience, possible.

2. Awareness and temporal hold

Thus, one of the most intractable problems facing brain neurophysiologists has been to trace the transactions occurring among these branches of neurons. The received opinion is that such signals accumulate from their origins at synapses by simple summation of excitatory and inhibitory potentials to influence the cell body and its axon, and thus the cell’s output. This is roughly true, but only after behavior has become automatic. Many sites among branches (dendrites) “are functionally bipolar — they both project synapses [junctions between branches] onto and receives synapses from many other processes. Hence input and output are each distributed over the entire dendritic arborization where[ever] dendrodendritic interactions are important” (Shepherd 1988:82). The anatomical complexity of the dendritic network has led to the opinion summarized by Szentagothai (1985:40): “The simple laws of histodynamically polarized neurons indicating the direction of flow of excitation came to an end when unfamiliar types of synapses between dendrites, cell bodies and dendrites, serial synapses, etc. were found in infinite variety.” The received opinion also focuses on the transmissive nature of synapses: thus the term “neurotransmitters” is, more often than not, applied ubiquitously to the variety of molecular processes stimulated by the arrival of an axonic depolarization at the presynaptic site. This focus is misplaced. In any signal processing device, the last thing one wants to do, if unimpeded transmission is required, is to physically interrupt the carrier medium. Interruption is necessary, however, if the signal is to be processed in any fashion. Interruption allows switching, amplification, and storage, to name a few purposes which physical interruptions such as synapses could make possible. What then might be the use to which synapses could be put when input and output are each distributed over an extent of dendritic arborization?
In Languages of the Brain (Pribram 1971), I suggested that any model we make of perceptual processes must take into account both the importance of Imaging, a process that constitutes a portion of our subjective (conscious) experience, and the fact that there are influences on behavior of which we are not aware. Automatic behavior and awareness are often opposed: the more efficient a performance, the less aware we become. Sherrington noted this antagonism in a succinct statement: “Between reflex [automatic] action and mind there seems to be actual opposition. Reflex action and mind seem almost mutually exclusive — the more reflex the reflex the less mind accompanies it.” Evidence was then presented indicating that automatic behavior is programmed by neural circuitry mediated by nerve impulses, whereas awareness is due to the synaptodendritic microprocess, the excitatory and inhibitory postsynaptic potentials and their effect on dendritic processing. The longer the delay between the initiation in the dendritic network of postsynaptic arrival patterns and the ultimate production of axonic departure patterns, the longer the duration of awareness. Recent support for this proposal comes from the work of David Alkon (Alkon et al. 1996) and his colleagues, who showed that as the result of Pavlovian conditioning there is an unequivocal reduction in the boundary volume of the dendritic arborizations of neurons. These neurons had previously been shown to increase their synthesis of mRNA and specific proteins under the same Pavlovian conditions. Although these experiments were carried out in molluscs, such conditioning-induced structural changes are akin to the synapse elimination that accompanies development. The hypothesis put forward thus states that as behavioral skills are attained, there is a progressive shortening of the duration of dendritic processing that occurs between the initiation of post-synaptic arrival patterns and the production of axonic departure patterns. This shortening is presumed due to the above-described structural changes in the dendritic network, which facilitate transmission.

3. Information processing by Gabor wavelets

By contrast, the field-like processing that occurs in the fine branches of nerves in the brain is mediated by fluctuations of potential differences (across the nerve membranes) that range from depolarizing (excitation) to hyperpolarizing (inhibition). The fluctuations can be described in a fashion similar to the fluctuations of a voice in a telephone signal. This similarity led brain scientists in the 1970s to use a wavelet rather than a binary basis to describe their recordings of brain processes. Wavelets were first used by Dennis Gabor to ascertain what might be the limit of usable compression of a telephone message sent over the Atlantic cable. He found that, for each frequency in the voice, the time taken for a half wavelength had to be encoded if the communication were to be understood. Gabor used a Hilbert phase space to make his calculations. He pointed out that his mathematics was identical to that used by Heisenberg to address the limit to which the momentum and location of a subatomic unit — a microphysical quantum — could be specified. Attempts at greater specification would lead to indeterminacy — in quantum physics, an uncertainty in the accuracy of simultaneously specifying momentum (measured in terms of spectral density) and location (measured in terms of space). In Gabor’s use of the Hilbert space, the indeterminacy was between spectral density and time. Gabor therefore called his wavelet a quantum of information. Gabor pointed out that the quantum of information was not necessarily related physically to a subatomic quantum. However, Gabor did relate his measure to the reduction of uncertainty, which was Shannon’s measure of the amount of information in a communication. A few years after his work leading to the concept of quanta of information, Gabor tackled the problem of enhancing the resolution of an imaging process.
He targeted electron microscopy, suggesting that enhancement could result if the recording medium encoded interference patterns generated by the interaction of reflected electrons with a non-reflected reference beam. The scheme was not immediately successful, but during the early 1960s Emmett Leith was able to implement the process using radiant energy (light). Whereas the information (communication) concept was based on a Hilbert space in which spectrum and spacetime were the coordinates (coordinated, conjoined), Gabor’s image process was based on the Fourier relationship, in which spectrum and spacetime are disjoined. Thus the ordinarily experienced spacetime image is transformed into its spectral representation. This representation distributes the image by encoding not only frequency and amplitude but the nodes of interference among phase relationships between cosine and sine representations (that is, the nodes encode complex numbers). Recently, using Independent Component Analysis and other techniques, it has been shown that phase encoding is critical to the enhanced resolution that Gabor sought. Gabor named the spectral representation a hologram because of its holistic nature: every point in the ordinary spacetime image becomes distributed over the surface of the holographic spectral representation in such a fashion that every part of the hologram provides a representation of the whole. Brain scientists found it useful during the 1970s to use Fourier-like processes to describe the receptive fields of neurons concerned with sensory imaging. But as in the case of information processing in telecommunication, the Gabor wavelet turned out to be a more accurate representation of the neural process than did the Fourier transform. I used the term holonomy to denote this modification of holography for brain function. More recently, studies derived from imaging by way of functional magnetic resonance imaging (fMRI) have used the term “quantum holography” to describe the process. The remainder of the paper addresses the evidence for holonomy, that is, quantum holography, in the brain’s processing of sensory input.
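The holographic property described here, that every part of the spectral representation carries information about the whole image, can be sketched with an ordinary 2D Fourier transform. In this toy example (my own illustration, assuming NumPy; a true hologram also involves a reference beam, which is omitted), discarding most of the spectral coefficients still reconstructs a recognizable version of the entire image rather than a cropped piece of it, because each coefficient depends on every pixel.

```python
import numpy as np

# A simple 64x64 test image: a bright disk on a dark background.
n = 64
y, x = np.mgrid[:n, :n]
img = ((x - n/2)**2 + (y - n/2)**2 < (n/4)**2).astype(float)

# Full spectral (Fourier) representation: every coefficient depends on every pixel.
F = np.fft.fftshift(np.fft.fft2(img))

# Keep only a small central patch of the spectrum and zero out the rest.
mask = np.zeros_like(F)
k = n // 8
mask[n//2-k:n//2+k, n//2-k:n//2+k] = 1
recon = np.fft.ifft2(np.fft.ifftshift(F * mask)).real

# The reconstruction is blurred but still shows the *whole* disk.
corr = np.corrcoef(img.ravel(), recon.ravel())[0, 1]
print(f"correlation with original: {corr:.3f}")
```

The blur, rather than cropping, is the point: information about each image point is spread across the spectral surface, so losing part of the spectrum costs resolution everywhere instead of deleting a region.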
In 1951, reviewing the state of our knowledge of auditory processes for Stevens’ Handbook of Experimental Psychology, Licklider ended with: If we could find a convenient way of showing not merely the amplitudes of the envelopes but the actual oscillations of the array of resonators, we would have a notation (Gabor 1946) of even greater generality and flexibility, one that would reduce under certain idealizing assumptions to the spectrum and under others to the wave form … the analogy … [to] the position-momentum and energy-time problems that led Heisenberg in 1927 to state his uncertainty principle … has led Gabor to suggest that we may find the solution [to the problem of sensory processing] in quantum mechanics.
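Gabor's tradeoff between spectral density and time can be checked numerically. The sketch below (my own illustration, not from the chapter; it assumes NumPy) builds Gabor's elementary signal, a Gaussian envelope times a complex sinusoid, and verifies that the product of its temporal spread and spectral spread sits at the theoretical minimum of 1/(4*pi), the information-theoretic analogue of the Heisenberg relation mentioned by Licklider.

```python
import numpy as np

def spread(axis, density):
    """Root-mean-square spread of a (normalized) density along an axis."""
    density = density / density.sum()
    mean = (axis * density).sum()
    return np.sqrt(((axis - mean) ** 2 * density).sum())

# Gabor's elementary signal: Gaussian envelope times a complex sinusoid.
sigma, f0 = 1.0, 4.0                       # envelope width (s), carrier (Hz)
t = np.linspace(-32, 32, 1 << 14)          # time axis, wide enough for the envelope
g = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)

dt = spread(t, np.abs(g) ** 2)             # temporal spread
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
G = np.fft.fftshift(np.fft.fft(g))
df = spread(f, np.abs(G) ** 2)             # spectral spread

print(dt * df, 1 / (4 * np.pi))            # product sits at the 1/(4*pi) minimum
```

Any attempt to narrow the envelope (smaller `sigma`) shrinks `dt` but widens `df` in exact compensation; only the Gaussian-windowed sinusoid attains the minimum, which is why Gabor called it a quantum of information.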

4. Gabor-like processing in sensory systems

During the 1970s it became apparent that Gabor’s notation also applied to the cerebral cortical aspect of visual and somatic sensory processing. The most elegant work was done with regard to the visual system. Daugman developed a two-dimensional version of the Gabor function and showed that such a function well describes receptive fields in the visual cortex. He, as well as others (see Pribram 1991:36–39), then showed that the image structure is carried by a small fraction of the Gabor coefficients, that is, those that are far out on the tails of a distribution, and not by a point-to-point image representation. In the process the amplitude and thus energy reduction is over 98%, making the process 60 times more efficient! Recently this finding has been confirmed during Independent Component Analysis, which showed that amplitude coding will not reveal the fine structure (e.g., lines and edges) of a scene. It takes higher-order statistics (the tails of the distribution), which are equivalent to phase in the spectral domain, to accomplish the desired resolution. In my laboratory, we were able to simulate our recordings of cortical receptive fields in the somatic modality by imposing a rectangular frame (a Gaussian accomplished the same result) to form a sinc function and then sampling the sinc function with a Gaussian. The frame was assumed to be the result of the boundaries of the dendritic arborization; the sampling was assumed to represent the activity of any single axon sampling the extent of dendritic input to which it has access. A recent review by Tai Sing Lee (1996) in the IEEE casts these advances in terms of 2D Gabor wavelets, indicates the importance of frames, and specifies them for different sampling schemes. For the monkey, the physiological evidence indicates that the sampling density of the visual cortical receptive fields for orientation and frequency provides a tight frame representation through oversampling. The 2D Gabor function achieves the resolution limit only in its complex form.
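A two-dimensional Gabor function of the kind Daugman fitted to cortical receptive fields is simple to write down: a Gaussian envelope multiplied by an oriented sinusoid. The sketch below (my own illustration, assuming NumPy, not the authors' code) builds two such receptive fields at orthogonal orientations and shows that only the matching one responds strongly to an oriented grating, the orientation tuning observed in visual cortex.

```python
import numpy as np

def gabor2d(size, sigma, freq, theta):
    """2D Gabor: Gaussian envelope times a sinusoid oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1].astype(float)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # coordinate across the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * x_rot)

size, sigma, freq = 33, 4.0, 0.2
vertical = gabor2d(size, sigma, freq, theta=0.0)          # tuned to vertical gratings
horizontal = gabor2d(size, sigma, freq, theta=np.pi / 2)  # tuned to horizontal ones

# Stimulus: a vertical grating at the receptive field's preferred frequency.
half = size // 2
y, x = np.mgrid[-half:half+1, -half:half+1].astype(float)
stimulus = np.cos(2 * np.pi * freq * x)

resp_match = abs((vertical * stimulus).sum())
resp_orth = abs((horizontal * stimulus).sum())
print(resp_match, resp_orth)   # the matching orientation dominates
```

Adding the odd-symmetric (sine) partner to form the complex Gabor function would make the response invariant to the grating's phase, which connects to the quadrature pairs discussed next.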
In this regard, Pollen and Ronner (1980) did find quadrature (even-symmetric cosine and odd-symmetric sine) pairs of visual receptive fields. Currently, recordings made with multiple microelectrodes, together with data analysis on sufficiently powerful computers, make it possible to readily obtain additional data of this sort and to determine the conditions under which such encoding might occur. The neurophysiological community has come to terms with the distributed nature of what can be conceptualized as a “deep structure” of cortical processing (Pribram 1997). The accepted view is that distribution entails the necessity of binding together the disparate sites of processing. Binding in this view is thus a property not of a deep but of a surface structure composed of widely separated modules made up of circuits of neurons. Binding is accomplished by temporal synchronization of spatially distinct oscillating neural processes. The emphasis has been that under the conditions which produce binding, no phase lead or lag is present. However, regarding the deeper structure within a processing site, Saul and Humphrey (1990, 1992a, b) have found cells in the lateral geniculate nucleus that produce phase lead and phase lag within modules in the cortical processing initiated by them. In the somatosensory system, Simons and his group (Kyriazi and Simons 1993) have been analyzing the timing of the thalamocortical process to show how it enhances “preferred” features and dampens “non-preferred” ones; that is, it sharpens sensory discrimination. The process thus can act as a frame that “captures” relevant features or combinations of features. These results give promise to Gabor’s prediction that we might find the solution to sensory (image) processing in the formalism, and perhaps even in the neural implementation, of quantum information processing. What makes the implementation so difficult is that, as noted, the deep structure of cortical processing goes on in the synaptodendritic processing web, the dendritic arborizations where brain cells connect with each other. As Alkon points out in a Scientific American article (1989):

Many of the molecular [and structural] transformations take place in dendritic trees, which receive incoming signals. The trees are amazing for their complexity as well as for their enormous surface area. A single neuron can receive from 100,000 to 200,000 signals from separate input fibers ending on its dendritic tree. Any given sensory pattern probably stimulates a relatively small percentage of sites on a tree, and so an almost endless number of patterns can be stored without saturating the system’s capacity.

5. Proposal for a quantum physical basis for selective learning

How, then, is a particular pattern formed and implemented in subsequent behavior? A quantum-like process appears to be tailored to the occasion. The reasoning goes like this: In massively parallel distributed neural network processing simulations (PDP), Hebb’s rule and its extensions have become popular. The rule states that facilitation occurs whenever neurons adjacent to a synapse become simultaneously activated. Hebb noted that

The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become “associated”, so that activity in one facilitates activity in the other. The details of speculation that follow are intended to show how this old idea might be put to work again, with an equally old idea of a lowered synaptic “resistance”, under the eye of a different neurophysiology from that which engendered them. (It is perhaps worthwhile to note that the two ideas have been combined only in the special case in which one cell is associated with another of a higher level or order in transmission, which it fires; what I am proposing is a possible basis of association of two afferent fibers of the same order — in principle, a sensory-sensory association, in addition to the linear association of conditioning theory.) (Donald O. Hebb, The Organization of Behavior, 1949, John Wiley & Sons, Inc.)
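Hebb's rule as used in PDP simulations can be stated in a few lines of code. The sketch below (mine, not from the chapter; plain NumPy) accumulates an outer-product weight matrix over a set of input patterns: units that repeatedly fire together end up with a strong mutual connection, while units that never co-fire stay unconnected, which is the sensory-sensory association between afferents of the same order that Hebb describes.

```python
import numpy as np

def hebbian_weights(patterns, eta=1.0):
    """Accumulate Hebbian facilitation: delta_w[i, j] = eta * x[i] * x[j]."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for x in patterns:
        w += eta * np.outer(x, x)     # co-active units strengthen their link
    np.fill_diagonal(w, 0.0)          # ignore self-connections
    return w

# Units 0 and 1 are repeatedly co-active; unit 2 fires on its own.
patterns = np.array([[1, 1, 0]] * 10 + [[0, 0, 1]] * 10, dtype=float)
w = hebbian_weights(patterns)
print(w)   # w[0, 1] is strong; w[0, 2] and w[1, 2] stay zero
```

Note that the rule only strengthens whatever correlations the input already contains; it says nothing about how one synapse comes to be favored over another when all start out equal, which is exactly the selectivity problem the text turns to next.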

Brain and quantum holography

Hebb left ambiguous the particular neurons with which he was concerned, but because he was interested in S-S associations, the facilitation he sought was most likely between cortical neurons simultaneously activated by afferent input. A half-century earlier, Freud was more specific:

If adjoining neurons are simultaneously cathected [potentiated] this acts like a temporary facilitation of the contact-barrier [synapse] lying between the two and modifies the course [of the current]. (As quoted from Strachey 1955: 323)

The Qn current [action current, nerve impulse train] will divide up in the direction of the various contact barriers [synapses] in inverse ratio to their resistance … Thus, the course taken is dependent on the relation of facilitations. (As quoted from Strachey 1955: 300)

If we were to suppose that all the contact barriers [synapses] were equally well facilitated or (what is the same thing) offered equal resistance, the characteristics of memory would evidently not emerge. … Memory is represented by the differences in the facilitation between neurons. (As quoted from Strachey 1955: 300)

The state of facilitation of one contact-barrier [synapse] must [therefore] be independent of that of all the other contact-barriers of the same neuron, otherwise there would … be no preference and thus no [memory or] motive. (As quoted from Strachey 1955: 301)

Hebb is interested in associativity, Freud in selective processing, which demands specificity. Both agree, as do current neural network modelers and neuroscientists, that some sort of synaptic facilitation is involved — or more precisely, as in Alkon’s work, that processing in patches of the dendritic arbor becomes altered during conditioning. Thus, once the process has been started, it will continue to some stable configuration. But how is selectivity initiated? How does selection among synaptic facilitations start? One obvious answer is that there is a genetic basis. But if that were the case, we could only learn what heredity has programmed us to learn. The human brain appears to be much more like a general-purpose processor. We are genetically programmed to use language, but whether we speak English or Japanese depends on postpartum selective learning. In a network where all facilitations are initially equal, a straightforward way in which variation can be introduced is through a quantum or quantum-like process. As noted, each axon divides toward its ending into teledendrons, fine branches similar to dendrites. An approaching volley of nerve impulses (action currents, in an older terminology) will activate all branches alike. The assumption is that the activity in the branches can be modeled by a Schrödinger-like wave equation. As the activity enters the synaptic clefts it is “read”: the equation is reduced (collapses) into a singularity, and one particular part of the activity in the branches is facilitated at the


Karl H. Pribram

expense of those in the other branches. Selectivity has been inaugurated — the next approaching volley finds an inhomogeneous field of synapses. This view of neural modifiability (in conditioning, learning and conscious experience) is similar to Penrose’s self-orchestrated collapse of the wave function (Hameroff and Penrose 1994). The process does not take place in the main trunks of axons, however, but in their branching teledendrons. (In axonal trunks the Penrose-Hameroff model can account for saltatory conduction by way of a passive propagation of a soliton wave from one node of Ranvier to the next.) The proposed process is congruent with Eccles’s suggestion that synaptic chemistry plays a role in the quantum-like neural process. In the present proposal, however, the macro-chemical synaptic process is the “observing instrument” that leads to the reduction of the wave function. The synaptic chemistry remains classical; it is not quantum-like in its function, as suggested by Beck and Eccles (1992) and shown to be untenable by Wilson (1999) in The Volitional Brain. Post-synaptically, in the dendritic web, there is a continual build-up of processes that can again be modeled by the wave function. These processes spread passively in every direction until “read” by the perikaryon — the cell body — which samples a part of the web. The “reading” again collapses the wave function, reducing it to a singularity at the axon hillock. There are characteristics of the dendritic web that make a quantum-like process plausible. As in the case of teledendrons, dendrites are small-diameter branches of neurons. These branches are connected not only to teledendrons through synapses but to each other in a like manner. There are also non-synaptic contacts (tight junctions) where electrical transactions can be more immediate.
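The proposed cycle (uniform activation of teledendron branches, a “reading” that collapses the activity onto one branch, and facilitation of that branch) can be caricatured as an iterated stochastic selection. Treating squared branch amplitudes as collapse probabilities is an illustrative stand-in for the reduction of a Schrödinger-like wave equation; none of the numerical parameters below come from the text.

```python
import random

# Caricature of the proposed selectivity mechanism: activity spread
# over teledendron branches "collapses" onto one branch when read at
# the synapse, and that branch is facilitated. Repeated volleys turn
# an initially homogeneous field of synapses into an inhomogeneous one.
# The Born-like collapse rule (probability ~ amplitude squared), the
# branch count, boost and volley count are all illustrative assumptions.

def collapse(amplitudes, rng):
    weights = [a * a for a in amplitudes]
    return rng.choices(range(len(amplitudes)), weights=weights)[0]

def run(n_branches=5, n_volleys=200, boost=0.05, seed=1):
    rng = random.Random(seed)
    amps = [1.0] * n_branches          # initially homogeneous field
    for _ in range(n_volleys):
        winner = collapse(amps, rng)
        amps[winner] += boost          # facilitation of the read branch
    return amps

print(run())  # the field of synapses is no longer homogeneous
```

The positive feedback (a facilitated branch is more likely to be selected again) is what turns early chance fluctuations into a stable, selective facilitation pattern.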
The picture becomes even more interesting when we consider the spines that extend perpendicularly from the dendritic fiber, hairlike structures (cilia) onto which axon branches terminate. Each spine consists of a bulbous synaptic head and a narrow stalk which connects the head to the dendritic fiber. Thus, synaptic depolarizations and hyperpolarizations become relatively isolated from the dendritic fiber because of the high resistance to the spread of polarization posed by the narrowness of the spine stalk. It appears, therefore, “that there is an isolation of the activity from the rest of the cell … Part of the strategy of the functional organization of a neuron is to restrict synaptic sites and action potential sites to different parts of the neuron and link them together with passive electrotonic spread” (Shepherd 1988: 137). Furthermore, “it has been shown that synaptic polarization in a spine head can spread passively with only modest decrement into a neighboring spine head” (Shepherd et al. 1985: 2192). Thus, spine head polarizations passively spread to interact with each other extracellularly as well as via the intracellular cable properties of dendrites. The interactions (dromic and antidromic) among spine-originated dendritic potentials (that need to become effective at the cell’s axon) thus depend on a process which is “discontinuous (non-local?) and resembles
in this respect the saltatory conduction that takes place from node to node in myelinated nerve” (Shepherd et al. 1985: 2193). The intracellular spread of dendritic polarizations can be accounted for by microtubular structures that act as wave guides and provide additional surface upon which the polarizations can act (Hameroff 1987; Hameroff et al. 1993). The extracellular spread may be aided by a similar process taking place in the glia, which show a tremendous increase in the metabolism of RNA when excited by the neurons which they envelop (Hyden 1969). But these mechanisms, by themselves, account neither for the initial relative isolation of the spine head polarizations, nor for the related saltatory aspects of the process. To account for these properties we turn to the dendritic membrane and its immediate surround. Dendritic membranes are composed of two oppositely oriented layers of phospholipid molecules. The interior of the membrane is hydrophobic, as it is formed by “lipids which form a fluid matrix within which protein molecules are embedded — the lipids can move laterally at rates of 2 μm/sec; protein molecules move about 40 times more slowly (50 nm/sec or 3 μm/min)” (Shepherd 1988: 44). Some of the intrinsic membrane proteins provide channels for ion movement across the membrane. The outer layer of the membrane “fairly bristles with carbohydrate molecules attached to the membrane protein molecules: glycoproteins. The carbohydrate may constitute 95% of these molecules [which form a] long-branching structure [that resembles] a long test tube brush, or a centipede wiggling its way through the extracellular space. It attracts water, imparting a spongy torpor to the extracellular space” (Shepherd 1988: 45–46). On the basis of these considerations, Jibu, Hagan, Hameroff, Yasue and I (Jibu et al. 1994) proposed that a perimembranous process occurs within dendritic compartments during which boson condensation produces a dynamically ordered state in water.
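As an aside, the membrane diffusion rates quoted from Shepherd (1988) can be checked for unit consistency, assuming the figures read 2 μm/sec for lipids and a fortyfold slowdown for proteins:

```python
# Unit check on the quoted membrane diffusion rates (Shepherd 1988):
# lipids ~2 um/sec; proteins ~40 times slower, i.e. 50 nm/sec,
# which is the same rate expressed as 3 um/min.

lipid_rate_nm_per_sec = 2000.0                    # 2 um/sec in nm/sec
protein_rate_nm_per_sec = lipid_rate_nm_per_sec / 40.0
protein_rate_um_per_min = protein_rate_nm_per_sec * 60 / 1000

print(protein_rate_nm_per_sec)   # 50.0 (nm/sec)
print(protein_rate_um_per_min)   # 3.0 (um/min)
```

The three quoted figures (2 μm/sec, 50 nm/sec, 3 μm/min, factor 40) are thus mutually consistent.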
This proposal originates in the work of Umezawa and his collaborators Ricciardi, Takahashi and Stuart (Ricciardi and Umezawa 1967; Stuart, Takahashi, and Umezawa 1979). First, Ricciardi and Umezawa pointed out the possibility of a domain structure that provides a long range order within each (in my terms, teledendronic field of a) neuron. Then, Stuart, Takahashi and Umezawa generalized this idea to a more extended (in my terms, dendritic) region of brain tissue, assuming the existence of two quantum fields interacting with each other. We have gone on to speculate that as each pattern of signals exciting the dendritic arborization produces a macroscopic, ionically produced change of the charge distribution in the dendritic network, it triggers a spontaneous symmetry breaking of a radiation field (a boson condensation) altering the water molecular field in the immediately adjacent perimembranous region. A quantum macroscopic domain of the dynamically ordered structure of water is created in which the electric dipole density is aligned in one and the same direction. It is this domain of
dynamically ordered water that is postulated to provide the physical substrate of the interactions among polarizations occurring in dendritic spines. As noted, when, by passive conduction, an ensemble of nerve cell bodies (perikarya) becomes excited, they sample (read) the wave function, leading to its reduction. As a consequence, the entire axonic – teledendronic – synaptic – dendritic – perikaryonic – axonic process is selective and can, through iteration, form a distributed memory store. This store can be accessed by way of the same process by which it was formed: the reduction of the quantum (or quantum-like) wave forms which act as attractors. Smolensky captured the essence of the formalism as follows:

The concept of memory retrieval is reformalized in terms of the continuous evolution of a dynamical system towards a point attractor whose position in the state space is the memory; you naturally get dynamics of the system so that its attractors are located where the memories are supposed to be; thus the principles of memory storage are even more unlike their symbolic counterparts than those of memory retrieval. (As quoted in Pribram 1991: xxviii)
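Smolensky’s picture of retrieval as relaxation to a point attractor located at the memory is exactly what a standard Hopfield-style network exhibits. The tiny pattern and update rule below are textbook material, offered only to make the attractor idea concrete; they are not specific to the quantum proposal.

```python
# Sketch of memory retrieval as relaxation to a point attractor
# (a standard Hopfield-style network; the pattern is an arbitrary
# example). Hebbian storage places the attractor where the memory is;
# retrieval is the dynamics flowing from a noisy cue to that attractor.

def store(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def retrieve(w, state, steps=10):
    s = list(state)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

memory = [1, 1, 1, -1, -1, -1]
w = store([memory])
cue = [1, -1, 1, -1, -1, 1]        # memory with two bits corrupted
print(retrieve(w, cue))            # prints [1, 1, 1, -1, -1, -1]
```

Starting from a corrupted cue, the state evolves to the stored pattern: the attractor sits where the memory is, just as the quoted passage describes.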

References

Alkon, D. L. (1989). Memory storage and neural systems. Scientific American, 261, 42–50.
Alkon, D. L., Blackwell, K. T., Barbour, G. S., Werness, S. A. and Vogl, T. P. (1996). Biological plausibility of synaptic associative memory models. In K. H. Pribram and J. King (Eds), Learning as self-organization (pp. 247–262). Mahwah: Lawrence Erlbaum Associates.
Beck, F. and Eccles, J. C. (1992). Quantum aspects of the brain activity and the role of consciousness. Proceedings of the National Academy of Sciences, 89, 11357–11361.
Hameroff, S. (1987). Ultimate computing: Biomolecular consciousness and nanotechnology. Amsterdam: North Holland.
Hameroff, S., Dayhoff, J. E., Lahoz-Beltra, R., Rasmussen, S., Insinna, E. M. and Koruga, D. (1993). Nanoneurology and the cytoskeleton: Quantum signaling and protein conformational dynamics as cognitive substrate. In K. H. Pribram (Ed.), Rethinking neural networks: Quantum fields and biological data (pp. 317–376). Mahwah: Lawrence Erlbaum.
Hameroff, S. and Penrose, R. (1995). Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness. In J. King and K. H. Pribram (Eds), Scale in conscious experience: Is the brain too important to be left to specialists to study? (pp. 241–274). Mahwah: Lawrence Erlbaum Associates.
Hyden, H. (1969). Biochemical aspects of learning and memory. In K. H. Pribram (Ed.), On the biology of learning (pp. 95–125). New York: Harcourt, Brace and World.
Jibu, M., Hagan, S., Hameroff, S. R., Pribram, K. H. and Yasue, K. (1994). Quantum optical coherence in cytoskeletal microtubules: Implications for brain function. BioSystems, 32, 195–209.
Kyriazi, H. T. and Simons, D. J. (1993). Thalamocortical response transformations in simulated whisker barrels. Journal of Neuroscience, 13, 1601–1615.
Lee, T. S. (1996). Image representation using 2D Gabor wavelets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18, 959–971.




Pollen, D. A. and Ronner, S. E. (1980). Spatial computation performed by simple and complex cells in the cat visual cortex. Experimental Brain Research, 41, A14–15.
Pribram, K. H. (1997). The deep and surface structure of memory and conscious learning: Toward a 21st century model. In R. L. Solso (Ed.), Mind and brain sciences in the 21st century (pp. 127–156). Cambridge: MIT Press.
Pribram, K. H. (1991). Brain and perception: Holonomy and structure in figural processing. New Jersey: Lawrence Erlbaum.
Pribram, K. H. (1971). Languages of the brain: Experimental paradoxes and principles in neuropsychology. Englewood Cliffs: Prentice-Hall; Monterey: Brooks/Cole, 1977; New York: Brandon House, 1982. (Translations in Russian, Japanese, Italian, Spanish)
Ricciardi, L. M. and Umezawa, H. (1967). Brain and physics of many-body problems. Kybernetik, 4, 44–48.
Saul, A. B. and Humphrey, A. L. (1990). Spatial and temporal response properties of lagged and nonlagged cells in cat lateral geniculate nucleus. Journal of Neurophysiology, 64, 206–224.
Saul, A. B. and Humphrey, A. L. (1992a). Evidence of input from lagged cells in the lateral geniculate nucleus to simple cells in cortical Area 17 of the cat. Journal of Neurophysiology, 68, 1190–1208.
Saul, A. B. and Humphrey, A. L. (1992b). Temporal-frequency tuning of direction selectivity in cat visual cortex. Visual Neuroscience, 8, 365–372.
Shepherd, G. M. (1988). Neurobiology (2nd edition). New York: Oxford University Press.
Shepherd, G. M., Brayton, R. K., Miller, J. P., Segev, I., Rinzel, J. and Rall, W. (1985). Signal enhancement in distal cortical dendrites by means of interactions between active dendritic spines. Proceedings of the National Academy of Sciences, 82, 2192–2195.
Strachey, J. (1955). The standard edition of the complete psychological works of Sigmund Freud, Vol. 1. London: Hogarth Press.
Stuart, C. I. J. M., Takahashi, Y. and Umezawa, H. (1979). Mixed-system brain dynamics: Neural memory as a macroscopic ordered state. Foundations of Physics, 9, 301–327.
Szentagothai, J. (1985). Functional anatomy of the visual centers as cues for pattern recognition concepts. In D. Chagas, R. Gattass and C. Gross (Eds), Pattern recognition mechanisms (pp. 39–52). Berlin: Springer.
Wilson, D. (1999). Mind-brain interaction and violation of physical laws. In B. Libet, A. Freeman and K. Sutherland (Eds), The volitional brain: Towards a neuroscience of free will (pp. 185–200). London: Imprint Academic.




The mind-body and the light-matter*

Mari Jibu
Notre Dame Seshin University, Okayama, Japan

1. Brain as matter

Quantum Brain Dynamics (QBD) is a physical theory describing the fundamental processes in the brain in terms of quantum field theory. The theoretical framework of QBD itself is not restricted to the brain; it is valid for describing the fundamental processes in living matter in general. “Surely the brain and living matter have nothing to do with quantum field theory, which is mainly used in elementary particle physics; they are the business of biophysics. Why is quantum field theory needed?” This is the question most frequently asked by those unfamiliar with the latest developments in quantum field theory, and it is typical of those who reject the idea that quantum theory is also needed for understanding the mechanism of mind in the brain. Hiroomi Umezawa, the founder of QBD, warned against such a naïve misunderstanding:

The most frequent objection to QBD is that the real brain tissue seen in an autopsy is not microscopic but macroscopic. However, this criticism is not valid at all. Quantum mechanics is, indeed, restricted to the microscopic world, but quantum field theory is valid for both microscopic and macroscopic worlds. Many people do not understand this point at all.

Being macroscopic matter, the brain must be described within the realm of the latest theoretical framework of quantum field theory, the only unified physical theory that gives both microscopic and macroscopic world views. The human body, including the brain, is nothing but macroscopic matter made of infinitely many elementary particles and quasi-particles subject to the fundamental laws of physics. The human brain is well known to manifest a macroscopic structure consisting of about 14 billion neurons and surrounding glial cells ten times more numerous than the neurons, with each neuron connected to others via thousands of synapses (Pribram 1991).


Many people may wonder: “Is a cellular structure of micrometer scale, visible only under an electron microscope, really a ‘macroscopic’ structure?” In modern physics, such a cellular structure of micrometer scale is regarded as a macroscopic structure of matter; microscopic structures of matter are those at or below the nanometer scale, such as molecules, atoms and nuclei. Therefore, for the purpose of describing the various physical processes taking place in the vicinity of neurons and glial cells in a unified framework, we need the latest framework of quantum field theory, which covers both the microscopic matter of atoms and molecules and the macroscopic matter of living cells (Ricciardi and Umezawa 1967; Stuart et al. 1978, 1979). One might claim that classical physics is enough to describe, empirically and phenomenologically, the overall motion of macroscopic matter. However, it is the latest framework of quantum field theory, due to Umezawa (1993), which revealed that a condensation of infinitely many elementary particles and quasi-particles in the microscopic scale becomes macroscopic matter: the whole of macroscopic matter becomes subject to the macroscopic laws of classical physics, while the constituent elementary particles and quasi-particles remain governed by the microscopic laws of quantum mechanics. In Umezawa’s words:

Quantum mechanics is of finitely many degrees of freedom, but quantum field theory is of infinitely many degrees of freedom. This difference gives rise to the essential difference between quantum mechanics and quantum field theory. ‘Infinitely many’ is not ‘extremely many’ at all; it is totally different from anything ‘finitely many,’ even ‘extremely many.’ Even if you gather ‘extremely many’ microscopic constituents, the result does not become macroscopic as long as it remains ‘finitely many.’ If you gather ‘infinitely many’ microscopic constituents, however, it becomes of macroscopic nature.
Historically speaking, quantum mechanics was discovered first; it was then applied to classical fields, and quantum field theory was proposed. Today, it is known that both classical fields and quantum mechanics can be derived in a unified way from quantum field theory, which claims equal standing for the microscopic and macroscopic worlds.

Indeed, extremely many elementary particles form a condensation of macroscopic scale, gathering infinitely many quasi-particles alongside. The existence of such infinitely many quasi-particles in the cellular structure of living matter at the macroscopic scale has been left out of consideration in conventional molecular biology and biophysics. The essential aspects of the life phenomena of living cells may be strongly related to this so far neglected condensation of quasi-particles, and quantum field theory is truly indispensable for a deeper understanding of life and living matter.


2. The sea of cell membrane

Let us observe the microscopic (in the everyday sense) but macroscopic (in the physics sense) world of the brain tissue. A single neuron is connected to thousands of other neurons via synapses. A synapse is a structure in which a swelling of the cell membrane of one neuron faces the cell membrane of another neuron. Similar structures in which the cell membrane of one neuron faces that of another are the gap junction and the spine synapse. The former is made of a group of membrane proteins in the cell membrane of a neuron cross-linked to another group of membrane proteins in the facing cell membrane of the neighboring neuron. The latter is a smaller synapse formed on the top edge of a dendritic spine. These neuronal connections play the role of chemical switches for the huge electro-chemical network in and among the neurons, and manifest various functional activities. Since it is known that certain types of molecules (neurotransmitters and neuromodulators) are released into the space within such neuronal connections, we may expect a distribution of receptor proteins in the adjacent cell membrane, so that the neuronal connection works as a chemical switch. Let us look, therefore, at the molecular structure of cell membranes. A cell membrane is made mainly of phospholipid molecules, together with protein molecules and other biomolecules. The phospholipid molecules form a double-layered membrane structure (a bilayer), and the remaining protein molecules and biomolecules (call them cell membrane molecules) are embedded and drifting in the “sea” of the phospholipid bilayer, just as icebergs float and drift in the sea. This is the fluid mosaic model of cell membrane structure due to Singer and Nicolson (1972).
Among the many kinds of cell membrane molecules, transmembrane proteins such as the ATPases are of the greatest importance for realizing the functional activity of the neuronal connections, since they play the role of channels for the transmembrane ionic flow. We therefore restrict our discussion to transmembrane proteins in what follows. A transmembrane protein is known to perform both rotational and lateral diffusion in the “sea” of the cell (plasma) membrane. The rotational diffusion is less important for the variation of the functional activity of the neuronal connection, since it does not affect the spatial distribution of channels in the region of a neuronal connection such as a synapse. The lateral diffusion of transmembrane proteins in the region of a neuronal connection, however, changes the functional activity of the connection drastically, because these proteins are the receptors of the neurotransmitters and modulators emitted from the facing cell membrane immediately adjacent to the cell membrane in question.


These lateral diffusions of transmembrane proteins are not entirely random, like Brownian motion, but manifest an ordered trend. Namely, the lateral diffusion of a transmembrane protein is not a mere thermal diffusion driven by environmental thermal noise, but a controlled diffusion with an ordered trend of movement in the “sea” of the cell membrane phospholipid bilayer. Owing to this controlled diffusion of transmembrane proteins, the neuronal connections can manifest variable functional activity in an ordered fashion; the functional activity of a neuronal connection is no longer unique but diverse, even in response to the emission of the same neurotransmitters. A question then arises: what is it that controls the lateral diffusion of transmembrane proteins, and consequently makes the neuron manifest functionally ordered action? Recent experimental observation of single molecules attached to transmembrane proteins has shown that the cytoskeletal network of protein filaments immediately adjacent to the cell membrane (call it a membrane skeletal network) works as a mechanical boundary (i.e., a fence) confining each transmembrane protein to a small compartment of the cell membrane (Jacobson et al. 1987; Sako and Kusumi 1994; Kusumi and Sako 1996). The diffusion of many transmembrane proteins is regulated from the cytoplasmic side through mechanical interaction with the membrane skeletal network. The average dimension of a compartment is about 600 nm, and the transmembrane proteins show a passive tendency to diffuse within the limits set by the mobility and directionality of the fence formed by the encircling protein filaments of the membrane skeletal network. Transitions from a compartment to a neighboring one can occur.
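The picture of fenced compartments with occasional crossings can be caricatured by a one-dimensional random walk. The 600 nm compartment scale comes from the text; the step size and the fence-crossing probability are illustrative assumptions.

```python
import random

# Caricature of fenced ("hop") diffusion: a transmembrane protein
# random-walks freely inside a ~600 nm compartment but crosses the
# membrane-skeleton "fence" at a compartment boundary only rarely.
# Step size and crossing probability are assumptions; only the 600 nm
# compartment scale comes from the text.

COMPARTMENT_NM = 600.0
STEP_NM = 10.0
HOP_PROB = 0.01          # chance of slipping past the fence

def walk(n_steps=10000, seed=7):
    rng = random.Random(seed)
    x = 300.0            # start at mid-compartment
    for _ in range(n_steps):
        nxt = x + rng.choice([-STEP_NM, STEP_NM])
        crossing = int(nxt // COMPARTMENT_NM) != int(x // COMPARTMENT_NM)
        if not crossing or rng.random() < HOP_PROB:
            x = nxt      # free step within the fence, or a rare hop
    return x

final = walk()
print(final, "nm; compartment", int(final // COMPARTMENT_NM))
```

Over short times the protein appears freely diffusing within one compartment; over long times it slowly wanders between compartments, which is the qualitative behavior the single-molecule experiments report.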
Such regulation of the lateral diffusion of transmembrane proteins from the cytoplasmic side is inherent in each neuron, and can be understood as giving rise to a genetically produced control strategy, resulting in a functional activity of genetic origin for each neuron. A further question now arises: does the genetically produced fence structure of the membrane skeletal network alone control the lateral diffusion of transmembrane proteins? The control of the lateral diffusion of transmembrane proteins determines the functional activity of the neuronal connection and hence the characteristics of the switchings of the chemical networks among the neurons. Regulation of the lateral diffusion of transmembrane proteins from the cytoplasmic side by the fence structure of the membrane skeletal network works on a long time scale, producing variations of the functional activity of the neuronal connection too slow to adjust to various incoming stimuli. Therefore, another control mechanism must exist so that the lateral diffusion of transmembrane proteins can determine the functional activity of the neuronal connection on a short time scale, subject to various incoming stimuli.


The gap space of a neuronal connection such as the synapse is filled with water, the intercellular liquid, and we have to reveal the physical properties of water in the perimembranous region immediately adjacent to the cell membrane. NMR spectral analysis of water in the living biological cell has revealed that water in the vicinity of the cytoskeletal structure and the cell membrane suffers much less thermal fluctuation than bulk water, and the disordered thermal diffusion of its molecules is highly restricted. Hydrophilic groups on the surfaces of the membrane proteins constituting the membrane skeletal network in the inner perimembranous region, or of the extracellular matrices in the outer perimembranous region, are tightly connected to water molecules by hydrogen bonds or the Coulomb force, and the water molecules are also tightly connected to each other by hydrogen bonds. The water in the perimembranous region is therefore in a highly ordered dynamical state and manifests an ordered dynamics free from thermal fluctuation (Jibu and Yasue 1995). In QBD, a peculiar macroscopic ordered state of the dynamical system of water and the electromagnetic field interacting with each other in the perimembranous region has been shown to play an important role in forming a condensate of quasi-particles affecting the lateral diffusion of transmembrane proteins (Jibu et al. 1998). We will look for a new control mechanism (call it a membrane organizer) of the lateral diffusion of transmembrane proteins, realized by a physical effect generated by the condensate of quasi-particles in the perimembranous region, which may serve to determine the functional activity of the neuronal connection on a short time scale subject to various incoming stimuli.

3. Tunnel photon condensate in the perimembranous region

There are two neighboring systems immediately adjacent to the cell membrane in which the membrane organizer could exist: the perimembranous regions just outside and just inside the neuron, which we call the outer and inner perimembranous regions, respectively. Since the inner perimembranous region interacts with the cytoskeletal structure and the functioning cytoplasm, it may play the role of a membrane organizer through a primitive intracellular informational dynamics driven by the cytoplasmic chemical and physico-chemical reactions. From a system-theoretical point of view, the inner perimembranous region can be a primitive membrane organizer, meaning that it provides the transmembrane proteins with a control whose policy reflects the intracellular cytoplasmic activity of the individual neuron. It is primitive in the sense that the biological activity of a primitive unicellular organism, apart from reaction to the environment, is generated by cytoplasmic activity.


It is helpful to look at a unicellular organism to understand the existence of two different kinds of membrane organizer. The outer perimembranous region of a unicellular organism is directly connected to the environmental circumstances of the cell. It plays the role of an environmental membrane organizer, by which the primitive unicellular organism can modify the geometric and functional configuration dynamics of the transmembrane proteins in the cell membrane so as to adapt to the environment. Thus, the membrane organizer of a unicellular organism consists of two parts. The first is the primitive membrane organizer, which controls the geometric and functional configuration dynamics of molecules in the cell membrane with a control policy given genetically by the intracellular cytoplasmic activity. The second is the environmental membrane organizer, which controls them with a control policy driven by the environmental circumstances of the cell. In the case of a multicellular organism such as the brain tissue, the primitive membrane organizer of each constituent cell remains essentially the same as in the unicellular case. The environmental membrane organizer, however, changes drastically. It cannot even be called an “environmental” membrane organizer, because most cells forming the brain are not exposed directly to the environmental circumstances of the multicellular organism. Each constituent cell is exposed only to the local environment of the intercellular space. We therefore call it a local environmental membrane organizer. It exists in the outer perimembranous region in the intercellular space, and is exposed to the influx of neurotransmitters and modulators whenever they are emitted from the neighboring neurons.
The local environmental membrane organizer realizing the dynamical control of the lateral diffusions of transmembrane proteins in the cell membrane, directly related to the functional activity on the short time scale, may therefore be found in the outer perimembranous region in the intercellular space. Recently, for the purpose of clarifying the existence of distributed patterns of activity serving as an ideal substrate for experienced perceptual awareness and the subsequent storage of that experience, we gave a detailed analysis of the dynamically ordered structure of water in the outer perimembranous region of the intercellular space (Jibu and Yasue 1995; Jibu et al. 1998). Since the thermal fluctuation and dissipation of water molecules in the perimembranous region are about 10⁶ times smaller than those of bulk water, this water is kept far from thermal equilibrium. The electromagnetic field plays a principal role as an ideal substrate accounting for the distributed and systematized patterns of activity in the perimembranous region of the intercellular space. Namely, the electromagnetic field manifests two distinct modes: a normal wave mode with real wave number, and a tunneling or evanescent wave mode with imaginary wave number. The former is essentially the well-known part of the electromagnetic field, binding atoms and molecules dynamically to each other. The latter is the damping part of the electromagnetic field, corresponding to a
leak field that can usually be neglected in the case of bulk water, but certainly not in the case of a thin layer of water in the perimembranous region. Owing to its intrinsic electric dipole moment, the water molecule interacts strongly with the electromagnetic field in the perimembranous region. The electric dipoles of the water molecules are systematized globally in the perimembranous region, realizing a uniform configuration. This is the dynamically ordered structure of water in the perimembranous region, supported by the normal wave modes of the electromagnetic field. A long-range alignment of electric dipoles cannot be extended to the whole perimembranous region, but is restricted to a domain with linear dimension smaller than a characteristic length called the coherence length, estimated to be less than 50 µm (Jibu et al. 1994). Thus, we have a distributed spatial structure of the perimembranous region composed of non-overlapping domains of dynamically ordered states of water smaller than the coherence length. This domain structure can be understood as basic to the distributed patterns of activity in the large extent of extracellular fluid in the intercellular space, with which the system of controlled lateral diffusions of transmembrane proteins is expected to interact.

4. Tunnel photon condensate as a local environmental membrane organizer

The domain structure of distributed systematized patterns in the perimembranous region was revealed to be an ideal physical substrate for the distributed patterns of activity surrounding the cell membrane. It provides us with a postulated mechanism of experienced awareness and the subsequent storage of that experience (Jibu et al. 1998): each stimulus driven by the experience, flowing into the cell membranes in the form of neurotransmitters, produces a change in the functional configuration of the transmembrane proteins. This change then triggers the uniform alignment of the electric dipoles of water molecules in the perimembranous region immediately adjacent to the cell membrane in question. Namely, a spatial domain of the dynamically ordered structure of water, of size smaller than the coherence length, is created in which the electric dipoles are aligned in one and the same direction. It is this domain of the dynamically ordered structure of water that is postulated to be a basic part of the physical substrate coordinate with perceptual awareness of the external stimuli in question (Ricciardi and Umezawa 1967; Stuart et al. 1978, 1979).

Modern condensed matter physics has revealed the existence of long-range correlation waves in such a uniform alignment of electric dipoles as the dynamically ordered structure of water in the perimembranous region (Umezawa 1993). These long-range correlation waves are called Nambu-Goldstone modes. The Nambu-Goldstone modes were shown to be generated by a very small energy perturbation, and so each domain of the dynamically ordered structure of water postulated to be a basic part of the physical substrate coordinate with perceptual awareness of external stimuli may generate the Nambu-Goldstone modes at all times, thanks to the noisy thermal circumstances of the neurons. These ever-generated Nambu-Goldstone modes, characteristic of each domain of the dynamically ordered structure, were regarded as the physical substrate coordinate with the retrieval of the perceptual awareness of the external stimuli stored in the dynamically ordered structure of water. Furthermore, it is known that the long-range correlation waves of aligned electric dipoles create a longitudinal mode of the electromagnetic field (Umezawa 1993). This longitudinal mode is described by the electric field vector subject not to the usual Maxwell equation but to a modified Maxwell equation with a mass term. From the modified Maxwell equation, we find that the electromagnetic field in the spatial domain of the dynamically ordered structure of water becomes a non-propagating wave mode with imaginary wave number. Photons are the light quanta associated with the propagating wave modes of the electromagnetic field with real wave numbers. Light quanta associated with the non-propagating wave mode with imaginary wave number are called evanescent photons, virtual photons, or tunnel photons. Due to the form of the modified Maxwell equation, they have nonvanishing mass M ~ 13.6 eV (Del Giudice et al. 1982, 1985, 1986, 1988). Namely, we have massive photons in the perimembranous region. The intercellular space between the neurons is filled not only with the ordered domains of water but also with the non-propagating modes of the electromagnetic field, in which tunnel photons with mass M manifest a condensation.
This condensation of tunnel photons overlaps the spatial domain structure of the dynamically ordered states of water in the perimembranous region (Jibu et al. 1998). Each spatial domain structure of the dynamically ordered states of water in the perimembranous region is regarded as a physical substrate coordinate with perceptual awareness of the external stimuli. Like the Nambu-Goldstone mode, each condensation of tunnel photons overlapping the spatial domain structure of the dynamically ordered states of water is regarded as a physical substrate for the retrieval of the perceptual awareness of the external stimuli stored in the dynamically ordered structure of water. As the tunnel photons do not interact with each other, their condensation can be treated as an ideal Bose gas confined in each ordered domain of the perimembranous region with linear dimension of the order of the coherence length. Then, a standard calculation gives the critical temperature

T = α/M

which is the maximum temperature at which the condensation of tunnel photons can be maintained, where α is a constant given explicitly. Putting in the explicit values of the constants and parameters, we find that the critical temperature T lies in the range of 300–1000 K, which goes well with the actual body temperature. It is of particular interest to regard the condensation of tunnel photons overlapping the spatial domain structure of distributed systematized patterns in the perimembranous region as the physical substrate for the local environmental membrane organizer. Since each tunnel photon condensation in the perimembranous region of the intercellular space is located immediately adjacent to the system of lateral diffusions of transmembrane proteins in the cell membrane, those transmembrane proteins experience a strong systematized drift force due to the macroscopic electromagnetic field induced by the tunnel photon condensation. Such a drift force can be calculated to be proportional to the gradient of the squared magnitude of the electric field vector (Del Giudice et al. 1992). Because each condensation of tunnel photons overlapping the spatial domain structure of the dynamically ordered states of water is regarded as a physical substrate for the retrieval of the perceptual awareness of the external stimuli stored in the dynamically ordered structure of water, the above drift force well reflects the systematized control policy in accordance with the perceptual awareness. In this sense, the tunnel photon condensation in the outer perimembranous region of the intercellular space can be regarded as a local membrane organizer of the neuron.
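As a rough consistency check of the claimed 300–1000 K range, one can evaluate the textbook ideal-Bose-gas critical temperature T_c = (2πħ²/Mk_B)(n/ζ(3/2))^(2/3), which takes the quoted form T = α/M once the density-dependent factors are absorbed into the constant α. The text does not specify the tunnel-photon number density n, so the value used below is purely an illustrative assumption.

```python
# Ideal Bose gas critical temperature for tunnel photons of mass M ~ 13.6 eV/c^2.
# The number density n is an illustrative assumption, not a value from the text.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B  = 1.380649e-23      # Boltzmann constant, J/K
eV   = 1.602176634e-19   # electron-volt, J
c    = 2.99792458e8      # speed of light, m/s
zeta_3_2 = 2.6124        # Riemann zeta(3/2)

M = 13.6 * eV / c**2     # photon effective mass, about 2.4e-35 kg
n = 5.0e18               # assumed tunnel-photon number density per m^3 (illustrative)

T_c = (2 * math.pi * hbar**2 / (M * k_B)) * (n / zeta_3_2) ** (2.0 / 3.0)
print(f"T_c ~ {T_c:.0f} K")   # on the order of body temperature for this density
```

With this assumed density the sketch yields a critical temperature of a few hundred kelvin, illustrating how a sub-keV photon mass can be compatible with condensation at body temperature.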

5. General anesthesia

Modern surgery would be inconceivable without general anesthetics. Yet, after more than a century of research, there are very few points of agreement about how and where they act. The extraordinary diversity of theories proposed over the years is matched only by the diversity of molecules that can induce reversible loss of consciousness. Anesthetics have also become an important tool in the investigation of consciousness. Indeed, general anesthetics form no single chemical class but include substances as dissimilar as xenon and chloroform. This apparent lack of specificity, together with the remarkable observation that general anesthesia can be reversed by high pressure, poses a unique pharmacological problem. Pharmacologists would hardly expect high pressure to be a particularly useful tool when trying to elucidate the mechanism of action of a drug. The intriguing observation that general anesthesia can be reversed by high pressures on the order of 150 atmospheres, however, provides what might be an important clue as to how anesthetics act. General anesthetics have considerably large electric dipole moments and disturb the ordered maintenance of the macroscopic condensation of tunnel photons.


From a physical point of view, the critical temperature of the condensation is understood to become lower due to the electric dipole disturbance of anesthetics, so that the condensation becomes incomplete. It is straightforward to see in quantum field theory that the critical temperature of the macroscopic condensation is proportional to the pressure. Therefore, if general anesthetics act on the critical temperature of the macroscopic condensation of tunnel photons, the pressure reversal can be well understood within the quantum field theoretical framework of QBD. We are now conducting a new experiment to detect the tunnel photons in collaboration with Hamamatsu Photonics.

Note

* We thank Hamamatsu Photonics very much for supporting Tokyo ’99.

References

Del Giudice, E., Doglia, S. and Milani, M. (1982). A collective dynamics in metabolically active cells. Physics Letters, 90A, 104–106.
Del Giudice, E., Doglia, S., Milani, M. and Vitiello, G. (1985). A quantum field theoretical approach to the collective behaviour of biological systems. Nuclear Physics, B251, 375–400.
Del Giudice, E., Doglia, S., Milani, M. and Vitiello, G. (1986). Electromagnetic field and spontaneous symmetry breaking in biological matter. Nuclear Physics, B275, 185–199.
Del Giudice, E., Preparata, G. and Vitiello, G. (1988). Water as a free electric dipole laser. Physical Review Letters, 61, 1085–1088.
Del Giudice, E., Doglia, S., Milani, M. and Vitiello, G. (1992). A dynamical mechanism for cytoskeleton structures. In M. Bender (Ed.), Interfacial phenomena in biological systems. New York: Marcel Dekker.
Jacobson, K., Ishihara, A. and Inman, R. (1987). Lateral diffusion of proteins in membranes. Annual Review of Physiology, 49, 163–175.
Jibu, M. and Yasue, K. (1993). Intracellular quantum signal transfer in Umezawa’s quantum brain dynamics. Cybernetics and Systems: An International Journal, 24, 1–7.
Jibu, M. and Yasue, K. (1995). Quantum brain dynamics and consciousness: An introduction. Amsterdam: John Benjamins.
Jibu, M., Hagan, S., Hameroff, S. R., Pribram, K. H. and Yasue, K. (1994). Quantum optical coherence in cytoskeletal microtubules: Implications for brain function. BioSystems, 32, 195–209.
Jibu, M., Pribram, K. H. and Yasue, K. (1996). From conscious experience to memory storage and retrieval: the role of quantum brain dynamics and boson condensation of evanescent photons. International Journal of Modern Physics, B10, 1745–1754.
Jibu, M., Yasue, K. and Hagan, S. (1997). Evanescent (tunneling) photon and cellular ‘vision’. BioSystems, 42, 65–73.


Kusumi, A. and Sako, Y. (1996). Cell surface organization by the membrane skeleton. Current Opinion in Cell Biology, 8, 566–574.
Pribram, K. H. (1991). Brain and perception. New Jersey: Lawrence Erlbaum.
Ricciardi, L. M. and Umezawa, H. (1967). Brain and physics of many-body problems. Kybernetik, 4, 44.
Sako, Y. and Kusumi, A. (1994). Compartmentalized structure of the plasma membrane for receptor movements as revealed by a nanometer-level motion analysis. Journal of Cell Biology, 125, 1251–1264.
Singer, S. J. and Nicolson, G. L. (1972). The fluid mosaic model of the structure of cell membranes. Science, 175, 720–731.
Stuart, C. I. J. M., Takahashi, Y. and Umezawa, H. (1978). On the stability and non-local properties of memory. Journal of Theoretical Biology, 71, 605–618.
Stuart, C. I. J. M., Takahashi, Y. and Umezawa, H. (1979). Mixed-system brain dynamics: neural memory as a macroscopic ordered state. Foundations of Physics, 9, 301–327.
Umezawa, H. (1993). Advanced field theory: micro, macro and thermal physics. New York: American Institute of Physics.

Appendix — Celebrating Dr. Pribram’s 85th anniversary

I started my career in neuroscience and biophysics under the kind and warmhearted remote direction of Dr. Karl Pribram, a few years before he left Stanford University. My presence today in the scientific community would hardly have been realized without his direction, full of deep insight and encouragement. On the very first day of Tokyo ’99, I had to defend my Ph.D. dissertation at Okayama University Medical School, and when I was back in the conference auditorium I could tell Dr. Pribram of my success in defending my dissertation. He was the first to celebrate my Ph.D., and on this opportunity I would like to celebrate his 85th anniversary by making an exposition of my first (unpublished) paper, written in my first year under Dr. Pribram’s remote instruction from Stanford.

Control mechanism of the neuronal plasticity: The drifting channel hypothesis Mari Jibu

Abstract

It is suggested that in the neuronal membrane the channel proteins drift due to the undulatory collective motion of the membrane phospholipids, which is tuned over the neuron under the control of incoming graded potentials at the neuronal junctions, e.g., synapses, ephapses, and tight junctions. As a result, the neuronal membrane manifests in itself a spatial pattern of the ionic channel distribution, which forms a chemical network controlled by the graded potentials. This pattern formation process of the channel distribution may provide us with a mechanism of the neuronal plasticity important for short-term memory processing in the brain.


Among the neuronal junctions, those placed on the chemical network of channel proteins manifest higher efficiencies, and those outside it manifest lower ones. It is also suggested that the transfer of short-term memory to long-term memory may be the result of cross-linking of channel proteins due to antibodies or lectins. The chemical network formed by the channel distribution becomes in part immovable due to the chemical cross-linking process. The efficiencies of the neuronal junctions placed on such an immovable chemical network of channels may be kept higher for a longer period. Certain important characteristics of brain function may be well explained by the present drifting channel hypothesis.

A1. Introduction

In this paper we present a set of speculative hypotheses concerning the functions of membrane phospholipid oscillations of the neuron, in particular, the collective drifting motion of channel proteins controlled by the incoming graded potentials at the neuronal junctions. The controlled mobility of the channel proteins may be understood as the origin of the neuronal plasticity. It will provide us with a model of the control mechanism of the efficiencies of neuronal junctions. This model will also tell us how short-term memory (STM) can be transferred to long-term memory (LTM). It is now widely believed that the fundamental memory processes in the brain are realized by the assembly of neurons interconnected by axonal and dendritic junctions. Taken to the extreme, such an assembly of neurons may be considered as a circuit or network in which the terminal nodes are neurons and the wirings are axons and dendrites. Because it was not known how the neuron works as the most fundamental processing unit of the brain, many authors have so far proposed a variety of oversimplified models of neuronal networks. Assuming the possibility of recombining the wirings, the neuronal network has been shown to manifest rearrangement processes in response to external stimuli (Anderson and Rosenfeld 1988). As a computational problem, the analysis of such a flexible network is not so much sophisticated as tedious, and further investigations are still being pursued extensively. Although such formal approaches with simplified models of neuronal networks may provide us with a partial understanding of the computational function of the brain, they cannot bring us to a real understanding of brain function, simply because they do not reflect the fundamental processes of the neuron. What is needed for current research on brain function is a better understanding of the fundamental processes occurring in a single neuron.
In particular, molecular biological understandings of the origin and control mechanism of the neuronal plasticity are needed from both experimental and theoretical points of view. A considerable amount of experimental verification of synaptic plasticity (i.e., the part of neuronal plasticity taking place in the chemical synapses) has been reported recently (Katsumaru et al. 1982; Murakami et al. 1982). On the other hand, very few proposals have been made toward a theoretical account of the neuronal plasticity. The hypothesis that the dendritic spines manifest actin-induced contraction and that the synapse efficiency changes due to the contraction, proposed by Crick (1982), may provide us with a possible mechanism of dendro-dendritic synapse plasticity. However, no hypotheses concerning both the origin and the control mechanism of the whole neuronal plasticity have been presented.


Pribram (1971) has pointed out the importance of dendritic microprocesses in the neuronal membrane of dendrites subject to the integrated control of analogue graded potentials. Such a dendritic microprocess can be considered conceptually as a holoscopic pattern of polarizations and depolarizations on the neuronal membrane. In the present paper, we present a much more realistic model of neuronal microprocesses, together with new hypotheses concerning both the origin and the control mechanism of the neuronal plasticity. Our starting point is the molecular biological consideration of the collective motion of the neuronal membrane, in analogy with the recently discovered fundamental control processes of the amoeba-like cell (Meinhardt 1982; Ueda and Kobatake 1983; Sato et al. 1985).

A2. Neuronal membrane

Before proceeding to the exposition of our new hypotheses concerning the origin and control mechanism of the neuronal plasticity, it may be worthwhile to present here a brief sketch of the structure of the neuronal membrane from the molecular biological point of view. This is because the fundamental processes relevant to the neuronal plasticity are believed to take place within the two-dimensional extent of the neuronal membrane. Let us consider a typical nerve cell, that is, a neuron in the brain. Like other types of biological cells, the neuron is surrounded by a membrane, which we call the neuronal membrane. From the point of view of cell morphology, the neuron has been considered to be made of a cell body, several dendrites, and one axon (or none). However, from the functional point of view of molecular biology, it may be convenient to consider that the neuron is made of cytoplasm and the neuronal membrane surrounding it, irrespective of its morphological aspect. This is because the detailed electric and chemical action of a neuron is almost entirely determined by the distribution of ionic channel proteins on the neuronal membrane, as will be seen later on. The terminal knobs and dendrites of other neurons contact the neuronal membrane of the neuron in question. These contact points are called neuronal junctions. They are classified into three types: the chemical synapses, the electrical ephapses, and the tight junctions. The neuronal junctions play the role of entrance gates through which the neuronal membrane is affected by the action potentials of other neurons. There, both chemical interactions via the neurotransmitters and electric interactions via the membrane potentials may take place between the neuronal membranes. Let us examine the microstructure of the neuronal membrane from the molecular biological point of view (Alberts et al. 1983). Like other cell membranes, the neuronal membrane is nothing but a lipid bilayer 4–5 nanometers thick.
The planar density of the lipid bilayer is about 5×10⁶ lipid molecules per square micrometer. The lipids constituting the neuronal membrane bilayer are amphipathic ones. The great majority of them are phospholipids, and the minorities are cholesterols and glycolipids. The lipid bilayer of the neuronal membrane manifests a dynamical structure showing fluidity. In other words, the lipid bilayer can be considered as a two-dimensional fluid of lipids. The fluidity, or the viscosity, of the lipid bilayer is determined by its constituents. For example, the cholesterols control the fluidity, and the more double bonds the phospholipids contain, the less viscous the lipid bilayer becomes. The glycolipids are distributed on the surface of the neuronal membrane and so are exposed to the external environment of the neuron.


The neuronal membrane is not a mere lipid bilayer. There exist other constituents which are very important from the point of view of membrane activity. They are proteins such as the ionic channel proteins, the enzymes, and the antigens. These proteins are called membrane proteins. They are embedded in the lipid bilayer and show considerable mobility in the two-dimensional extent of the neuronal membrane. For example, the diffusion constant of the planar diffusions of typical proteins in the membrane is reported to fall into the range from 5×10⁻⁹ cm²/sec to 10⁻¹² cm²/sec. As one can easily imagine, the membrane proteins are drifting and floating in the “sea” of the lipid bilayer just like icebergs in the sea. The membrane proteins perform diffusive motions and drift in the neuronal membrane until their planar distribution pattern becomes the equilibrium one. Thus the drifting motion of membrane proteins is nothing but the well-known Brownian motion. The distribution pattern of the membrane proteins in the neuronal membrane is the most fundamental and important factor in determining the action and behavior of the neuron. For example, the distribution pattern of the ionic channel proteins in the membrane determines the intra-membrane ionic circuit relevant for the neuronal information processing. Unfortunately, no experimental or theoretical considerations have been reported concerning the mechanism that determines and controls the distribution pattern of the ionic channel proteins. Of course, the pattern formation of the protein distributions in the neuronal membrane may be subject to various controls at different stages. It may be easily understood that genetic control is most influential in the biogenetic phase. Unlike the other types of biological cells, the neuron has the specific characteristic that genetic control becomes less influential in the later phases.
On the other hand, the external electrical and chemical controls become dominant, so that the neurons in the neuronal network can flexibly modify their reactions to the mutual influences through the neuronal junctions. Such a controlled pattern formation process of the distribution of ionic channel proteins in the neuronal membrane may be understood as the origin of the mechanism of neuronal plasticity. However, this is a speculation. As we emphasized before, neither theoretical (i.e., hypothetical) nor experimental investigations of the control and formation mechanisms of the distribution pattern of the ionic channel proteins in the neuronal membrane are known. For a better understanding of the action and behavior of the neuron, we need to look for a theoretical and hypothetical model of the pattern formation process of the ionic channel distributions in the neuronal membrane.
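To get a sense of scale for the mobilities quoted above, one can use the elementary two-dimensional mean-square-displacement relation ⟨r²⟩ = 4Dt (a standard diffusion result, not stated in the text) to estimate how long a protein takes to wander across a micrometer-scale patch of membrane at the fast and slow ends of the quoted range:

```python
# Rough time t ~ L^2 / (4 D) for a membrane protein to diffuse a distance L
# in two dimensions, for the range of diffusion constants quoted in the text.
L = 1e-4                      # 1 micrometer, expressed in cm
for D in (5e-9, 1e-12):       # cm^2/sec, fast and slow ends of the quoted range
    t = L**2 / (4 * D)
    print(f"D = {D:g} cm^2/s  ->  t ~ {t:g} s")
```

The two ends of the range differ by more than three orders of magnitude: about half a second for the fastest proteins versus the better part of an hour for the slowest, which is why the distribution pattern can be treated as quasi-stable on short time scales.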

A3. Basic hypotheses

We now present the exposition of a dynamical model of fundamental processes in the neuronal membrane, with new speculative hypotheses concerning the control mechanism of the neuronal plasticity. What we need is a principle and mechanism for the pattern formation process of the membrane proteins in the neuronal membrane, as we pointed out in the preceding section. For the purpose of making the hypotheses more realistic, it is better to refer to the case of the perceptual fundamental processes in the cell membrane known in much simpler biological systems. For this aim it may be worth noticing the recent experimental discovery of the membrane phase-wave control mechanism of the action and response of the amoeba-like unicellular organism (Ueda and Kobatake 1983; Sato et al. 1985; Yasue 1988). It has been found that the responsive motion of the cell against stimuli is controlled by the undulatory collective motion of the cell membrane. Each local part of the cell membrane manifests a quasi-periodic nonlinear oscillation induced by the Ca²⁺-controlled actin-myosin contraction-relaxation process. External stimuli given locally on the cell membrane modify the phase of the local membrane oscillations. This local disturbance of the phase spreads immediately over the whole cell membrane, because no additional energy is needed to modify the phase of the membrane oscillations. Thus, the principal effect of the external stimuli on the cell membrane oscillations is to modify their phases and generate a phase wave, that is, a spatial distribution of phase differences. Massive proteins in the cell membrane have weak chemical couplings to the lipid bilayer and are affected by the cell membrane oscillations surrounding them. The resulting motion of a massive protein in the membrane is a Brownian motion, i.e., a diffusion with drift velocity proportional to the gradient of the phase difference of the cell membrane oscillations around the massive protein. Such diffusions of the massive proteins in the cell membrane may produce the observed responsive motion of the unicellular organism against external stimuli. In other words, the phase-wave-controlled diffusions of the massive proteins may be understood as the perceptual fundamental processes in the cell membrane for the simplest biological system, the unicellular organism. It seems not only quite interesting but also challenging to generalize the perceptual fundamental processes in the cell membrane of the unicellular organism to the neuronal activity in the brain.
Namely, certain perceptual fundamental processes similar to those in the cell membrane of the unicellular organism may be thought to take place in the neuronal membrane in the brain. The principal hypotheses, with which we are going to develop a new molecular biological model of the mechanism of the neuronal plasticity, thus concern the control mechanism of the distribution of ionic channel proteins in the lipid bilayer of the neuronal membrane. The first hypothesis is the following: Each local element of the lipid bilayer of the neuronal membrane manifests a physical oscillatory motion induced by the Ca²⁺-controlled actin-myosin contraction-relaxation process. This oscillatory motion will be called the fundamental or reference oscillation of the neuronal membrane. As every local element of the bilayer oscillates, the whole neuronal membrane shows an undulatory collective motion, which we call a reference wave of the neuronal membrane. In the vicinity of a neuronal junction, the fundamental oscillation of the neuronal membrane may be subject to the external perturbation of the junctional graded potential. This perturbation results principally in the modification not of the amplitude but of the phase of the oscillation. This is simply because variation of the phase needs much less energy than variation of the amplitude. This fact also suggests that the local variation of the phase of the fundamental oscillation near the neuronal junction spreads and propagates over the whole neuronal membrane in a time interval comparable to one or two periods of the fundamental oscillation. At the neuronal junctions, not only the graded potentials but also the rapidly changing impulses will in general be imposed. However, such impulses give no net effect on the quasi-stationary modulation of the fundamental oscillations because they change too rapidly. As


we will see later on, the ionic channel proteins follow the quasi-stationary modulation pattern of the fundamental oscillations and form a chemical network which remains unchanged and stable as long as the graded potentials do not vary drastically. Therefore, for the analysis of the pattern formation process of the stable distribution of membrane proteins, we should work with the shorter time scale on which the imposed graded potentials can safely be considered as slowly varying quantities. We state those considerations as the second hypothesis: The whole set of fundamental oscillations of the neuronal membrane is controlled quasi-instantaneously by the slowly varying graded potentials imposed at the neuronal junctions. The reference wave of the neuronal membrane is modulated by the spatio-temporal pattern of the imposed graded potentials. As we have seen before, the membrane proteins such as the ionic channel proteins are drifting horizontally in the lipid bilayer of the neuronal membrane. The weak chemical coupling between the proteins and the lipids makes the drifting motions of membrane proteins sensitive to the surrounding fundamental oscillations of the neuronal membrane. Of course, they are also subject to random perturbations due to the thermally fluctuating motions of the membrane lipids, and as a result their motions manifest Brownian motions with drift velocities controlled by the reference wave of the neuronal membrane. Such a Brownian motion is nothing but a two-dimensional diffusion in the membrane bilayer. The diffusion constant may not deviate largely from that of general membrane proteins in the cell membrane. We may safely assume a value ν between 10⁻¹² and 5×10⁻⁹ cm²/sec.
To determine the drift velocity of the diffusion of the membrane protein controlled by the reference wave of the neuronal membrane, we need the following third hypothesis: Due to the weak chemical coupling between the membrane protein and the lipid bilayer, the drift velocity of the lateral diffusion of the protein in the neuronal membrane is given, in a first approximation, by a two-dimensional directional vector proportional to the spatial frequency vector (i.e., the wave-number vector) of the reference wave of the neuronal membrane. In other words, the membrane proteins drift here and there in the lipid bilayer, tracing the direction in which the reference wave varies most. The situation may be analogous to the drift of a raft in the sea. It may be worthwhile to notice here the physical fact that material floating or drifting in undulatory surroundings is affected most if the wavelength of the undulatory surroundings is longer than the linear dimension of the material. Therefore, the above hypothesis should be understood as an approximate one, in the sense that it remains valid for smaller values of the spatial frequency. However, the reference wave of the neuronal membrane may not have high spatial frequency components, because the energy of the fundamental oscillations of the neuronal membrane remains small. Therefore, the third hypothesis provides us with a practically good approximation of the control mechanism of the spatial distribution pattern of the membrane proteins. It is widely believed nowadays that the spatial distribution pattern of the membrane proteins, especially the ionic channel proteins, in the neuronal membrane determines the behavior of the neuron as the fundamental processing unit in the neuronal network of the brain.


This is because the intra-membrane ion transportation of the neuron is activated along the successively distributed ionic channel proteins. Namely, the spatial distribution pattern of the ionic channel proteins determines a chemical network of ion transportation inside the neuronal membrane. We note again that the mechanism of the formation or reformation process of the ionic channel distribution in the neuronal membrane is not known. It is therefore of great importance to develop a theoretical or hypothetical model of that pattern formation process. The three basic hypotheses mentioned above will explain the formation and reformation processes of the spatial distribution pattern of the ionic channel proteins by means of the stationary (i.e., standing) reference wave of the neuronal membrane.
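The three hypotheses together amount to a biased two-dimensional Brownian motion: a diffusion whose drift velocity is proportional to the local wave-number vector (phase gradient) of the reference wave. A minimal Euler-Maruyama sketch of such drift-diffusion follows; the phase function S, the coupling constant mu, and the diffusion constant are toy values chosen purely for illustration, not quantities taken from the model.

```python
# Toy 2-D drift-diffusion of a membrane protein: drift proportional to grad S
# (the local wave-number vector of an assumed reference-wave phase S),
# plus thermal noise of size nu.  All values are illustrative toy units.
import math, random

random.seed(0)

def grad_S(x, y):
    """Gradient of an assumed reference-wave phase S(x, y) = sin(kx) + sin(ky)."""
    k = 2.0 * math.pi          # toy spatial frequency of the reference wave
    return (k * math.cos(k * x), k * math.cos(k * y))

nu, mu, dt = 1e-3, 0.05, 1e-3  # diffusion const., drift coupling, time step
x = y = 0.3                    # initial position of the protein's representative point
for _ in range(20000):
    gx, gy = grad_S(x, y)
    sigma = math.sqrt(2 * nu * dt)           # noise amplitude per step
    x += mu * gx * dt + sigma * random.gauss(0, 1)
    y += mu * gy * dt + sigma * random.gauss(0, 1)
print(f"final position: ({x:.2f}, {y:.2f})")
```

With this toy phase, the drift pushes the representative point toward the nearest crest of the reference wave (near x = y = 0.25), where it then fluctuates under the thermal noise: exactly the kind of pattern-locking of channel proteins that the hypotheses describe.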

A4. Fundamental equations To make a detailed exposition of our new model of neuronal plasticity based on the three basic hypotheses, we need necessarily a mathematical formulation. We will restrict ourselves to the intuitive mathematical manipulations so that the readers unfamiliar with the issues in mathematical or theoretical biology can easily understand the essential feature of the model. First, we need a numerical method to indicate the arbitrary position in the twodimensional extent of the neuronal membrane. Because we are interested in the spatial distribution pattern of the membrane proteins, we need to illustrate the horizontal motion of each membrane protein in the membrane lipid bilayer. From the geometrical point of view, the form of the neuronal membrane is a two-dimensional closed surface if one neglects its extremely small thickness. Therefore, it is convenient to introduce an imaginary twodimensional surface covering the outside of the real membrane lipid bilayer of the neuron. We call it the neuronal membrane manifold or simply the neuronal membrane in our model. We dot the horizontal position of the center of mass of each polymer of membrane protein or lipid onto the neuronal membrane manifold, and call it a representative point of the membrane constituents. Since the whole frame of the membrane is made of the lipid polymers, the extremely many representative points of lipid polymers distribute densely in the while neuronal membrane manifold. Those of protein polymers distribute less densely. The fundamental oscillations of the membrane lipids, that is, the reference wave of the neuronal membrane can be represented by a numerical wave spreading over the whole membrane manifold. For the facility of mathematical description of the model, we denote the neuronal membrane manifold by a symbol M and introduce there a two-dimensional coordinate system so that each point in M is labeled by a pair of real numbers (x1,x2). 
This pair is called a coordinate of the point and abbreviated as x = (x¹, x²). (Recall that any place on the surface of the Earth can be labeled by a pair of real numbers denoting the latitude and longitude.) As the representative points of lipid polymers are densely distributed in the manifold M, we should imagine a representative oscillatory degree of freedom of the fundamental oscillation at each point x of M. The fundamental oscillation is in general nonlinear, and the representative oscillatory degree of freedom at x may be given numerically by the expression

θ(x,t) = a(x) e^{iS(x,t)}    (1)

Mari Jibu

for the appropriate time range of t. Here, a(x) denotes the amplitude and S(x,t) the phase of the fundamental oscillation. The role of the imaginary unit i is merely formal, a mathematical convenience for describing the oscillatory degree of freedom. (If you do not like it, you may replace the above expression by the familiar trigonometric form θ(x,t) = a(x)(cos S(x,t) + i sin S(x,t)).) The fundamental oscillations of the neuronal membrane are induced by the Ca2+ controlled actin-myosin contraction-relaxation process, according to the first hypothesis. As the actin monomers may be thought to be distributed almost uniformly in the lipid bilayer of the neuronal membrane, the amplitude of each fundamental oscillation may remain almost the same. This means that the representative fundamental oscillation (1) becomes

θ(x,t) = a e^{iS(x,t)}    (2)

for any position x and time t. If we consider the highly idealized case in which the density of the Ca2+ ion changes uniformly over the whole neuronal membrane, the actin-myosin contraction-relaxation processes proceed uniformly. In this case, the fundamental oscillation (2) becomes independent of the position x,

θ(t) = a e^{iS(t)}    (3)

and if we approximate it by a harmonic (i.e., linear) oscillation we have

θ(t) = a e^{−iωt}    (4)

where ω is the circular frequency. The period of the oscillation is given, of course, by

T = 2π/ω    (5)

In the realistic case of the neuronal membrane, the fundamental oscillations can hardly be considered synchronized, owing to the various external perturbations. There, the fundamental oscillations may be represented by the asynchronous form (2). The second hypothesis claims that the major external perturbations to the fundamental oscillations of the neuronal membrane are due to the graded potentials imposed at the neuronal junctions. To realize the second hypothesis in our model, we need to introduce yet another degree of freedom at every position x of the neuronal membrane manifold M, one which represents the distribution of the imposed graded potentials. Let V(x) be the value of the slowly varying graded potential, in millivolts, at each position x of M. As it is considered a slowly varying quantity on the time scale of the cognitive fundamental process in the neuron, it need not depend on time t in our model. Of course, V(x) is nonvanishing only for positions x in the vicinities of the neuronal junctions. Thus, the function V(x) of position x represents not only the value distribution of the graded potentials but also the spatial distribution of neuronal junctions. We call it the external potential. Since both the neuronal junctions and the graded potentials imposed there are external parameters of the perceptual fundamental processes of the neuron taking place in the neuronal membrane, the functional form of the external potential V(x) may be assumed as a given quantity in the present model. Due to the second hypothesis, the whole set of fundamental oscillations is modulated in phase by the imposed graded potentials. In other words, the imposed graded potentials

The mind-body and the light-matter

control quasi-instantaneously the phase pattern of the fundamental oscillations over the whole neuronal membrane. To write down the mathematical representation of this control policy we proceed further with the analysis of the reference wave composed of the fundamental oscillations. First, we consider the idealized situation in which the external potential V(x) vanishes identically. Because of the absence of external perturbations, the fundamental oscillations of the neuronal membrane may be assumed fully synchronized as given by Eq. (4). As we have seen in the second hypothesis, the first effect of the onset of the graded potential is the variation of the phase of the fundamental oscillations. Then, in the presence of the external potential V(x), the unperturbed fundamental oscillations

θ(x,t) = θ(t) = a e^{−iωt}    (6)

become modulated in their phases. Since the external potential V(x) is slowly varying and does not depend effectively on time t, the phase modulations due to it may not depend on time either. Thus, the fundamental oscillations of the neuronal membrane in the presence of the external potential may be given in the first approximation by

θ(x,t) = a e^{−iωt + iW(x)} ≡ a e^{iS(x,t)}    (7)

Here, W(x) is the phase modulation at the position x of the neuronal membrane manifold M. Second, we consider the spatial frequency of the reference wave composed of the modulated fundamental oscillations (7). As is well known, the undulatory motion (7) has the angular frequency defined by

−∂S/∂t = ω    (8)

and the spatial frequency or wave number vector

∇S(x,t) = ∇W(x)    (9)

at each position x of M. Here, ∇ is the symbol for taking the gradient, and ∇W(x) is, for example, a two-dimensional vector given by

∇W(x) = (∂W(x)/∂x¹, ∂W(x)/∂x²)    (10)

This spatial frequency ∇W(x) shows the distribution pattern of the quasi-stationary phase modulations of the reference wave θ due to the imposed external potential V(x). Now we may write down the control policy of the phase modulation of the fundamental oscillations due to the slowly varying graded potentials. Let f be a constant representing the strength of the coupling of the graded potentials to the fundamental oscillations, in units of millivolt·cm². Then, we adopt the simplest form of a reciprocal relation between the spatial frequency of the reference wave ∇W(x) and the external potential V(x),

(1/2)|∇W(x)|² + fV(x) = constant    (11)
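As a rough numerical illustration of how the balance equation (11) ties the phase modulation to the graded potentials, the following sketch recovers one branch of W(x) from a one-dimensional external potential. The grid, the coupling constant f, the integration constant C, and the Gaussian shape of V are all assumptions of the sketch, not part of the model.

```python
import numpy as np

# 1-D sketch of the balance equation (1/2)|W'(x)|^2 + f V(x) = C:
# given a hypothetical external potential V(x), recover one (increasing)
# branch of the phase modulation W(x) by quadrature.
x = np.linspace(0.0, 1.0, 501)
f = 0.5
V = np.exp(-((x - 0.5) ** 2) / 0.01)       # graded potential peaked at one junction

C = f * V.max() + 1.0                       # constant chosen so that C - f V > 0
dW = np.sqrt(2.0 * (C - f * V))             # |W'(x)| from the balance equation
W = np.concatenate(([0.0], np.cumsum(0.5 * (dW[1:] + dW[:-1]) * np.diff(x))))

# Where V is large, |W'| is small: the phase varies slowly near the junction,
# as the text states below Eq. (11).
slope_at_junction = dW[len(x) // 2]
slope_far_away = dW[0]
```

This is only a one-dimensional caricature; on the full two-dimensional manifold M, Eq. (11) constrains |∇W| but not the direction of ∇W.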


Namely, the absolute square of the spatial frequency balances the external potential at each position of the neuronal membrane manifold M. As the spatial frequency gives the number of waves per unit length, it shows the rate of spatial change of the phase of the fundamental oscillations. Therefore, the phase difference between neighboring fundamental oscillations becomes small for larger values of the external potential, and vice versa. In other words, the graded potential imposed at each neuronal junction controls the fundamental oscillations there so that the phase modulation is suppressed as the graded potential increases. Since the balance equation (11) between the spatial frequency and the external potential is valid at every point of the neuronal membrane manifold M, the totality of imposed graded potentials determines the distribution pattern of the spatial frequency of the reference wave θ over the whole neuronal membrane. So far we have introduced two fundamental quantities important in realizing our first and second hypotheses concerning the control mechanism of the neuronal plasticity. They are the reference wave θ given by Eq. (7) and the external potential V(x) representing the spatial distribution of the imposed graded potentials at the neuronal junctions. They are subject to the global balance equation (11). However, the activity or reaction of the neuron against the imposed graded potentials cannot be described by investigating only the fundamental oscillations of the neuronal membrane. As we have seen in the preceding section, the neuronal activity is determined mainly by the distribution pattern of the membrane proteins on the neuronal membrane. Thus, the next step is to realize the third hypothesis within our model. Recall that our third hypothesis claims the following. The weak chemical coupling between the membrane proteins and the lipid bilayer forces the former to drift in the latter, carried along by the reference wave.
Therefore, the membrane protein diffuses in the two-dimensional extent of the membrane lipid bilayer with the diffusion constant ν and a drift velocity proportional to the spatial frequency vector ∇S = ∇W. Of course, the diffusion constant ν takes a different value for each class of membrane proteins. Because we are mainly interested in the pattern formation of chemical networks of ionic channel proteins, we consider here the diffusion of Na+ ion channel proteins. Let ρ(x,t) be the number density of the Na+ ion channels at time t in the vicinity of a point x on the membrane manifold M. The functional form of ρ(x,t) thus illustrates the instantaneous spatial distribution of the Na+ ion channels. Due to the third hypothesis, the Na+ ion channel proteins manifest collective motions represented by a common diffusion process with the drift velocity μ⁻¹∇W and the diffusion constant ν, where μ is a constant standing for the immobility of the channel proteins against the reference wave of the membrane lipid bilayer. This fact can be written in the well-known Fokker-Planck equation

∂ρ(x,t)/∂t = −div[(1/μ) ∇W(x) ρ(x,t)] + ν Δρ(x,t)    (12)

where div denotes the divergence of a vector field on the manifold M and Δ = div∇ is the Laplacian operator on M. This is the fundamental equation determining the time evolution of the spatial distribution of the Na+ ion channel proteins in the neuronal membrane manifold. Suppose that the graded potentials are imposed at the neuronal junctions. They are represented by the external potential V, which controls the spatial frequency modulation of the reference wave θ over the whole neuronal membrane manifold through the balance equation (11). The spatial frequency modulation thus determined controls the time evolution of the distribution density ρ of the Na+ ion channels in the neuronal membrane. This illustrates the pattern formation process of the chemical network of ionic channels in the neuronal membrane, which may be understood as the fundamental process of the neuronal plasticity. In general, the distribution density ρ subject to the Fokker-Planck equation (12) may vary drastically and represent no stable (i.e., time-independent) chemical network of the ionic channels. However, it is well known that the distribution density ρ approaches a unique time-independent distribution density very quickly. We denote it by ρeq(x) and call it the equilibrium or stationary distribution. Einstein's famous analysis of Brownian motion shows that the equilibrium distribution ρeq(x) of the diffusion subject to the Fokker-Planck equation (12) is given by the formula

ρeq(x) = e^{W(x)/(μν)}    (13)
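Equation (13) can be checked numerically: with ρeq(x) = e^{W(x)/(μν)}, the drift and diffusion terms of the Fokker-Planck equation (12) cancel, so the probability flux J = (1/μ)W′ρ − νρ′ vanishes. The constants and the sinusoidal W below are invented purely for this one-dimensional check.

```python
import numpy as np

# Verify that rho_eq(x) = exp(W(x)/(mu*nu)) is a stationary solution of the
# 1-D Fokker-Planck equation (12) by checking that the probability flux
# J(x) = (1/mu) W'(x) rho(x) - nu rho'(x) vanishes (up to discretization error).
mu, nu = 2.0, 0.3
x = np.linspace(0.0, 2.0 * np.pi, 2001)
W = np.sin(x)                               # illustrative phase modulation
dW = np.cos(x)                              # its exact derivative

rho_eq = np.exp(W / (mu * nu))
drho = np.gradient(rho_eq, x, edge_order=2) # numerical derivative of rho_eq

flux = (dW / mu) * rho_eq - nu * drho       # analytically zero for (13)
max_flux = np.abs(flux).max()
```

The residual flux is set only by the finite-difference error, several orders of magnitude below the individual drift and diffusion terms, which is the numerical content of Einstein's relation quoted in the text.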

Such a stationary solution of the Fokker-Planck equation (12) as the equilibrium distribution (13) describes the stable distribution pattern of the Na+ ion channel proteins in the neuronal membrane manifold. Being driven or controlled by the spatial frequency ∇W of the reference wave θ through the Fokker-Planck equation (12), the stable distribution density is determined by the phase W of the reference wave through the simple formula (13). Recall that the phase distribution W of the reference wave is determined approximately by the balance equation (11) in terms of the external potential V. As the spatial frequency ∇W of the reference wave has been related to the drift velocity of the diffusion of the ionic channel proteins, the balance equation (11) may be interpreted as the dynamical equation for the diffusion. We rewrite Eq. (11) as

(1/2μ)|∇W(x)|² + gV(x) = λ    (14)

where g = fμ⁻¹, and consider it the fundamental dynamical equation determining the drift velocity of the ionic channel proteins in terms of the imposed external potential. So far we have tacitly assumed that the ionic channel proteins diffuse independently of each other. This may be a good approximation as long as the distribution density of the channel proteins is not high. However, to make the model more realistic, we had better consider the mutual interaction between ionic channel proteins of the same class. As the distribution density becomes higher in a certain domain of the neuronal membrane manifold, the ionic channel proteins there interact with each other by mutual collisions. Consequently, a repulsive force between the ionic channels arises in that domain, and we must introduce there the population pressure (Nagasawa 1980) of the distribution of Na+ ion channel proteins,

Q(x,t) = −(μν²/2) Δ√ρ(x,t) / √ρ(x,t)    (15)


Because we are interested in the equilibrium distribution of the channel proteins, the relevant form of the population pressure becomes

Qeq(x) = −(μν²/2) Δ√ρeq(x) / √ρeq(x)    (16)

We should take this population pressure into account in the dynamical balance equation (14), rewriting it as

(1/2μ)|∇W(x)|² + Qeq(x) + gV(x) = λ    (17)

This is the fundamental dynamical equation in our model of neuronal plasticity, from which the stable equilibrium distribution ρeq(x) of the Na+ ion channels can be determined subject to the imposed graded potentials represented by the external potential V. Notice that the degree of freedom of the fundamental membrane oscillations disappears from the fundamental equation (17). It describes a direct control policy of the equilibrium distribution of the ionic channel proteins by the imposed graded potentials, without referring to the background reference wave of the neuronal membrane oscillations. Although the dynamical balance equation (17) describes completely the fundamental pattern formation process of the stable equilibrium distribution of the ionic channel proteins, it is a nonlinear partial differential equation for the unknown variable ρeq(x) and quite difficult to analyze from the mathematical point of view. However, it may be converted into a simpler form by introducing the new variable

u(x) = √ρeq(x)    (18)

obtaining

[−(μν²/2) Δ + gV(x)] u(x) = λ u(x)    (19)

This is a linear elliptic partial differential equation and can be solved as a well-known eigenvalue problem. It deserves to be the fundamental equation of the present model of neuronal plasticity based on our three hypotheses. Notice that the constants μ, ν and g are all given quantities, and the functional form of the external potential V is also given. The unknown quantities in Eq. (19) are u(x) and λ, which are called the eigenfunction and the eigenvalue, respectively. For any combination of graded potentials at the neuronal junctions, the problem of finding the controlled stable spatial pattern of the channel distribution in the neuronal membrane is now replaced by the problem of finding the eigenfunction u(x) and the eigenvalue λ of the eigenvalue equation (19). We call u(x) the order parameter and λ the degree of order of the distribution of the ionic channels. Equation (19) will be called the order equation of the ionic channel distribution.
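To show how the order equation (19) might be handled in practice, one can discretize it in one dimension and solve the resulting matrix eigenvalue problem. Everything in this sketch is an illustrative assumption: the constants, the Dirichlet boundaries, the two-well shape of gV (chosen negative here so that the lowest eigenfunction concentrates near the "junctions"; the sign convention is ours, not the model's), and the choice of the lowest eigenpair as the nodeless order parameter.

```python
import numpy as np

# 1-D finite-difference sketch of the order equation (19),
#   [-(mu*nu^2/2) Delta + g V(x)] u(x) = lambda u(x),
# with Dirichlet boundaries. The two wells stand in for two neuronal junctions.
mu, nu, g = 1.0, 0.2, 1.0
n = 400
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
V = -5.0 * (np.exp(-((x - 0.3) ** 2) / 0.01) + np.exp(-((x - 0.7) ** 2) / 0.01))

# Tridiagonal discrete Laplacian, then the full "order" operator as a matrix.
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h ** 2
H = -(mu * nu ** 2 / 2.0) * lap + np.diag(g * V)

lam, vecs = np.linalg.eigh(H)      # eigenvalues returned in ascending order
u = np.abs(vecs[:, 0])             # lowest eigenfunction: nodeless order parameter
rho_eq = u ** 2                    # stable channel density, via Eq. (18)

peak = x[np.argmax(rho_eq)]        # the density concentrates near a well
```

The lowest eigenvalue plays the role of the "degree of order" λ, and the squared eigenfunction gives a channel density peaked where the potential wells sit, matching the qualitative picture in the text.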


A5. A model of neuronal plasticity

Having derived the fundamental equation determining the reaction behavior of the neuron against the slowly varying graded potentials imposed at the neuronal junctions, we are now in a position to develop a new model of the neuronal plasticity. In our model the neuronal plasticity arises from the flexibility of the stationary distribution of the ionic channel proteins in the neuronal membrane lipid bilayer. Because the stationary equilibrium distribution of the ionic channels forms a chemical network of ion flows in the neuronal membrane, its flexibility results in the rearrangement of the chemical network with respect to the imposed graded potentials. The pattern of the chemical network of ionic channels determines the reaction behavior of the neuron as the fundamental processing unit in the neuronal network of the brain. Schematically speaking, the neuronal membrane functions as a switchboard controlled by the slowly varying graded potentials penetrating through the neuronal network. The following is the new model of the neuronal plasticity: Let us consider the neuronal membrane manifold M on which the neuronal junctions are placed. Let N be the total number of neuronal junctions and aj = (aj¹, aj²) the coordinate of the center point of the j-th junction. At each neuronal junction, an electric potential is imposed, transmitted from the other neurons via dendrites or axons. Let Uj be the electric potential imposed at the j-th junction, in millivolts. It evolves, of course, as time passes. We write the time dependence of Uj as Uj(t) with the time parameter t. The typical time evolution of the imposed electric potential Uj(t) is known to be composed of two different components, that is, the slowly varying graded potential Vj(t) and the rapidly varying potential Ij(t) called the impulse.
The former changes only adiabatically compared with the time scale of the diffusion of the membrane proteins, and we may assume Vj(t) ≈ Vj, though the constant value Vj may manifest adiabatic change. This means that Vj can be considered constant when it is referred to in the fundamental pattern formation process of the distribution of membrane proteins, but time-dependent when referred to in the macroscopic process of perception. The latter is the well-known information carrier between the neurons, and varies so rapidly that the stable equilibrium distribution of the membrane proteins does not suffer from it. Therefore, the pattern formation of the equilibrium distribution of proteins is subject to the control of only the slowly varying graded potentials Vj's. Indeed, as we have seen in the preceding section, the fundamental dynamical equation (19) determining the equilibrium distribution depends on the external potential V(x), which is the smeared sum of the Vj's,

V(x) = Σ_{j=1}^{N} V_j h_{a_j}(x)    (20)
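A hypothetical construction of the smeared sum (20) on a two-dimensional patch of membrane: Gaussian bumps stand in for the peaked unit functions h_{a_j}, and the junction centers, widths, and graded-potential values are all invented for illustration.

```python
import numpy as np

def h(x, y, a, width=0.05):
    """Unit function peaked around the junction centre a = (a1, a2)."""
    return np.exp(-((x - a[0]) ** 2 + (y - a[1]) ** 2) / (2.0 * width ** 2))

# Sample a patch of the membrane manifold M on a regular grid.
grid = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(grid, grid)

junctions = [(0.2, 0.3), (0.7, 0.8), (0.5, 0.1)]   # centres a_j
V_j = [1.5, 0.8, 2.0]                               # graded potentials (mV)

# The smeared sum of Eq. (20): nonvanishing only near the junctions.
V = sum(v * h(X, Y, a) for v, a in zip(V_j, junctions))
```

As the text notes, V(x) built this way encodes both the values of the graded potentials and the spatial layout of the junctions: its maximum sits at the junction with the largest V_j.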

where h_{aj}(x) denotes a certain unit function peaked around aj which represents the small spatial extent of the j-th neuronal junction. Let us suppose that the slowly varying graded potentials imposed at the neuronal junctions happen to take the values Vj's, so that the external potential V(x) becomes (20). Then, the ionic channel proteins diffuse in the neuronal membrane under the control of the imposed slowly varying graded potentials. They fall into the stable equilibrium distribution very quickly and form a stationary distribution pattern in the membrane. Such a stable


equilibrium distribution is represented by the order parameter u(x) determined by the order equation (19) with respect to the external potential V(x). The equilibrium distribution ρeq(x) thus determined by the external potential V(x) (i.e., the imposed graded potentials) manifests a spatial pattern of the distribution of the ionic channels in the membrane manifold. By tracing the regions of higher distribution density, we may identify segments of densely distributed channel proteins along which the ionic flow of charge can be transmitted by the well-known successive chain reaction of ionic channels. Such a segment is called an internal chemical circuit, and the totality of such segments is called the chemical network. Suppose that the equilibrium distribution ρeq(x) of the ionic channel proteins has a number of internal chemical circuits. If certain neuronal junctions are placed on a common internal chemical circuit, they are thought to be connected with each other and to transmit impulses among them by the successive chain reaction of the ionic channels densely distributed in the segment, that is, the internal chemical circuit. If two neuronal junctions do not lie on any common internal chemical circuit, they are effectively separated, and an impulse imposed on the one junction cannot be transmitted to the other. Therefore, the equilibrium distribution ρeq(x) of the ionic channel proteins defines a stationary chemical network with which certain neuronal junctions are connected with each other while the remaining ones are kept disconnected. In this sense, the equilibrium distribution of the ionic channels in the neuronal membrane plays the role of wiring connections between the neuronal junctions through which the impulses are transmitted. The function of the neuron as the fundamental processing unit is therefore analogous to that of a switchboard.
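The notion of junctions lying on a common internal chemical circuit can be made concrete with a toy computation: threshold a channel-density map, treat above-threshold cells as conducting, and test whether two junction sites lie on one connected segment. The density map, threshold, and junction positions below are invented for illustration.

```python
import numpy as np
from collections import deque

# A toy "chemical network": two dense bands of channel proteins on a grid.
rho = np.zeros((20, 20))
rho[10, 2:18] = 1.0          # horizontal band of densely packed channels
rho[2:11, 5] = 1.0           # vertical band joining it at (10, 5)

def connected(rho, a, b, threshold=0.5):
    """BFS over 4-neighbour cells whose density exceeds the threshold."""
    mask = rho > threshold
    if not (mask[a] and mask[b]):
        return False
    seen, queue = {a}, deque([a])
    while queue:
        i, j = queue.popleft()
        if (i, j) == b:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < rho.shape[0] and 0 <= nj < rho.shape[1] \
                    and mask[ni, nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False

# Junctions on a common circuit are effectively connected; others are not.
print(connected(rho, (2, 5), (10, 17)))   # True  (via the two joined bands)
print(connected(rho, (2, 5), (0, 0)))     # False (off any dense segment)
```

In this caricature, changing rho (i.e., re-forming the equilibrium distribution under new graded potentials) rewires which junction pairs are connected, which is exactly the switchboard picture of the text.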
In a manual switchboard, the switching of the wiring connections is controlled by an operator, whereas the switching of the internal chemical circuits in the neuronal membrane is controlled by the graded potentials imposed in advance of the impulses. In this sense, the neuron behaves just like an automatic dial switchboard in which the control of the wiring connections is done automatically by putting the dial number signal into the switchboard. However, unlike the automatic dial switchboard, the neuron has the typical feature that the wiring connections are determined not by a single dial number signal but by the whole set of dial number signals in its entirety. Namely, the spatial pattern of the chemical network of densely distributed ionic channels is determined by the whole set of imposed graded potentials. As the fundamental processing unit of the neuronal network of the brain, the neuron plays the role of an automatic dial switchboard controlled globally by the whole dial signals of graded potentials. The neuron manifests plasticity in forming the chemical network in the neuronal membrane. This neuronal plasticity results in the plasticity of the effective connectivity of each neuronal junction, as explained above. Such plasticity of the neuronal junction, as synaptic plasticity, has been widely believed to be the basis of the short-term memory processes. The short-term memory is carried by the topology of the effective connectivity between the neurons, which can be modified by the neuronal plasticity. We will propose in the following section a model of the memory mechanism of the neuronal network in the brain from the point of view of the present model of neuronal plasticity.

A6. Mechanism of short-term memory

Let us consider the ensemble of extremely many neurons connected with each other by the neuronal junctions and surrounded by the glial cells. Electric potentials flow from any


neuron to the neighboring ones through the neuronal junctions. In other words, the neurons form an electric circuit which is called the neuronal network. Two different types of electric potentials flow there, that is, the slowly varying graded potential and the rapidly varying pulse potential called the impulse. In our model of the neuronal plasticity, the former plays the role of control signal and the latter that of information carrier for the information processing scheme of the neuronal network. Each neuron in the neuronal network plays the role of a switchboard of variable gates for transmitting the impulses from certain neuronal junctions to others. The mutual switching pattern of the neuronal junctions in each neuron is determined by the chemical network of ionic channel proteins in the neuronal membrane, and is thus subject to the control of the slowly varying graded potentials imposed at the neuronal junctions. We investigate now the fundamental mechanism and process of memory in the neuronal network. It may be true that the totality of potential flows in the whole neuronal network illustrates the state of perceptual awareness of the brain. External stimuli from the environment are transformed into an additional potential flow by the sense organs, and then transmitted into the neuronal network. We call it the external potential flow. Of course, the neuronal network is continually subject to the incoming external potential flow as long as the sense organs are excited. The state of perceptual awareness of the brain evolves due not only to the continually incoming external potential flow but also to the internal potential flow, which exists irrespective of the excitation of the sense organs. Therefore, even the simplest fundamental process of perceptual awareness, such as the memory process, resists a detailed analysis of its mechanism. To make the fundamental mechanism of memory processes clearer, we consider a highly idealized case in what follows.
Let us suppose that the neuronal network remains free from incoming external potentials. Then, the internal potential flow in the neuronal network necessarily becomes stationary. As the distribution pattern of the chemical network of ionic channel proteins in each neuronal membrane is completely determined by the slowly varying graded potentials flowing into the neuron, the effective connectivity between the neurons through the neuronal junctions remains unchanged unless external potentials are imposed. Suppose that the sense organs generate an external potential due to a certain external stimulus. Then, it will be transmitted into the neuronal network. Namely, a certain group of neurons receives the external potential imposed at the neuronal junctions that receive the transmitted potential directly. The slowly varying components of the imposed external potential are then added to the graded potentials already imposed at the neuronal junctions. Consequently, the spatial distribution pattern of the ionic channel proteins in the neuronal membrane of each neuron belonging to the group deviates from the previous stationary equilibrium distribution. Then, the chemical networks of ionic channels on the neuronal membranes are varied. In other words, the external stimulus generates a perturbation in the chemical network of ionic channels on each neuron of the group. Since even a small perturbation of the chemical network modifies the previous effective connectivity between the neurons, the total potential flow pattern in the neuronal network begins to deviate from the previous one. As a result, the perturbation of the chemical network due to the external stimuli finally propagates through all the neurons in the neuronal network, and the topology of the effective connectivity of the neuronal network is modified.


We have shown that the effect of the external stimuli is stored in the topology of the effective connectivity of the neuronal network due to the neuronal plasticity. The topology is determined by the totality of the equilibrium distributions of the ionic channel proteins in the neuronal membranes. Those equilibrium distributions are determined by the order equation (19) with respect to the whole set of imposed graded potentials. It is worthwhile to notice here that not only the effect of the external potential is stored in the chemical network of ionic channels, but also its cooperative effect with the internal graded potentials. This reflects the fact that the memory process of the external stimulus depends strongly on the state of perceptual awareness. In general, the neuronal network is subject to continual perturbations by external stimuli. Therefore, even after the rearrangement of the chemical networks on the neuronal membranes due to the external stimulus in question, they are always going to be reformed by the succeeding external stimuli. The stored memory of the external stimulus, that is, the effect of the external stimulus on the pattern formation of the chemical networks of ionic channels, becomes diffused by the effects of the succeeding external stimuli. The neuronal plasticity plays the key role in the memory process of the external stimuli, though it also renders the memory diffuse through the memory processes of the succeeding external stimuli. In this sense, the present model of the mechanism of memory processes based on the neuronal plasticity concerns short-term memory (STM).
Therefore, for the purpose of explaining the well-observed transfer process from short-term memory to long-term memory, we had better go back to our model of neuronal plasticity and look for another hypothesis on the control mechanism of the pattern formation process of the ionic channel distribution, one which provides us with the possibility of fixing the pattern of the ionic channel distribution.

A7. Cross-link of channel proteins and long-term memory

We have shown that the fundamental process of the short-term memory in the neuronal network of the brain is essentially based on the pattern formation process of the distribution of ionic channel proteins in each neuronal membrane controlled by the graded potentials imposed at the neuronal junctions. The plasticity of the spatial distribution pattern of ionic channels in each neuronal membrane results in the plasticity of the topology of effective connectivity of the neuronal network. Although this neuronal plasticity makes it possible to store the effect of the external stimulus in the topology of the neuronal network, it also renders the stored topology continually perturbed and diffused by the similar effects of succeeding external stimuli. This may explain the short-term memory process, in which the stored memory of the external stimuli is kept only for a short period. However, we need a certain mechanism to suppress the neuronal plasticity in order to explain further the transfer process of the short-term memory to the long-term memory. Once the neuronal plasticity is suppressed in a certain group of neurons, the corresponding part of the topology of the effective connectivity of the neuronal network remains free from the continual perturbations of succeeding external stimuli and is kept fixed for a longer period. In other words, the memory of the external stimulus will be maintained for a longer period, and it becomes the long-term memory. Thus, what we need for a better understanding of the transfer process of the short-term memory to the long-term one is simply a mechanism for suppressing the neuronal plasticity in our model.


Let us recall the origin of the neuronal plasticity in our model. The ionic channel proteins manifest diffusive collective motions in the membrane lipid layer under the control of the graded potentials imposed at the neuronal junctions. The equilibrium distribution of the ionic channels thus changes its form with respect to the adiabatic change of the imposed graded potentials. Namely, the imposed graded potentials cause the ionic channel proteins to distribute along the peaks of the equilibrium distribution. The distributed ionic channels form a chemical network along which the ionic current flows. The neuronal plasticity arises from the mobility of the chemical network in the neuronal membrane against the variation of the imposed graded potentials, as we have seen in the preceding sections. Therefore, for suppressing the neuronal plasticity, we need a certain mechanism to fix the distribution of ionic channels. The following is the hypothesis concerning the immobilization of the distributed ionic channel proteins: Densely distributed ionic channel proteins can be cross-linked with each other by the chemical cross-link process due to appropriate antibodies or lectins. The cross-linked channel proteins show high immobility against the interaction with the fundamental oscillations of the lipid bilayer, and they do not diffuse effectively. This is our last hypothesis concerning the collective dynamics of the diffusions of ionic channel proteins in the neuronal membrane. It provides us with the possibility of suppressing the plasticity of the equilibrium distribution of channel proteins (i.e., the chemical network) in the neuronal membrane.
The following is a scenario of the transfer process from the short-term memory to the long-term one, based on the cross-link hypothesis in our model of neuronal plasticity: Let us suppose that a certain external potential is generated by the sense organs due to the external stimuli, and consequently the equilibrium distribution ρeq is rendered unstable by the succeeding external potentials, so that the memory of the effect of the previous external stimulus diffuses. Let τ be the typical time scale over which the equilibrium distributions of channel proteins remain almost unchanged. The value of τ gives the time interval during which the imposed graded potentials of each neuron can be considered almost constant. Of course, it may depend on the duration of the external stimuli. Therefore, for certain external stimuli τ takes smaller values, and for others larger ones. The values of τ, however, cannot be unlimitedly large, because the neuronal network suffers continual perturbations from the external stimuli. Let T be the typical time scale representing the mean reaction time of the cross-link process of the ionic channel proteins by the antibodies or lectins. This means that T stands for the time needed to cross-link neighboring ionic channel proteins. Of course, the actual value of T may vary depending on the densities of antibodies or lectins as well as on the temperature. The transformation of a short-term memory, given by the equilibrium distributions ρeq of ionic channels on all the neuronal membranes, into a long-term memory is accomplished if the keeping time τ of the equilibrium distributions becomes larger than the reaction time T of the chemical cross-link process of neighboring ionic channel proteins.
Mari Jibu

Namely, suppose that the ionic channel proteins distributed in the neuronal membranes according to the equilibrium distributions ρeq are cross-linked with their neighbors by the antibodies or lectins; then the effective linear dimension and mass of the (cross-linked) channel proteins become large. In other words, the continual interactions of the fundamental oscillations of the lipid bilayer with the channel proteins become relatively weak, and so the channel proteins hardly drift in the "sea" of the membrane lipid bilayer. The (cross-linked) channel proteins then remain immobile and stay distributed according to the equilibrium distributions ρeq even though the imposed graded potentials vary. Such an immobile distribution of cross-linked channel proteins in each neuronal membrane manifests a stable chemical network of longer lifetime, and the totality of those stable chemical networks on the neuronal membranes supports the long-term memory. If there are neuronal junctions on the stable chemical network of a neuron, the effective connectivity between those junctions will be kept high for a longer period. Such a stable topology of the effective connectivity between neurons has long been believed to play the key role in long-term memory processes. It is worth noticing here that not all the ionic channel proteins distributed according to the equilibrium distribution ρeq are cross-linked; only a small portion of the channel proteins are cross-linked and form a stable chemical network on the neuronal membrane. The remaining unlinked channel proteins diffuse under the control of the imposed graded potentials and contribute to the short-term memory processing of succeeding external stimuli.

A8. Discussion and outlook

Starting from the analogy with the recently discovered fundamental control processes of the amoeba-like cell, we have developed a model of neuronal plasticity based on molecular biological considerations of the collective motions of the neuronal membrane lipid bilayer and the ionic channel proteins. Our basic hypotheses provide the following schematic image of the fundamental processes taking place in the neuronal membrane: Consider the two-dimensional spatial extent of the membrane lipid bilayer of each neuron. The membrane lipid bilayer is not a static object but a dynamical one, manifesting a global undulatory motion of lipids controlled by the slowly varying graded potentials imposed at the neuronal junctions. In other words, the membrane phospholipids manifest nonlinear oscillatory motions due to the Ca2+-controlled actin-myosin contraction-relaxation process. The imposed graded potentials affect the Ca2+ ion density globally and determine the phase differences of the oscillatory motion of the lipids over the whole neuronal membrane. Pushing the analogy to its extreme, the neuronal membrane looks not like a static pond surface but like a dynamic sea surface. The membrane proteins, such as the ionic channel proteins, drift in this "sea" of the neuronal membrane. They are subject to interactions with the surrounding membrane lipids and suffer continual perturbations from their oscillatory motions. As a result, the ionic channel proteins diffuse in the neuronal membrane with a drift velocity proportional to the spatial frequency of the undulatory collective motion of the membrane lipids. The diffusion of ionic channel proteins settles immediately into a stationary equilibrium, and the proteins distribute over the whole neuronal membrane according to the equilibrium distribution ρeq.
The mind-body and the light-matter

This equilibrium distribution ρeq is determined by the order equation (19) with respect to the slowly varying graded potentials. Because the ionic current may be transferred along successively aligned ionic channel proteins, the channels distributed along the peak of the equilibrium distribution may be understood to form a chemical network. The spatial pattern of the chemical network given by the equilibrium distribution of ionic channels determines the activity or behavior of the neuron as the fundamental processing unit of the neuronal network. It determines the effective connectivity of the neuronal junctions: those placed on the chemical network manifest higher connectivity, and those off it lower connectivity. As the imposed graded potentials vary adiabatically, the corresponding equilibrium distribution ρeq changes its form, and so does the chemical network. In other words, the imposed slowly varying graded potentials control the spatial pattern of the chemical network in each neuronal membrane, and consequently they control the effective connectivity of the neuronal junctions. The neuron plays the role of an automatic switchboard; in this respect the neuronal network is analogous to a telephone network in which each neuron may be considered a switchboard whose connectivity is controlled by the slowly varying graded potentials. This immediate response of the equilibrium distribution of ionic channel proteins is the origin of the neuronal plasticity. Owing to this neuronal plasticity, the neuronal network can rearrange its internal connectivity in response to external stimuli, and the short-term memory process rests on it. The transformation of short-term memory into long-term memory necessarily requires some mechanism to suppress the neuronal plasticity. Although several molecular biological effects may suppress the drifting diffusion of ionic channel proteins, we have taken into serious consideration the cross-linking of channel proteins by antibodies or lectins.
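The picture of channels drifting into the peaks of ρeq, and junctions on the resulting chemical network acquiring high connectivity, can be caricatured numerically. The sketch below is not the model's equation (19); it is a generic overdamped Langevin toy (the potential shape, parameter values, and connectivity threshold are my assumptions) showing channels piling up where an imposed potential is lowest, with junctions in the dense region read off as "highly connected".

```python
import math, random

def simulate_channels(dU, n=500, steps=2000, dt=0.01, D=0.05, seed=1):
    """Overdamped Langevin toy: channel positions x on the ring [0, 1)
    drift down the gradient of an imposed potential U and diffuse with
    strength D.  The long-time histogram approximates a Boltzmann-like
    equilibrium rho_eq(x) ~ exp(-U(x)/D): channels pile up where U is low."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    noise = math.sqrt(2 * D * dt)
    for _ in range(steps):
        xs = [(x - dU(x) * dt + noise * rng.gauss(0, 1)) % 1.0 for x in xs]
    return xs

# An assumed "graded potential" U(x) = cos(2*pi*x), with a single minimum
# at x = 0.5 (chosen only for illustration); dU is its derivative.
dU = lambda x: -2 * math.pi * math.sin(2 * math.pi * x)

xs = simulate_channels(dU)
near_min = sum(1 for x in xs if 0.3 < x < 0.7) / len(xs)
print(f"fraction of channels near the potential minimum: {near_min:.2f}")

# Junctions lying where the channel density is high ("on the chemical
# network") are read off as having high effective connectivity:
junctions = [0.1, 0.5, 0.9]
connected = [j for j in junctions if 0.3 < j < 0.7]
print("high-connectivity junctions:", connected)
```

Changing the imposed potential and rerunning moves the dense region, and with it the "switchboard" wiring; freezing a fraction of the positions before the change would mimic the cross-linked long-term network.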
If the equilibrium distribution ρeq of the channel proteins happens to remain almost unchanged for a time interval long enough for the cross-link reaction between the channel proteins and the antibodies to proceed, a small portion of the channel proteins distributed along the peak of ρeq will be cross-linked. Once cross-linked with each other, the channel proteins become highly immobile and remain fixed in place. Consequently, the chemical network of ionic channels becomes stable and does not change its spatial pattern even if the imposed graded potentials vary with succeeding external stimuli. A neuron with such a stable chemical network manifests a stable effective connectivity between its neuronal junctions, and long-term memory has long been believed to be realized by such a stable topology of effective connectivity of neurons in the neuronal network. The present model of neuronal plasticity concerns only the collective dynamics of membrane lipid bilayers and ionic channel proteins; we do not claim that it accounts for the whole of neuronal plasticity. There may of course be other types of neuronal plasticity, for example due to the restoration process of neuronal junctions, but they are far beyond the scope of our model. We conclude the paper with the following remarks: We have tacitly assumed that the neuronal membrane is a lipid bilayer composed of lipids of a single kind. The real membrane bilayer, however, contains not only many kinds of lipids but also other molecules, such as cholesterols, which may be regarded as impurities. The viscosity of the membrane bilayer is known to change drastically when the impurity density becomes high; namely, the more impurities the lipid bilayer contains, the less the ionic channel proteins drift in the neuronal membrane.
Because the impurity density increases as neurons age, it is easily understood that in aged neuronal networks the neuronal plasticity no longer functions well enough to realize the short-term memory process. Notice that long-term memory already stored does not suffer from the aging process of neurons. This explains in part the amnestic syndrome. Even if the membrane lipid bilayer contains few impurities, it suffers from the adsorption of many glycoproteins when the glycoprotein level in the blood increases. The adsorption of glycoproteins onto the ionic channel proteins hinders their drift in the membrane bilayer; as a result, the neuronal plasticity is weakened by the increase of glycoproteins in the blood. To recover the neuronal plasticity, we need to wash out the adsorbed glycoproteins, that is, to sleep or to rest.

References

Alberts, B., Bray, D., Lewis, J., Raff, M., Roberts, K. and Watson, J. D. (1983). Molecular biology of the cell. New York: Garland Publishing.
Anderson, J. A. and Rosenfeld, E. (1988). Neurocomputing. London: MIT Press.
Crick, F. (1982). Do dendritic spines twitch? Trends in Neuroscience, 5, 44–46.
Katsumaru, H., Murakami, F. and Tsukahara, N. (1982). Actin filaments in dendritic spines of red nucleus neurons demonstrated by immunoferritin localization and heavy meromyosin binding. Biomedical Research, 3, 337–340.
Meinhardt, H. (1982). Models of biological pattern formation. London: Academic Press.
Murakami, F., Katsumaru, H., Saito, K. and Tsukahara, N. (1982). A quantitative study of synaptic reorganization in red nucleus neurons after lesion of nucleus interpositus of the cat: An electron microscopic study involving intracellular injection of horseradish peroxidase. Brain Research, 242, 41–53.
Nagasawa, M. (1980). Segregation of a population in an environment. Journal of Mathematical Biology, 9, 213–235.
Pribram, K. H. (1971). Languages of the brain. New Jersey: Prentice Hall.
Sato, H., Ueda, T., Akitaya, T. and Kobatake, Y. (1985). Oscillations in cell shape and size during locomotion and in contractile activities of Physarum polycephalum, Dictyostelium discoideum, Amoeba proteus and macrophages. Experimental Cell Research, 156, 79–90.
Ueda, T. and Kobatake, Y. (1983). Quantitative analysis of changes in cell shape of Amoeba proteus during locomotion and upon responses to salt stimuli. Experimental Cell Research, 147, 466–471.
Yasue, K. (1980). Wave cybernetics: A simple model of wave-controlled nonlinear and nonlocal cooperative phenomena. Physical Review A, 38, 2671–2673.



Dissipative quantum brain dynamics*

Giuseppe Vitiello
Università di Salerno, Italia

1. The birth of the quantum model of brain

In this report I will present the main features of the quantum model of the brain and some of its recent developments, including dissipative dynamics. I will closely follow and summarize a recent paper of mine with Eleonora Alfinito and the more extended presentation in a forthcoming publication of mine (Vitiello 2000). For brevity, I do not report on some recent work on modeling the quantum brain by means of neural networks with long-range correlations among the net units (Pessa and Vitiello 1999). The experimental work by Lashley in the forties showed that many functional activities of the brain cannot be directly related to specific neural cells; rather, they involve extended regions of the brain. In Lashley's words, as reported by Pribram (1991), "all behavior seems to be determined by masses of excitation, by the form or relations or proportions of excitation within general fields of activity, without regard to particular nerve cells". Pribram's work, confirming and extending Lashley's observations, led him in the sixties to introduce concepts of Quantum Optics, such as holography, into brain modeling (Pribram 1971, 1991). The results of the pioneering observations by Lashley and Pribram have since been confirmed by many other observations, and it is now well established that neural connectivity, rather than single neuron cell activity, is of primary importance in brain functional development (Greenfield 1997a,b). The description of the non-locality of brain functions, especially of memory storing and recalling, was also the main goal of the quantum brain model proposed in 1967 by Ricciardi and Umezawa (1967), and of its subsequent developments worked out by Stuart, Takahashi and Umezawa (1978, 1979). This model is based on the Quantum Field Theory (QFT) of many-body systems, and its main ingredient is the mechanism of spontaneous breakdown of symmetry.
Independently of these works, in 1968 Herbert Fröhlich published the paper "Long range coherence and energy storage in biological systems" (Fröhlich 1968), where the concepts and formalism of quantum theories were also applied. In 1973 Alexander S. Davydov and N. I. Kislukha pointed out that the great efficiency of energy transfer over long distances along muscle fibers could only be the result of nonlinear quantum dynamics (Davydov 1982). In one of his papers (Fröhlich 1988), Fröhlich reminds us that Ilya Prigogine presented his ideas about "dissipative structures" (Prigogine 1962; Prigogine and Nicolis 1977) at the first Versailles meeting organized by the Institut de la Vie in 1967. In those same years, long-range correlations, dynamically generated through the mechanism of symmetry breakdown in QFT, were recognized to be responsible for ordering in superconductors, superfluid helium, and other systems presenting stable ordered ground states (Anderson 1984). Also stimulated by such developments, Fröhlich proposed in 1968, "as a working hypothesis, that phase correlations of some kind, coherence, will play a decisive role in the description of biological materials and their activity" (Fröhlich 1988). These same developments, together with the recognition of the crucial role of nonlinearity in dynamics, are also at the basis of Haken's formulation of Synergetics (Haken 1977). Quantum phenomena present such new, unexpected aspects with respect to the conception of the world based on classical physics that the "temptation" to look at living systems from the Quantum Mechanics (QM) perspective was very strong early on. Schrödinger wrote his book What is life? (Schrödinger 1944) in 1944. At that time, even more than today, living matter was a real mystery. Living systems appear to be open, far-from-equilibrium systems. Nevertheless, they present stable functional properties, space and time ordering, and at the same time a great capability to respond to external stimuli and to accommodate to quite different environmental conditions. To scientists it was clear that chemistry had to play the main role in the understanding of living matter.
There was, however, a widespread belief that, beyond chemistry, some sort of force had to underlie the phenomena of life. The newly born community of quantum physicists thought that a key to the forces ruling living phenomena could be found in the wonderful world of QM. After all, chemistry itself had benefited enormously from the QM principles: a quantum key had been found to explain the Periodic Table of the Elements. That key is the Pauli principle, a truly quantum principle which dictates how electrons fill the electronic shells of atoms. Moreover, QM was also able to describe the motion of the electrons in their shells, and thus it became possible to understand the phenomenology of many chemical reactions. QM thus led to a unifying understanding of many facts previously described by chemistry. Quantum physicists did not, however, substitute for their chemist colleagues in their work. The physicists were (and are) exploring only the dynamics underlying the rich phenomenology studied by the chemists, who were (and are) continuing their specific work; a continuous flux of exchanged knowledge goes on between the two scientific communities. Both chemists and physicists, however, have a deeper insight into the atomic and molecular world after the discovery of QM. In a similar fashion, the hope is that a deeper insight into the phenomena of life may also be reached by the joint efforts of physics and biology. The paper on the brain and the physics of many-body systems (Ricciardi and Umezawa 1967) was written when Umezawa was still at the Istituto di Fisica Teorica in Naples, before leaving for Wisconsin in 1967. At that time Eduardo Caianiello, who had founded the Istituto in 1957, was pursuing his research on renormalization in field theory, and at the same time he was deeply involved in the mathematical description of nonlinear binary decision elements and neural nets. His neuronic equations in Cybernetics were already well known. Many physicists and mathematicians were visiting and working in Naples. Professor Valentin Braitenberg also joined the Istituto, and his contribution to the establishment of the Division of Cybernetics was essential. Soon afterwards, in 1968, Caianiello founded the Laboratory of Cybernetics at Arco Felice, in the Naples area. In those years, as is well known, much progress was made in QFT: it was the "Heroic Period" (1960–1975), as Robert Marshak calls it (Marshak 1993), of the formulation of the standard gauge theory of the strong and electroweak interactions. The Istituto in Naples shared in that exciting atmosphere: besides Caianiello's group, there were other research groups working in solid state physics and in elementary particles. The activity of Umezawa and his associates was focused on the spontaneous breakdown of symmetry, a hot subject in those days of discussion on gauge theories. It is thus no accident that in such a scientific atmosphere, merging Cybernetics with gauge theories, the model of the brain as a many-body system came to light.
It was clear to Umezawa that the role of long-range correlations dynamically generated by the breakdown of symmetry was too relevant to be "confined" to particle physics and solid state physics. For the first time physicists had in their hands the possibility of giving a quantitative description of collective modes for a physical system not simply on a statistical, kinematic basis, but on a dynamical ground. It was thus also no accident that almost at the same time, in 1968, Herbert Fröhlich proposed his model of living matter in terms of coherent boson condensation and collective excitations (Fröhlich 1968, 1988). The physiological observation that storing and recalling information appear as diffuse activities of the brain, not lost even after the destruction of local parts of the brain or after treatments with electric shock or with drugs (Pribram 1971, 1991), suggested to Ricciardi and Umezawa that the brain may be in states characterized by the existence of long-range correlations among its elementary constituents. Such long-range correlations were assumed to play a more fundamental role in brain activity than the functioning of the single cell. By resorting to results in the physics of many-body systems, it was then an obvious step forward for them to suppose that the dynamical generation of such long-range correlations could be provided by the mechanism of spontaneous breakdown of symmetry: the resulting mathematical model is thus a quantum model in which the brain is described as a macroscopic quantum system.

2. Kinematic ordering and dynamic ordering

Among the perspectives from which Science studies natural phenomena there is the "naturalistic" or "phenomenological" perspective. The first, fundamental step in the knowledge of Nature belongs to the naturalistic perspective. It has to do with the recognition of the great variety of phenomena, of species, of samples presented to us by Nature; with making up classifications and catalogs, grouping similar entities into families; in brief, with collecting data, finding relations among them, and working out statistics. Our encyclopedic, precious archives of detailed facts about Nature come from such a hard effort, patiently carried on over the past centuries.

Natural sciences always have in themselves such a naturalistic perspective. They also have, however, another perspective, which I will call the "dynamical" perspective. The dynamical perspective has to do with the study of the forces and of the changes, or the evolution, in space and in time. This is the perspective from which the scientist tries to reach a unifying picture of the many data and catalogs and different properties observed in the naturalistic phase of the research. From such a dynamical perspective, he tries to make up mathematical models in terms of evolution equations, which I will globally refer to as the dynamics, underlying the rich phenomenology collected in his observations. The dynamics provides that level of knowledge which I simply denote as the understanding of the descriptive level of knowledge accumulated in the naturalistic phase. Scientific knowledge is only reached when both levels, the naturalistic and the dynamical, are fully explored. In this way Science provides its dynamical description of Nature. Some time had to pass before people were convinced that collecting data and making catalogs and classifications were not enough; they were and are necessary, but not sufficient, steps to knowledge.
Still today, there are some who, impressed by the large quantity and the beauty of the data collected in some sector of the natural sciences, remain confused by such richness and mistakenly consider the necessary naturalistic perspective also to be a sufficient one. However, "quantity" is not enough! Scientific knowledge of Nature is only reached by supplementing the necessary but not sufficient naturalistic approach with the dynamical perspective. It is such a joint effort that makes Science so powerful in practical applications. It is possible that the time is now ripe to shed some light on the dynamics underlying the large amount of knowledge accumulated by molecular biology in its naturalistic phase. Moved by such a hope, Fröhlich observed that, although many valuable efforts have been made in biochemistry, the question still remains open of how order and efficiency arise in living systems and then coexist with random fluctuations in biochemical processes. It is well known that macroscopic laws exhibiting ordering and regularities in the behavior of ensembles of large numbers of entities, say atoms or molecules, are predicted by statistical mechanics. Since living systems are made of a large number of particles, statistical regularities may well emerge in their macroscopic phenomenology. Schrödinger, however, points out that such order, better, in his words, such "regularities only in the average" emerging from the "statistical mechanisms", is not enough to explain the "enigmatic biological stability". Pretending to explain biological functional stability in terms of regularities of statistical origin would be the "classical physicist's expectation" that, "far from being trivial, is wrong" (Schrödinger 1944). Schrödinger calls it the "naïve physicist's" answer and argues that it is wrong since there is biological evidence (he refers to hereditary phenomena) showing that very small groups of atoms, "much too small to display exact statistical laws", control observable large-scale features of the organism that are very sharply and strictly determined. According to Schrödinger it is of little value to trace back the "enigmatic biological stability" to the "equally enigmatic chemical stability". This, according to him, is the point where the "Quantum Mechanics evidence" enters into play: namely, by explaining the stability of configurations of a small number of atoms, which has no explanation in classical physics, QM explains the stability of certain biological features.
The enormous progress of molecular biology in fact supports his arguments on the "smallness" of the number of atoms controlling the macroscopic features of the system in a highly stable way; think, for example, of the strict and stable molecular (atomic) ordering in the DNA and of its determinant role in the biological macroscopic organization. Most interesting is Schrödinger's distinction between ordering generated by the "statistical mechanisms" and ordering generated by "dynamical", necessarily quantum, interactions among the atoms and the molecules. (Actually Schrödinger refers to "an interesting little paper" by Max Planck on the topic "The dynamical and the statistical type of law".) Schrödinger's distinction between the "two ways of producing orderliness" (Schrödinger 1944) appears to be of crucial relevance in the study of living matter. Time ordering and functional stability in living systems manifest themselves as pathways of sequentially interlocked biochemical reactions. One crucial problem of molecular biology is that such pathways cannot be expected to occur in a random chemical environment; common experience is that even the simplest chemical reaction pathway, once embedded in a random chemical environment, soon collapses. Chemical efficiency and functional stability to the degree observed in living matter, i.e., not as "regularity only in the average" (Schrödinger 1944), seem to be out of reach of any probabilistic approach solely based on microscopic random kinematics.


Similar difficulties arise in understanding the generation of order in space, resulting in the organized domains and tissues of living systems. Understanding the dynamical ordering of cells into tissues is certainly an urgent task in biology and medicine, in order to understand and possibly prevent the opposite situation, namely the evolution of a tissue into a cancer. The failure of any model solely based on random chemical kinematics, or even on assembling the cells one by one by means of short-range forces, is plain for everyone to see. Classical statistical mechanics and the short-range forces of molecular biology, although necessary, do not therefore seem to be completely adequate tools. It appears necessary to supplement them with a further step, so as to include underlying quantum dynamical features. Once more in Schrödinger's words: "it needs no poetical imagination but only clear and sober scientific reflection to recognize that we are here obviously faced with events whose regular and lawful unfolding is guided by a 'mechanism' entirely different from the 'probability mechanism' of physics" (Schrödinger 1944). As in chemistry at the beginning of this century, it looks as if in modern molecular biology, too, a step forward must be made with the help of modern quantum theories describing the intricate nonlinear dynamics of the elementary components. One may then attempt to consider a minimal but essential set of macroscopic requirements for living systems, on the basis of their observable features, to be described as emerging not from a purely statistical microscopic configuration of the components, but from a microscopic dynamical scheme. Following such a path, Emilio Del Giudice, Silvia Doglia, Marziale Milani, Giuliano Preparata and myself (1985, 1986, 1988a–c) formulated in the 80's the QFT approach to living matter.
Similarly, according to Ricciardi and Umezawa (1967), any modeling of the functioning of the natural brain cannot rely on knowledge of the behavior of any single neuron. The dynamical generation of the long-range correlations which appear in the brain as a response to external stimuli needs to be investigated.

3. Spontaneous breakdown of symmetry and collective modes

In the study of systems aimed at simulating certain functions of the brain, Hopfield (1982) asked whether the stability of memory and other macroscopic properties of neural nets are derivable as collective phenomena and emergent properties. The methods of classical Statistical Mechanics have been shown to be very powerful tools for answering Hopfield's question (Amit 1989; Mezard et al. 1987). Here, however, I want to call the reader's attention to the quantum dynamical origin of the collective modes as it emerges in the physics of many-body systems. In such a case, the available, experimentally tested tool for studying collective modes is QFT.


In QFT, spontaneous breakdown of symmetry occurs when the equations controlling the time evolution of the system are invariant under some group, say G, of continuous transformations, but the minimum energy state (the ground state or vacuum) of the system is not invariant under the full group G. General theorems then show (Itzykson and Zuber 1980) that the vacuum is an ordered state and that massless particles (collective modes) propagating over the whole system are dynamically generated and are the carriers of the ordering information (long-range correlations): order manifests itself as a global property which is dynamically generated. For example, in ferromagnets the magnetic order is a diffused, i.e., macroscopic, feature of the system. Ordering is thus achieved by the presence (condensation) of the collective mode in the vacuum state. Consider the example of the crystal. How successful and realistic can a description of a crystal be that is based on the "one-by-one" assembling of the atoms into their sites by short-range interaction forces? Or, else, what is the probability of obtaining the crystal ordering out of a probabilistic distribution of atoms ruled solely by random kinematics? The answer to both questions is, of course: there is no hope of getting the crystal in those ways! The crystal, as is well known to solid state physicists, is dynamically generated by long-range correlations among the atoms: the atoms are kept in their sites by exchanging phonons, which are the quanta of the long-range correlations. Since the collective mode (called the Nambu-Goldstone boson) is a massless particle, its condensation in the vacuum does not add energy to it: the stability of the ordering is thus ensured. As a further consequence, infinitely many vacua with different degrees of order may exist, corresponding to different densities of the condensate.
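The mechanism can be made concrete with the textbook U(1) example (standard QFT material, offered here only as an illustration; it is not the specific symmetry of the brain model):

```latex
% Textbook U(1) example of spontaneous symmetry breakdown:
V(\phi) = \lambda \left( |\phi|^2 - v^2 \right)^2 , \qquad \lambda > 0 .
% The dynamics is invariant under \phi \to e^{i\theta}\phi, but the minima
% |\phi| = v form a degenerate circle of vacua \phi_0 = v\, e^{i\theta_0};
% choosing one vacuum breaks the symmetry.  Expanding around it,
\phi = (v + h)\, e^{\, i\chi / v} ,
% the potential V is independent of \chi: the phase \chi is the massless
% Nambu--Goldstone mode, whose condensation in the vacuum costs no energy
% and carries the long-range ordering information.
```

The masslessness of χ is exactly the statement in the text that condensing the collective mode "does not add energy" to the vacuum, which is why arbitrarily many condensation densities, and hence many ordered vacua, are possible.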
In the infinite volume limit these vacua are physically (unitarily) inequivalent to one another, and thus they represent possible physical phases of the system: it appears as a complex system equipped with many macroscopic configurations. The actual phase in which the system sits is determined once some external agent selects one vacuum among the many available minimum energy states. In the case of open systems, i.e., systems interacting with the environment and therefore possibly subject to external actions, transitions among inequivalent vacua may occur (phase transitions). Dissipation, namely the energetic exchange with the environment, thus leads to a picture of the system "living over many ground states" (continuously undergoing phase transitions) (Del Giudice et al. 1988c). Note that even very weak perturbations (although above a certain threshold) may drive the system through its macroscopic configurations (Celeghini et al. 1990): (random) weak perturbations thus play an important role in the complex macroscopic behavior of the system. The observable quantity specifying the ordered state, called the order parameter, acts as a macroscopic variable since the collective modes present a coherent dynamical behavior. The order parameter is specific to the kind of symmetry in play.
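A minimal numerical caricature of an order parameter and of vacuum selection by an external agent is the mean-field ferromagnet (a standard statistical-physics example, offered here only as an analogy; the parameter values are mine): below the critical temperature the self-consistency equation m = tanh(m/t) acquires two degenerate nonzero solutions ±m, and an arbitrarily small external bias decides which one the system settles into.

```python
import math

def magnetization(t, bias=1e-6, iters=2000):
    """Fixed-point iteration of the mean-field self-consistency equation
    m = tanh((m + bias) / t), with t the temperature in units of the
    critical temperature.  The tiny bias plays the role of the external
    agent that selects one vacuum out of the degenerate pair +/-m."""
    m = 0.0
    for _ in range(iters):
        m = math.tanh((m + bias) / t)
    return m

# Below the critical temperature the symmetric solution m = 0 is unstable
# and the order parameter settles into a nonzero value; above it, the
# only solution is (essentially) m = 0.
print(magnetization(t=0.5))              # ordered phase: m close to +0.96
print(magnetization(t=1.5))              # symmetric phase: m close to 0
print(magnetization(t=0.5, bias=-1e-6))  # opposite bias selects -m
```

The equation itself is symmetric under m → −m; it is only the (arbitrarily weak) bias that picks one of the two ordered states, mirroring the role of weak perturbations in vacuum selection described above.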


The value of the order parameter is related to the density of condensed Goldstone bosons in the vacuum and specifies the phase of the system with respect to the considered symmetry. Since physical properties are different for different phases, the value of the order parameter may be considered as a code specifying the system state. All of this is a well-known story, and the conclusion is that stable long-range correlations and diffuse, non-local properties related to a code specifying the system state are dynamical features of quantum origin. Ricciardi and Umezawa thus proposed a quantum model of the brain based on the theory of many-body systems with spontaneous breakdown of symmetry (Ricciardi and Umezawa 1967). I would like to stress that in QM the von Neumann theorem states that the representations of the canonical commutation rules are all unitarily equivalent. This theorem does not hold in QFT, since there the number of degrees of freedom is infinite. Therefore QM is not adequate for the description of systems with many different phases: in QM there is no possibility for the spontaneous breakdown of symmetry or for the description of the dynamical generation of ordered states. This rules out the use of QM in the description of brain functions. Such a crucial point should never be overlooked in discussions of brain modeling.

4. The quantum model of brain

Ricciardi and Umezawa write in the Introduction of their paper (Ricciardi and Umezawa 1967): … “in the case of natural brain, it might be pure optimism to hope to determine the numerical values for the coupling coefficients and the thresholds of all neurons by means of anatomical or physiological methods” … “many questions immediately arise… is it essential to know the behavior in time of any single neuron in order to understand the behavior of natural brain? Probably the answer is negative. The behavior of any single neuron should not be significant for functioning of the whole brain, otherwise an higher and higher degree of malfunctioning should be observed…”. And they conclude that the postulate of the existence of “special” neurons cannot be of any help, for the simple reason that there is no anatomo-physiological evidence for such structures. In the mathematical model, the elementary constituents of the brain are not the neurons and the other cells and physiological units (which cannot be considered as quantum objects), but some dynamical variables, called corticons, able to describe stationary or quasi-stationary states of the brain. The elementary constituents exhibit coherent behavior, and macroscopic observables are derived as dynamical output from their interaction. Information printing is achieved under the action of external stimuli producing breakdown of the continuous phase symmetry associated with the corticons. The information storage function is thus represented by the coding of the ground state through condensation of collective modes called symmetrons (Stuart et al. 1978, 1979). The stability of the brain states is thus derived as a dynamical feature rather than as a property of specific neural circuits, which would be critically damaged by destructive actions or by single neuron death or deficiency. In recent developments of the model, worked out by Jibu and Yasue (Jibu et al. 1994; Jibu and Yasue 1995; Jibu et al. 1996), the symmetron modes have been identified with the Nambu-Goldstone bosons, called in the following the dipole wave quanta (dwq). The corticon modes have been identified with the vibrational electric dipole field. In analogy with the QFT approach to living matter (Del Giudice et al. 1985, 1986, 1988a–c), the dwq are generated in the breakdown of the electrical dipole rotational symmetry. In conclusion, in the quantum brain model external stimuli aimed at information printing trigger the spontaneous breakdown of symmetry. The stability of the memory is ensured by the fact that coding occurs in the lowest energy state, and the non-local character of memory is guaranteed by the coherence of the dwq (or symmetron) condensate. The recall process is described as the excitation of dwq modes under external stimuli of a nature similar to the ones producing the memory printing process. When the dwq modes are excited the brain “consciously feels” (Stuart et al. 1978, 1979) the pre-existing ordered pattern in the ground state. Short-term memory is finally associated with metastable excited states of the dwq condensate (Ricciardi and Umezawa 1967; Sivakami and Srinivasan 1983). The electrochemical activity observed by neurophysiology provides (Stuart et al. 1978, 1979) a first response to external stimuli, which has to be coupled with the dipole field (or corticon) dynamics through some intermediate interaction.
Suppose now a vacuum of specific code number has been selected by the printing of a specific piece of information. The brain then settles in that state, and no other vacuum state is subsequently accessible for recording another piece of information, unless the external stimulus carrying the new information produces a (phase) transition to the vacuum specified by the new code number. This will destroy the previously stored information (overprinting): vacua labeled by different code numbers are accessible only through a sequence of phase transitions from one of them to another. This is the problem of memory capacity, which arises because in the model the code numbers are associated with only one kind of symmetry (the dipole rotational symmetry). In order to allow the recording of a huge number of memories, the model could be extended in such a way as to present a huge number of symmetries (a huge number of code classes) (Stuart et al. 1978, 1979). This, however, would introduce serious difficulties and spoil the model’s practical use.


Let me remark that the fact that each memory recording process produces the cancellation of previously stored information is due to the fact that vacua, i.e. memory states, of different codes are indeed unitarily inequivalent (cf. the previous sections), and therefore they cannot overlap in any way: they are orthogonal states. This is actually a positive feature of the model, since it means that there is no possibility of confusion or overlapping between distinct information inputs. This would not be the case in a model based on Quantum Mechanics rather than on QFT. As mentioned, in Quantum Mechanics all the ground state representations are unitarily equivalent, and therefore any physical (and informational) distinction among them disappears. The problem with overprinting is that the memory capacity is too small: it allows only one single piece of information. In the following section I will discuss the central role of the dissipative character of the brain dynamics in solving the problem of memory capacity without recourse to the introduction of a huge number of symmetries.

5. The dissipative quantum model of brain and the arrow of time

The quantum model of brain is founded on the fact that the brain is an open system, namely it is in continuous interaction with the environment (the external world). The memory recording process can only start when triggered by an external input. However, in order to simplify the mathematical treatment Ricciardi and Umezawa considered only the case of stationary or quasi-stationary brain states, which are appropriate to the description of a closed or quasi-closed system. In such a treatment the mathematical formalism presents invariance under time-reversal symmetry: there are no sensible physical changes if the sign of the time variable is switched from “plus” to “minus” or vice-versa. Such a formalism may clearly describe the brain dynamics only in a crude approximation. It is however remarkable that the model is able to describe several functional features, such as the stability and nonlocality of memory states, which are physiologically observed. Let me notice that once some information has been recorded under the action of some external stimulus, then, as a consequence, time-reversal symmetry is broken: before the information recording process, the brain can in principle be in any one of the infinitely many (unitarily inequivalent) vacua. After information has been recorded, the brain state is determined and the brain cannot be brought back to the configuration in which it was before the information printing occurred. This is, after all, the content of the warning: NOW you know it!… Once you come to know something, you are another person. This means that in brain modeling one is actually obliged to use a formalism describing irreversible time-evolution. Information printing introduces the arrow of time into brain dynamics. Due to the memory printing process, time evolution of the brain states is intrinsically irreversible: this is the content of the (trivial) observation that “only the past can be recalled”, which is in fact a way to say that the brain is an open, dissipative system. Getting information thus implies the breakdown of time-reversal symmetry, namely it introduces the distinction between past and future, a distinction which did not exist before the information recording. Before the recording process, time could always be reversed. A central feature of the quantum dissipation formalism (Celeghini et al. 1992) is the duplication of the field describing the dissipative system. The duplicate, or doubled, field represents the environment, and the reason for its inclusion in the formalism is to close the system under study. I will not insist on the mathematical formalism in this report; it can be found elsewhere (Celeghini et al. 1992; Vitiello 1995). However, let me denote by A(k) and Ã(k) the dwq mode and the “doubled mode” required by the canonical quantization of dissipative systems, respectively. Here k generically denotes the field degrees of freedom, e.g. spatial momentum. The Ã mode is the “time-reversed mirror image” of the A mode and represents the environment mode. Let N[A(k)] and N[Ã(k)] denote the number of A(k) modes and Ã(k) modes, respectively. Taking dissipativity into account requires (Vitiello 1995) that the memory state, identified with the vacuum |0(N)>, is a condensate of equal numbers of A(k) and Ã(k) modes, for any k: N[A(k)] − N[Ã(k)] = 0, for any k. In the state |0(N)>, N specifies the set of integers {N[A(k)]}, for all k, defining the “initial value” of the condensate, namely the code number associated to the information recorded at time t = 0.
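In the notation of thermofield-like formalisms (Takahashi and Umezawa 1975; Celeghini et al. 1992), the condensate structure of such a state can be sketched as a two-mode generalized coherent state; the explicit form below is meant only as an illustration of the condition N[A(k)] − N[Ã(k)] = 0, with θ_k parametrizing the condensate:

```latex
|0(N)\rangle \;=\; \prod_{k} \frac{1}{\cosh\theta_k}\,
\exp\!\big(\tanh\theta_k \; A^{\dagger}(k)\,\tilde A^{\dagger}(k)\big)\,|0\rangle ,
\qquad
N[A(k)] \;=\; \langle 0(N)|\,A^{\dagger}(k)A(k)\,|0(N)\rangle \;=\; \sinh^{2}\theta_k \;=\; N[\tilde A(k)] .
```

Since A and Ã quanta are created only in pairs, the difference N[A(k)] − N[Ã(k)] vanishes identically for every k, as required.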
Since the requirement N[A(k)] − N[Ã(k)] = 0, for any k, does not fix the set {N[A(k)]}, for all k, also |0(N′)> with N′ = {N′[A(k)]} and N′[A(k)] − N′[Ã(k)] = 0, for all k, is an available memory state: there thus exist infinitely many memory (vacuum) states, each one corresponding to a different code N. The degeneracy among the vacua |0(N)> (i.e. their common feature of being states of minimum energy) plays a crucial role in solving the problem of memory capacity. A huge number of sequentially recorded memories may coexist without destructive interference, since infinitely many vacua |0(N)>, for all N, are independently accessible in the sequential recording process. Recording information of code N′ does not necessarily produce destruction of previously printed information of a different code N, contrary to the non-dissipative case. In the dissipative case the “brain (ground) state” may be represented as the collection (or the superposition) of the full set of memory states |0(N)>, for all N. One may think of the brain as a complex system with a huge number of macroscopic states (the memory states). The dissipative dynamics introduces N-coded “replicas” of the system, and information printing can be performed in each replica without destructive interference with previously recorded pieces of information in other replicas. In the non-dissipative case the “N-freedom” is missing and consecutive information printing produces overprinting. The fact that the degenerate vacua are physically inequivalent, i.e., technically speaking, the non-existence in the infinite volume limit of unitary transformations which may transform one vacuum of code N into another one of code N′, guarantees that the corresponding printed memories are indeed different or distinguishable memories (N is a good code) and that each information printing is also protected against interference from other information printings (absence of confusion among pieces of information). In the following section I will comment more on the formation of finite size memory domains and on the possibility of “association” of memories.
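The distinguishability of differently coded memory states can be illustrated numerically. The toy computation below is my own sketch, not part of the model’s published formalism: it uses the overlap of two-mode squeezed vacua, ⟨0(θ′)|0(θ)⟩ = Π_k sech θ_k sech θ′_k / (1 − tanh θ_k tanh θ′_k), with a deliberately crude flat spectrum (all modes sharing one squeezing parameter), and shows that for θ ≠ θ′ the overlap decays exponentially with the number of modes — the finite-mode shadow of unitary inequivalence.

```python
import math


def mode_overlap(theta: float, theta_p: float) -> float:
    """Overlap of two single-k two-mode squeezed vacua with squeezing
    parameters theta and theta_p (both states are normalized)."""
    sech = lambda x: 1.0 / math.cosh(x)
    return sech(theta) * sech(theta_p) / (1.0 - math.tanh(theta) * math.tanh(theta_p))


def vacuum_overlap(theta: float, theta_p: float, n_modes: int) -> float:
    """Overlap of two toy 'memory states' built from n_modes independent
    k modes, all carrying the same squeezing parameter."""
    return mode_overlap(theta, theta_p) ** n_modes


# Same code (theta = theta_p): the states coincide and the overlap is 1.
print(mode_overlap(0.7, 0.7))          # 1.0 (up to rounding)
# Different codes: the overlap shrinks as the number of modes grows.
print(vacuum_overlap(0.5, 1.0, 10))
print(vacuum_overlap(0.5, 1.0, 1000))  # essentially zero
```

In the true infinite volume limit the number of modes is infinite and the overlap of states with different codes vanishes exactly, which is the orthogonality the text appeals to.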

6. Life-time and localizability of memory domains

In the quantum model of brain, memory recording is obtained by coherent condensation of the dipole wave quanta in the system ground state or vacuum. In the non-dissipative case the memory states are thus stable states (infinitely long-lived states): there is no possibility of forgetting. On the contrary, in the dissipative case the memory states have finite (although long) life-times (Vitiello 1995). At some time t = t′ the memory state |0(N)> is reduced to the “empty” vacuum |0(0)>, where N(k) = 0 for all k: the information has been forgotten. At the time t = t′ the state |0(0)> is available for recording new information. It is interesting to observe that in order not to completely forget certain information, one needs to “restore” the N code (Vitiello 1995), namely to “refresh” the memory by brushing up the subject (external stimuli maintained memory (Sivakami and Srinivasan 1983)). Moreover, it can be seen (Alfinito and Vitiello 2000b) that a rich structure in the life-time behavior of single modes of momentum k is implied by the time-dependence of the dwq frequency. It happens that modes with larger momentum k live longer than modes with smaller momentum. It can also be shown that storing of memory modes of given momentum can occur only in a time span whose length is specific to that momentum. For given k, the corresponding time span useful for memory recording (the ability of memory storing) grows as the number of links, say n, which the system is able to establish with the external world grows: the more the system is “open” to the external world (the more links there are), the longer the time span useful for memorizing (the higher the ability of learning). Also notice that the ability in memory storing turns out to depend on the brain internal parameters as well, so that these parameters may represent subjective attitudes.
I observe that our model is not able to provide a dynamics for the variations of the link number n, thus one cannot say if, how, and under which specific boundary conditions n increases or decreases in time. However, the processes of learning are, in our model, independent of each other, so it is possible, for example, that the ability to record information may be different under different circumstances, at different ages, and so on. In any case, a higher or lower degree of openness (measured by n) to the external world may produce a better or worse ability in learning, respectively (e.g. during childhood or at older ages, respectively). It is to be remarked that a threshold exists for the k modes of the memory process, namely only modes with momentum larger than the threshold may be recorded. Such a kind of “sensitivity” to external stimuli depends on the internal parameters. On the other hand, the threshold value may be lowered as the number of links with the external world grows. Another way of reading the existence of the threshold on k is that it excludes modes of wavelength greater than a cut-off value corresponding to (the inverse of) the threshold momentum value, for any given n at a given time t. This means that infinitely long wavelengths (the infinite volume limit) are actually precluded, and thus transitions through different vacuum states (which would be unitarily inequivalent vacua in the infinite volume limit) at given t’s are possible. This opens the way to both possibilities, “association” of memories and “confusion” of memories (see also ref. (Vitiello 1995)). I also remark that the existence of such a cut-off means that (coherent) domains of sizes less than or equal to the cut-off are involved in the memory recording, and that such a cut-off shrinks in time for a given n. On the other hand, a growth of n opposes such a shrinking. These cut-off changes are correspondingly reflected in the memory domain sizes. In conclusion, we have a hierarchical organization of memories depending on their life-time.
Memories with a specific spectrum of k mode components may coexist, some of them “dying” sooner, some other ones persisting longer, depending on whether their spectrum is populated by smaller or larger k components, respectively. On the other hand, since smaller or larger k modes correspond to larger or smaller wavelengths, respectively, the (coherent) memory domain sizes are correspondingly larger or smaller. The net effect is that more persistent memories (with a spectrum more populated by the higher k components) are also more “localized” than shorter term memories (with a spectrum more populated by the smaller k components). We have thus reached a “graded” non-locality for memories, depending on the number of links n and on the spectrum of their k components, which is also related to their life-time or persistence. Since k is the momentum of the dwq A and of the Ã excitations, it is expected that, for given n, the more “impressive” the external stimulus (i.e. the stronger the coupling with the external world), the greater the number of high momentum k excitations produced in the brain and, consequently, the more “focused” the “locus” of the memory.
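The correspondence between momentum and localization used above is just the de Broglie relation; for a mode of momentum k the associated wavelength, and hence the scale of the coherent domain it can sustain, is bounded by

```latex
\lambda_k \;=\; \frac{2\pi}{k} \qquad (\hbar = 1) ,
```

so a spectrum dominated by large-k components corresponds to small λ_k, i.e. to spatially more localized (and, by the life-time analysis above, longer-lived) memory domains.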


7. Neural connectivity, physiological observations and further features of the model

Although at this stage our model does not give us quantitative predictions, the qualitative behaviors and results presented above appear to fit well with physiological observations (Greenfield 1997a,b), which show the formation of connections among neurons as a consequence of the establishment of links between the brain and the external world. The more the brain relates to external objects, the more neuronal connections will form. Connections thus appear more important in the brain functional development than the single neuron activity, as I have recalled several times above. Note that here I am referring to functional or effective connectivity, as opposed to the structural or anatomical one (Greenfield 1997a,b). The latter can be described as quasi-stationary; the former is on the contrary highly dynamic, with modulation time-scales in the range of hundreds of milliseconds. Once these functional connections are formed, they are not necessarily fixed. On the contrary, they may quickly change in a short time and new configurations of connections may be formed, extending over a domain including a larger or a smaller number of neurons. Such a picture may indeed find its description in our model: in this paper I have considered the dwq condensation produced under the action of the external stimuli. I have not considered the propagation of electromagnetic (em) excitations through the ordered domains where dwq condensation occurs. As shown elsewhere (see Vitiello 1995 and Del Giudice et al. 1986, 1988a), the em field propagates in a self-focusing fashion in ordered domains, thus producing a highly dynamic net of filaments. Outside these filaments or tubules the em field is zero; inside them it is non-zero. Transverse forces thus originate a molecular coating of these tubules by attracting or repelling in a selective way (on the basis of a resonant frequency pattern, cf. Del Giudice et al. 1986, 1988a) biomolecules floating in the surroundings of the tubules.
It has been proposed that such a mechanism may be at the basis of the dynamic formation of microtubules in the cell, and thus may explain their highly dynamic behavior through assembly and disassembly processes. Such a mechanism also appears to be a good candidate to model the highly dynamic assembly and disassembly of neuronal connections. The prerequisite, in our model, for the connection formation is the dynamic generation of condensation (i.e. ordered) domains of dwq. As I have shown in the previous sections, we do have ordered domain formation in our model. Moreover, the size and the life-time of these domains appear to depend on the number of links that the brain establishes with its environment and on internal parameters, which again is in agreement with the plasticity of the brain appearing in physiological observations. There is a further remarkable aspect in the occurrence of “finite size” domains. As mentioned, the effect of the finite size of the system, namely of the domains where dwq condensation occurs, spoils the unitary inequivalence among the vacua of a specific domain. However, the possibility of transitions among different vacua is a feature of the model which is not completely negative: smoothing out the exact unitary inequivalence among memory states has the advantage of allowing the familiar phenomenon of the “association” of memories: once transitions among different memory states are “slightly” allowed, association (“following a path of memories”) becomes possible. Of course, these “transitions” should only be allowed up to a certain degree, in order to avoid memory “confusion” and difficulties in the process of storing “distinct” memories. The opposite situation of strict inequivalence among different vacua (in the case of very large or infinite size domains) would correspond to the absence of any “transition” among memory states and thus to “fixation” or “trapping” in some specific memory state. In connection with the recall mechanism, I note (see Vitiello 1995) that the dwq acquire an effective non-zero mass due to the effects of the finite domain size. Such an effective mass then acts as a threshold in the excitation energy of the dwq, so that, in order to trigger the recall process, an energy supply equal to or greater than such a threshold is required. When the energy supply is lower than the required threshold a “difficulty in recalling” may be experienced. At the same time, however, the threshold may positively act as a “protection” against unwanted perturbations (including thermalization) and contribute to the stability of the memory state. In the case of zero threshold (infinite size domain) any replication signal could excite the recalling, and the brain would fall into a state of “continuous flow of memories”.
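The threshold role of the effective mass follows from a relativistic-like dispersion relation (a sketch in natural units; the model’s detailed dispersion may differ):

```latex
E(k) \;=\; \sqrt{k^{2} + m_{\mathrm{eff}}^{2}} \;\geq\; m_{\mathrm{eff}} ,
```

so no dwq excitation, and hence no recall, can be triggered by an energy supply below m_eff; for an infinite size domain m_eff → 0 and the threshold disappears, together with its protective effect.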
I further observe that as an effect of the difference in the life-times of different k modes, the spectral structure of a specific memory may be “corrupted”, thus allowing for more or less severe memory “deformations”. This mechanism adds to the memory decay implied by dissipation. Finally, it can be shown that dissipation and the frequency time-dependence imply that the evolution of the memory state is controlled by the entropy variations (Vitiello 1995; Alfinito and Vitiello 2000b): this feature reflects the mentioned irreversibility of time evolution, namely the choice of a privileged direction in time evolution (the arrow of time). The stationary condition for the free energy functional then leads to recognizing the memory state |0(N, t)> to be a finite temperature state (Takahashi and Umezawa 1975; Umezawa 1993), which opens the way to thermodynamic considerations in brain activity. In this connection, I observe that the “psychological arrow of time” which emerges in the dissipative brain dynamics turns out to be oriented in the same direction as the “thermodynamical arrow of time”, which points in the direction of increasing entropy. It is interesting to note that both these arrows, the psychological one and the thermodynamical one, also point in the same direction as the “cosmological arrow of time”, defined by the direction of the expanding Universe (Alfinito et al. 1999; Alfinito and Vitiello 2000b) (see e.g., Hawking and Penrose 1996). It is remarkable that the dissipative quantum model of brain lets us reach a conclusion on the psychological arrow of time which we commonly experience.

8. The Sosia

One can show that the tilde-mode Ã allows self-interaction of the A system, thus playing a role in “self-recognition” processes (Vitiello 1995). On the other hand, the tilde-system also represents the environment effects and cannot be neglected, since the brain is an open system. Therefore the tilde-modes can never be eliminated from the brain dynamics: the tilde-modes thus might play a role as well in unconscious brain activity. This may provide an answer to the question “as to whether symmetron modes would be required to account for unconscious brain activity” (Stuart et al. 1978, 1979). In other words, unconscious brain activity may be allowed by the presence of the tilde-modes in the brain dynamics. Moreover, the Ã system is the “mirror in time” image, or the “time-reversed copy”, of the A system. It actually duplicates the A system: it is the system’s “Double”, its Sosia, as Plautus would call it (Plautus 189 B.C.), permanently joined (conjugate) to it. This fact, and the role of the Ã modes in the self-interaction processes, leads me to conjecture that the tilde-system is actually responsible for consciousness mechanisms (Vitiello 1995, 1998, 2000): consciousness as “time mirror”, as reflection in time which manifests as coupling or dialog (Desideri 1998) with one’s inseparable Double (Vitiello 1998, 2000), listening to him and talking with him. Consciousness thus seems to emerge as a manifestation of the dissipative dynamics of the brain. In this way, consciousness appears not to be solely characterized by a subjective dynamics; its roots, on the contrary, seem to be grounded in the permanent “trade” of the subject brain with the external world: without the “objective” external world there would be no possibility for the brain to be an open system, and no Ã system would exist at all.
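The doubling invoked here is the standard one of thermofield dynamics and of the canonical quantization of dissipative systems (Takahashi and Umezawa 1975; Celeghini et al. 1992); schematically, the generator of time evolution for the closed {A, Ã} system is

```latex
\hat{H} \;=\; H_{A} \;-\; \tilde{H}_{\tilde A} ,
```

the relative minus sign expressing the fact that the tilde modes evolve as the time-reversed (“mirror in time”) copy of the A modes.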
Thus the very existence of the “external” world is the prerequisite for the brain to build up its own “subjective simulation” of the external world, its own representation of the world. Moreover, the simulation of the environment in terms of Ã modes is modeled by the subject on his own image: the subjective representation of the external world, at the same time, coincides with the self-representation (“identification”). I find it remarkable that such a view of the arising of consciousness finds contact points with the view expressed by Susan Greenfield (Greenfield 1997a,b) when she relates consciousness with the dynamic formation of neuronal connectivity as physiologically observed. As the brain develops and establishes more links with external objects, extended domains of connected neurons are formed, as in fact our model predicts, and “hence, these experience-related connections account,
to a certain measure, for your individuality, your particular fantasies, hopes, and prejudices” (Greenfield 1997a). However, observations also show that such connected assemblies of neurons need not remain “intransigently hard wired” (Greenfield 1997a); on the contrary, plasticity of connectivity appears to characterize brain functionality, which again finds a description in our model, which excludes any rigid “fixation” or “trapping” in certain states, as we have previously seen. Such plasticity also leaves open the possibility of our personal growth, of our capability to change our view of the world as we develop. According to Greenfield, what physiological observations can tell us about consciousness is that it is a “spatially multiple, yet temporally unitary, emergent property of non-specialized groups of neurons that are continuously variable with respect to an epicenter” (Greenfield 1997a). The features of the dissipative model discussed in this report are perfectly consistent with such a statement. It would be very interesting to consider from such a standpoint the consciousness problematic of the so-called dual-focus system in linguistic studies (Stamenov 1997), or also the “matching” mechanism in Globus’ (1999) analysis of consciousness. It is also remarkable that the process of (self-)identification is the direct consequence of the mathematical procedure of “doubling” the system: apparently, the intrinsic dissipative character of the brain dynamics excludes any model of consciousness centered exclusively on “first person” inner activity. It would be interesting to study puzzling phenomena such as, e.g., autism and coma states from such a perspective. A “second person”, the Double or Sosia, to dialog with seems to be crucially needed.

Note

* I am glad to thank Kunio Yasue, Mari Jibu and all the Organizers of Tokyo ’99 for inviting me and giving me the opportunity to participate in this successful conference. I thank Gordon Globus and Maxim Stamenov for stimulating discussions. Special thanks to Eleonora Alfinito for her precious collaboration.

References

Alfinito, E. and Vitiello, G. (1999). Canonical quantization and expanding metrics. Physics Letters, A252, 5–10.
Alfinito, E., Manka, R. and Vitiello, G. (2000a). Vacuum structure for expanding geometry. Classical and Quantum Gravity, 17, 93–111.
Alfinito, E. and Vitiello, G. (2000b). Formation and life-time of memory domains in the dissipative quantum model of brain. International Journal of Modern Physics, B14, 853–868.
Amit, D. J. (1989). Modeling brain functions. Cambridge: Cambridge University Press.
Anderson, P. W. (1984). Basic notions of condensed matter physics. Menlo Park: Benjamin.

59

60

Giuseppe Vitiello

Celeghini, E., Graziano, E. and Vitiello, G. (1990). Classical limit and spontaneous breakdown of symmetry as an environment effect in quantum field theory. Physics Letters, 145A, 1–6.
Celeghini, E., Rasetti, M. and Vitiello, G. (1992). Quantum dissipation. Annals of Physics (N.Y.), 215, 156–170.
Davydov, A. S. (1982). Biology and quantum mechanics. Oxford: Pergamon.
Del Giudice, E., Doglia, S., Milani, M. and Vitiello, G. (1985). A quantum field theoretical approach to the collective behavior of biological systems. Nuclear Physics, B251 [FS 13], 375–400.
Del Giudice, E., Doglia, S., Milani, M. and Vitiello, G. (1986). Electromagnetic field and spontaneous symmetry breakdown in biological matter. Nuclear Physics, B275 [FS 17], 185–199.
Del Giudice, E., Doglia, S., Milani, M. and Vitiello, G. (1988a). Structures, correlations, and electromagnetic interactions in living matter: Theory and applications. In H. Froehlich (Ed.), Biological coherence and response to external stimuli. Berlin: Springer.
Del Giudice, E., Preparata, G. and Vitiello, G. (1988b). Water as a free electron laser. Physical Review Letters, 61, 1085–1088.
Del Giudice, E., Manka, R., Milani, M. and Vitiello, G. (1988c). Non-constant order parameter and vacuum evolution. Physics Letters, B206, 661–664.
Desideri, F. (1998). L’ascolto della coscienza. Milano: Feltrinelli. (In Italian)
Froehlich, H. (1968). Long range coherence and energy storage in biological systems. International Journal of Quantum Chemistry, 2, 641–649.
Froehlich, H. (1988). Theoretical physics and biology. In H. Froehlich (Ed.), Biological coherence and response to external stimuli (1–24). Berlin: Springer.
Globus, G. G. (1999). Private communication.
Greenfield, S. A. (1997a). How might the brain generate consciousness? Communication and Cognition, 30, 285–300.
Greenfield, S. A. (1997b). The brain: A guided tour. New York: Freeman.
Haken, H. (1977). Synergetics. Berlin: Springer.
Hawking, S. W. and Penrose, R. (1996). The nature of space and time. Princeton: Princeton University Press.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554–2558.
Itzykson, C. and Zuber, J. (1980). Quantum field theory. New York: McGraw-Hill.
Jibu, M., Hagan, S., Hameroff, S. R., Pribram, K. H. and Yasue, K. (1994). Quantum optical coherence in cytoskeletal microtubules: Implications for brain functions. BioSystems, 32, 195–209.
Jibu, M. and Yasue, K. (1995). Quantum brain dynamics and consciousness. Amsterdam: John Benjamins.
Jibu, M., Pribram, K. H. and Yasue, K. (1996). From conscious experience to memory storage and retrieval: The role of quantum brain dynamics and boson condensation of evanescent photons. International Journal of Modern Physics, B10, 1735–1754.
Marshak, R. E. (1993). Conceptual foundations of modern particle physics. Singapore: World Scientific.
Mezard, M., Parisi, G. and Virasoro, M. (1987). Spin glass theory and beyond. Singapore: World Scientific.
Pessa, E. and Vitiello, G. (1999). Quantum dissipation and neural net dynamics. Bioelectrochemistry and Bioenergetics, 48, 339–342.
Plautus, T. Maccius (189 B.C.). In C. Marchesi (1967), Storia della letteratura latina. Milano: Principato. (In Italian)
Pribram, K. H. (1971). Languages of the brain. Englewood Cliffs, NJ: Prentice-Hall.
Pribram, K. H. (1991). Brain and perception. Hillsdale, NJ: Lawrence Erlbaum.
Prigogine, I. (1962). Introduction to nonequilibrium thermodynamics. New York: Wiley Interscience.



Dissipative quantum brain dynamics

Prigogine, I. and Nicolis, G. (1977). Self-organization in nonequilibrium systems: From dissipative structures to order through fluctuations. New York: Wiley Interscience. Ricciardi, L. M. and Umezawa, H. (1967). Brain physics and many-body problems. Kibernetik, 4, 44–48. Schrödinger, E. (1944). What is life? Cambridge: Cambridge University Press. Sivakami S. and Srinivasan, V. (1983). A model for memory. Journal of Theoretical Biology, 102, 287–294. Stamenov, M. I. (1997). Grammar, meaning and consciousness. What sentence structure can tell us about the structure of consciousness. In M.I. Stamenov (Ed.), Language structure, discourse and the access to consciousness (277–342) [Advances in Consciousness Research 12.] Amsterdam: John Benjamins. Stuart, C. I. J., Takahashi, Y. and Umezawa, H. (1978). On the stability and non-local properties of memory. Journal of Theoretical Biology, 71, 605–618. Stuart, C. I. J., Takahashi, Y. and Umezawa, H. (1979). Mixed system brain dynamics: Neural memory as a macroscopic ordered state. Foundation of Physics, 9, 301–327. Takahashi, Y. and Umezawa, H. (1975). Thermo field dynamics. Collective Phenomena, 2, 55–80. Umezawa, H., Matsumoto, H. and Tachiki, M. (1982). Thermo field dynamics and condensed states. Amsterdam: North-Holland. Umezawa, H. (1993). Advanced field theory: micro, macro and thermal physics. New York: American Institute of Physics. Vitiello, G. (1995). Dissipation and memory capacity in the quantum brain model. International Journal of Modern Physics, 9, 973–989. Vitiello, G. (1998). Dissipazione e coscienza. Atque, 16, 171–198. (In Italian) Vitiello, G. (2000). My Double unveiled. Introduction to the dissipative quantum model of brain. Amsterdam: John Benjamins.

61



What do neural nets and quantum theory tell us about mind and reality? Paul J. Werbos National Science Foundation, Arlington, USA

1. Introduction

The organizer of this conference, Dr. Kunio Yasue, invited people from many disciplines to address certain basic questions which cut across these disciplines: “How can we develop a true science of consciousness? What is Mind?” This paper was invited to the session on quantum foundations, which was also asked to address: “What is Reality?” The literature on consciousness contains many discussions about what we can learn from modern neural network theory and quantum theory, in trying to answer these questions. However, those discussions do not always account for the most recent insights and developments in those fields. Even those authors who deeply understand all the relevant disciplines would find it difficult to write a paper which is intelligible to people in other disciplines, but also does justice to the real technical details. Because of this communications problem, I will write this paper in a relatively informal way. The bulk of the paper will be an edited transcript of the talk which I gave at the conference, with references added to provide at least some technical support. Section 3 will contain new thoughts, stimulated in part by discussions at the Quantum Mind conference in Arizona, later in 1999. The views expressed here are only my views, not the official views of NSF or of the US government.

2. Transcript of talk

In his introduction, Dr. Yasue mentioned that Paul Werbos is a Program Director at the National Science Foundation, the primary agency of the US government for funding basic research across all disciplines. He studied physics under Dr. Julian
Schwinger, winner of the Nobel Prize for quantum electrodynamics along with Feynman and Tomonaga. He is best known for the original discovery of backpropagation, the most widely used algorithm in the field of artificial neural networks. Thank you, Kunio. I am very grateful to have a chance to speak to you here in Japan. Before I begin, I must make a couple of apologies. First, I am not really a professional physicist. I did have the good luck in graduate school in Harvard to study under Julian Schwinger who, as you say, was the co-inventor of the quantum field theory discussed by many speakers here. In the 1970s, when I studied under Schwinger, many people actually thought he was going crazy, because Schwinger did not like the second quantization, the quantum field theory. He felt there must be a better way to do it — and so, in the 1970s, he worked on a new way of doing quantum mechanics. He called it source theory (Schwinger 1968). He had the framework right at that time, but he did not yet have the details of how to apply it to high-energy physics. So when I was a student they said “This is crazy. The formalism is OK, but it’s not practical. It’s just metaphysics; don’t pay attention to it.” But in the last twenty years, I was very happy to find out that this source theory has been developed much further. It is now called the functional integral approach (Zinn-Justin 1996). It is a third quantization. It is a whole new way of doing quantum mechanics, and it changes many of the things we have heard here. Quantum field theory today is not what it was twenty years ago. I have not worked in physics myself since then, but, on my own, I have tried to use my scarce personal time to think of yet another way to do the quantum foundations. I have some wild and crazy ideas for a fourth quantization. I have a few papers on it, but I only have the mathematical framework (Werbos 1989, 1998a, 1999a). 
I think the framework makes sense, but much work is needed now to develop the practical details. I hope someone here is a physicist interested in working out some of the details, because I am not like Schwinger; I will not spend the next twenty years developing the practical details. I would be grateful for any collaborators for the next stage. But no one pays me to do physics. Actually, I work in the Engineering Directorate at NSF. So here I feel like a humble shoemaker asked to give a talk at the great temple; in one week, I will go back to making shoes — but the shoes we make are not exactly shoes. We help people develop cars which are cleaner and more efficient, airplanes which are safer and faster, new manufacturing systems, robots, control systems for electric power grids (Werbos 1999b,c). Carefully and slowly we develop real engineering things which must work. That is what they really pay me to do. So I will begin here by talking about the mind, first. The theme of this conference is “consciousness” — the science of consciousness — and that is what they pay me to do, to worry about intelligent systems and about how this relates to biology. And then I will talk about advanced quantum theory if there is time. I hope there
will also be some time to talk about the connection from quantum theory to the brain. Maybe I should say just a few words about that now because I probably will run out of time.

2.1 Quantum theory and the brain

At NSF, some people want to start a new funding initiative in quantum computing. This is an exciting field. Many people speculate that quantum mechanics can help us do better computing, that we can build a higher level of intelligence if we exploit quantum theory. Many people at this conference have said that with quantum theory, we can explain or produce a higher level of consciousness. I think this is probably true, but we have not proven it yet. No one has built a quantum system, or designed one which is well-defined, which would really generate such higher-order capabilities. There are theoretical concepts for how to use quantum theory to build an associative memory. I think that is what we just heard from Vitiello — some ideas on how to use quantum principles to build or explain associative memory. There is also a person named Grover, who is very famous in quantum computing, famous for his design of an algorithm to do associative memory. But there are two problems here. First, these designs are very theoretical. To create real, working physical systems is much harder than the theoretical physicists used to think. The theoretical physicists tend to work in the second quantization, in a world of pure probability amplitudes. But when you need it to work in real hardware, you need to worry about these horrible quantum thermodynamics issues, which means that you need to think about density matrices. Only recently have people begun to get ideas about hardware which seem to make sense in physical terms. There are ideas, but just beginning. (For some of the recent decisive work on hardware, you may search on names like Gershenfeld, Kimble, Preskill, Lloyd, Wineland and Kwiat on the index at xxx.lanl.gov.) Second, the more difficult problem is with algorithms.
A memory is not a brain. Building an associative memory does not tell us how to build an intelligent system. There is a long distance from knowing how to do an associative memory to proving you can do brains. In many ways, associative memory is much easier than real intelligence. In the questions after that talk someone asked, “Are you minimizing energy or are you doing what a brain is doing?” Of course, it is not what a brain does! A brain is not a memory. A memory is a useful part of a brain but a brain is something much, much bigger. So now I will try to talk first about my ideas about consciousness and the mind, and then quantum theory, and we will see how far I get.
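To make concrete what "associative memory" means here — and how modest a function it is compared to a whole brain — a classical associative memory in the style of Hopfield (1982) can be sketched in a few lines. This is only an illustrative sketch (the patterns, network size and update rule are my own toy choices), not any of the quantum designs mentioned above:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: store +/-1 patterns in a weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, probe, steps=10):
    """Iterate sign(W x); the state settles into a stored pattern."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# store two (orthogonal) patterns, then recall the first from a corrupted cue
pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                 [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(pats)
probe = pats[0].copy()
probe[0] = -1  # flip one bit
print(recall(W, probe))  # recovers pats[0] from the corrupted probe
```

The point of the contrast in the talk is that this entire function fits on a page; an intelligent controller does not.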


2.2 Consciousness or mind from a neural net perspective

To begin with, what do we mean by the word "consciousness"? As people have said, there are many, many different definitions of consciousness. In a talk five years ago (Werbos 1997), I tried to discuss six of them:

– Consciousness as Awareness;
– Subjective Sense of Existence;
– Consciousness as Intelligence;
– Consciousness vs. Unconsciousness;
– What About the Soul?
– Quantum Effects Relevant?

These are just six. I have heard many others at this conference. I do not want to argue about what is the best definition. These are all important concepts. Waking and sleeping states — they are very important. But in my talk I only want to talk about one concept. These are all big subjects, so I will focus on one question here — consciousness as intelligence. What is intelligence? What is mind? That is what I want to talk about. So now let us move to the first slide. If we focus on the idea of consciousness as intelligence, there are still many different points of view to sort out. There are actually three different concepts or views of intelligence, or of consciousness qua intelligence. The most common view I have heard lately is the binary view, illustrated on the upper left of the slide. People look at a computer design … or they look at a spider … and they ask, “Is it conscious? Or is it not?” They agree that humans are conscious or intelligent — I’m not sure why they all agree on that — but anyway, they all agree on that. They agree that rocks are not intelligent. And then … they worry. “Is this computer system really conscious or is it not? A spider — is it really conscious or is it not?” This question assumes that consciousness is a binary variable, that it is either “yes” or “no.” It reminds me of some high school students I knew, when they talked about sex appeal. They said “You have it or you don’t.” That’s it — it’s binary. Well, I’m not so sure it’s binary. There might be some matter of degree here. There is another view, that views consciousness or intelligence as a continuous variable. The stupidest form of this view is the idea of consciousness as IQ — I don’t believe in that, but there are other ways of thinking of intelligence as a continuous phenomenon. Allen Hobson spoke yesterday about wakeful consciousness as a graded phenomenon. His AIM model presented consciousness as a continuous variable. 
I am speaking about a different kind of consciousness, but the same principle may apply here. Intelligence may not be binary; it may be graded. Earlier, David Chalmers talked about panprotopsychism here. Well, there is a very old tradition in philosophy called panpsychism. Taoism was like this. They
would say that intelligence is present in all things, but in varying degrees. A Taoist would say there is intelligence in the human, the spider, the rock, the tree, the water — they all have some intelligence. It is a question of how much. So I have a funny picture in my mind. I see a philosopher of the West staring at a spider, thinking “Is it conscious or is it not?” And I see an old Taoist master looking at this philosopher and saying: “Is this philosopher conscious or not? Is she aware of what she is looking at? She is looking at a spider. She is not looking at a binary variable.” The Taoist would say “Of course there is some feeling in the spider, but you should be aware of the spider and ask ‘What kind of consciousness does it have? What is the nature of its feeling? What does it feel like?’ But you should not worry about some binary question in words which make no sense.”

[Slide: "3 Views of Intelligence" — upper left, the binary view; a continuous scale from Rock to Human IQ; and Multiple Designs/Levels.]

There is another group of people who believe in the continuous view — a much stranger and weirder group, not Taoists, but old-style behaviorists. The old-style behaviorists believed that all animals have essentially the same kind of learning. There was a doctrine which said that … first … intelligence is learning. That’s a good start. That’s not so bad. (Werbos 1994a: 3–5, 1994b: 682–683). But then they said… the learning curve is the same for humans, rats, all animals. The humans and rats respond to the same variables; they have the same kind of learning, but the human is a little faster. I think that one reason they believed this was that they could get money to study rats and say that this all applies to the humans.


But then they said the same thing for birds and snails, that snails are like humans … but they are like slow humans. Well … I do not agree with that theory. I think that the right way to think about intelligence — or about consciousness as intelligence — is what I show on the bottom part of the slide above. I think that intelligence is a staircase — a matter of discrete levels for the most part, levels and levels of higher and higher intelligence and consciousness. So we should not ask "Is it conscious or is it not?" We should ask "What is the level of consciousness or intelligence?" Now, why do I think this is the right way to think about intelligence? Consider the next slide. I believe that intelligence is a kind of staircase because this is what we actually see in nature. This is what is real. This is not imaginary philosophy, if you forgive the expression. This is what we really observe in nature. We see reptiles, birds, mammals … and there is also a kind of intelligence based on symbolic reasoning. That is what built Tokyo — humans using symbolic reasoning. Now … I could talk about this slide for a very long time. This is a very important slide. There are many ideas to think about here. First, I must make some small observation. Some of you may have seen maps of the brain of a rat. You will see that in the cortex of the rat there may be about seven areas for vision. And then you look at a monkey or a cat, and there are more areas. (Arbib 1995: 1025). You may say "Gee, they look very different." But those maps are maps of the neocortex, the highest … or at least the outermost … part of the human brain. The six-layered cerebral cortex, the neocortex. But the important thing is that all mammals have this neocortex. Birds do not have that kind of neocortex. So in a sense all mammals have essentially the same wiring diagram. If you think of learning … if you think of Dr.
Matsumoto's "superalgorithm" … then the basic principles of learning are fairly uniform across the neocortex. Thus in some sense we may say that all mammals are essentially the same. I don't have time to elaborate now. So now let me talk about strategy. How can we ever build a true "science of consciousness"? Some people in artificial intelligence (AI) said years ago: "Real intelligence is up here, at the symbolic level. So let us try to build an artificial human, by building a machine to do symbolic reasoning." And sometimes they talk about Einstein, and how intelligent he was. "Let's build an artificial Einstein." I think this is the reason why classical AI failed to achieve its highest goals. Classical AI failed to produce true brain-like intelligent systems because they tried to do too much. They tried to go directly to the symbolic level, without doing the mammal level first. There are some people who want us to go directly to the quantum/psychic/spiritual remote viewing level. I think that is even worse than trying to go directly to the symbolic level. It is good to think about these higher levels, because they are very important … but in order to develop a science we need to develop mathematical models and principles that work. I think we need to develop the science of the mammal level first, and that will give us the insights we need for better understanding at the symbolic level and even at the levels beyond (the question mark on my slide).

[Slide: "Levels of Intelligence" — a staircase: Reptile, Bird, Mammal, Human, Symbolic, ?]

Now if you are a mystic, you may wonder "What can the mammal brain tell us about the deeper human soul?" Well, that is a complex topic. But let me say briefly … there are some mystics who use an expression "As above, so below." Before you can understand the higher levels, they say, you must firmly understand the lower level (what is right in front of you), and also understand the analogy between the levels. Thus I claim that the important opportunity, the real opportunity for the science of consciousness today, is to really understand first this mammal brain level, without the soul, the simple basic mammal brain … that level of intelligence … and to do this mathematically (i.e., to extract the underlying principles, not just the biochemical details) and then see what insights we get regarding the higher levels. So that is what I have worked on for most of my life: trying to understand the mammal level. But how can we understand a mammal brain? How can we understand intelligence, at the mammal brain level? Well, I would like to make an analogy, shown on the next slide.


WHAT IS A RADIO? HOW DOES IT WORK?

– Answer 1: A box that makes sound when you plug it in and turn it on.
– Answer 2: A device which receives and demodulates electromagnetic transmissions at a user-selected frequency modulated by acoustic signals.
– Answer 3: Design details which explain how (1) and (2) are both accomplished.

Actually, I am taking this analogy from Charles Gross, a neuroscientist, a student of Karl Pribram’s. In my first course in neuroscience, on the first day, Charles Gross said: “Neuroscience today is like people studying a radio. They buy a thousand radios, to understand how they work. You buy a radio. You turn it on. You pull a tube out… and then the radio whines. You call the tube ‘the whine center.’” Then you take a new radio — throw out the old one into the trash — it was alive, but you throw it out — pull out a capacitor, and then you hear a scratch sound. You call the capacitor “the scratch center.” And then you have a map of the brain where you have the whine center, the scratch center and then you say “Aha, now I understand the radio.” But, you do not really understand the radio. There are different ways of understanding what a radio is and how it works. There are different ways to answer the question “What is a radio?”. At one level of answer, you say “A radio is a box that makes sound when you plug it in and turn it on.” This is like the Turing test for consciousness. It is a descriptive test. But engineers do not like that kind of definition so much. Then there is what we would call a functional definition: A radio is a device which receives and demodulates electromagnetic transmissions at a user-selected frequency modulated by acoustic signals. I can almost hear some people saying “Isn’t that too complicated?” Maybe it is complicated, but this is what a radio is, in functional terms. But… for a science … for engineering … we want
something even more. We want the design details which explain how these characteristics are accomplished, and how they can be replicated … and that is very complicated. It has to be complicated. I do think it is possible to develop an understanding of consciousness and learning which is simple in the same way that general relativity is simple. Now some people will be very disappointed at a theory which is only as simple as general relativity … but I think it is very exciting that some of us now see a way to produce such a theory. By the way, I have one last point to make about this slide. To understand a radio in functional terms, you do not need to know where every screw and bolt is. You don’t need all of those details. So I’m not talking about knowing every screw and every bolt in the brain. So now … how can we produce a design-level mathematical understanding of intelligence at the level of the mammal brain, that kind of intelligence? See the next slide.

Neural Nets Across Disciplines

– Engineering: Will it work? Mathematics understandable, generic?
– Psychology: connectionist cognitive science, animal learning, folk psychology
– Neuroscience: computational neuroscience
– AI: agents, games (backgammon, go), etc.
– LIS and CRI

How can we do it? Well of course, the brain is made up of neural networks. And there are many neural network models already in use. We have heard about many of them here. What is very scary is that the three communities using neural net models do not talk to each other as much as they should. The research is very fragmented today. There are people in neuroscience who have computational neuroscience models, which are designed to represent known neural circuits. There
are people in psychology who have connectionist cognitive science models. And there are people in engineering who build artificial neural nets, where all they care about is "Does it work?" These people find it hard to understand each other. I have seen Bernie Widrow and Steve Grossberg scream at each other, because they do not really appreciate each other's work … because they have different criteria for what is real work and what is bullshit. They look at the other person's work and they think that it is bullshit, because they are using a different criterion for what is good work. So Steve Grossberg is mainly asking these questions — "Does it fit the biological circuit? Does it explain some psychological behavior?" (Grossberg is a powerful advocate of neural network research which unifies various disciplines, but these two tests have been the main drivers of his work.) The engineers, by contrast, are asking "Does it work? Why does it work? What are the engineering principles involved? Does it really optimize performance?" Engineers have learned how necessary derivative calculations are to high-level general-purpose functionality; these calculations, in turn, require some use of backpropagation as part of the larger neural net designs. Now — to really understand the brain, intelligence in the mammal brain — I think we must combine all three validation criteria. A valid model of intelligence in the brain must fit the biological data — though it doesn't have to explain every last synapse; however, it must also fit with what we know of psychology; and it also must work, because the brain is a working system — a highly effective, functional system. It must meet all three criteria together. So because of this idea, I helped NSF set up a new initiative a few years ago, which would allow people to get funding for this kind of cross-cutting work (among other things). It funded $20 million per year until 1999 and was called Learning and Intelligent Systems (LIS).
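The "derivative calculations" mentioned above can be illustrated with a toy sketch of backpropagation: two weight layers trained by gradient descent on the XOR task. The data, network size, learning rate and random seed here are all my own illustrative assumptions; the general method (Werbos 1994a) is far broader than this:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: XOR, the classic task a single-layer net cannot learn
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# one hidden layer of 8 tanh units, one linear output unit
W1 = rng.normal(0.0, 1.0, (2, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

losses = []
for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1)       # hidden activations
    out = h @ W2              # network output
    err = out - y             # derivative of squared error w.r.t. the output
    losses.append(float(np.mean(err ** 2)))

    # backward pass: the chain rule, applied layer by layer
    gW2 = h.T @ err                     # gradient w.r.t. output weights
    gh = err @ W2.T                     # error signal propagated back to hidden layer
    gW1 = X.T @ (gh * (1.0 - h ** 2))   # through the tanh nonlinearity

    # gradient descent step
    W2 -= 0.05 * gW2
    W1 -= 0.05 * gW1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")  # loss falls as training proceeds
```

The backward pass is the whole trick: one sweep computes the derivative of the loss with respect to every weight, at a cost comparable to the forward pass.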
From my point of view, the idea was to fund research to combine these different criteria together — but one criterion is functionality. Where I work, in the engineering directorate, we try to build things which work. Now since these communities do not talk to each other, some of you might not know who I am. In the engineering community, at least in the United States, almost everyone thinks of me as "He is that backpropagation person. He is that person who developed an algorithm called backpropagation back in 1974" (in my Harvard PhD thesis). That thesis is reprinted in its entirety in Werbos (1994a). Backpropagation is now used in 80% or more of the working applications of artificial neural nets. There are many artificial neural nets used in academia, in research papers, but for things that actually work, that are functional, solving real-world problems, 80% are based on backpropagation. You should be warned, however, that many of the popularized treatments of backpropagation oversimplify the method, and do not convey how powerful, general and flexible it really is. Until recently, the need for backpropagation in engineering designs was a major reason for the disconnect with biology; there were no proven biological mechanisms to
explain how the brain itself might perform any form of backpropagation. But recent biological research has begun to fill in that particular gap. (See Bliss et al on reverse NMDA synapse, Spruston on membrane backflows, etc.) Werbos (1994a) also begins with a chapter on why I think we are ready for a Newton-like revolution in the science of consciousness. The time has come. We have a new kind of derivative. We have new mathematics. We have new connections with Karl Pribram’s kind of work. (See Werbos 1994a, 1996, 1998b.) We are ready now. And in this book I talk about that. Also, for those people interested in Taoism and Buddhism, in Chapter 10, I discuss the connection with those ways of thinking. So now let me get back to the bigger question: If we want to understand the mammal brain in functional terms, first we must say what is the function. If it is not just associative memory, what is the mammal brain doing, in functional terms?

[Slide: "The Brain As a Whole System Is an Intelligent Controller" — block diagram with Sensory Input, Reinforcement, and Action.]

On this next slide, I am simply saying that the brain as a whole system is what we call an intelligent controller, in engineering. The purpose of the whole system — the purpose of any computing system — is to calculate its outputs. The outputs of the brain are actions — what biologists call "squeezing and squirting". That is the purpose of this system, in the physical brain. And so we need to develop a mathematics of intelligent control by neural networks. Notice that I am not talking about "control of the brain"; I am talking about how the brain generates what engineers call "control signals", the signals which come out of the brain and decide on the level of "squeezing and squirting".


Let me say one other thing. Once I heard a mystic who said “You guys are all crazy. You must learn to appreciate your true self. Your true self”, he said, “is much bigger than the brain and the body.” And then I asked, “Well, then, what is the brain?” He said, “The brain — it has its role — all it is is a low level system, just to control the muscles and the glands of the body.” I said, “OK, I can live with that.” The brain is a controller. Now let us try to understand how such a controller can work. When I took over the NSF program in neuroengineering 10 years ago, immediately I asked: “What do we know about neural networks for control?” We tried to survey all the ideas. We held a workshop in 1988 in New Hampshire on neurocontrol. And I invented this new word “neurocontrol.” (More precisely: Allon Guez, an engineer from Drexel, coined this term for an unpublished small IEEE tutorial, and I adapted it for this use. Since then, unfortunately, some folks in biology have used the term “neural control” for a variety of different pursuits which do not even include engineering functionality.)

[Slide: "NSF Workshop Neurocontrol 1988" — diagram showing NeuroControl as the overlap of Control Theory and NeuroEngineering. (Miller, Sutton, Werbos, MIT Press, 1990)]

In this workshop, we brought together real control theorists from engineering who know the mathematics of control — how to make control systems that work. Brain systems are not general complex systems. They are a special type of system designed by nature to work. And so we need to use the mathematics of control systems that work. That is a very special mathematics. But we also need to know about neural networks. At this workshop we had psychologists and neuroscientists
and Grossberg people. And one thing we found out: most of the neural network models out there have no hope of approximating the kind of power we see in the mammal brain. For example, there were many control models based on some old ideas from David Marr about the cerebellum. There were many models based on the idea of learning a mapping from sensory coordinates to motor coordinates. Those kinds of biological models are exactly like some simple models from control theory, a class of models which are very well understood — they work very well for certain simple problems — but experiments have proven that even the lower level of human motor control is much more powerful than any system like that. (See the discussions of direct inverse control in Miller et al. 1990 by myself, Jordan and Kawato. See also Werbos 1996: 273–274, 1994b: 698, 1999b: 360–361.) And so in this workshop, we created this new field of neurocontrol as defined here. This slide gives a definition of neurocontrol, this word I made up. It is the subset of control theory and neural nets. We started that field. In this workshop, we found that there is only one class of neural network design, from engineering or psychology or biology or anywhere else, which has a hope of capturing the kind of intelligence we see in the mammal brain. This is a class of designs which some people call "reinforcement learning systems" (RLS), illustrated on the next slide.

[Slide: "Reinforcement Learning Systems (RLS)" — diagram: an External Environment or "Plant" sends sensor inputs X(t) and a "utility" or "reward" or "reinforcement" signal U(t) to the RLS; the RLS sends back actions u(t). The RLS may have internal dynamics and "memory" of earlier times.]

If you are a psychologist, this phrase “reinforcement learning” will instantly remind you of many bad old things. So I have to warn you … I am not speaking
about Skinner-type reinforcement. Also, the idea shown in this slide is somewhat simplified. This is a good starting point, but we have modified the model to account for more complicated ideas from biology and engineering. But I do not have much time to give you the complicated part today; I have to give you the simple starting point. The idea in reinforcement learning systems (RLS) is to design an intelligent controller. Any RLS has sensor inputs. It has action or control outputs. It receives a signal of “utility” or reward. This is like pain or pleasure, perhaps. The goal is to build a system which can learn to maximize this reward signal over time. So my claim is: the mammal brain is like — something like — a reinforcement learning system. And now I must say something very important. The mind is not only the intelligence. The intelligence is trying to maximize this signal (U), but this signal is not trivial. Yes, it includes pleasure and pain, but it also includes what Dr. Matsumoto was talking about — “linkage drives”, imprinting, some kind of deep affect. The system here is actually very complex. It’s also an important part of the mind. But I do not have time to talk about it today. Instead, I will give you a commercial. Karl Pribram’s edited book, Brain and Values, talks a lot about this part of the mind. (See Werbos 1998b and other chapters in the same book.) Today I will only talk about the intelligence part. If we imagine that the brain is an RLS, or something like an RLS, what does that tell us about its design? How can RLS systems actually be designed and understood, in functional mathematical terms?
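The RLS interface just described — sensor inputs X(t), a reward signal U(t), action outputs u(t) — can be written down as a skeleton. The learner below uses only a trivial ε-greedy bandit rule on a made-up two-action environment, purely to show the interface; it is my own illustration, and nowhere near a brain-like design:

```python
import random

class ReinforcementLearningSystem:
    """Skeleton of an RLS: sees X(t) and U(t), emits u(t).

    The goal is to choose actions that maximize the reward U over time.
    This trivial learner just tracks the average reward of each action;
    a real RLS would also need internal dynamics and memory of earlier times.
    """

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon                  # exploration rate
        self.value = {a: 0.0 for a in actions}  # estimated reward per action
        self.count = {a: 0 for a in actions}

    def act(self, x_t):
        # explore occasionally; otherwise take the best-known action
        # (x_t is ignored by this trivial learner)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # incremental average of the observed rewards U(t)
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

# toy environment: action "b" pays more on average
random.seed(0)
rls = ReinforcementLearningSystem(["a", "b"])
for t in range(500):
    u = rls.act(x_t=None)
    reward = random.gauss(1.0 if u == "b" else 0.0, 0.1)
    rls.learn(u, reward)
print(rls.value["b"] > rls.value["a"])  # → True: it learned that "b" pays more
```

Even this caricature shows the shape of the problem: the system is judged only by the reward it collects over time, not by any fixed input–output mapping.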

[Slide: maximizing utility over time. Components: a model of reality; a utility function U; dynamic programming, J(x(t)) = max over u(t) of 〈U(x(t), u(t)) + J(x(t+1))/(1+r)〉; and a secondary, or strategic, utility function J.]

What do neural nets and quantum theory tell us about mind and reality?

The slide above describes a starting point for answering these questions. In 1968, I published an article in the journal Cybernetica (Werbos 1968), arguing that we could build reinforcement learning systems by approximating a method in control theory called dynamic programming. The brain cannot use exact dynamic programming; the method is too complex. It would take a brain larger than the size of the universe to use dynamic programming to solve most everyday problems. But the idea behind the method is very interesting. In dynamic programming, we input this utility function U, and we solve for another function called J. After that, you maximize J in the short term. So U would correspond to things like pain and pleasure; J would correspond to things like learned hopes and fears. So if we build a machine based on this principle, we are building a machine that has one component which learns hopes and fears, and another part which responds to hopes and fears. With all due respect to David Chalmers, I do not think it is a "hard problem" to see the connection between this kind of design and our subjective experience. The hard problem is to make this kind of design work, and to work out the details. (Note: we do have many working systems now based on these principles, but we have only just begun the resulting paradigm shift in engineering. See Werbos 1999b, c.) Now actually, there are many, many levels of design for reinforcement learning systems. There is a whole staircase of general-purpose designs, of ever greater complexity and capability. I really do not have time to explain them all now.
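The recursion on the slide can be iterated numerically. The two-state "world" below is invented purely for illustration; only the form of the update comes from the slide:

```python
# Toy value iteration for the dynamic-programming recursion on the slide:
#   J(x(t)) = max over u(t) of [ U(x(t), u(t)) + J(x(t+1)) / (1 + r) ]
# The states, actions, and utility values here are invented for illustration.

U = {('A', 'stay'): 0.0, ('A', 'go'): 1.0,
     ('B', 'stay'): 2.0, ('B', 'go'): 0.0}
nxt = {('A', 'stay'): 'A', ('A', 'go'): 'B',
       ('B', 'stay'): 'B', ('B', 'go'): 'A'}
r = 0.1  # "interest rate" discounting future utility

J = {'A': 0.0, 'B': 0.0}
for _ in range(200):  # iterate the recursion to (near) convergence
    J = {x: max(U[(x, u)] + J[nxt[(x, u)]] / (1 + r) for u in ('stay', 'go'))
         for x in ('A', 'B')}
```

In this toy world J('B') converges to 22 and J('A') to 21: the secondary utility J summarizes all future reward, so a system that maximizes J in the short term is maximizing U over time.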

[Slide: 4 tests for a 1st-order model of intelligence in the brain: an "emotional" system [values]; an "expectations" system (SysID); an action/motor system; engineering functionality.]


There is one class of design I developed back in 1971, now called the "Model-Based Adaptive Critic" (MBAC). There is a new level I developed just in 1998, based on listening to Karl Pribram and changing my model to account for the things I felt were missing after I talked to Karl. And this is still only the mammal brain. Beyond that I have some ideas (theoretical ideas, not mathematics) about what lies beyond. The ideas for the Model-Based Adaptive Critic were described in great detail in The Handbook of Intelligent Control (White and Sofge 1992), and some applications have been developed. In the last five years, we have discovered that these are very powerful systems. For example, the design shown on the next slide is a system I proposed in 1972, in my Harvard PhD thesis proposal. This design was based on trying to translate Freud's ideas about "psychic energy" and learning into mathematics; that is where backpropagation really came from. The story of this is in Werbos (1994a), with some additional details in Anderson and Rosenfeld. We have recently found out that a new version of this design gives us a form of adaptive control more stable, in the linear case, than anything else which exists now in adaptive control theory (Werbos 1999c).

[Slide: Level 3 Model-Based Adaptive Critic: a Critic network estimating J(t+1) from the predicted state R(t+1); a Model network mapping the sensor inputs X(t) and the state estimate R(t) to R(t+1); and an Action network producing u(t).]

Even this old design from 1972 meets certain tests for a brain-like intelligent system, shown on the slide of four tests above. Five years ago, that old design was the only model of neural networks that anyone had ever implemented which met all four tests shown there. It has an emotional or value system, a test which Dr. Matsumoto has emphasized.
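The dataflow in the Model-Based Adaptive Critic slide can be sketched schematically. The three linear functions below are illustrative placeholders of mine, standing in for trained networks; they are not the published design:

```python
# Schematic dataflow of a Model-Based Adaptive Critic: Critic, Model, and
# Action components.  All three functions are illustrative placeholders.

def model(R, u):
    # "expectations" system: predict the next state estimate R(t+1)
    return [0.9 * r_i + 0.1 * u_i for r_i, u_i in zip(R, u)]

def critic(R):
    # "emotional"/value system: estimate J(R), the learned hopes and fears
    return sum(R)

def action(R):
    # action/motor system: propose controls u(t) for the current state
    return [0.5 for _ in R]

R = [1.0, 0.5]                 # current state estimate R(t)
u = action(R)                  # act
J_next = critic(model(R, u))   # evaluate the predicted consequence, J(t+1)
```

In the full design, roughly speaking, J(t+1) is backpropagated through the Model to train the Action network, and the Critic itself is adapted so that J comes to satisfy the dynamic-programming recursion.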


An intelligent control system is not a brain-like system if it does not have a value system! It also had a prediction or expectation system, which Dr. Matsumoto has also talked about. And it had engineering functionality as a general-purpose learning system. It was the first model to meet all four standards. This is not just theory! The McDonnell-Douglas people applied an early version of this to solve the problem of making high-quality carbon-carbon composite parts (White and Sofge 1992). These composite parts account for half the cost of modern aircraft. People spend billions of dollars making these parts like cookies: PhDs baking cookies in an oven, and burning most of them. It is very expensive. McDonnell-Douglas developed a new continuous production process, but they could not control that process well enough with classical control theory, ordinary neural nets, or anything else; these adaptive critics were able to solve the problem, and now they can produce continuous parts. This was a big breakthrough. (Not long after that, however, White and Sofge, who developed that work at McDonnell-Douglas, moved to MIT; Boeing acquired McDonnell, and White found greater funding in the semiconductor area.) There are many other applications I don't have time to discuss, in aerospace, in the automotive sector … Ford Motor Company has said (Business Week, Sept. 1998) that by the year 2001, every Ford car will have a neural network controller to meet air quality standards, using some algorithms that I developed … so these are working systems; it is not all theory. Let me finish up with some citations. For the mammal level of intelligence, there are Karl Pribram's books, in which I have some papers. There is also a book called Dealing with Complexity (cited in Werbos 1998b) where I discuss a new "three-brain model" based on conversations with Karl.

For practical engineering applications, there are some web sites (Werbos 1999c), which include a free long paper on stability theory from the viewpoint of classical control theory. There is a paper on applications (Werbos 1999b). And then there are some papers on consciousness and on quantum theory. To go beyond the MBAC type of model I talked about before, and to account for new things I have learned from Karl, the new model has certain characteristics. It involves neural networks whose inputs come from a field, physical network, or grid rather than just a vector (patent pending). It includes ways to organize a hierarchical decision system, based on a new generalization of Bellman's equation in dynamic programming. Dr. Matsumoto talked about a hierarchical system here today. Karl Pribram discussed this in his book with Miller and Galanter, Plans and the Structure of Behavior. Now there is a mathematical implementation of Karl's ideas, and a new form of dynamic programming to implement these ideas in a learning system. We also have some things called imagination networks … there are many new things I cannot show you for reasons of time.


2.3 Additional comments on quantum theory and the mind

Now: two slides on quantum theory, and some comments on mind and reality.

[Slide: The Second Quantization (II). An experimental setup is encoded as a wave function ψ(t−, FS) over Fock space (FS); it evolves by a "Schroedinger equation", ψ̇ = iHψ, to ψ(t+, FS); the measurement formalism then yields the observed outcome.]

This slide depicts quantum field theory, in the second quantization. This is the quantum field theory which most people work with today. They use a wave function, which is a function over a very complex space called Fock space. There is a kind of "Schroedinger equation" (not the old Schroedinger equation) which evolves the wave function over time, and there is a measurement formalism. The standard ideology, the standard form of quantum mechanics, says that you need a conscious observer, a metaphysical observer. But there were new experiments done by Mandel and Ou, reported in Scientific American in June 1992, which showed that you can get measurement effects without a conscious observer. So there is empirical evidence that we need a quantum theory without observers. This is experiment; this is not philosophy. Where can we get such a quantum theory? I cannot explain quantum theory in one minute! But I can give some citations. Werbos (1998a: Section 6) includes three alternatives to the usual formulation of the functional integral approach. One is a slight reformulation of Schwinger's ideas, to make them more compact and parsimonious; another is very crazy and heretical, providing a more formal basis for revisiting the possibility of realism, drawing on some of the old ideas of Einstein and de Broglie. Werbos (1999b) provides some of the conceptual background; Section 6 of that paper also talks about the three alternatives, and possible testable implications. (Section 3 of this paper will add a new idea on those lines.) For example, there is a possibility that quarks could be bosons … there is a way you could do it. It sounds crazy. But I think I know how.

[Slide: QED vs. remote viewing. Quantum effects are not enough; additional force fields? But if so, where is the signal processing? A radical chasm: extreme choices.]

One last slide. Many people at this conference have expressed hope that quantum mechanics might explain things like remote viewing or the collective unconscious of Jung: wild, crazy things. I would like to point out that no form of quantum mechanics can explain something like remote viewing. It does not matter whether you take the Bohmian kind or my kind or Schwinger's kind or Copenhagen, because all these different forms of quantum mechanics produce about the same quantum electrodynamics; they yield essentially the same predictions for the case of quantum electrodynamics (QED). And electrodynamics is not enough to generate remote viewing. We know what is possible with QED. The world has spent billions of dollars trying to use QED in the military to see things far away. We cannot do it. So if you want to explain strange things like remote viewing, the only way is by assuming strange force fields and strange signal processing. You have a choice. There is a great chasm. It is a binary choice; you cannot do it in a fuzzy way. Either you give up on these phenomena entirely, or else you have to open yourself up to really crazy things, much more than just quantum theory. Crazy things like letting me stay here … and I thank Kunio for allowing such a crazy thing.

3. Recent extensions

This section will not give more detailed explanations of the ideas discussed above; see the references for such explanations. Instead, it will give a condensed summary of some new thinking, stimulated by discussions at this conference and at the Quantum Mind conference in Arizona.

3.1 Comments on consciousness qua wakefulness or awareness

Because wakefulness and awareness are major aspects of brain functioning, they are, of course, addressed in the models I mentioned above. In one of Pribram's recent conferences, there was a debate between Pribram, McClelland, Alkon and myself on the functional significance of sleep states. From my earliest papers, I have agreed with LaBerge that dreams provide a simulation capability, essential to the training of any imaginative intelligent controller. Working RLS systems have demonstrated this kind of capability. Additional states are required to facilitate memory consolidation or generalization from memory, a topic related to what is called "memory-based learning" or "syncretism" on the engineering side. McClelland has argued that this involves a transfer from hippocampus to neocortex during dreams, but Karl and I argued that it may instead involve a harmonization between different types of cell within these two structures, during other kinds of sleep states. A key technical point is that local and global representations both exist within both organs. Furthermore, dreams and the hippocampus have long been known to have other functions beyond this hypothesized memory function. Regarding awareness and attention, I thank Bernie Baars for drawing my attention to some of the recent literature by authors like himself and Logothetis, which I need to study further. Attention is clearly much more than a matter of importance weighting or "salience", as in the older models. In my view, it is the key mechanism for "labeling" the variables monitored by major fields in the neocortex; for an example of how important this might be, see the paper by Olshausen and Koch in Arbib. More precisely, this kind of object "labeling" is the kind of machinery needed to use multiplexing to implement the "ObjectNet" design (patent pending) discussed in Werbos (1999c). Any efficient multiplexing system results in synchronized "object binding", without any need for reverberatory attractors and other such mechanisms popular in neuroscience today; the challenge for design (or functional understanding) lies not in the binding per se, but in the management and choice of what is bound to. Current evidence (see papers in Arbib 1995) suggests that the pulvinar plays a crucial role in this function.

3.2 Discussions at the Arizona conference

I am very grateful to Stuart Hameroff and the Arizona group for inviting me to speak at that conference, despite my known skepticism about Orch OR as such. At Arizona, I argued that true quantum computing effects probably are not relevant to a functional understanding of the brain. This does not mean that quantum mechanics as such is irrelevant. Quantum mechanics is important to understanding how molecules work, just as it is important to understanding how quantum dots and Josephson junctions can be used to implement classical NOT gates and AND gates, etc. But we would call that "quantum devices", not "quantum computing", in modern terminology. If a computer is based on quantum devices and ordinary field effects (such as those Pribram has often discussed), it is still quite consistent with the class of quasi-Turing-machine model we are now working with to understand the mammal-brain level of intelligence. But for true quantum computing, as now defined, there must be some exploitation of coherence or quantum entanglement effects to serve a systems-level computational purpose. Many people have already talked about the difficult, unproven physics of trying to imagine how brains could create and maintain quantum entanglement, but very little attention has gone to the even more serious issue of trying to imagine what kind of computational purpose such a system in the brain might serve. As an honest skeptic, perhaps my first duty is to issue a challenge to the quantum brain believers: to give an example of what they might try to prove, to overcome my skepticism. From all I have read and thought about, I can only imagine two ways that a "quantum computing" capability in the brain might really affect general-purpose intelligence.
One would be the evolution of a "quantum associative memory" neuron. Could one really train a single neuron to learn simple functions like XOR or Minsky's old parity mapping challenge? These are not "natural" problems, but if an individual neuron really had the ability to use molecular quantum computing to achieve associative memory, it should have the ability to learn such relations. If it does not … then what are the hypothesized quantum effects within the cell doing anyway? A second possibility would be that of a "superfast recurrent network" (SFRN), an alternative approach to quantum computing (a form of quantum neural network) proposed in Werbos (1997); however, that hypothetical possibility has yet to be fully understood in engineering terms, let alone mapped into biology. Crucial to the idea of an SFRN is the old insight, originally due to myself (Werbos 1973, 1989) and de Beauregard, that the paradoxes of quantum theory can be understood as the result of causality running backwards through time at the microscopic, quantum level. (This is similar in spirit to Cramer's later "transactional interpretation", but Cramer invokes nonlocality, which is unnecessary here.) Penrose cited us both in Shadows of the Mind, and Hameroff showed a slide from Penrose conveying the idea very vividly. Various people went on to argue that new evidence (from Libet, Radin and Bierman) shows that the brain can respond a quarter of a second before a stimulus, and that something like an SFRN might be present in the brain. Parts of this evidence were surprisingly convincing to me, personally, and they posed more acutely the need to revisit the concept of SFRN and backwards causality. Mari Jibu also pointedly challenged us to explain more precisely how we think the interface actually works between "microscopic" time symmetry and the macroscopic arrow of time. As a caveat, Josephson reminded people that my negative comments pertain only to the brain, not to the "soul", a subject of great interest to many but beyond the scope of the present discussion.

3.3 Revisions of my views of quantum effects

Word limits require that I assume the reader has full knowledge of the references. The views here are not only personal but highly tentative. Many issues which seem real in debates on quantum theory disappear when one considers recent experiments. (In addition to the quantum computing work mentioned above, Y. Shih and K. Alley of Maryland have important results.) For example, one may worry about what happens after two measurements, A(t) and B(s), at times s and t, at the discontinuity where s = t. But real measurements take some time; when one approaches such a discontinuity, one predicts the result simply by representing the polarizers or whatever in a more complete fashion, as potentials or particles, affecting the Schroedinger equation, and chucking out the metaphysical observer formalism.
This is like the original von Neumann-Wigner approach discussed by Stapp at Arizona. This is what actually works. As a practical matter, one always expects to get the right result if one applies the measurement and setup formalisms only to the ultimate, asymptotic, commutative inputs and outputs of an experiment; the measurement formalism may sometimes work in describing what happens in the middle, but there is no general guarantee. Both the functional integral approach, and the variations which I have proposed (Werbos 1998a, 1999a), assume an underlying symmetry in time at the microscopic level (leaving aside the superweak interactions). In answer to Mari Jibu's question, I would argue that all the usual experiments in quantum theory can be reduced to something I call "the standard paradigm." In this paradigm, everything is ultimately reduced to a scattering experiment. The inputs are represented by some set of measurement operators, and by the actual values of the corresponding variables. (For example, the experimenter may control the momentum of every incoming particle.) The outputs are represented by another set of measurement operators, but the experimenter cannot decide the values of those variables; he may only observe them. Thus there is a clear-cut asymmetry between the input situation and the output situation. In practice (in my definition of "standard paradigm"), the outgoing measurement operators all commute with each other; in fact, they are really nothing but particle counters, which measure particles with energy E >> kT, where T is the temperature of the counter. (Polarizers and such may be considered internal parts of the experiment.) The functional integral approaches and the second quantization essentially agree completely for experiments which can be reduced to the standard paradigm. We cannot do the usual Bell's Theorem experiments in reverse time, because these counters do not emit energetic particles in reverse time. Why not, if physics is symmetric in time at the microscopic level? Actually, this question is mathematically almost equivalent to the classical question about what happens to a rock on the floor. Why do we not see rocks flying up from the floor, following a time-reversed movie of how they fall to the floor? The answer is simple: there is only a tiny probability that the atoms under the rock will happen to move in the same direction (up) and push the rock up. For similar reasons, it is rare that an E >> kT counter would emit a particle in reverse time. The puzzling thing is that we ever see such an event in forwards time; this otherwise improbable thing is due to the experimenter exploiting the availability of time-forwards free energy, which ultimately comes from sunlight pouring down on earth, a macroscopic boundary condition. For experiments within the scope of the standard paradigm, backwards time communication of macroscopic information is impossible. It is impossible, in part, because it would allow a violation of Eberhard's theorem on the impossibility of communicating information faster than the speed of light (FTL). Eberhard's theorem does not depend on conventional wisdoms about causality and such; it depends only on the basic assumption that equal-time commutators are zero. The concept of backwards causality and equilibration across space-time may provide a useful understanding of what is possible with quantum computing within the standard paradigm; thus it may still permit development of some kind of useful SFRN, as a way of speeding up certain very general computations. However, such designs could all be reformulated (albeit awkwardly) within the usual formalisms of traditional quantum computing, rooted in the second quantization. There is no possibility of communicating macroscopic information back through time. There are two possible loopholes here which merit further thought. First, what about "stochastic infrared quantum computing?" What if one output channel ends in a controlled polarizer, followed by an E

OS/OP = SB/SA = OR/OS = OV/OW
PS/OP = AB/SA = SR/OS = VW/OW
PS/OS = AB/SB = SR/OR = VW/OV

(SB)(SR) = (AB)(OR)
(X tan θ)/(Y sin θ) = (X/cos θ)/Y

Concluding remarks

Psychological information processing on the ratio scale of discrete variables has been presented in both a real and a complex Hilbert space. In the case of continuous variables, it has been established in holonomic brain theory by Pribram with Yasue and Jibu (1991: Appendices). The former depends on the form of a reciprocal matrix in terms of psychological information, and the latter on the form of a differential equation in terms of dendritic networks. Both theories have an exact correspondence to the formalism of quantum mechanics: one to the behavior of particles (Heisenberg) and the other to the oscillation of waves (Schroedinger). For example, the eigenvector (v) of the reciprocal matrix corresponds directly to the neural wave function (Ψ), as we saw in the eigenvalue problem.4


372 Eiichi Okamoto

We can extend the structure of the ratio scale to the interval and ordinal scales in the same way, in terms of principal component analysis, as shown in the treatment of the reciprocal matrix above. The geometric representation of the interval scale corresponding to ours for the ratio scale is given in dual scaling by Figure 4. The reciprocal matrix has already been applied to the so-called AHP (Analytic Hierarchy Process) by Saaty (1977: 235), but in that case the values of Vij are restricted (to 1, 2, …, 9 and their inverses), resulting in some discrepancy with the optimal estimation. The theory nevertheless contains a layered structure that can evaluate the plural features of individual objects as sub-matrices, and it has also been applied in the author's theory (Okamoto 1997: 218). In the AHP procedure, the measure of the geometric mean (GM) is used to compute the grouping data. It is interesting that the GM has the same value as the eigenvector (EV) when the number of variables is three or fewer (Okamoto 1997: 203). When it is more than three, the method of estimation proposed in this paper takes priority; programs to compute GM, EV, LS (least squares), etc., on an HP-48G (Okamoto 1997: 223) are available from the author. The logistic function (Pij), whose kernel is the ratio (v-)scale (Vij), is a universal key to the behavioral sciences that can open every gate of the information enigma. Alternatively, the (Pij) is the outcome of a Fourier transformation and the (Vij) is the outcome of the inverse Fourier transformation in Hilbert space. We can extend this function to the theory of item characteristics in test scores, the theory of fuzzy systems, the theory of chaos, and so on. In the area of psychology, we provide a computational theory of memory traces as an application of the logistic function in Okamoto (2000); the outline was presented at the 27th International Congress of Psychology in Stockholm.

Notes

1. Functional example of the alternative Fourier transformation:
F[Vij]: H(ω) = ∫ h(t) exp{−ψωt} dt = λ/[λ + ψω] : logistic function
F′[Pij]: h(t) = ∫ H(ω) exp{ψωt} dω = λ exp{−λt} : exponential distribution
where ψ = √(−1), the imaginary number, and ω is the argument of the vector.

2. The eigenvector is equal to the geometric mean of the row values in a reciprocal matrix at all times, even for raw data, when there are n ≤ 3 variables, but it is not always equal when n > 3.

reciprocal matrix (i \ j):

      | 1    a    b  |
Vij = | 1/a  1    c  |
      | 1/b  1/c  1  |

eigenvector:

v = [ (ab)^(1/3), (c/a)^(1/3), (1/(bc))^(1/3) ]

eigenvalue:

λ = [ (abc)^(1/3) + (ac)^(2/3) + b^(2/3) ] / (abc)^(1/3)

This is the standard solution of the eigenvalue problem Vij v = λv. When (Vij) is transformed to (Wij), (λ) reduces to (n), the number of objects.
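Note 2's claim (the eigenvector equals the row geometric means for n ≤ 3) can be checked numerically. The sample values of a, b, c and the power-iteration routine below are my own illustration:

```python
# Numerical check of Note 2: for a 3x3 reciprocal matrix, the principal
# eigenvector is proportional to the geometric means of the rows.
# The sample values a, b, c and the power iteration are illustrative.

a, b, c = 2.0, 4.0, 3.0
V = [[1.0,   a,   b],
     [1/a, 1.0,   c],
     [1/b, 1/c, 1.0]]

def matvec(M, x):
    return [sum(m * v_i for m, v_i in zip(row, x)) for row in M]

v = [1.0, 1.0, 1.0]
for _ in range(100):                    # power iteration -> principal eigenvector
    v = matvec(V, v)
    s = sum(v)
    v = [x / s for x in v]              # normalize components to sum to 1

gm = [(row[0] * row[1] * row[2]) ** (1 / 3) for row in V]
gm = [x / sum(gm) for x in gm]          # same normalization
```

With these values, both vectors come out proportional to (2, 1.5^(1/3), (1/12)^(1/3)); for n > 3 the two estimates generally differ, as the note says.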

Psychological information processing in a complex Hilbert space 373

3. Jacobian matrix: Fisher information in statistics.

                                      | ∂X1/∂Y1 … ∂X1/∂Ym |
[∂Z/∂X1 … ∂Z/∂Xk]                     |     (∂Xi/∂Yj)     | = [∂Z/∂Y1 … ∂Z/∂Ym]
                                      | ∂Xk/∂Y1 … ∂Xk/∂Ym |

log-likelihood function: Z = log f(x, θ) = l(x, θ). This structure corresponds to tensor analysis and to the Markov chain matrix in the learning process.

4. The β model of the learning process by Luce (1959), synonymous with the delta rule in the PDP model:
P(t+1) = 1/[1 + exp{a + bt}] = 1/[1 + β(0)V(t)]
       = 1/[1 + e^a e^{b + b(t−1)}] = 1/[1 + β(1)V(t−1)]
       = 1/[1 + e^a (e^b)^t] = 1/[1 + β(t)V(0)]

5. exp{Σk W(ki)U(k) + a}: this kind of function is usually adopted in the PDP model of neural network theory, which conveniently uses the same notation as (Pij); Rumelhart 1996: 550.
log Vij = log[(1 − Pij)/Pij] = Σk W(ki)U(k) + a : logit transform
Vij = (1 − Pij)/Pij = exp{Σk W(ki)U(k) + a} : inverse logit transform
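The logit pair in Note 5 is easy to exercise numerically; the function names below are illustrative, not the author's:

```python
# Round trip of the relations in Note 5: the ratio-scale value Vij and the
# probability Pij determine each other via Pij = 1/(1 + Vij) and
# Vij = (1 - Pij)/Pij.  Function names are illustrative.

def p_from_v(v):
    return 1.0 / (1.0 + v)       # logistic form: ratio scale -> probability

def v_from_p(p):
    return (1.0 - p) / p         # inverse logit: probability -> ratio scale

p = p_from_v(3.0)                # a ratio of 3 : 1 against gives p = 0.25
v_back = v_from_p(p)             # recovers the original ratio, 3.0
```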


Appendix 1: Logistic function as a negative feedback mechanism in an automatic control system

[Figure A1. Feedback circuit: the input X enters summing point P, where the output Y is fed back through sub-block B in the negative phase; main block A maps P to the output Y. When A = I and B = Vij (I: unit matrix), the negative feedback reduces to the logistic function.]

The equation of the negative feedback:
Y = PA = (X − BY)A = AX − ABY
Y(1 + AB) = AX
Y/X = A/(1 + AB) = L
L = 1/[1 + Vij] = Pij : universal formula of the logistic function
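In the scalar case, the reduction of Figure A1 can be checked by iterating the loop to its fixed point. The helper function and the sample value of Vij below are illustrative:

```python
# Scalar check of the feedback reduction in Figure A1: iterating
# Y <- (X - B*Y) * A drives Y to its fixed point, so the closed-loop gain is
# Y/X = A/(1 + A*B), which becomes 1/(1 + Vij) when A = 1 and B = Vij.
# The helper and the sample value are illustrative.

def closed_loop_gain(A, B, X=1.0, steps=1000):
    Y = 0.0
    for _ in range(steps):       # converges to the fixed point when |A*B| < 1
        Y = (X - B * Y) * A
    return Y / X

V = 0.5
gain = closed_loop_gain(A=1.0, B=V)   # expected: 1/(1 + 0.5) = 2/3
```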

Appendix 2: The stereographic projection on the Riemann sphere (cf. Fig. 3)

The vector NZ is projected onto the equatorial plane (P).

[Figure A2. Riemann sphere. N: the north pole; S: the south pole; O: the center of the sphere; O → Ω: real axis; O → Ψ: imaginary axis. The correspondences (O → Z) ↔ (N → Z) and (O → U) ↔ (N → U′) relate points Z, U on the equatorial plane (P) to the point U′ on the sphere. This relationship corresponds to the ζ function, i.e. the conformal mapping on the Riemann sphere.]


Appendix 3: Basic vectors and parallelogram (AOBC) in the complex space

[Figure A3. Inner product and vector product in a parallelogram. Axes: Ω (real) and Ψ (imaginary). Point A(1, 0) is the standard object and point B(1, tan θ) is the comparable object, with vertex C completing the parallelogram AOBC. |OA| = 1, |OB| = (cos θ)^(−1), |AB| = tan θ. The figure marks the inner product (a), the inverse inner product (b), and the vector product (the area of AOBC).]

Appendix 4: The necessary and sufficient conditions of a pre-Hilbert space

OA: a = (a1, a2)
OB: b = (b1, b2)
OC: c = (a1 + b1, a2 + b2) = a + b
AB: d = (a1 − b1, a2 − b2) = a − b
(a, b) = a1b1 + a2b2 : inner (dot) product
(a ∧ b) = a1b2 − a2b1 = D(a, b) : vector product (determinant)

Norms:
‖a‖ = √(a, a), ‖b‖ = √(b, b)
‖a + b‖ = √((a + b, a + b)), ‖a − b‖ = √((a − b, a − b))

‖a + b‖² + ‖a − b‖² = 2[‖a‖² + ‖b‖²] : the condition of a complex pre-Hilbert space

[Figure A4. Diagonals and sides in a parallelogram: vertices O, A, B, C, with diagonals a + b and a − b; axes Ω (real) and Ψ (imaginary).]

Here the symbols a and b are arbitrary coordinates for the vectors, with no orthogonality condition imposed.
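The pre-Hilbert condition in Appendix 4 is the parallelogram law, which a pair of sample vectors can verify; the vectors below are my own illustration:

```python
# Numerical check of the condition in Appendix 4 (the parallelogram law):
#   |a + b|^2 + |a - b|^2 = 2(|a|^2 + |b|^2)
# The sample 2-D vectors are illustrative.

def norm_sq(v):
    return sum(x * x for x in v)

a = (3.0, 1.0)
b = (1.0, 2.0)
plus = tuple(x + y for x, y in zip(a, b))    # the diagonal a + b
minus = tuple(x - y for x, y in zip(a, b))   # the diagonal a - b

lhs = norm_sq(plus) + norm_sq(minus)         # 25 + 5 = 30
rhs = 2 * (norm_sq(a) + norm_sq(b))          # 2 * (10 + 5) = 30
```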




References

Chino, N. (1998). Hilbert space theory in psychology. Bulletin of Aichi Gakuen University, 28, 45–65.
Luce, R. D. (1959). Individual choice behavior. New York: Wiley.
Nishisato, S. (1982). Quantification of qualitative data. Tokyo: Asakura Shoten. (In Japanese)
Okamoto, E. (1997). The logic of current psychology. Tokyo: Kaigai-Boeki. (In Japanese)
Okamoto, E. (1999). The logic of current psychology (XVII). Journal of KGWU, 10, 65–100.
Okamoto, E. (2000). The logic of current psychology (XVIII). Journal of KGWU, 11, 1–37.
Pribram, K. H. (1991). Brain and perception. New Jersey: Lawrence Erlbaum.
Rumelhart, D. E. (1996). Backpropagation. In P. Smolensky et al. (Eds.), Mathematical perspectives on neural networks. New Jersey: Lawrence Erlbaum.
Saaty, T. L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15, 234–281.

Tokyo ’99 Memorial Pictures 377

Opening Address by Dr. Tarcisio Della Senta, Director of the Institute of Advanced Studies, the United Nations University

United Nations University Headquarters

Ryouichi Kasahara, Official Photographer, Tokyo ’99




Participants in the Main Auditorium

Closing Address by Dr. Hans van Ginkel, Rector of the United Nations University



Name index

A
Alkon 2, 6

B
Baars 82, 191
Bartlett 197
Bohm 138
Bohr 139
Brouwer 89

C
Caianiello 45
Chalmers 66
Crick 24

D
Davydov 43
De Beauregard 83
de Broglie 81
Del Giudice 48
Della Senta 377
Descartes 194

E
Eccles 8
Einstein 33, 81, 139

F
Feynman 64
Freud 7, 78, 315
Froehlich 43, 45

G
Gabor 3
Goedel 89
Greenfield 58
Gross 70

H
Hameroff 83
Hebb 6
Heidegger 137, 140
Heisenberg 3
Hilbert 89
Hobson 66
Hopfield 48
Husserl 95

J
James 95, 145, 152, 333
Jibu 9, 51, 84, 86, 371
Josephson 84

K
Kant 197

L
Lashley 43
Lucas 89

M
Marr 75
Marshak 45
Mozart 149

P
Penrose 8, 84, 89, 110
Piaget 197
Planck 47
Preparata 48
Pribram 23, 25, 43, 73, 78, 365, 371

R
Ricciardi 9, 43, 48, 50

S
Schroedinger 44, 47
Schwinger 64
Shannon 1
Sherrington 2
Socrates 327
Stuart 9, 43

T
Takahashi 9, 43
Tomonaga 64
Turing 92

U
Umezawa 9, 13, 43, 45, 48, 50, 137

V
van Ginkel 379
Vitiello 65

W
Weaver 1
Wolfram 254

Y
Yasue 9, 51, 63, 371



Subject index A abstraction350 acoustic image282 action pattern244 action tendency148 active uncoupling98 adaptive mechanism316 affection325 affective valence327 algorithm92 alpha wave306, 342 amnestic syndrome42 analytical hierarchical processing372 anticipation233 anxiety316, 319 apparent motion131 approach range287 arousal129, 325 arrow of time57, 141 artificial creature231 artificial intelligence (AI)232 artificial neural net72 associated memory65 association54, 201 association of memory55, 57 associative memory151 associative recall174 attention124, 183, 191, 333 attention mechanism118 attention system296 attentional mechanism333, 338 auditory nerve fiber282 autopoietic247 awareness2, 19, 66, 127, 129, 338 B back propagation351 backpropagation72 balance169 Bays’ theorem364

Bellman’s equation79 Bell’s theorem85 binding5, 167, 229 binding at a cognitive level167 binding mechanism129 binding problem133, 168, 171, 207 binocular rivalry261 biological significance169 biomagnetometer343 blindsight307 body-soul dualism360 borderer207 Bose gas20 boson condensation9 bottom-up fixation factor158 Brownian motion16, 26, 28 C cascade machine115 category theory357 cellular automata253 central executive129 certainty233 chaotic performance234, 240 chemical network28 chemical synapse25 circular machine115 classical AI68 classical contextuality175 classical field14 classical theory of pain (CTP)194 closure180 cognitive control104 cognitive integration98 coherence83, 198 coherence length19 coherent condensation54 coincidence detection283 coincidence detector207 collapse141


collective mode 45, 49
collective phenomena 48
complementarity 359
complex system 49, 109
complimentality 95
computational model 316
conception 150
conceptual closure 180
concrescence 175
condensation 20, 49
condensed context 152
conflict 318
confusion of memory 55
connection weight 349
connectionist model 248
conscious affection 328
conscious entity 358
conscious experience 1, 109, 173, 183, 196, 203
conscious level processing 248
conscious mind 125
conscious perception 168, 262
consciousness 58, 59, 63, 66, 92, 114, 124, 127, 138, 145, 155, 180, 199, 203, 207, 231, 243, 248, 301, 304, 333, 338, 347, 349, 357
consciousness based architecture (CBA) 232
consciousness sheaf 357
consciousness-based architecture (CBA) 234
consonant bias 274
constructivism 197
constructivist 197
constructivist model of pain 202
contextuality 173, 179, 180
continuum 145
control policy 31
control theory 369
controlled diffusion 16
correlation function 304
cortical oscillation 261, 270
corticon 50
cosmological arrow of time 57
critical anxiety 320
critical temperature 20
cross-correlation function 223

cross-link process 39
cytoskeletal network 16

D
Darwinism 212
decision center 210
decision-making module 160
decoherence 141
deep structure 5
defense mechanism 315
degree of certainty 240
degree of consciousness 234
degree of pleasure 236, 240
dendritic arborization 2
dendritic membrane 9
dendritic microprocess 25
dendritic network 2, 364, 371
dendritic spine 15, 24
dendro-dendritic synapse plasticity 24
depressive psychosis 328
descriptive test 70
diachronous binding 167
differentially modulated 103
diffusion constant 27, 28
dipole rotational symmetry 51
dipole wave quantum 54
direct priming 229
dissipation 49
dissipative brain dynamics 57
dissipative dynamics 58
dissipative model 59
dissipative quantum brain dynamics 138, 142
dissipative structure 44
dissipative system 53
distribution pattern 32
domain structure 19
domain-specific working memory 133
Doppler effect 282
dorsal stream 133
double 59
doubled mode 53
doubles ontology 137
downwards causation 95, 100, 102, 104, 105
drift force 21
duplication 53


dynamic boundary condition 208
dynamic programming 79
dynamical attractor 218, 289
dynamical description 46
dynamical map 218, 221, 229
dynamical observable 137
dynamical perspective 46
dynamical quantum interaction 47
dynamical skeleton 104
dynamically distributed processing 199
dynamically ordered structure 18
dynamically ordered water 10
dynamics 46
dysbinding 167, 170, 171

E
Eberhard's theorem 85
echo image 282
echolocation 282
edge of chaos 174
EEG 97
ego 315
eigenvalue problem 34
electric dipole moment 19
electrical ephapse 25
electroencephalogram 301
electroencephalographic activity 342
electroencephalography 261
electromagnetic field 17, 18, 20
elementary cellular automata 253
embodied experience 106
embodiment 233
emergent causation 96
emergent dynamical pattern 96
emergent phenomena 104
emergent property 48
emergent system 202
emotion 174, 203, 325
emotional value 233
emotionality 234
environmental membrane organizer 18
epileptic activity 101
episodic memory 98
equilibrium distribution 33, 36, 38
evanescent photon 20
evanescent wave mode 18
event-related desynchronization 262

event-related potential 307
executive system 155
expectancy 148
explicit memory 217
exploration 290
extended working memory structure 136
external potential 30
external potential flow 37
external reality 141

F
familiarity 147
feedback process 233, 239
feedforward neural circuit 115
fence structure 16
ferromagnet 49
field-like processing 3
finite size 57
finite temperature state 57
firing pattern 221, 297
firing rate 328
firing rule 209
first person 59
first person experience 109
first person perspective 233
first-person 105
first-person consciousness 240
first-return map 102
fleeting perception 174
fluid mosaic model 15
focalization 333
Fock space 80
focus 147, 152
Fokker-Planck equation 32
formalist 89, 90
Fourier coefficient 365
Fourier transform 364
fractal neural network 349, 352
free will 155, 244, 247
frequency modulation 282
fringe 145, 151
fringe experience 152
functional integral 86
functional integral approach 64
functional magnetic resonance imaging (fMRI) 127, 326


functor 360
fundamental oscillation 27, 29

G
Gabor function 5
gap junction 15
general anesthetics 21
generalized time space 358
geometrical representation 366
global workspace theory 128
goal approach cell 290
goal position map 291
goal-directed attention 191
Goedel's theorem 90, 251
graded non-locality 55
gravity 169
ground state 54

H
halting problem 92, 251
head direction cell 290
head direction map 291
Hebb rule 351
Hebbian rule 218, 291
Hebb's rule 6
hidden attention 119
hierarchical structure 114
Hilbert space 3, 179, 363
Hodgkin and Huxley model 283
holography 4, 43
holonomic brain theory 371
horizontal communication 359
horizontal hierarchy 145
human inequality 329
human intelligence 232
human mind 315
humour 171
Husserlian phenomenology 231

I
identification 58, 227
imitation 244
implicit memory 218, 228
impulse 35
impurity 41
indeterminacy 3
indirect priming 229

inductive probability 319
infinity 252
information 1, 3
initialness effect 277
inner perimembranous region 17
intellectual activity 315
intelligence 66, 68, 76, 248
intention 156
intentionality 233, 240
intercellular liquid 17
interconnected system 173
intercortical coherence 302
intercortical correlation 301
intercortical interaction 301
interference 159
interneuron 219
inter-sensory binding 167, 170
inter-sensory conflict 170, 171
intra-sensory binding 167
intrinsic contextuality 175, 179, 180
intuitionist 89, 90
inverse Fourier transform 364
ionic channel distribution 26, 34, 38
irony 171
irreversible time-evolution 53

J
Japanese language 273
junctional graded potential 27

K
Khepera robot 237
knowing 147
Kolmogorov's axiom 175

L
language-based awareness 135
large-scale cognitive integration 99
lateral diffusion 15
learning 54
learning process 291
level of consciousness 68
lexicality effect 278
liar paradox 91, 179
limit-cycle 221
linear machine 115
lipid bilayer 26


listening span test 341
living present 95, 104
local environmental membrane organizer 18
local field potential (LFP) 96
local membrane organizer 21
locus 56
logistic function 364
long range correlation 45, 48, 49
long term depression (LTD) 123
long term potentiation (LTP) 123
long-distance phase-synchrony 99
longitudinal mode 20
long-range correlation wave 19
long-term memory (LTM) 24, 38, 40, 151

M
macroscopic configuration 49
macroscopic order parameter 115
macroscopic quantum system 46
macroscopic system 179
magnetic resonance imaging (MRI) 195
magnetoencephalography (MEG) 127, 261, 343
mania 328
many-body system 45, 50
mathematical proof 93
meaning 252
meaningfulness 169
measurement 84, 177
measurement problem 139
measuring apparatus 125
membrane organizer 17
membrane phospholipid oscillation 24
membrane potential 121
membrane protein 26
membrane skeletal network 16
memory 37
memory capacity 51
memory state 53
memory storing 54
memory system 338
memory-based learning 82
metacognitive goal 184
metacognitive knowledge 184
metacognitive strategy selection 184
meta-cognitive system 337
meta-knowledge 189
microconsciousness 159
mind 64, 76, 92, 94, 110
model-based adaptive critic (MBAC) 78
monitoring mechanism 338
mora bias 274
mora constitution 277
mora-timed language 278
morphism 360
motion aftereffect 131
motion sickness 167, 170
motor system 295
multicellular organism 18

N
Nambu-Goldstone boson 49, 51, 140
Nambu-Goldstone mode 19, 20
naturalistic perspective 46
navigation 289
negative feedback 373
network model 290
neural correlate 95
neural correlate of consciousness (NCC) 95, 129, 261, 270
neural network 243, 349
neural network complex 116
neural network model 282
neural network theory 63
neural state 105
neural wave generator 365
neurocontrol 75
neuromodulator 15
neuronal connection 17
neuronal junction 25, 27, 37
neuronal membrane 25
neuronal membrane manifold 29
neuronal network 37
neuronal phase space 101
neuronal plasticity 24, 39
neuronal synchronization 207
neuronoid 207
neurophenomenological stance 105
neurotransmitter 15


Newtonian paradigm 254
non-Cartesian machine 232
non-intellectual activity 324
non-intellectual function 315
nonlinear model 306
nonlocal cybernetics 139
non-locality 43, 52
non-monotone dynamics 349
non-monotone neural network 352, 354
normal wave mode 18
nucleus 145
Nyquist circle 369

O
object binding 82
observation 125
observation-oriented computation model 252
ondulatory collective motion 27
ongoing system 215
open autopoietic system 137
open system 49, 52
optical interference 252
oral communication 171
order equation 34, 38
order parameter 49, 118
ordered domain formation 56
ordering information 49
otherness 234, 240
outer perimembranous region 17
overall integration 97

P
pain 193, 202, 327
pain center 193
pain-related pattern 196
panpsychism 66
particle physics 115
particle-wave dualism 359
pattern formation 26
Pavlovian conditioning 2
peak alpha frequency 342, 347
perception 150
perception-stabilizing module 160
perceptual awareness 21, 38
perceptual binding 96, 98

perceptual fundamental process 27
perceptual reversal 156
perimembranous region 9
personality model 315, 324
perspective 188
perspective change 184, 188, 191
perspective-perception reversibility 157
perturbation 156, 173, 180
phase synchrony 96, 98
phase transition 49, 51
phase wave controlled diffusion 27
phase-locking 97
phase-resetting 98
phase-scattering 98
phenomenological perspective 46
phonological bias 274
photon 312
photoreceptor 312
Peano arithmetic 91
place cell 290
planning 244
Platonic mind 110
Platonic truth 90
Platonist 89, 90
pleasantness 325, 328
point attractor 222
population pressure 33
position recognition map 291
positron emission tomography (PET) 127, 326, 345
postmodernism 137
post-synaptic coincidence 207
pressure reversal 22
pre-synaptic coincidence 207
priming 227, 228
primitive membrane organizer 18
probability 178
process-specific working memory 133
projection neuron 219
proof 91
psychic energy 78
psychoanalysis 315

Q
QBD 17, 22, 140
QFT 49, 50
QM 50


qualia 202, 251
qualia problem 253
quantum associative memory 83
quantum attunement 140
quantum brain dynamics (QBD) 13, 137
quantum brain model 43
quantum computing 65, 83
quantum dynamical origin 48
quantum effect 252
quantum electrodynamics (QED) 81
quantum entanglement 83
quantum field theory 13, 14, 43, 64, 86
quantum fluctuation 125
quantum holography 4
quantum information processing 6
quantum machine 175, 178
quantum macroscopic domain 9
quantum mechanics 14, 52, 113, 175, 371
quantum mechanics (QM) 44
quantum neurophysics 139
quantum of information 3
quantum phenomena 180
quantum process 125
quantum state 125
quantum theory 63, 80, 139
quantum Turing machine 251
quantum-like neural process 8
quasi particle 14

R
randomized sampling test 312
rapid serial visual presentation 333
reading span test 341
real motion 131
reasoning perspective 183
reciprocal matrix 363
recognition process 300
recurrent circuit 115
recurrent neural network 351
reference oscillation 27
refresh 54
reinforcement learning 75, 300
relation 147
remembering 1
repression 321

response probability 364
restore 54
Riemann sphere 363
rightness 148
robot 231
rotational diffusion 15
rover 207

S
schema 197, 201
Schroedinger equation 84, 179
Schroedinger-like wave equation 7
science of meaning 203
searchlight metaphor 127
sea-sickness 170
second person 59
second quantization 64
selective construction 183
selective encoding 183
selective learning 1
self 203, 239
self-acceptance degree 321
self-cognition 130
self-consciousness 129, 135
self-monitoring mechanism 334, 347
self-organization 196
self-referential paradox 251
self-representation 58
sensory experience 180
sensory-stimulus evaluation 160
serial processing 114
sheaf 358
sheafification process 359
short term memory 338
short-term memory (STM) 24, 38, 151
situatedness 141
slowly varying graded potential 30, 35
source theory 64
spatial cognition 151
spatial orientation 167, 169
spatio-temporal binding 132
specious present 95
spectral representation 4
speech error 278
spike 121
spin 178


spine-originated dendritic potential 8
spontaneous breakdown of symmetry 45, 49, 50, 51
spontaneous symmetry breaking 9
stability 52
standard paradigm 84
statement of arithmetic 89
stationary distribution 33
statistical mechanism 47
Stern-Gerlach experiment 179
stream of thought 152
strongly contextual 174
subcritical phase 179
subliminal perception 307
subliminal state 119
subsumption architecture (SSA) 232
super-ego 315
superfast recurrent network (SFRN) 83
superposition 53, 179
super-solenoid axiomatic system 252
supracritical phase 179
sweep rate 282, 285
syllabic structure 277
symmetron 51
synaptic strength 220
synaptodendritic microprocess 2
synchronicity 252
synchronization 96, 124, 261, 262
synchronous binding 167
synchrony 96
synchrotron principle 120
syncretism 82

T
task 290
task dependent map 293
terminal range 286
thalamocortical process 5
thermodynamical arrow of time 57
thermofield dynamics 137
third-person 105
thought pattern 244
tight junction 8, 25
tilde conjugation 140
tilde conjugation rule 139
tilde-conjugate 142
time mirror 58
time reversal 141
time-reversal symmetry 52
time-reversed copy 58
time-reversed double 138
time-reversed mirror image 53, 138
toothache 199
top-down attention factor 158
topology 37
transient integration 96
transmembrane protein 15
tunnel photon 20
tunnel photon condensation 21
tunneling wave mode 18
Turing machine 110, 251, 253
Turing test 70
two-way causation 105

U
unconscious affection 328
unconscious decision 160
unconscious inference 157
unconscious material 359
unconscious perception 307, 313
unconscious process 156
unconscious processing 127
unicellular organism 17, 27
unitary inequivalent 52
unitary inequivalent vacuum 52
unity of consciousness 96
unpleasantness 325, 328
unstable periodic orbit (UPO) 104
upwards causation 95, 96, 102

V
vacuum state 49, 54, 138
value 169
value judgement system 296
variant perspective 100
ventral stream 133
verbal communication 167
verbal slip 273
vertical 359
vertical hierarchy 145
virtual photon 20
visual attention 338
visual awareness 129, 134, 261
visual consciousness 129
volition 155, 156
voluntary attention 147
von Neumann theorem 50

W
water 17
water molecule 17
wave function 139
wavelet 3
weakly contextual 174
will 156
will effect 159, 161
will-perception interaction 161
word onset 278
working memory 128, 134, 247, 341, 347
working memory metaphor 127
worldview 179

In the series ADVANCES IN CONSCIOUSNESS RESEARCH (AiCR) the following titles have been published thus far or are scheduled for publication:

1. GLOBUS, Gordon G.: The Postmodern Brain. 1995.
2. ELLIS, Ralph D.: Questioning Consciousness. The interplay of imagery, cognition, and emotion in the human brain. 1995.
3. JIBU, Mari and Kunio YASUE: Quantum Brain Dynamics and Consciousness. An introduction. 1995.
4. HARDCASTLE, Valerie Gray: Locating Consciousness. 1995.
5. STUBENBERG, Leopold: Consciousness and Qualia. 1998.
6. GENNARO, Rocco J.: Consciousness and Self-Consciousness. A defense of the higher-order thought theory of consciousness. 1996.
7. MAC CORMAC, Earl and Maxim I. STAMENOV (eds.): Fractals of Brain, Fractals of Mind. In search of a symmetry bond. 1996.
8. GROSSENBACHER, Peter G. (ed.): Finding Consciousness in the Brain. A neurocognitive approach. 2001.
9. Ó NUALLÁIN, Seán, Paul MC KEVITT and Eoghan MAC AOGÁIN (eds.): Two Sciences of Mind. Readings in cognitive science and consciousness. 1997.
10. NEWTON, Natika: Foundations of Understanding. 1996.
11. PYLKKÖ, Pauli: The Aconceptual Mind. Heideggerian themes in holistic naturalism. 1998.
12. STAMENOV, Maxim I. (ed.): Language Structure, Discourse and the Access to Consciousness. 1997.
13. VELMANS, Max (ed.): Investigating Phenomenal Consciousness. Methodologies and Maps. 2000.
14. SHEETS-JOHNSTONE, Maxine: The Primacy of Movement. 1999.
15. CHALLIS, Bradford H. and Boris M. VELICHKOVSKY (eds.): Stratification in Cognition and Consciousness. 1999.
16. ELLIS, Ralph D. and Natika NEWTON (eds.): The Caldron of Consciousness. Motivation, affect and self-organization – An anthology. 2000.
17. HUTTO, Daniel D.: The Presence of Mind. 1999.
18. PALMER, Gary B. and Debra J. OCCHI (eds.): Languages of Sentiment. Cultural constructions of emotional substrates. 1999.
19. DAUTENHAHN, Kerstin (ed.): Human Cognition and Social Agent Technology. 2000.
20. KUNZENDORF, Robert G. and Benjamin WALLACE (eds.): Individual Differences in Conscious Experience. 2000.
21. HUTTO, Daniel D.: Beyond Physicalism. 2000.
22. ROSSETTI, Yves and Antti REVONSUO (eds.): Beyond Dissociation. Interaction between dissociated implicit and explicit processing. 2000.
23. ZAHAVI, Dan (ed.): Exploring the Self. Philosophical and psychopathological perspectives on self-experience. 2000.
24. ROVEE-COLLIER, Carolyn, Harlene HAYNE and Michael COLOMBO: The Development of Implicit and Explicit Memory. 2000.
25. BACHMANN, Talis: Microgenetic Approach to the Conscious Mind. 2000.
26. Ó NUALLÁIN, Seán (ed.): Spatial Cognition. Selected papers from Mind III, Annual Conference of the Cognitive Science Society of Ireland, 1998. 2000.
27. McMILLAN, John and Grant R. GILLETT: Consciousness and Intentionality. 2001.
28. ZACHAR, Peter: Psychological Concepts and Biological Psychiatry. A philosophical analysis. 2000.
29. VAN LOOCKE, Philip (ed.): The Physical Nature of Consciousness. 2001.
30. BROOK, Andrew and Richard C. DeVIDI (eds.): Self-reference and Self-awareness. 2001.
31. RAKOVER, Sam S. and Baruch CAHLON: Face Recognition. Cognitive and computational processes. 2001.
32. VITIELLO, Giuseppe: My Double Unveiled. The dissipative quantum model of the brain. 2001.
33. YASUE, Kunio, Mari JIBU and Tarcisio DELLA SENTA (eds.): No Matter, Never Mind. Proceedings of Toward a Science of Consciousness: Fundamental Approaches, Tokyo, 1999. 2002.
34. FETZER, James H. (ed.): Consciousness Evolving. n.y.p.
35. MC KEVITT, Paul, Seán Ó NUALLÁIN and Conn MULVIHILL (eds.): Language, Vision, and Music. Selected papers from the 8th International Workshop on the Cognitive Science of Natural Language Processing, Galway, 1999. n.y.p.
36. PERRY, Elaine, Heather ASHTON and Allan YOUNG (eds.): Neurochemistry of Consciousness. Neurotransmitters in mind. 2001.
37. PYLKKÄNEN, Paavo and Tere VADÉN (eds.): Dimensions of Conscious Experience. 2001.
38. SALZARULO, Piero and Gianluca FICCA (eds.): Awakening and Sleep-Wake Cycle Across Development. n.y.p.
39. BARTSCH, Renate: Consciousness Emerging. The dynamics of perception, imagination, action, memory, thought, and language. n.y.p.
40. MANDLER, George: Consciousness Recovered. Psychological functions and origins of conscious thought. n.y.p.
41. ALBERTAZZI, Liliana (ed.): Unfolding Perceptual Continua. n.y.p.
42. STAMENOV, Maxim I. and Vittorio GALLESE (eds.): Mirror Neurons and the Evolution of Brain and Language. n.y.p.
43. DEPRAZ, Natalie, Francisco VARELA and Pierre VERMERSCH: On Becoming Aware. n.y.p.
44. MOORE, Simon and Mike OAKSFORD (eds.): Emotional Cognition. From brain to behaviour. n.y.p.