Quantitative Physiology: Systems Approach (ISBN 9813340320, 9789813340329)


English, 258 pages, 2021


Table of contents:
Foreword
Preface
Acknowledgements
Contents
Acronyms
Part I Applied Methodology
1 Introduction to Quantitative Physiology
1.1 Understanding Physiology
1.2 Towards Quantitative Science
1.3 From Genome to Physiome
1.4 Dealing with Complexity
1.5 Why It Is Timely to Study Quantitative Physiology
1.5.1 Multi-Omic Revolution in Biology
1.5.2 Big Data and Personalised Medicine
1.5.3 Genetic Editing and Synthetic Biology
1.6 Questions
References
2 Systems and Modelling
2.1 Modelling Process
2.2 Physiological Organ Systems
2.3 Equation Models
2.4 Using ODEs in Modelling Physiology
2.4.1 Modelling Oscillations
2.4.2 Linear Stability Analysis
2.4.3 Solving ODEs with the δ-Function
2.5 Conservation Laws in Physiology
2.5.1 Conservation of Momentum and Energy
2.5.2 Boxing With and without Gloves
2.5.3 Rotational Movement
2.6 Questions
References
3 Introduction to Basic Modelling
3.1 Building a Simple Mathematical Model
3.1.1 Model of Falling Flea
3.1.2 Scaling Arguments
3.1.3 Example: How High Can an Animal Jump?
3.1.4 Example: How Fast Can We Walk before Breaking into a Run?
3.2 Models That Involve Metabolic Rate
3.2.1 Modelling Metabolic Rate
3.2.2 Example: Why Do Large Birds Find It Harder to Fly?
3.2.3 Ludwig von Bertalanffy's Growth Model
3.3 Questions
Reference
4 Modelling Resources
4.1 Open Courses
4.2 Modelling Software
4.3 Model Repositories
4.4 Questions
References
Part II Basic Case Studies
5 Modelling Gene Expression
5.1 Modelling Transcriptional Regulation and Simple Networks
5.1.1 Basic Notions and Equations
5.1.2 Equations for Transcriptional Regulation
5.1.3 Examples of Some Common Genetic Networks
5.1.3.1 Autorepressor
5.1.3.2 Repressilator
5.2 Simultaneous Regulation by Inhibition and Activation
5.3 Autorepressor with Delay
5.4 Bistable Genetic Switch
5.5 Questions
References
6 Metabolic Network
6.1 Metabolism and Network
6.2 Constructing Metabolic Network
6.3 Flux Balance Analysis
6.4 Myocardial Metabolic Network
6.5 Questions
References
7 Calcium Signalling
7.1 Functions of Calcium
7.2 Calcium Oscillations
7.3 Calcium Waves
7.4 Questions
References
8 Modelling Neural Activity
8.1 Introduction to Brain Research
8.2 The Hodgkin-Huxley Model of Neuron Firing
8.3 The FitzHugh-Nagumo Model: A Model of the HH Model
8.3.1 Analysis of Phase Plane with Case Ia=0
8.3.2 Case Ia>0 and Conditions to Observe a Limit Cycle
8.4 Questions
References
9 Blood Dynamics
9.1 Blood Hydrodynamics
9.1.1 Basic Equations
9.1.2 Poiseuille's Law
9.2 Properties of Blood and ESR
9.3 Elasticity of Blood Vessels
9.4 The Pulse Wave
9.5 Bernoulli's Equation and What Happened to Arturo Toscanini in 1954
9.6 The Korotkoff Sounds
9.7 Questions
Reference
10 Bone and Body Mechanics
10.1 Elastic Deformations and the Hooke's Law
10.2 Why Long Bones Are Hollow or Bending of Bones
10.3 Viscoelasticity of Bones
10.4 Questions
Reference
Part III Complex Applications
11 Constructive Effects of Noise
11.1 Influence of Stochasticity
11.2 Review of Noise-Induced Effects
11.3 New Mechanisms of Noise-Induced Effects
11.4 Noise-Induced Effects
11.4.1 Stochastic Resonance in Bone Remodelling as a Tool to Prevent Bone Loss in Osteopenic Conditions
11.4.2 Transitions in the Presence of Additive Noise and On-Off Intermittency
11.4.3 Phase Transitions Induced by Additive Noise
11.4.3.1 Second-Order Phase Transitions and Noise-Induced Pattern Formation
11.4.3.2 First-Order Phase Transitions
11.4.4 Noise-Induced Excitability
11.4.4.1 The Neural Lattice
11.4.4.2 Noise-Induced Phase Transition to Excitability
11.4.4.3 Stochastic Resonance in NIE
11.4.4.4 Wave Propagation in NIE
11.5 Doubly Stochastic Effects
11.5.1 Doubly Stochastic Resonance
11.5.2 A Simple Electronic Circuit Model for Doubly Stochastic Resonance
11.5.3 Doubly Stochastic Coherence: Periodicity via Noise-Induced Symmetry in Bistable Neural Models
11.6 New Effects in Noise-Induced Propagation
11.6.1 Noise-Induced Propagation in Monostable Media
11.6.2 Noise-Induced Propagation and Frequency Selection of Bichromatic Signals in Bistable Media
11.7 Noise-Induced Resonant Effects and Resonant Effects in the Presence of Noise
11.7.1 Vibrational Resonance in a Noise-Induced Structure
11.7.2 System Size Resonance in Coupled Noisy Systems
11.7.3 Coherence Resonance and Polymodality in Inhibitory Coupled Excitable Oscillators
11.8 Applications and Open Questions
11.9 Questions
References
12 Complex and Surprising Dynamics in Gene Regulatory Networks
12.1 Nonlinear Dynamics in Synthetic Biology
12.2 Clustering and Oscillation Death in Genetic Networks
12.2.1 The Repressilator with Quorum Sensing Coupling
12.2.2 The Dynamical Regimes for a Minimal System of Repressilators Coupled via Phase-Repulsive Quorum Sensing
12.3 Systems Size Effects in Coupled Genetic Networks
12.3.1 Clustering and Enhanced Complexity of the Inhomogeneous Regimes
12.3.2 Clustering Due to Regular Oscillations in Cell Colonies
12.3.3 Parameter Heterogeneity on the Regular-Attractor Regime
12.3.4 Irregular and Chaotic Self-Oscillations in Colonies of Identical Cells
12.4 The Constructive Role of Noise in Genetic Networks
12.4.1 Noise-Induced Oscillations in Circadian Gene Networks
12.4.2 Noise-Induced Synchronisation and Rhythms
12.4.2.1 The Mean Field Synchronisation Under Constant Light Conditions
12.4.2.2 The Mean Field Under Stochastic Light Conditions—Stochastic Coherence
12.5 Speed Dependent Cellular Decision Making (SdCDM) in Noisy Genetic Networks
12.5.1 Speed Dependent Cellular Decision Making in a Small Genetic Switch
12.5.1.1 Bifurcations Under the Effect of External Signals with Different Growth Times
12.5.1.2 Effect of Different Stochastic Growth Speeds of External Signals
12.5.2 Speed Dependent Cellular Decision Making in Large Genetic Networks
12.5.2.1 The Role of Growth Speed in Decision Making
12.6 What Is a Genetic Intelligence?
12.6.1 Supervised Learning
12.6.1.1 Supervised Learning in Artificial Intelligence: Students, Teachers, and Classification
12.6.1.2 Supervised Learning in Synthetic Biology: Student Cells and Teacher Cells
12.6.1.3 Mathematical Modelling of a Biological Student-Teacher Network
12.6.2 Associative Learning
12.6.2.1 Building an Associative Perceptron with Synthetic Gene Networks
12.6.3 Classification of Complex Inputs
12.6.4 Applications and Implications of Bio-Artificial Intelligence
12.7 Effect of Noise in Intelligent Cellular Decision Making
12.7.1 Stochastic Resonance in an Intracellular Associative Genetic Perceptron
12.7.2 Stochastic Resonance in Classifying Genetic Perceptron
12.7.2.1 Analysis
12.7.2.2 Simulation Results
12.7.2.3 Additional Topics
12.8 Questions
References
13 Modelling Complex Phenomena in Physiology
13.1 Cortical Spreading Depression (CSD)
13.1.1 What Is CSD
13.1.2 Models of CSD
13.1.3 Applications of CSD Models
13.1.4 Questions
13.2 Heart Physiome
13.2.1 Cardiovascular System
13.2.2 Heart Physiome
13.2.3 Multi-Level Modelling
13.2.4 Questions
13.3 Modelling of Kidney Autoregulation
13.3.1 Renal Physiology
13.3.1.1 Nephron as Functional Unit
13.3.1.2 Mechanisms of Autoregulation
13.3.1.3 Nephron-Vascular Tree
13.3.2 Experimental Observations
13.3.3 Model of Nephron Autoregulation
13.3.3.1 Formulation of the Model
13.3.3.2 Equations
13.3.3.3 Functions
13.3.3.4 Parameters
13.3.3.5 Simulations
13.3.4 Questions
13.4 Brain Project
13.4.1 Mystery of Brain
13.4.2 Brain Projects
13.4.3 Brain Simulation
13.4.4 Mammalian Brain as a Network of Networks
13.4.4.1 Brain as a Neural Attractor Network
13.4.4.2 Glial Network and Glial-Neural Interactions
13.4.4.3 Possible Intracellular Intelligence in Brain
13.4.4.4 Medical Applications
13.4.4.5 Open Questions in Dynamics of Brain Super-Network
13.4.5 Calculation of Integrated Information
13.4.6 Astrocytes and Integrated Information Theory of Consciousness
13.4.6.1 Integrated Information Theory
13.4.6.2 Neuron-Astrocyte Model
13.4.6.3 Integrated Information Generated by Astrocytes
13.4.7 Questions
References

Shangbin Chen • Alexey Zaikin

Quantitative Physiology: Systems Approach

Shangbin Chen, School of Engineering Sciences, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Hubei, China

Alexey Zaikin, Department of Mathematics and Institute for Women’s Health, University College London, London, UK

ISBN 978-981-33-4032-9 ISBN 978-981-33-4033-6 (eBook) https://doi.org/10.1007/978-981-33-4033-6 © Huazhong University of Science and Technology Press 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore


Wave of cortical spreading depression and its equation

Dedicated to Nature

Foreword

Physiology is one of the oldest sciences, perhaps, after astronomy, the second oldest. All the old high civilisations made substantial contributions to its development. The ancient Egyptians documented around 1700 BC, in the so-called surgical papyrus, that the human brain contains both tissue and fluid. The Chinese identified special pathways in the body, called meridians, and have used them for acupuncture since about 2000 BC. Later, the Greek Hippocrates (ca. 460–370 BC) described hydrocephalus as the result of a pathological accumulation of water-like fluid inside the head. It then took more than 1000 years until further developments set in in Europe, notably with Leonardo da Vinci and Andreas Vesalius in the fifteenth and sixteenth centuries, respectively. However, further effort was needed to arrive at a quantitative description of physiology, first provided by William Harvey (1578–1657) and Hermann von Helmholtz (1821–1894), who made already rather precise measurements of such basic processes as blood flow and nerve dynamics. But Quantitative Physiology needed more than measurements to become a science: it needed mathematical modelling to describe the underlying mechanisms. A milestone was the first theory of the nerve and muscle action potential, expressed in biophysical models by the often forgotten German physiologist Julius Bernstein in 1912 and extended 40 years later by the British physiologists Alan L. Hodgkin and Andrew F. Huxley in 1952. Since then, something close to a revolution in high-precision measurement and mathematical modelling has started and is still going on.

In this textbook, two very active researchers in Quantitative Physiology, Shangbin Chen and Alexey Zaikin, very successfully and originally bring order to this “zoo of models and their treatment”, making it understandable for newcomers to the field. In the first part, they present the challenges and basic methodology of modelling, including available resources and software. In the second part, they describe the modelling of important physiological subsystems such as genetic and metabolic networks, calcium signalling, neural activity, blood dynamics, and bone mechanics. In the third part, special and very successful applications to rather complex physiological processes, such as cellular decision making, cortical spreading depression, the heart physiome, kidney regulation, mind, and consciousness, are discussed. The general approach is illustrated by paradigmatic examples; particularly noteworthy are the questions formulated at the end of each chapter, which should excite the reader to think further about related problems and guide further reading.

This very well-written textbook by Shangbin Chen and Alexey Zaikin is a systematic presentation of the strongly evolving field of Quantitative Physiology. It provides the basic principles of this difficult kind of modelling as well as the treatment of the corresponding equations. The book is the outcome of a joint lecture given to students of biomedical engineering at the undergraduate elite School of Engineering Sciences, Huazhong University of Science and Technology. It is a very useful introduction for starters in the field, but it also provides important information and suggestions for researchers in physiology and complex systems science, as well as for a broad range of specialists in bioengineering, biology, computer science, and other areas.

Wuhan, China; Berlin and Potsdam, Germany

Jürgen Kurths

Preface

Stephen Hawking said that the twenty-first century will be the century of complexity, and indeed Systems Biology or Medicine now means dealing with complexity. Both the genome and the physiome have emerged in the study of complex physiological systems. Computational and mathematical modelling is regarded as an efficient tool to boost understanding of living systems in normal or pathophysiological states. Quantitative Physiology, defined as the quantitative description, modelling, and computational study of physiology, is an interdisciplinary field of systems biology. Many universities have established courses on Quantitative Physiology; however, there has been no dedicated textbook on the topic. This need was the first driving force for us to publish this book. The book is mainly based on the lectures Quantitative Physiology and Biomathematics given at Huazhong University of Science and Technology (HUST) and University College London (UCL).

The book is divided into three major parts: Applied Methodology, Basic Case Studies, and Complex Applications. This is the ABC of Quantitative Physiology. The ultimate aim of this book is real problem solving in complex applications, but to get there we first lead the students through applied methods and basic cases. The applied methodology part encompasses a brief introduction to Quantitative Physiology, systems and modelling, basic modelling, and modelling resources. The basic case studies consist of several important topics, such as modelling gene expression, metabolic networks, calcium signalling, modelling neural activity, blood dynamics, and bone and body mechanics. The complex applications part comprises three chapters on constructive effects of noise, dynamics in gene regulatory networks, and modelling complex phenomena in physiology.

Physiology includes a wide range of topics and problems; this textbook can cover only a small part of them. We recommend the “3M rule” to students: learn modelling from open courses, refer to applicable models in repositories, and learn from the modellers of research groups. Never underestimate students’ potential as active learners; they can learn a great deal by themselves. No doubt, we will introduce the Systems Approach in Quantitative Physiology: treat the physiological system as a whole and apply the fundamental laws of physics and mathematics together with information techniques. On the other hand, we believe that how to think about problems is the most important skill of all. Thus, we raise Critical Thinking as an educational ideal for Quantitative Physiology. Robert Ennis defined Critical Thinking as reflective and reasonable thinking that is focused on deciding what to believe or do. We and the students should not only learn the knowledge of Quantitative Physiology, but also develop the skills of modelling and the spirit of Critical Thinking. We hope the students (and the readers) can think critically when it is appropriate to do so, and do so well. This textbook is intended as a model for training students’ Critical Thinking in problem solving, and we hope to motivate the students towards success in modelling.

In this book, theory and practice are combined. MATLAB® is used extensively for demonstration and training. To benefit from this book, the readers are expected to have a background in general physiology and college mathematics. In addition to serving as a textbook, this book can also be used as a reference by those who are interested in the systems approach to physiology.

Wuhan, China; London, UK

Shangbin Chen Alexey Zaikin

Acknowledgements

We would like to thank our families, colleagues, and students for their generous support. Publishing this textbook has truly been a group effort. We are particularly grateful to Professors Qingming Luo and Qian Liu, who initiated the course on Quantitative Physiology at Huazhong University of Science and Technology (HUST) in 2006. They have led us and taught us a lot about this promising field. We would also like to acknowledge Professors Jürgen Kurths and Ling Fu, heads of the School of Engineering Sciences, who have offered us much help and support. Professors Shaoqun Zeng and Hui Gong have given us invaluable advice and encouragement. Dr. Xinglong Wu helped a great deal with the use of LaTeX. Graduate students Yue Luo and Langzhou Liu and undergraduate Zhengchao Luo joined in editing the text. Undergraduate Jin Qian drew the figure for the conceptual framework of Quantitative Physiology. Xiaoxiao Wu painted the wave figure of cortical spreading depression for the cover. We would also like to thank the publishing team from the HUST Press and Springer, including Chong Yuan, Yubin Yang, Xinqi Jiang, and Yankui Zuo, and the anonymous reviewers and editors. The publishing of this book was supported by the Textbook and Teaching Research funding of Huazhong University of Science and Technology.

Wuhan, China; London, UK

Shangbin Chen Alexey Zaikin



Acronyms

3M  The modelling, model, and modeller are introduced in this book of Quantitative Physiology
AcCoA  Acetyl-CoA: It is an intermediary molecule that participates in many biochemical reactions in carbohydrate, fatty acid, and amino acid metabolism
ADP  Adenosine diphosphate: It is an important organic compound in metabolism and is essential to the flow of energy in living cells
AI  Artificial intelligence: It is sometimes called machine intelligence, in contrast to human intelligence
AIDS  Acquired immunodeficiency syndrome: It is a transmissible disease caused by the human immunodeficiency virus (HIV)
AP  Action potential: An action potential is a rapid rise and subsequent fall in the membrane potential of a neuron
ATP  Adenosine triphosphate: The ubiquitous molecule necessary for intracellular energy storage and transfer
BMI  Body mass index: It is a measure of body fat based on height and weight that applies to adult men and women
BRAIN  Brain Research through Advancing Innovative Neurotechnologies: The BRAIN Initiative launched in April 2013 is focused on revolutionising our understanding of the human brain
CA  Cellular automaton: It is a specifically shaped group of model cells known for evolving through multiple and discrete time steps according to a rule set depending on neighbouring cell states
CICR  Calcium-induced calcium release: The autocatalytic release of Ca2+ from the endoplasmic or sarcoplasmic reticulum through IP3 receptors or ryanodine receptors. CICR causes the fast release of large amounts of Ca2+ from internal stores and is the basis for Ca2+ oscillations and waves in a wide variety of cell types
CNS  Central nervous system: It is the part of the nervous system consisting of the brain and spinal cord
CR  Coherence resonance: It refers to a phenomenon in which the addition of a certain amount of external noise to an excitable system makes its oscillatory responses most coherent
CSD  Cortical spreading depression: It is characterised by the propagation of depolarisation waves across the grey matter at a velocity of 2–5 mm/min
CVD  Cardiovascular disease: It is a class of diseases that involve the heart or blood vessels
DFBA  Dynamic flux balance analysis: It is the dynamic extension of flux balance analysis (FBA)
DNA  Deoxyribonucleic acid: It is a molecule comprised of two chains that coil around each other to form a double helix carrying the genetic information
DSC  Doubly stochastic coherence
DSE  Doubly stochastic effects
EC coupling  Excitation-contraction coupling: It describes a series of events, from the production of an electrical impulse (action potential) to the contraction of muscles
ECF  Extracellular fluid: The portion of the body fluid comprising the interstitial fluid and blood plasma
ECG  Electrocardiogram (or EKG): The record produced by electrocardiography to represent the heart’s electrical action
ECS  Extracellular space: It is usually taken to be outside the plasma membranes and occupied by fluid
EEG  Electroencephalography: It is an electrophysiological monitoring method to record the electrical activity of the brain
ER  Endoplasmic reticulum: An internal cellular compartment in non-muscle cells acting as an important Ca2+ store. The analogous compartment in muscle cells is termed the sarcoplasmic reticulum (SR)
ETC  Electron transport chain
FA  Fatty acid: It is the building block of the fat in our bodies and in the food we eat
FBA  Flux balance analysis: It is a widely used approach for studying biochemical networks
FHC  Familial hypertrophic cardiomyopathy: It is a heart condition characterised by thickening (hypertrophy) of the heart (cardiac) muscle
FHN  FitzHugh-Nagumo model: It is named after Richard FitzHugh and Jin-Ichi Nagumo and describes a prototype of an excitable system (e.g., a neuron)
GFP  Green fluorescent protein: A protein, originally derived from a jellyfish, that exhibits bright green fluorescence when exposed to blue or ultraviolet light
GRN  Gene regulatory network or genetic regulatory network: It is a collection of regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins
Glu  Glucose: Glucose is a simple sugar with the molecular formula C6H12O6
Gly  Glycogen: It is a multibranched polysaccharide of glucose that serves as a form of energy storage in organisms
HBP  Human Brain Project: It is a European Commission Future and Emerging Technologies Flagship started on 1 October 2013
HGP  Human Genome Project: It is an international project with the goal of determining the sequence of nucleotide base pairs that make up human DNA and of identifying and mapping all genes of the human genome from both a physical and a functional standpoint
HH model  The Hodgkin-Huxley model: It is a mathematical model that describes how action potentials in neurons are initiated and propagated
II  Integrated information: It is a measure of the degree to which the components of a system are working together to produce outputs
IP3  Inositol 1,4,5-trisphosphate: A second messenger responsible for the release of intracellular Ca2+ from internal stores, through IP3 receptors
ICS  Intracellular space: It is taken to be inside the cell
iPS  Induced pluripotent stem cells: They are a type of pluripotent stem cell that can be generated directly from adult cells
ISIH  Interspike interval histogram
IUPS  The International Union of Physiological Sciences
Lac  Lactate (or lactic acid): It has the molecular formula CH3CH(OH)CO2H
LC  Limit cycle
MFT  Mean field theory: It studies the behaviour of large and complex stochastic models by using a simpler model
MOMA  Minimisation of metabolic adjustment: It is used as an objective function for FBA
NADH  Nicotinamide adenine dinucleotide hydride
NADPH  Nicotinamide adenine dinucleotide phosphate
NIE  Noise-induced excitability
NIT  Noise-induced transition
NSR  National Simulation Resource
ODE  Ordinary differential equation: It is a differential equation containing one or more functions of one independent variable and its derivatives
PC  Phosphocreatine: It is a phosphorylated creatine molecule that serves as a rapidly mobilisable reserve of high-energy phosphates in skeletal muscle and the brain
PDE  Partial differential equation: It is a differential equation that contains beforehand unknown multivariable functions and their partial derivatives
PDF  Probability distribution function
PE  Potential energy
PNS  Peripheral nervous system
Pyr  Pyruvate: It is a key intermediate in several metabolic pathways throughout the cell
RD  Reaction-diffusion: A reaction-diffusion system consists of the diffusion of material and the production of that material by reaction
RFP  Red fluorescent protein
SCN  The suprachiasmatic nuclei
SdCDM  Speed dependent cellular decision making
SERCA  Sarcoplasmic/endoplasmic reticulum Ca2+ ATPase: A Ca2+ ATPase pump that transports Ca2+ up its concentration gradient from the cytoplasm to the ER/SR
SGN  Synthetic gene network
SNR  Signal to noise ratio
SR  Sarcoplasmic reticulum: An internal cellular compartment in muscle cells that functions as an important Ca2+ store. The analogous compartment in non-muscle cells is called the endoplasmic reticulum (ER)
SR  Stochastic resonance: It is a phenomenon where a signal can be boosted by adding white noise to the signal
TCA cycle  Tricarboxylic acid cycle or the Krebs cycle: It is a series of chemical reactions used by all aerobic organisms to generate energy through the oxidation of acetyl-CoA into carbon dioxide and chemical energy in the form of guanosine triphosphate (GTP)
TF  Transcription factor: It is a protein that binds to specific DNA sequences, thereby controlling the rate of transcription of genetic information from DNA to messenger RNA
TGF  Tubuloglomerular feedback
UCS  Ultimate compressive stress
VGCC  Voltage-gated Ca2+ channels: Membrane Ca2+ channels that open in response to depolarisation of the cell membrane
VR  Vibrational resonance
WHO  World Health Organization: It is a specialised agency of the United Nations to direct international health

Part I Applied Methodology

The first part of this book is devoted to introducing the applied methodology of Quantitative Physiology, but it starts with a discussion of the following questions: What is Quantitative Physiology? Why is it so important to study it now? How can we deal with the complexity of a human body? Which methodology can be used to deal with this complexity? How can we construct a simple model to explain experimental observations? Which modelling resources can be used to achieve the aims of modelling?

1 Introduction to Quantitative Physiology

Shangbin Chen and Alexey Zaikin

It is the crossover between disciplines where advances are possible in science. —Max Delbrück

1.1 Understanding Physiology

What is life [1]? This could be an obvious but profound question. In 2005, Science magazine raised 125 big questions [2], including: What is the biological basis of consciousness? Why do humans have so few genes? How are memories stored and retrieved? To what extent are genetic variation and personal health linked? How much can the human life span be extended? What controls organ regeneration? Notably, 14 out of the top 25 questions are about living systems. If you want to know more of the “what, why, and how” of life, you may need to study physiology.

Physiology is a branch of biology. It is the scientific study of the normal mechanisms of a living system and their interactions. The term physiology derives from the Ancient Greek physis and logia, which mean “nature, origin” and “study of”, respectively. So physiology is the study of the nature of life [3]. There are three essential characteristics of life: metabolism, excitability, and reproduction [4]. Physiology aims to know how biomolecules and cells, tissues and organs, organ systems and organisms carry out the chemical or physical functions that exist in a living system. No doubt, physiology is the basis of medicine, and it is usually perceived as a key discipline of biology and medicine. Besides human physiology, physiology may be divided into animal physiology, plant physiology, microbial physiology, and so on. Sometimes, pathological studies may also be considered part of physiology.

Physiology is usually considered an experimental science. Both acute and chronic physiological experiments are routinely designed and performed to study the mechanisms of a living system. There are three hierarchical levels for understanding physiology: the molecular and cellular level, the organ level, and the organism level. Also, there are two opposite approaches, i.e., reductionism and holism, used in physiological research. Now, there is a tendency towards integrative science [5]. The rapid advances of technology allow us to collect different kinds of structural and functional data of organisms. Both high-throughput genome sequencing and high-resolution bio-imaging are revolutionising the research of physiology in the Big Data era. In this sense, physiology has become a data science.

Beyond real wet-lab physiological experiments, there has been more and more modelling and computational work on physiology. Here, we would like to use Quantitative Physiology to cover this idea. Quantitative Physiology is the quantitative description, modelling, and computational study of physiology, and an increasingly important branch of systems biology [6]. It takes the power of physics, mathematics, information technology, etc. to implement quantitative, testable, and predictive research on physiology. Sometimes, mathematical physiology [7] or computational physiology may be used to stress the theoretical or computational nature of the field. In fact, mathematical modelling, computation, and visualisation are applied together in Quantitative Physiology, which aims to boost understanding of function in living systems.

Computational modelling of physiology has undergone rapid development in the past 10 years. A PubMed (https://www.ncbi.nlm.nih.gov/pubmed) search with the term “Physiology AND (Modelling OR Modeling)” on December 13, 2019 yielded over 127,000 papers, with more than 80,000 from this decade. The paper count in each year is shown in Fig. 1.1.

Why We Need Computational Modelling

Simply put, computational modelling is an optimal description of a physiological system. In 1952, Alan L. Hodgkin and Andrew F. Huxley published the most successful model (the HH model) of membrane ionic currents for simulating the action potential [8]. In 1963, John C. Eccles, Hodgkin, and Huxley were jointly awarded the Nobel Prize in Physiology or Medicine “for their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane”. The HH model has succeeded in explaining how different combinations of ion channels underlie the diverse electrical activity of excitable cells, especially in the nervous system. The HH model will be introduced in Chap. 8 in more detail.


Fig. 1.1 Paper count from PubMed searching with “Physiology AND (Modelling OR Modeling)”
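Such yearly counts can also be retrieved programmatically. The following is a minimal MATLAB sketch of our own (not code from the book) using the standard NCBI E-utilities esearch endpoint; it assumes internet access, and the counts returned today will differ from those retrieved in 2019, since PubMed is continuously updated:

    % Yearly PubMed paper counts for the search term used in Fig. 1.1
    url  = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi';
    term = 'Physiology AND (Modelling OR Modeling)';
    years  = 2000:2019;
    counts = zeros(size(years));
    for k = 1:numel(years)
        y = num2str(years(k));
        % esearch with a publication-date window returns the number of hits
        data = webread(url, 'db', 'pubmed', 'term', term, ...
            'datetype', 'pdat', 'mindate', y, 'maxdate', y, 'retmode', 'json');
        counts(k) = str2double(data.esearchresult.count);
        pause(0.4);  % stay below NCBI's limit of about 3 requests per second
    end
    bar(years, counts); xlabel('Year'); ylabel('Paper count');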

Computational modelling will merge the discrete puzzle pieces into a big picture. Inspired by the idea that “all biology is computational biology” [9], we may arrive at the similar idea that all physiology is computational physiology. We will start with simple toy mathematical models using scaling arguments and discuss the questions: How high can an animal jump? How fast can we walk before breaking into a run? What is the minimal nerve speed required to make it possible for an animal to balance? What is the simplest universal model for the growth of a multicellular organism? These questions may warm up our modelling thinking (one such argument is sketched at the end of this section), and we hope the modelling and computational methods will turn conventional physiology into a quantitative science.

How to Perform Computational Modelling

This is the thrust of our course on Quantitative Physiology. We will learn how to do it step by step. First, we will interpret several features of Quantitative Physiology.
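As a taste of the scaling arguments developed in Chap. 3, consider the jumping question (this is only a sketch; the full treatment comes later). For an animal of linear size L, the work its muscles can deliver scales with muscle volume, E ∝ L^3, while the energy needed to raise its body of mass m ∝ L^3 by a height h is mgh ∝ L^3 h. Setting the two equal gives h ∝ L^3/L^3 = constant: to a first approximation, all animals can jump to roughly the same absolute height, which is why a flea's jump looks so spectacular relative to its body size.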

1.2 Towards Quantitative Science

When we are talking about physiology nowadays, we use more and more data and variables. A quantity is an attribute existing in a range of magnitudes that can be quantified by measurement; it can be expressed as the combination of a magnitude, given by a number, and a unit. The seven basic physical quantities are length, mass, time, electric current, temperature, amount of substance, and luminous intensity. Could you use some of these quantities to describe your own physiological state (such as height, weight, and cardiac cycle period)? Here, we introduce two stories from the history of physiology that convey the power of quantitative study.

In the seventeenth century, William Harvey was the first to form the concept of blood circulation, through simple quantification and calculation [3]. He estimated that the human heart's blood output per beat is about two ounces (1 ounce is 28.35 g) and that the heart beats 72 times per minute. It is easy to conclude that about 540 pounds (1 pound is 453.6 g) of blood is discharged from the heart into the aorta every hour. This weight of blood pumped by the heart in a single hour far exceeds the whole weight of a normal body! Harvey's calculation forced him to examine an anatomical route that would allow the blood to recirculate rather than be consumed in the tissues and synthesised continuously by the liver. Harvey recognised that the same amount of blood passes through the heart again and again. After proposing this hypothesis, he spent 9 years experimenting, observing, and mastering the details of the blood circulation. This work established the foundation of modern cardiovascular physiology.

Even in the mid-nineteenth century, most people believed that nerve signals passed along nerves immeasurably fast. In 1849, Hermann von Helmholtz measured the speed at which a signal is carried along a nerve fibre. He used a freshly dissected sciatic nerve of a frog and the calf muscle to which it was attached, with a galvanometer as a sensitive timing device, attaching a mirror to the needle to reflect a light beam across the room to a scale, which gave much greater sensitivity. Helmholtz reported transmission speeds in the range of 24.6–38.4 m/s. This measured speed of nerve impulse conduction is far below the conduction speed of electric current, indicating that the conduction of nerve impulses differs from the conduction of current in metal wires and overturning the hypothesis that nerve impulse conduction can be equated with current conduction. This work contributed to the membrane theory of modern electrical physiology.

Modern techniques have enabled various precise measurements in physiology. In fact, more and more anatomical, physiological, and pathological data are collected to support precision medicine. In order to obtain statistical significance and confidence, cohort studies are also performed to acquire Big Data. The following is one example from Prof. Hu's group at Harvard University [10]:

We prospectively observed 37,698 men from the Health Professionals Follow-up Study (1986–2008) and 83,644 women from the Nurses' Health Study (1980–2008) who were free of cardiovascular disease (CVD) and cancer at baseline. Diet was assessed by validated food frequency questionnaires and updated every 4 years.
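Both historical estimates are easy to reproduce. The following MATLAB lines are a minimal check of the arithmetic in the two stories above; the 50 mm and 1.7 ms figures in the Helmholtz part are illustrative values chosen here to land in his reported range, not his actual measurements:

    % Harvey: blood pumped per hour from 2 ounces per beat at 72 beats/min
    ounce_g   = 28.35;                 % grams per ounce
    pound_g   = 453.6;                 % grams per pound
    per_beat  = 2 * ounce_g;           % grams ejected per beat
    hourly_g  = per_beat * 72 * 60;    % grams pumped per hour
    hourly_lb = hourly_g / pound_g     % about 540 pounds per hour

    % Helmholtz-style estimate: speed from two stimulation sites on a nerve
    dx = 50e-3;                        % distance between sites (m), illustrative
    dt = 1.7e-3;                       % latency difference (s), illustrative
    v  = dx / dt                       % about 29 m/s, inside 24.6-38.4 m/s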

We can imagine how much data would be acquired in a cohort study like the one above. Numerous quantitative data covering biological molecules, cells, tissues, organs, organisms, and the whole body have been accumulated, and many databases on physiology are available. Statistical and data-science analysis will be in high demand in physiology. This is why we would like to speak of Quantitative Physiology, although computational physiology is also a good description of modern physiology. The fusion of mathematical, statistical, and computational methods will turn physiology into a quantitative science.

1.3 From Genome to Physiome

In 2003, the completion of the Human Genome Project (HGP) was effectively announced with the publication of the first complete human genome sequence. A gene is the molecular unit of heredity and is a region of DNA that encodes function. A genome is the genetic material of an organism: it includes the genes (the coding regions), the non-coding DNA, and the genetic material of the mitochondria and chloroplasts. Due to the tight correlation between genes and functions, physiological genomics and functional genomics are hot topics in modern biology.

Among the long list of “omes” and “omics”, we should pay particular attention to the Physiome and Physiomics. Physiome comes from “physio-” (nature) and “-ome” (as a whole). The physiome is the quantitative and integrated description of the physiological dynamics and functional behaviour of the physiological (normal) and pathophysiological states of an individual or species. The physiome describes the physiological dynamics of the normal intact organism and is built upon information and structure (genome, proteome, and morphome). Obviously, Quantitative Physiology matches very well with the definition of the Physiome [11].

In 1993, the concept of a physiome project was first presented to the International Union of Physiological Sciences (IUPS) by its Commission on Bioengineering in Physiology. A workshop on designing the Physiome Project was held in 1997. At its world congress in 2001, the IUPS designated the project as a major focus for the next decade, aiming to explain how each and every component in the body functions as part of the integrated whole. The project is led by the Physiome Commission of the IUPS, and so it is called the IUPS Physiome Project; more information is available on its website (http://physiomeproject.org/). In addition, the National Simulation Resource in the Department of Bioengineering at the University of Washington has set up the NSR Physiome Project (http://www.physiome.org/, see Fig. 1.2).

Fig. 1.2 Snapshot of the NSR Physiome Project at the University of Washington


The NSR Physiome Project is a worldwide effort to collect, archive, develop, implement, validate, and disseminate quantitative information and integrative models for different multi-level physiological systems. The six goals are as follows:

1. “To develop and database observations of physiological phenomenon and interpret these in terms of mechanism (a fundamentally reductionist goal).
2. To integrate experimental information into quantitative descriptions of the functioning of humans and other organisms (modern integrative biology glued together via modeling).
3. To disseminate experimental data and integrative models for teaching and research.
4. To foster collaboration amongst investigators worldwide, to speed up the discovery of how biological systems work.
5. To determine the most effective targets (molecules or systems) for therapy, either pharmaceutic or genomic.
6. To provide information for the design of tissue-engineered, biocompatible implants.”

Clearly, both the IUPS and NSR Physiome Projects are dedicated to understanding the integrative function of cells, organs, and organisms, and to providing crucial information for translational medicine.

The mechanisms linking molecular and cellular events with physiological function and behaviour span wide ranges of space and time scales. They typically involve protein folding, gene expression, metabolic networks, cell signalling networks, and whole-body neural signalling. It is not easy to perform multi-scale modelling and data analysis to investigate function across living systems, organisms, organ systems, organs, cells, and the biomolecules carrying out the chemical or physical functions that exist in a living system. Systems thinking will be central to an understanding of physiological functioning, given its integrated nature with other disciplines such as physics, chemistry, and information science. Holism holds that the whole is more than the sum of its parts. The aim is to develop models that can be interconnected to form effective representations of complex systems. As Schrödinger put it in his question and answer [1]: “How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry? The obvious inability of present-day physics and chemistry to account for such events is no reason at all for doubting that they can be accounted for by those sciences.” Yes, the future of Quantitative Physiology will be bright [14].

1.4 Dealing with Complexity

Stephen Hawking said that the twenty-first century will be the century of complexity, and indeed systems biology or systems medicine now means dealing with complexity [6]. The reality is that the huge amount of biological and physiological data, emerging from different -omics sources on very different scales, from genome to physiome, exceeds our ability to analyse it. A conceptual framework of Quantitative Physiology is shown in Fig. 1.3. Effective progress in research can be obtained only by merging experimental data mining with modelling and Big Data analysis in the framework of Quantitative Physiology. Recent advances in applied mathematics and nonlinear dynamics, as well as the development of new machine-learning algorithms such as deep learning methods, have enabled understanding of the real mechanisms underlying observed complex phenomena [12]. On the other hand, computational modelling is cheaper and faster than real experiments. Both editors of this textbook have joined the project “Digital Personalised Medicine of Healthy Ageing (DPM-AGEING): network analysis of Big Multi-omics data to search for new diagnostic, prognostic and therapeutic targets” [13]. This project is a good example of Quantitative Physiology.

Fig. 1.3 Conceptual framework of Quantitative Physiology. Computational modelling and statistical analysis will cover multi-scales from biomolecules to whole organisms. Credit to Jin Qian

1.5 Why It Is Timely to Study Quantitative Physiology

The study of physiology has always been one of the main interests of scientists, but only recently have systems science approaches been translated into this field. In systems sciences, such as systems biology or the recently introduced systems medicine, one is interested not in the separate study of particular systems, but in the development of general approaches and laws valid for all systems under consideration. Probably one of the first thinkers along these lines was the Italian polymath Leonardo da Vinci, around 1490. According to the Encyclopaedia Britannica, “He believed the workings of the human body to be an analogy for the workings of the universe”. The development of Quantitative Physiology has been exponential since that time, and it is now very timely to study this field because of several recent developments. Let us discuss them in detail.

1.5.1 Multi-Omic Revolution in Biology

We live in a unique time, when exponential progress in the development of computers and artificial intelligence coincides with crucial discoveries in Quantitative Physiology. Practically, we can say that we are observing a real data revolution in modern biology. After the discovery of the DNA structure in 1953 by James Watson and Francis Crick, probably the most significant discovery was the sequencing of the human genome within the Human Genome Project, completed in 2003. Obtaining complete information about the human genome, a sequence of four nucleotides packed in diploid chromosomes, has resulted in surprising discoveries. Neither the total size of the human genome (3.2 Gb) nor the number of genes (around 20,000) is the largest among animals, which seems to contradict the fact that humans, with their brains, are the most complex creatures produced by evolution. Some amoebae or protozoans are believed to have genomes as large as 670 Gb, and a water flea possesses about 31,000 genes. On the other hand, it was found that only 2% of the human genome corresponds to protein-coding regions, i.e., to genes. The remaining 98% was initially believed to be evolutionary rubbish accumulated over the years. However, sequencing the human genome has shown that this is not true and, at the same time, has explained how such a small number of genes can encode such complicated human beings. Surprisingly, it was found that this 98% of human DNA plays an important role in how information is read from protein-coding genes, and it appeared that there are many complex mechanisms regulating the read-out of this information. Especially worth mentioning, non-coding regions can encode miRNA molecules, which regulate the read-out of protein-coding genes via RNA interference. Additionally, gene expression is mediated by alternative splicing and by methylation of the genome. Methylation may involve DNA methylation or histone methylation, and changes in the methylation level or profile have been associated with different diseases and with ageing. The current situation is that a tremendous amount of data can now be obtained to characterise a physiological state.

1.5.2 Big Data and Personalised Medicine

Each patient can be characterised by a set of high-dimensional data, including his or her genome sequence, DNA methylation profile, proteome, or metabolome. Together with epidemiological data, these high-dimensional data are called Big Data, both because they are very high dimensional and because they include data of very different types, such as continuous or categorical data. The advantage is that analysis of the Big Data forming a personal patient profile enables us to make a diagnosis and develop a treatment based on the personal features of that patient, making the treatment more effective. This approach is the essence of personalised or precision medicine. However, Big Data analysis also requires the development of sophisticated analysis methodology, such as methods of approximate Bayesian computation using sequential Monte Carlo methods [15] or applications of artificial neural networks. Since patient data can be longitudinal, i.e., serial in time, we should especially mention the successful application of recurrent neural networks to analyse Big Data, in particular the recently developed long short-term memory (LSTM) neural networks [16] and neural networks with gated recurrent units [17]. It is important to note that, in fact, all these networks are particular cases of fully connected neural networks and, hence, the crucial difference is just in the depth of the fully connected area.
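To make the recurrent-network idea concrete, here is a minimal sketch, assuming the PyTorch library is available, of an LSTM that maps a sequence of patient visits to a diagnostic class. The network size, feature count, and random tensors are purely illustrative assumptions, not taken from the references above.

import torch
import torch.nn as nn

class PatientLSTM(nn.Module):
    def __init__(self, n_features=12, hidden=32, n_classes=2):
        super().__init__()
        # one LSTM layer reads the visit sequence; a linear head classifies
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, visits, features)
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden), last state
        return self.head(h_n[-1])        # logits per patient

model = PatientLSTM()
visits = torch.randn(8, 10, 12)          # 8 patients, 10 visits, 12 markers each
logits = model(visits)
print(logits.shape)                      # torch.Size([8, 2])

The final hidden state summarises the whole longitudinal record, which is exactly what makes recurrent networks attractive for serial patient data.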

1.5.3 Genetic Editing and Synthetic Biology

Synthetic biology is a developing field which utilises biology, mathematics, computation, and engineering to construct synthetic genetic networks using engineering principles. A genetic network consists of a number of genes and additional molecular parts, such as promoters and operators, where protein production is regulated through nonlinear positive and/or inhibitory feedback. Synthetic genetic networks serve two purposes: as stripped-down networks which mimic existing complex natural pathways, or as nanorobots which perform controlled, predictable functions. Nowadays, the study of synthetic biology is progressing very fast. Since its inception in 2000, with the development of two fundamental simple networks, the toggle switch [18] and the repressilator [19], a vast number of proof-of-principle synthetic networks have been developed and modelled. These include transcriptional, metabolic, coupled and synchronised oscillators [20–24], networks with both oscillator and toggle switch functionality [25], calculators [26], pattern formation inducers [27], learning systems [28], optogenetic devices [29–31], and logic gates and memory circuits [32–38]. As further developments are made towards the construction of robust and predictable genetic networks, it becomes clear that synthetic genetic networks have the potential to impact many applications in the biomedical, therapeutic, diagnostic, bioremediation, energy-generation, and industrial fields [39–42]. This will be enabled when simple synthetic networks start to be assembled, coupled together and with natural networks [35], in increasingly large and complex structures, in the same way that complex electrical circuits are put together from simple and basic electrical components [39, 43]. In line with engineering principles, this calls for robust, programmable, standard, and predictable assemblies which avoid collateral cross-talk with cellular parts.
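As a flavour of how such stripped-down networks are modelled, the sketch below integrates a minimal two-repressor toggle switch of the kind introduced by Gardner et al. [18]. The equations are the commonly cited textbook form, and the parameter values are illustrative assumptions chosen only to show bistability.

import numpy as np
from scipy.integrate import solve_ivp

alpha1, alpha2 = 5.0, 5.0   # effective synthesis rates (illustrative)
beta, gamma = 2.0, 2.0      # cooperativity (Hill) exponents

def toggle(t, uv):
    u, v = uv
    du = alpha1 / (1.0 + v**beta) - u    # repressor 1, inhibited by v
    dv = alpha2 / (1.0 + u**gamma) - v   # repressor 2, inhibited by u
    return [du, dv]

# Two different initial conditions relax to two different stable states.
for u0, v0 in [(3.0, 0.1), (0.1, 3.0)]:
    sol = solve_ivp(toggle, (0.0, 50.0), [u0, v0], rtol=1e-8)
    print(f"start ({u0}, {v0}) -> steady state "
          f"({sol.y[0, -1]:.2f}, {sol.y[1, -1]:.2f})")

The two steady states correspond to the two switch positions; mutual repression is what gives the circuit its memory.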

Finally, this book is organised into three major parts: Applied Methodology, Basic Case Studies, and Complex Applications. In this way, we address the ABC structure of this textbook and lower the entry barrier to Quantitative Physiology (Fig. 1.4).

Fig. 1.4 The ABC structure of Quantitative Physiology

1.6 Questions

1. What is life? This is a book written by the famous physicist Schrödinger [1]. An interesting question raised there is “why are atoms so small?”, or, in other words, “why must our bodies be so large compared with the atom?” Share your critical thinking on this topic.
2. There is a joke: “Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why.”
   a. What do you think about this joke?
   b. What happens if theory and practice are merged in Quantitative Physiology?
3. Homeostasis is a centrally important concept in physiology. Several physiological variables are kept nearly constant under normal conditions.
   a. Could you provide an example of homeostasis?
   b. What is the mechanism that maintains the above-mentioned homeostasis?
4. Please do an academic search on the “Progress of the Physiome” and summarise it in a brief report.

References

1. Schrödinger E. What is life? Cambridge: Cambridge University Press; 1943.
2. The 125th Anniversary Issue. Science. 2005;309(5731):78–102.
3. Silverthorn DU. Human physiology: an integrated approach. San Francisco: Pearson/Benjamin Cummings; 2009.
4. Zhu D, Wang T. Physiology (in Chinese). 8th ed. Beijing: People’s Medical Publishing House; 2013.
5. Feher JJ. Quantitative human physiology: an introduction. Waltham: Academic; 2017.
6. Chen S, Zaikin A. Editorial: multiscale modeling of rhythm, pattern and information generation: from Genome to Physiome. Front Physiol. 2020;11:1–2.
7. Keener J, Sneyd J. Mathematical physiology. New York: Springer; 2008.
8. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.
9. Markowetz F. All biology is computational biology. PLoS Biol. 2017;15:e2002050.
10. Pan A, Sun Q, Bernstein AM, et al. Red meat consumption and mortality: results from 2 prospective cohort studies. Arch Intern Med. 2012;172(7):555–63.
11. Crampin EJ, Halstead M, Hunter P, et al. Computational physiology and the physiome project. Exp Physiol. 2004;89(1):1–26.
12. Gavaghan D, Garny A, Maini PK, et al. Mathematical models in physiology. Philos Trans R Soc A Math Phys Eng Sci. 2006;364(1842):1099–106.
13. Whitwell HJ, Bacalini MG, Blyuss O, et al. The human body as a super network: digital methods to analyze the propagation of aging. Front Aging Neurosci. 2020;12:136.
14. Zheng X. Quantitative physiology (in Chinese). Hangzhou: Zhejiang University Press; 2013.
15. Toni T, Welch D, Strelkowa N, Ipsen A, Stumpf MPH. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J R Soc Interface. 2009;6:187–202.
16. Lipton ZC, Kale DC, Elkan C, Wetzel R. Learning to diagnose with LSTM recurrent neural networks. Conference paper at ICLR 2016.
17. Cho K, van Merrienboer B, Gulcehre C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Empirical methods in natural language processing (EMNLP); 2014. pp. 1724–34.
18. Gardner TS, Cantor CR, Collins JJ. Construction of a genetic toggle switch in Escherichia coli. Nature. 2000;403:339–42.
19. Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403(6767):335.
20. Stricker J, Cookson S, Bennett MR, Mather WH, Tsimring LS, Hasty J. A fast, robust and tunable synthetic gene oscillator. Nature. 2008;456(7221):516.
21. Tigges M, Marquez-Lago TT, Stelling J, Fussenegger M. A tunable synthetic mammalian oscillator. Nature. 2009;457(7227):309.
22. Danino T, Mondragón-Palomino O, Tsimring L, Hasty J. A synchronized quorum of genetic clocks. Nature. 2010;463(7279):326.
23. Fung E, Wong WW, Suen JK, Bulter T, Lee S, Liao JC. A synthetic gene–metabolic oscillator. Nature. 2005;435(7038):118.
24. Kim J, Winfree E. Synthetic in vitro transcriptional oscillators. Mol Syst Biol. 2011;7(1):465.
25. Atkinson MR, Savageau MA, Myers JT, Ninfa AJ. Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell. 2003;113(5):597–607.
26. Friedland AE, Lu TK, Wang X, Shi D, Church G, Collins JJ. Synthetic gene networks that count. Science. 2009;324(5931):1199–202.
27. Basu S, Gerchman Y, Collins CH, Arnold FH, Weiss R. A synthetic multicellular system for programmed pattern formation. Nature. 2005;434(7037):1130.
28. Fernando CT, Liekens AML, Bingle LEH, Beck C, Lenser T, Stekel DJ, Rowe JE. Molecular circuits for associative learning in single-celled organisms. J R Soc Interface. 2008;6(34):463–9.
29. Levskaya A, Chevalier AA, Tabor JJ, Simpson ZB, Lavery LA, Levy M, Davidson EA, Scouras A, Ellington AD, Marcotte EM, et al. Synthetic biology: engineering Escherichia coli to see light. Nature. 2005;438(7067):441.
30. Ye H, Baba MDE, Peng R, Fussenegger M. A synthetic optogenetic transcription device enhances blood-glucose homeostasis in mice. Science. 2011;332(6037):1565–8.
31. Tabor JJ, Levskaya A, Voigt CA. Multichromatic control of gene expression in Escherichia coli. J Mol Biol. 2011;405(2):315–24.
32. Anderson JC, Voigt CA, Arkin AP. Environmental signal integration by a modular AND gate. Mol Syst Biol. 2007;3(1):133.
33. Tamsir A, Tabor JJ, Voigt CA. Robust multicellular computing using genetically encoded NOR gates and chemical ‘wires’. Nature. 2011;469(7329):212.
34. Goñi-Moreno A, Amos M. A reconfigurable NAND/NOR genetic logic gate. BMC Syst Biol. 2012;6(1):126.
35. Kobayashi H, Kaern M, Araki M, Chung K, Gardner TS, Cantor CR, Collins JJ. Programmable cells: interfacing natural and engineered gene networks. Proc Natl Acad Sci. 2004;101(22):8414–9.
36. Bonnet J, Yin P, Ortiz ME, Subsoontorn P, Endy D. Amplifying genetic logic gates. Science. 2013;340(6132):599–603.
37. Siuti P, Yazbek J, Lu TK. Synthetic circuits integrating logic and memory in living cells. Nat Biotechnol. 2013;31(5):448.
38. Ajo-Franklin CM, Drubin DA, Eskin JA, Gee EPS, Landgraf D, Phillips I, Silver PA. Rational design of memory in eukaryotic cells. Genes Dev. 2007;21(18):2271–76.
39. Lu TK, Khalil AS, Collins JJ. Next-generation synthetic gene networks. Nat Biotechnol. 2009;27(12):1139.
40. Khalil AS, Collins JJ. Synthetic biology: applications come of age. Nat Rev Genet. 2010;11(5):367.
41. Ruder WC, Lu T, Collins JJ. Synthetic biology moving into the clinic. Science. 2011;333(6047):1248–52.
42. Weber W, Fussenegger M. Emerging biomedical applications of synthetic biology. Nat Rev Genet. 2012;13(1):21.
43. Andrianantoandro E, Basu S, Karig DK, Weiss R. Synthetic biology: new engineering rules for an emerging discipline. Mol Syst Biol. 2006;2(1):2006.0028.

2 Systems and Modelling

Shangbin Chen and Alexey Zaikin

I can never satisfy myself until I can make a mechanical model of a thing. If I can make a mechanical model, I can understand it. —Lord Kelvin

2.1 Modelling Process

Leroy Hood once said [1]: “It is likely that systems biology will be one of the major driving forces in biology of the 21st century.” Systems biology is the computational and mathematical modelling of complex biological systems. Following the concept of systems biology, we should apply systems thinking in Quantitative Physiology. We may study any part of the human body (or an organism) as a system, but we must be very careful about emergent properties arising from complex interactions among the different parts. As Aristotle put it: “The whole is greater than the sum of its parts.”

A system is a group of interacting or interdependent items forming a unified whole [2]. Every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its environment, described by its structure and purpose, and expressed in its functioning (from Wikipedia). In other words, a system is a collection of interacting objects or processes. Since a system has a boundary, the entities outside the boundary (i.e., the surroundings) are regarded as the environment. In thermodynamics [3], three kinds of systems are defined by their interaction with the environment. An open system can exchange both matter and energy with its environment. A closed system exchanges only energy, but not matter, with its surroundings. An isolated system exchanges neither matter nor energy with its environment. Clearly, the entire human body is an open system with respect to its living environment. Then, you may understand that “what an organism feeds upon is negative entropy” [4]. Why?

In physiology, we usually describe 10 major organ systems that comprise groups of organs and tissues [5]. These physiological systems of the human body are the circulatory, digestive, endocrine, immune, integumentary, musculoskeletal, nervous, reproductive, respiratory, and urinary systems. For example, the heart, vessels, and blood form the cardiovascular system; the cardiovascular and lymphatic systems both belong to the circulatory system. The brain and the spinal cord form the central nervous system (CNS). How about the peripheral nervous system (PNS)? Each cell can also be a system: it is the basic structural and functional unit of all living organisms. The membrane of a cell can be a system that plays an important role in ion transport. In fact, even the ions inside a neuron can form a system, which keeps the resting potential and osmotic balance. Thus, the term system can denote anything we wish to discuss in Quantitative Physiology.

But how do we study a system? There are many in vivo acute or chronic experiments in physiology, and there are also many in vitro experiments with tissue or cell samples. We classify all of these as experiments with the actual system. However, other ways of investigating a system exist. Do you remember the model of the DNA double helix? This kind of physical model can facilitate our understanding of the structure of DNA (see Fig. 2.1). Here, we need to define the term model. A model is a description or representation of a target system and is a tool to facilitate discussion of the system. The above-mentioned physical model is a three-dimensional physical representation of an object. Besides physical models, mathematical models will be our key focus. Some mathematical models can be solved analytically, but others are too complex; computer simulation may then be the way to study them. With the two terms system and model, Averill M. Law and W. David Kelton summarised a comprehensive map of the ways to study a system [6], including physiological systems (see Fig. 2.2). Mainly, there are three types of problems in Quantitative Physiology: to analyse, to synthesise, and to control a physiological system. Walter J. Karplus designed a simple input/output system to describe the uses of models in scientific applications (Fig. 2.3) [7]. Here, E, S, and R mean excitation, system, and response, respectively.


Fig. 2.1 A DNA double helix model

Fig. 2.2 Schematic drawing on different ways to study a physiological system (adapted from Law and Kelton [6])

Fig. 2.3 Problem type-based classification of uses of scientific model (adapted from Karplus [7])

Table 2.1 Four forms of scientific models

Form of model | Description | Example
Physical model | A physical representation of an object | DNA double helix model
Conceptual model | A natural language description of a system | Internal environment
Diagrammatic model | A graphical display of the objects and processes | The pathway of calcium-induced calcium release
Mathematical model | A model based on algebraic or differential equations | BMI = mass (kg)/(height (m))²

In the framework of Quantitative Physiology, we pay particular attention to quantity, modelling, computation, and simulation [8]. We must keep in mind that not all scientific models are physical or mathematical models. Generally, there are four forms of models (Table 2.1): physical, conceptual, diagrammatic, and mathematical. Since physical and conceptual models are easy to understand, we will say more about diagrammatic and mathematical models in what follows. A diagrammatic model is a graphical display of the studied objects and processes; block diagrams and flowcharts are very popular in different disciplines. Here, we use a simple block diagram to present the conceptual model of calcium-induced calcium release (CICR). CICR is a

biological process whereby calcium is able to trigger calcium release from intracellular Ca²⁺ stores (e.g., the endoplasmic reticulum or sarcoplasmic reticulum) [5]. The process of CICR is shown as three successive steps in Fig. 2.4. CICR is crucial for excitation–contraction coupling in cardiac muscle, and it is a widely occurring cellular signalling process in many non-muscle cells, such as astrocytes and pancreatic beta cells. In addition, the Forrester diagram will be introduced as another type of diagrammatic model (see the example in Fig. 13.19 in Chap. 13). For mathematical models, there are different classifications according to the system concept [2]. A compartment model describes measurable flow (e.g., of a drug) between biological compartments (e.g., organs).


Fig. 2.4 A diagrammatic model showing the three steps of calcium-induced calcium release (CICR): (1) Ca²⁺ enters the intracellular space; (2) the Ca²⁺ influx triggers specific channels on the ER or SR to open; (3) Ca²⁺ is released from the ER or SR into the intracellular space. ER, endoplasmic reticulum; SR, sarcoplasmic reticulum

Single and multiple compartment models are widely used in physiology. A transport model quantifies material and energy distribution in continuous physical space; the equation of continuity is an example. A particle model describes the behaviour of individual physiological objects treated as particles (e.g., an individual biomolecule performing a random walk). A finite state automaton describes an object with only a few, finite number of states or conditions (e.g., a cellular automaton). A complex system can contain elements of several or all of these types of models! We will also use different types of mathematical equations to describe physiology in this chapter. A model is a good tool for solving a problem, but how to build the model, i.e., the modelling process itself, may be the first big problem. There is no “secret formula” that guarantees success in the modelling process. However, we may still follow some criteria. George Polya proposed four steps for solving mathematical problems [2]: (1) understand the problem, (2) devise a plan for solving the problem, (3) execute the plan, and (4) check the correctness of the answer. Similarly, we may construct a model as follows: (1) form a conceptual or diagrammatic model with hypotheses, (2) build a mathematical model with equations, (3) compute or simulate the model to derive consequences, and (4) validate or update the model for further applications.

In 1966, Richard Levins argued that there must be a trade-off between Realism, Precision, and Generality in mathematical models of biological systems; no model can satisfy all three properties at the same time. For example, we may need to know the details of each neuron in a human brain (Realism), and we may then simulate the brain with real neural activity (Precision), but we may fail to extend this model to another human brain (Generality). In practical modelling, we may refer to the form of problem solving based on critical thinking [9]: (1) identify and analyse the problem, (2) clarify the inter-relationships between different variables or processes, (3) collect the observation data, (4) implement the computational model, (5) validate the model by checking its predictions, (6) consider alternative assumptions and models, and (7) make an overall judgement about the model.

2.2 Physiological Organ Systems

The 10 major physiological organ systems of the human body are listed in Table 2.2. Each system has its unique organs (tissues) and functions. Here, we summarise the primary function, a representative conceptual model, and a mathematical model for each of the 10 physiological systems.

Table 2.2 Function and model concept of the ten physiological organ systems

Organ system | Main functions | Conceptual model | Mathematical model
Circulatory system | Transportation between tissues and environmental interfaces | Systemic circulation | Lumped-parameter model
Digestive system | Digestion of food and absorption of nutrients | Diffusion | Fick's law
Endocrine system | Secreting hormones to target distant organs | Second messenger | De Young–Keizer model
Immune system | Removal of microbes and other foreign materials | AIDS | Schenzle model
Integumentary system | Protection from external environment | Viscoelastic | Constitutive equation
Musculoskeletal system | Support body and movement | Elastic | Hooke's law σ = Yε
Nervous system | Sensory input and integration; command and control | Action potential | HH model
Reproductive system | Pass life on to the next generation | Menstrual cycle | Michaelis-Menten equation
Respiratory system | Regulation of blood gases and exchange of gas with the air | Surface tension | Laplace's equation
Urinary system | Regulation of water and solutes, removal of wastes | Osmosis | Countercurrent multiplier model


The circulatory system is composed of the cardiovascular and lymphatic systems. We usually address the cardiovascular system, which comprises the heart, vessels, and blood. It transfers nutrients and gases to cells and tissues throughout the body by the circulation of blood. This is accomplished by a series of coordinated contractions and relaxations of the heart. The lumped-parameter model is often used to study the systemic circulation.

The digestive system includes the mouth, stomach, intestines, rectum, and accessory structures such as the teeth, tongue, liver, and pancreas. It breaks down food into smaller molecules to provide energy for the body. The digestion of food and absorption of nutrients are related to material transport, and Fick's law will be used to describe diffusion.

The endocrine system mainly includes the pituitary gland, pineal gland, thymus, ovaries, testes, and thyroid gland. It secretes hormones and regulates vital processes in the body, including growth, homoeostasis, metabolism, and sexual development. Hormones may influence physiological processes via second messengers, such as IP3 and Ca²⁺. The De Young–Keizer model was used to study hormone-stimulated cytosolic Ca²⁺ oscillations (see Chap. 7).

The immune system comprises the lymph vessels, lymph nodes, thymus, spleen, and tonsils. The system produces and circulates immune cells called lymphocytes. It protects against a wide variety of pathogens, from viruses to parasitic worms. The Schenzle model was used to describe the dynamical mechanisms underlying the course of human immunodeficiency virus (HIV) infection and the progression of acquired immunodeficiency syndrome (AIDS).

The integumentary system comprises the skin and its appendages, such as hair, nails, and sweat glands. It protects the internal structures of the body from external damage by preventing dehydration and abrasion, stores fat, regulates temperature, and produces vitamins and hormones. We may need a constitutive-equation-based viscoelastic model to reflect the cushioning of the skin.

In the musculoskeletal system, there are 206 bones, 639 muscles, cartilage, joints, ligaments, and tendons that support the body and enable it to move. To study the force generated by muscles and bones, we may use the conceptual model of elasticity and then the mathematical model of Hooke's law, σ = Yε (see Chap. 10).

The nervous system includes the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain and spinal cord; the PNS is made of the sensory neurons, ganglia, and nerves that connect to one another and to the CNS. The nervous system monitors and coordinates internal organ function and responds to changes in the external environment. The famous HH model is used to simulate the action potential (see Chap. 8).


The reproductive system differs significantly between males and females. The male structures include the testes, scrotum, penis, vas deferens, and prostate; the female structures include the ovaries, uterus, vagina, and mammary glands. The system produces sex cells and ensures the growth and development of offspring. The Michaelis-Menten equation may be used to simulate the female menstrual cycle.

The respiratory system consists of the lungs, nose, trachea, and bronchi. It provides the body with oxygen via gas exchange between air from the outside environment and gases in the blood. In alveolar ventilation, Laplace's equation is used to quantify the effect of surface tension.

The urinary system includes the kidneys, urinary bladder, urethra, and ureters. It maintains the water, electrolyte, and pH balance of the body and removes wastes. During urine production, osmosis will be quantified in the countercurrent multiplier model (see Chap. 13).

In fact, all ten organ systems are coordinated to work as one unit. Each system interacts with the others directly or indirectly; this is the basis for maintaining the normal function of the human body.

2.3 Equation Models

A key step in modelling is quantitative formulation. Physiological systems are also physical systems that exist in three-dimensional space and are subject to fundamental physical laws and processes; for example, the total amount of mass and energy of an isolated system remains constant over time. We will model the interactions between physiological structures and physical forces. There are four types of equations used for mathematical modelling: algebraic equations, finite difference equations, ordinary differential equations, and partial differential equations. Here are the corresponding examples.

Body Mass Index The body mass index (BMI) or Quetelet index is a value derived from the mass (weight) and height of an individual. The BMI is defined as the body mass divided by the square of the body height,

BMI = Mass/Height²,

and is universally expressed in units of kg/m², resulting from mass in kilograms and height in metres. The mathematical model of BMI is a very simple algebraic equation. The BMI is an attempt to quantify the amount of tissue mass (muscle, fat, and bone) in an individual and then categorise that person as underweight, normal weight, overweight, or obese based on that value. However, there is some debate about where on the BMI scale the dividing lines between categories should be placed. Commonly accepted BMI ranges are underweight: under 18.5 kg/m²; normal weight: 18.5 to 25; overweight: 25 to 30; and obese: over 30.

2.3 Equation Models

15

People of Asian descent have different associations between BMI, percentage of body fat, and health risks than those of European descent, with a higher risk of type 2 diabetes and cardiovascular disease at BMIs lower than the World Health Organization (WHO) cut-off point for overweight, 25 kg/m2 , although the cut-off for observed risk varies among different Asian populations.
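As a first, trivial coding exercise, the algebraic BMI model translates directly into a function. The sketch below uses the commonly quoted WHO cut-offs listed above; the example mass and height are arbitrary.

def bmi(mass_kg: float, height_m: float) -> float:
    # BMI = Mass / Height^2, in kg/m^2
    return mass_kg / height_m ** 2

def category(b: float) -> str:
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal weight"
    if b < 30.0:
        return "overweight"
    return "obese"

b = bmi(70.0, 1.75)
print(f"BMI = {b:.1f} kg/m^2 ({category(b)})")   # BMI = 22.9 (normal weight)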

Person's Knowledge Cited from the book [2]: the net rate of change of a person's knowledge is a balance of learning and forgetting. Suppose the rate of learning increases linearly with the square root of age, while the rate of forgetting increases proportionally to the square of age. We may write a finite difference equation that describes the amount of knowledge a person has as he or she ages:

K_{y+1} = K_y + a√y − by².

Here, K is the amount of knowledge, y is the age, and a and b are two parameters. Supposing that the amount of knowledge of a person peaks at 64 years, it is possible to solve for the two coefficient parameters (i.e., a and b). It is then easy to find the age at which his or her knowledge level starts to decline. Can you calculate the maximum amount of knowledge gained by a person in one year? What insight can you get from this model?
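A minimal numerical sketch of this difference equation follows. Requiring the net yearly change a√y − by² to vanish at the stated peak age of 64 fixes only the ratio a/b (a = 512b); the absolute value of b below is an illustrative assumption.

import numpy as np

peak = 64
b = 1e-4                                  # illustrative scale
a = b * peak**2 / np.sqrt(peak)           # a*sqrt(64) = b*64^2  =>  a = 512*b

K, history = 0.0, []
for y in range(0, 101):
    history.append(K)
    K = K + a * np.sqrt(y) - b * y**2     # K_{y+1} = K_y + a*sqrt(y) - b*y^2

gain = np.diff(history)                   # knowledge gained each year
print("age of largest yearly gain:", int(np.argmax(gain)))    # around age 25
print("knowledge peaks near age:", int(np.argmax(history)))   # around age 64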

Law of Mass Action There are many thousands of biochemical reactions in the living human body. In chemistry, the law of mass action is the proposition that the rate of a chemical reaction is directly proportional to the product of the activities or concentrations of the reactants. Taking the simplest irreversible chemical reaction, A → B with rate constant k, as an example, we get the decay process

d[A]/dt = −k[A].

This is an ordinary differential equation (ODE) describing the kinetics of the chemical reaction. Here, k is the reaction rate constant. For the reversible chemical reaction A ⇌ B, with forward rate constant k1 and backward rate constant k2, we get two ODEs:

d[A]/dt = −k1[A] + k2[B],
d[B]/dt = −k2[B] + k1[A].

Again, k1 and k2 are the reaction rate constants. These ODEs can explain and predict the behaviour of solutions in dynamic equilibrium. In equilibrium, we know that

d[A]/dt = d[B]/dt = 0.

Then, we get

[A]/[B] = k2/k1.

This implies that, for a chemical reaction mixture in equilibrium, the ratio between the concentrations of reactants and products is constant. Thus, the law of mass action is related to both the kinetic rates and the equilibrium composition of a reaction mixture.
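A minimal sketch integrating the reversible reaction A ⇌ B above confirms the equilibrium ratio [A]/[B] = k2/k1; the rate constants and initial concentrations are illustrative choices.

import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 1.0

def kinetics(t, c):
    A, B = c
    # d[A]/dt = -k1*[A] + k2*[B],  d[B]/dt = k1*[A] - k2*[B]
    return [-k1 * A + k2 * B, k1 * A - k2 * B]

sol = solve_ivp(kinetics, (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
A_eq, B_eq = sol.y[0, -1], sol.y[1, -1]
print(f"[A]/[B] at equilibrium: {A_eq / B_eq:.3f}  (k2/k1 = {k2 / k1:.3f})")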

Random Walk A random walk is a stochastic or random process that describes a path consisting of a succession of random steps. In fact, Brownian motion and diffusion have been studied with the conceptual model of the random walk. The term random walk was first introduced by Karl Pearson in 1905. We start by discussing the one-dimensional random walk. The walk is set on the integer number line; it starts at 0, and each step moves +1 (forward) or −1 (backward) with equal probability. After n steps, the probability of m steps in the forward direction is

P(m, n) = (1/2)ⁿ · n!/(m!(n − m)!).

The displacement S is then S = 2m − n. If n is fixed, what is the maximum of P(m, n)? With the approximation n! ≈ (n/e)ⁿ, we get

P*(m, n) = (2πn)^(−1/2),

and, as a function of the displacement,

P(S) = (2πn)^(−1/2) e^(−S²/(2n)).

Then, we can derive the expectation of the displacement, the expectation of the square of the displacement, and its square root as follows:

⟨S⟩ = ∫₋∞^∞ S P(S) dS = ∫₋∞^∞ S (2πn)^(−1/2) e^(−S²/(2n)) dS = 0,

⟨S²⟩ = ∫₋∞^∞ S² P(S) dS = ∫₋∞^∞ S² (2πn)^(−1/2) e^(−S²/(2n)) dS = n,

√⟨S²⟩ = √n.
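A minimal Monte Carlo sketch verifies these two expectations; the numbers of steps and walks are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walks = 1000, 20000
steps = rng.choice([-1, 1], size=(n_walks, n_steps))
S = steps.sum(axis=1)                          # displacement of each walk

print("mean displacement:", S.mean())              # close to 0
print("mean squared displacement:", (S**2).mean()) # close to n_steps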


If we construct C(x, t) = (4πDt)^(−1/2) e^(−x²/(4Dt)), in the same form as P(S), we get

∂C(x, t)/∂t = D ∂²C(x, t)/∂x².

In fact, the above equation is Fick's second law; it is a simple partial differential equation. Furthermore, we can get

⟨x²⟩ = ∫₋∞^∞ x² C(x, t) dx = 2Dt,

so the root-mean-square displacement is √(2Dt). Thanks to the great work of Albert Einstein and Paul Langevin, we know that

D = kT/α.

Here, D is the diffusion coefficient, k is Boltzmann's constant, T is the absolute temperature, and α is the viscous friction coefficient in the solution. For a spherical particle,

α = 6πμr,

where μ is the viscosity of the solution and r is the radius of the particle.
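The following sketch combines the Einstein relation D = kT/α with the Stokes friction α = 6πμr; the particle radius and observation time are arbitrary illustrative values (not those of the exercises below).

import numpy as np

kB = 1.380649e-23        # Boltzmann's constant, J/K
T = 310.0                # body temperature, K
mu = 1e-3                # viscosity of water, kg/(m*s)
r = 5e-9                 # particle radius, m (illustrative)

alpha = 6 * np.pi * mu * r        # Stokes friction coefficient
D = kB * T / alpha                # Einstein relation
t = 60.0                          # observation time, s
rms = np.sqrt(2 * D * t)          # 1D root-mean-square displacement

print(f"D = {D:.3e} m^2/s, rms displacement after {t:.0f} s = {rms:.3e} m")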

2.4 Using ODEs in Modelling Physiology

2.4.1 Modelling Oscillations

Let us consider some examples of how to use ordinary differential equations (ODEs) in modelling. Oscillations without damping, i.e., a periodic process, can be modelled with the equation of a harmonic oscillator:

d²x/dt² + ω²x = 0,  x(0) = x₀,  ẋ(0) = v₀,

where x₀ and v₀ are the initial conditions and the parameter ω is the circular frequency. Substituting x ∝ e^(λt), we get

λ² + ω² = 0,  λ² = −ω²,  λ = ±iω,

x(t) = A e^(iωt) + B e^(−iωt),

where A and B are constants of integration. Using the Euler formula e^(iωt) = cos ωt + i sin ωt, we get x = A(cos ωt + i sin ωt) + B(cos ωt − i sin ωt) = (A_R + iA_I)(cos ωt + i sin ωt) + (B_R + iB_I)(cos ωt − i sin ωt), where the subscripts R and I stand for the real and imaginary parts of the constants, respectively. Hence, Re(x) = (A_R + B_R) cos ωt + (B_I − A_I) sin ωt ≡ A′ cos ωt + B′ sin ωt = √(A′² + B′²) (A′/√(A′² + B′²) cos ωt + B′/√(A′² + B′²) sin ωt). So, the solution is

x(t) = C sin(ωt + φ),  C = √(A′² + B′²),  φ = sin⁻¹(A′/√(A′² + B′²));

here, T = 2π/ω is the period of oscillations and φ an initial phase.

2.4.2 Linear Stability Analysis

Since stability plays an important role in the dynamics of a system, it is useful to be able to identify stability and classify equilibrium points based on their stability. Suppose that we have a one-dimensional autonomous dynamical system,

ẋ = f(x),  (2.1)

with a condition of equilibrium f(x*) = 0. Then, the condition of stability reads f′(x*) < 0 and is illustrated in Fig. 2.5. For a multidimensional system, described by

ẋ = f(x),  (2.2)

written in vector form x = (x₁, x₂, ..., xₙ) and having an equilibrium point x*, we have f(x*) = 0. Taking a multivariate Taylor expansion of the right-hand side, we get

ẋ = f(x*) + (∂f/∂x)|_(x*) (x − x*) + ... = J|_(x*) δx + ...,  (2.3)

where J is the Jacobian matrix

J = ( ∂f₁/∂x₁  ∂f₁/∂x₂  ...  ∂f₁/∂xₙ
      ∂f₂/∂x₁  ∂f₂/∂x₂  ...  ∂f₂/∂xₙ
        ...       ...            ...
      ∂fₙ/∂x₁  ∂fₙ/∂x₂  ...  ∂fₙ/∂xₙ ),  (2.4)


Fig. 2.5 For a 1D system ẋ = f(x), the condition of stability of the equilibrium point x* is f′(x*) < 0

Fig. 2.6 Classification of equilibrium points for a 2D system. Two eigenvalues are shown in the built-in plots by two small solid circles. Note the line of the Andronov-Hopf bifurcation, when the onset of oscillations is observed

and δx = x − x*. So, we have an equation for the perturbations:

d(δx)/dt = J|_(x*) δx.  (2.5)

The eigenvalues of the Jacobian matrix are, in general, complex numbers, and the equilibrium point x* is stable if all the eigenvalues have negative real parts. For a two-dimensional system, the two eigenvalues λ₁ and λ₂ are the roots of the characteristic equation

λ² − τλ + Δ = 0,  (2.6)

where τ is the trace and Δ the determinant of the Jacobian, and equilibrium points can be classified according to the diagram presented in Fig. 2.6. Note that, when crossing the line τ = 0 in the upward direction for positive Δ, a system undergoes the Andronov–Hopf bifurcation.
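The recipe is easy to automate: build the Jacobian at an equilibrium numerically and inspect its eigenvalues. The sketch below, with a damped oscillator chosen as an illustrative 2D system, is an assumption-laden example rather than a general tool.

import numpy as np

def f(x):
    # x' = v, v' = -x - 0.5*v; equilibrium at (0, 0)
    return np.array([x[1], -x[0] - 0.5 * x[1]])

def jacobian(f, x_star, eps=1e-6):
    # central finite differences, one column per variable
    n = len(x_star)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x_star + dx) - f(x_star - dx)) / (2 * eps)
    return J

J = jacobian(f, np.array([0.0, 0.0]))
eigs = np.linalg.eigvals(J)
print("eigenvalues:", eigs)                 # complex pair, Re < 0: stable focus
print("stable:", np.all(eigs.real < 0))

Here τ = −0.5 and Δ = 1, so the diagram of Fig. 2.6 places the point among stable foci, in agreement with the computed eigenvalues.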

2.4.3 Solving ODEs with the δ-Function

To model discontinuity in the solution, it is convenient to use the Heaviside step function:

Θ(t) = 1, if t ≥ 0;  Θ(t) = 0, if t < 0.

The Heaviside function can also be used to “cut out” the function f (t) from t = τ1 to t = τ2 by taking f (t)[Θ(t − τ1 ) − Θ(t − τ2 )].


Another important function is the Dirac delta function (δ-function), defined as

δ(t) = lim_(a→0) (1/(|a|√π)) e^(−(t/a)²).

Note that, rigorously speaking, the δ-function is not a usual function but a distribution or a generalised function (see Fig. 2.7). It has the following properties:

δ(t) = dΘ(t)/dt,  ∫₋∞^∞ f(t)δ(t)dt = f(0),  ∫₋∞^∞ δ(t)dt = 1.  (2.7)

Let us consider how to solve an ODE with a δ-function:

x′ + x = δ(t).  (2.8)

For t < 0, a solution is x = C₁e^(−t), whereas for t > 0, a solution is x = C₂e^(−t). We should find a relation between C₁ and C₂. The equation includes generalised functions, which can act on some test function φ(t), so let us consider any φ(t) such that, for t → ±∞, φ(t) → 0 and φ(t)f(t) → 0 for all f(t). According to the properties of the δ-function in Eq. 2.7, δ acts as δ: φ(t) → φ(0), since ∫₋∞^∞ φ(t)δ(t)dt = φ(0). We can treat x(t) in the same way, getting x(t): φ(t) → ∫₋∞^∞ x(t)φ(t)dt, and x′(t): φ(t) → ∫₋∞^∞ x′(t)φ(t)dt = −∫₋∞^∞ x(t)φ′(t)dt, where we have used integration by parts. Multiplying Eq. 2.8 by φ(t) and integrating over t from −∞ to ∞, we get

−∫₋∞^∞ x(t)φ′(t)dt + ∫₋∞^∞ x(t)φ(t)dt = φ(0),

or, splitting the integrals at t = 0,

−∫₋∞^0 C₁e^(−t)φ′(t)dt − ∫₀^∞ C₂e^(−t)φ′(t)dt + ∫₋∞^0 C₁e^(−t)φ(t)dt + ∫₀^∞ C₂e^(−t)φ(t)dt = φ(0).

Integrating by parts and taking into account the properties of φ(t), we get

−C₁ + C₂ = 1, or C₂ = 1 + C₁.

So, the solution is x = C₁e^(−t) for t < 0 and x = (C₁ + 1)e^(−t) for t > 0. From this, we can develop a simple and practical recipe for solving a differential equation with the Dirac δ-function. To solve the equation Ax′ + Bx = Cδ(t − τ), divide it by A,

x′ + (B/A)x = (C/A)δ(t − τ),

and identify the point with a potential discontinuity in the solution, t = τ. Then, solve the equation for t < τ,

x′ + (B/A)x = 0,

add to x(τ)⁻ the value of the “jump” C/A,

x(τ)⁺ = x(τ)⁻ + C/A,

and solve for t > τ,

x′ + (B/A)x = 0.

Note that another mathematically more convenient way to solve ODEs of this type would be to apply the Laplace transformation, which is beyond the scope of this book.
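The jump recipe can also be checked numerically by replacing δ(t) with the narrow normalised Gaussian from its definition above. A minimal sketch for Eq. 2.8 (so A = B = C = 1 and the expected jump is C/A = 1) follows; the pulse width is an arbitrary small value.

import numpy as np
from scipy.integrate import solve_ivp

a = 1e-3   # pulse width; the Gaussian tends to delta(t) as a -> 0

def rhs(t, x):
    pulse = np.exp(-(t / a) ** 2) / (abs(a) * np.sqrt(np.pi))
    return -x[0] + pulse          # x' + x = delta(t), approximately

sol = solve_ivp(rhs, (-1.0, 1.0), [0.0], max_step=1e-4, rtol=1e-8)
t, x = sol.t, sol.y[0]
# With x(0-) = 0, the solution just after the pulse should be close to 1.
print("x just after t = 0:", x[t > 5 * a][0])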

2.5 Conservation Laws in Physiology

2.5.1 Conservation of Momentum and Energy

Conservation of Momentum The momentum is p⃗ = m v⃗, where m is the mass and v⃗ is the velocity. Here, we can examine Newton's second law of motion:

dp⃗/dt = F⃗,  (2.9)


Fig. 2.7 Schematic plot of the Dirac δ-function that is equal to 0 everywhere except t = 0. At the same time, an area under the function is equal to 1


or

m dv⃗/dt = F⃗.  (2.10)

If F⃗ = 0, then dp⃗/dt = 0 and p⃗ = const, which means that for a closed system the total momentum is conserved.

Conservation of Kinetic Energy Let A be mechanical work, i.e., the amount of energy transferred or changed by a force acting through a distance. Then,

A = ∫ F⃗ · dr⃗ = ∫ F⃗ · v⃗ dt = ∫ m v⃗ · dv⃗.  (2.11)

The change in the kinetic energy W is

dW = A = m v⃗ · dv⃗ = v⃗ · dp⃗;  (2.12)

without a force, dp⃗ = 0 and dW = v⃗ · m dv⃗ = 0, so

W = const = m ∫ v⃗ · dv⃗ = (m/2) ∫ d(v²) = mv²/2.  (2.13)

2.5.2 Boxing With and without Gloves

Let us illustrate the application of the conservation laws with an example of the body's mechanical motion: boxing with and without gloves (Fig. 2.8). Boxing with gloves is an inelastic collision. Using the law of momentum conservation, we can write

m_arm v_fist,i/2 = m_arm v_fist,f/2 + m_head v_f,  (2.14)

using v_fist,f = v_head,f = v_f and v_head,i = 0. Note that we use v_fist,i/2 as the velocity of the arm's centre of mass. We get

v_f = v_fist,i/(1 + 2m_head/m_arm) ≈ 0.236 v_fist,i,  (2.15)

using the approximate values m_arm = 0.05m_b and m_head = 0.081m_b, where m_b is the mass of the body. Here, the indexes are “f” for final and “i” for initial. Boxing without gloves is an elastic collision, and we need one more conservation law, as we get one more variable: the speed of the fist after the punch. We use conservation of kinetic energy and get

m_arm v_fist,i/2 = m_arm v_fist,f/2 + m_head v_head,f,  (2.16)

(m_arm/2)(v_fist,i/2)² = (m_arm/2)(v_fist,f/2)² + (m_head/2) v_head,f².  (2.17)

This gives

v_head,f = v_fist,i/(1 + m_head/m_arm) ≈ 0.382 v_fist,i,  (2.18)

if the same values are used. Notice that this value is 1.6 times larger than the one with gloves. Boxing without gloves leads to broken knuckles and can even lead to death! Notice also that a good boxer will have a large fist speed, and the mass of the head should be much less than the mass of the arm. In reality, however, boxing with gloves has led to a number of deaths in the ring, because boxers started to punch much harder than they would without gloves, not being afraid of breaking their knuckles and hence enhancing v_fist,i.
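A few lines of arithmetic reproduce the two estimates; only the mass fractions from the text enter, and the fist speed is normalised to 1.

m_arm, m_head = 0.05, 0.081          # fractions of the body mass m_b
v_fist = 1.0                         # initial fist speed (arbitrary units)

v_gloves = v_fist / (1 + 2 * m_head / m_arm)   # inelastic collision, Eq. (2.15)
v_bare = v_fist / (1 + m_head / m_arm)         # elastic collision, Eq. (2.18)

print(f"head speed with gloves:    {v_gloves:.3f} * v_fist,i")   # ~0.236
print(f"head speed without gloves: {v_bare:.3f} * v_fist,i")     # ~0.382
print(f"ratio: {v_bare / v_gloves:.2f}")                         # ~1.6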


Fig. 2.8 (a) With gloves due to inelastic collision, after the punch, the fist moves together with the head. (b) Without gloves due to elastic collision, after the punch, the fist moves differently from the head. Adapted from [10]

Fig. 2.9 A force applied at some angle θ with respect to the radius vector from the rotation axis to the point where the force is applied leads to the rotational movement

2.5.3 Rotational Movement

In some considerations, e.g., bone mechanics, we will use equations describing a torque and an area moment of inertia; the latter should not be confused with the moment of inertia used to describe rotational movement. In Fig. 2.9, the torque τ⃗ about some axis z⃗ is defined as

τ⃗ = r⃗ × F⃗,  τ_z = rF sin θ.

A torque leads to a change in the angle α and the angular frequency Ω = dα/dt, giving

τ = I dΩ/dt,

where I is the moment of inertia. The moment of inertia is defined as

I = Σᵢ mᵢrᵢ² = ∫ ρ(r⃗) r² dV,

where r is the distance from the axis.

2.6 Questions

1. Draw a diagram model for the regulation of the Ca²⁺ ion concentration in human blood [2]. The proper concentration of blood calcium ions (Ca²⁺) is essential for the normal functioning of muscle contraction, nerve signalling, and blood clotting. Too high or too low levels of Ca²⁺ rapidly result in death. Usually, blood Ca²⁺ concentrations are regulated within narrow limits (9–11 mg Ca²⁺/100 ml blood). The homoeostasis is balanced by the production and absorption of Ca²⁺. The rate of production of calcitonin in the thyroid gland increases as blood Ca²⁺ rises above the mentioned normal operating limit. A high concentration of calcitonin increases the rate of Ca²⁺ deposition in bones. On the other hand, low levels of blood Ca²⁺ cause the rate of production of parathyroid hormone (PTH) in the parathyroid glands (located adjacent to the thyroid gland) to increase. PTH affects two different processes that increase blood Ca²⁺: reabsorption from the kidneys into the blood and the stimulation of osteoclast cells in bones to decompose the bone matrix (releasing Ca²⁺ into the blood).
2. Construct a model of population growth with constant and varying rates of growth.
3. Based on the law of mass action, use computational simulation to solve the kinetics of the reactions A → B (rate constant k1) and B ⇌ C + D (rate constants k2 and k3). Suppose that the initial concentrations of A, B, C, and D are 10, 20, 0, and 0, and the reaction rate constants k1, k2, and k3 are 4, 2, and 1.
4. Apply the assumption of the 1D random walk and use a computational model to simulate it. You need to validate the expectation of the displacement and the expectation of the square of the displacement. Construct a 2D random walk simulation.
5. A spherical protein molecule with a radius of 2 nm is in an aqueous solution, which has a viscosity coefficient of 0.001 kg/(m·s). Please calculate the one-dimensional average diffusion distance of the protein after 1 h.
6. Calculate the possible equilibrium point and check its stability for the following van der Pol oscillator (b > 0):
   dx/dt = x − x³/3 − y,
   dy/dt = bx.
7. Numerically find the equilibrium points and analyse their stability for the FitzHugh-Nagumo system:
   dv/dt = f(v) − w + 0.05,
   dw/dt = v − 10w,
   where f(v) = v(0.5 − v)(v − 1).
8. Find the moment of inertia of a cylinder of radius a, height h, and density ρ. Assume that the axis of rotation coincides with the axis of symmetry.

References

1. Hood L. A personal view of molecular technology and how it has changed biology. J Proteome Res. 2002;1(5):399–409.
2. Haefner JW. Modeling biological systems: principles and applications. New York: Springer Science & Business Media; 2012.
3. Dill KA, Bromberg S. Molecular driving forces: statistical thermodynamics in chemistry and biology. New York and London: Garland Science; 2003.
4. Schrödinger E. What is life? Cambridge: Cambridge University Press; 1943.
5. Silverthorn DU. Human physiology: an integrated approach. San Francisco: Pearson/Benjamin Cummings; 2009.
6. Law AM, Kelton WD. Simulation modeling and analysis. Vol. 2. New York: McGraw-Hill; 1991.
7. Karplus WJ. The spectrum of mathematical models. Perspect Comput. 1983;3(2):4–13.
8. Tian X. Biological modeling and simulation (in Chinese). Beijing: Tsinghua University Press; 2010.
9. Jenicek M, Hitchcock D. Evidence-based practice: logic and critical thinking in medicine. Chicago: AMA Press; 2005.
10. Herman IP. Physics of the human body. Berlin: Springer-Verlag; 2007.

3 Introduction to Basic Modelling

Alexey Zaikin

Everything should be made as simple as possible, but not simpler. —Albert Einstein

3.1 Building a Simple Mathematical Model

Modelling can be considered a kind of art rather than a science because, in contrast to well-developed parts of mathematics, there are no common recipes for developing one or another model. In mathematics, when we solve model equations, we use standard and well-established methodology to find solutions. In contrast, when constructing a model, we must choose certain assumptions, which may differ depending on the particular features we want to explain. Choosing proper assumptions is not an easy task and requires some experience. To develop this experience, we will start by considering very simple models that are still able to explain observed effects. As everywhere in this book, we will start with some observable measurements or experimental behaviours, develop a model based on the chosen assumptions, and do our best to explain the behaviour observed. We start with some toy models. In all examples throughout the book, we will try to keep the following logic: we always start from the observed experimental behaviour, discuss its features, then develop a set of assumptions and, based on these assumptions, construct a mathematical model. If the model is appropriate, its solution will allow us to explain, predict, and better understand the features of the behaviour observed, see Fig. 3.1.

3.1.1 Model of Falling Flea

Why would a flea survive a fall from a 100-storey building, whereas a human would probably not? Is it because the human is much heavier? Or is it because the flea has a stronger (exo)skeleton and hence can survive the impact, because the flea's legs can absorb the impact (good shock absorbers), or for some other (sensible) reason? Galileo, and later Newton, tell us that two cannon balls of different sizes reach the ground at the same time. However, by

experience, this is not what we expect for fleas and humans. So, what is missing? The proper answer is friction: the drag on bodies due to air friction acts to decelerate a falling body. Over long distances, bodies reach a terminal velocity, which occurs where the frictional drag force balances the weight; the velocity then increases no more until the object hits the ground, see Fig. 3.2. So, we need to understand how the frictional drag depends on the size and shape of a body. This force due to the friction of the medium arises from the change of the momentum of the medium moved in a certain time. To find the drag force, consider the body's area in contact with the air and how much air is moved from standstill to the (terminal) speed v in a given time t, which gives the momentum change in time t. Hence, we get

force × t = density × volume × v = ρ × (S × h) × v = ρ × (S × v × t) × v = ρSv²t,

and the drag force ∝ Sv², where S is the surface area of the object, see Fig. 3.3. Assuming that the terminal velocity is reached by the flea and the human when

weight = drag force: Mg ∝ Sv² ⇒ v ∝ √(M/S),  (3.1)

where M is the mass of the object. So, how does mass/area = M/S differ for the flea and the human? Approximately, we can assume that a flea is 3 mm long and a human is 2000 mm long; then, making the simplest assumption that there is a linear scale L such that


M ∝ L³,  S ∝ L²,  (3.2)


Fig. 3.1 The pipeline to construct a mathematical model

Fig. 3.2 The frictional drag force balances weight

Fig. 3.3 Computation of the volume met in time t

we then get for each body M/S ∝ L. Thus, the terminal velocity varies with the body linear scale L as v ∝ √L, and we say that the velocity scales as the square root of the body linear scale. For a flea and a human, we have (very approximately!)

L_flea/L_human ≈ 3 mm/2000 mm = 0.0015,  (3.3)

and the terminal velocity of a human is approximately 100 mph (1 mph = 1.609344 km/h = 0.44704 m/s), so

v_flea = v_human × √(L_flea/L_human) ≈ 4 mph.  (3.4)

This, combined with the strong exoskeleton of the flea, gives it a much better chance of survival! However, for an elephant, we get the following estimates: L_elephant/L_human = 4/2 = 2, v_elephant = 100√2 ≈ 141 mph, so an elephant has no chance of surviving. Note that this is the terminal velocity, at which the weight force equals the air drag. In reality, however, the only elephant known to have fallen from a height fortunately survived. On the Internet, one can read that a female circus elephant, Tuffi, became famous in Germany in 1950, when she jumped from the suspended monorail in Wuppertal into the river below, fell 12 m down, but suffered only minor injuries. This example shows that simple models are unable to take account


of all possible circumstances important for describing the real situation. Can you evaluate the maximal speed of a falling elephant?
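The scaling relation is easily evaluated numerically. The sketch below normalises v ∝ √L to a human (L = 2000 mm, v = 100 mph), using the rough body sizes from the text.

import math

L_human, v_human = 2000.0, 100.0   # mm, mph

for name, L in [("flea", 3.0), ("human", 2000.0), ("elephant", 4000.0)]:
    v = v_human * math.sqrt(L / L_human)   # terminal velocity scaling
    print(f"{name:8s} terminal velocity ~ {v:6.1f} mph")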

M(LT −1 )2 , power = work done/time = ML2 T −3 , mass flux = mass/area/time = ML−2 T −1 , and so on.

3.1.3 3.1.2

Scaling Arguments

The previous example is an example of a scaling argument— by making very simple assumptions, we were able to model how the terminal velocity scales with linear scale L. In this approach, the scaling argument summarises weight = Mg ∝ L3

(3.5)

drag ∝ projected area × v ∝ L v 2

2 2

√ weight = drag ⇒ L3 ∝ L2 v 2 ⇒ v ∝ L.

(3.6) (3.7)

Notice that we dropped all boring constants to reach the essential point: v scales as square root of L. Inherent in our assumptions were that for the linear scale L that distinguishes bodies in the similarity class: length ∝ L1 , area ∝ L2 (so projected area ∝ L2 ), and volume ∝ L3 (so mass ∝ L3 ). In applying our model, we were also assuming that the model is being applied to two bodies of the same shape (not exactly the case for the flea and the human, but this is a first approximation model!). Hence, applying scaling laws, we assume that we are comparing between families of animals of similar shape, i.e., are isometric, parameterised by linear scale L. In fact, many simple models can be derived and analysed using L as a length scale, when area scales as L2 and volume scales as L3 . Since many of life’s processes depend on transport of substances across a surface area (e.g., lung surface), and that transport supplies a volume (e.g., blood), it is intriguing to ask how the fact that volume increases faster than area effects (limits) function. Moreover, introducing other common scaling arguments such as mass M, length L, and time T , many physical functions can be described in terms of these scaling arguments, e.g., force = mass × acceleration = MLT −2 , work done = force × distance = ML2 T −2 , kinetic energy = Fig. 3.4 A jumping animal or human

3.1.3 Example: How High Can an Animal Jump?

Sometimes a scaling argument leads to the conclusion that the quantity in question is independent of the scale. This does not mean that the model describes the situation incorrectly. Let us derive a toy model to predict how the height h that an animal of a given similarity class can jump varies with the linear scale L. We make the following assumptions (see Fig. 3.4):

1. work done by leg muscles = gain in potential energy;
2. muscle force ∝ cross-sectional area of muscle ∝ L^2 (this is not obvious, but there are models that justify this experimentally demonstrated fact);
3. height jumped = height h gained by the centre of mass (a good approximation).

The potential energy gained is Mgh ∝ L^3 × h. Based on these assumptions, the work done by the muscles equals muscle force × vertical distance over which the centre of mass is displaced ∝ L^2 × L = L^3. Note that this vertical distance is equal to L, and not h, because the muscles stop doing work once the animal leaves the ground. Hence, equating the potential energy gained to the work done by the muscles, we get h L^3 ∝ L^3, i.e., h ∝ L^0. That is, the simple model predicts that, for animals in the same isometric class, the height they can jump is independent of their size. Simple observations show that this prediction, and the model, are not so bad: small insects are known to jump vertical distances far exceeding their own size, whereas humans cannot.

3.1.4 Example: How Fast Can We Walk before Breaking into a Run?

Usually, we assume that an average human walks at a speed of 5 km/h, but what is the maximum possible walking speed? According to the rules of Olympic race walking, breaking into a run



Fig. 3.5 Simplified model of a human walk. Courtesy of Dr. S. Baigent

occurs when both legs leave the ground simultaneously, which is forbidden by the rules. The 2014 world record for the 50 km race walk is 3:32:33, which corresponds to a speed of approximately 14 km/h. Let us derive a model explaining this value. Consider the simplified picture of the human gait (see Fig. 3.5). The human walks with straight legs, so the centre of mass (COM) moves in a series of circular arcs. The front foot leaves the ground if the component of the weight is not strong enough to provide the centripetal acceleration v^2/l, where l is the leg length; since this grows as v^2, it gives us a limit on the walking speed, see Fig. 3.6. To walk without breaking into a run, the speed should not exceed v = √(g l) ≈ √(10 m/s^2 × 1 m) ≈ 3 m/s ≈ 11 km/h, which gives a fairly realistic estimate of the maximal walking speed for an average human, if not for Olympic champions.

Fig. 3.6 Condition of not breaking into a run is a balance between centripetal acceleration force and weight

Note that simple modelling like this also enables us to estimate the minimum nerve speed required for an animal to balance (e.g., a flamingo, see Fig. 3.7). Assuming that toppling of the animal scales like free fall, i.e., that the time available to control balance equals the time of free fall, we get the following equation of motion and its solution: m d^2x/dt^2 = -mg, x(0) = h, dx/dt(0) = 0, so dx/dt = -gt + C1 and x = -gt^2/2 + C1 t + C2, or, taking into account the initial conditions, x = -gt^2/2 + h. Setting x(t) = 0, we get the time of free fall

t = √(2h/g) ∝ √L;

hence, the nerve speed required to control balance should not be smaller than the distance from brain to muscle divided by the time available, i.e., the time of free fall: v_nerve ∝ L/t. Thus, nerve speed scales as L/√L = √L, which explains the fact that giraffes do not need much faster nerves than humans: both scales lie in the region where √L no longer increases dramatically (its derivative declines as 1/√L).
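These estimates are easy to reproduce numerically. The sketch below takes g = 10 m/s^2 and a 1 m leg, as in the text, together with illustrative brain heights (the specific heights are our own rough choices, not values from the text) to evaluate the walking limit √(gl) and the free-fall balance time √(2h/g):

```python
import math

g = 10.0  # gravitational acceleration, m/s^2 (rounded as in the text)

def max_walking_speed(leg_length):
    """Walking limit: centripetal acceleration v^2/l must not exceed g."""
    return math.sqrt(g * leg_length)          # m/s

def balance_time(height):
    """Free-fall time from height h: t = sqrt(2h/g)."""
    return math.sqrt(2.0 * height / g)        # s

v = max_walking_speed(1.0)                     # 1 m leg
print(f"max walking speed = {v:.1f} m/s = {3.6 * v:.0f} km/h")
for name, h in [("human", 1.5), ("giraffe", 4.5)]:
    t = balance_time(h)
    print(f"{name}: t = {t:.2f} s, required nerve speed ~ h/t = {h / t:.1f} m/s")
# The ratio of the two nerve speeds is only sqrt(3), illustrating the sqrt(L) scaling.
```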

3.2 Models That Involve Metabolic Rate

3.2.1 Modelling Metabolic Rate

Let us first discuss some definitions used in physiology. Homeostasis is the property of a system to regulate its internal state so as to maintain stable conditions, e.g., a constant temperature. Metabolism, on the other hand, is the set of chemical reactions that occur in living organisms in order to maintain life. Animals use adenosine triphosphate (ATP) to


Fig. 3.7 Simple model describing animal balance: the time required to control balance is assumed to be equal to the time of free fall

fuel their metabolic demands, e.g., growth, locomotion, maintenance, immunological defence, etc. The cell's "power plants" are organelles called mitochondria, which generate most of the cell's supply of ATP. The metabolic rate (rate of energy metabolism) of an organism using aerobic respiration can be equated with its rate of oxygen and/or food consumption. A simple (isometric) scaling argument for the variation of metabolic rate with size (assuming a resting state and after a period of fasting) can be derived as follows. The metabolic rate B can be equated with the rate of oxygen consumption, which is proportional to the area of the lungs supplying oxygen to the mitochondria; hence, B ∝ L^2. On the other hand, body mass M ∝ L^3, and thus B ∝ L^2 = (L^3)^{2/3} ∝ M^{2/3}, a dependence known as Rubner's law. Another argument, at least for warm-blooded animals, put forward by Rubner, is that a warm-blooded animal maintains a constant body temperature, so its metabolism runs at a rate such that this temperature is maintained. Since the body loses heat at a rate proportional to its surface area, which scales as L^2, the metabolic rate ought to scale as L^2. However, we will see later that this can be improved upon with the experimentally determined B ∝ M^{3/4}. Thus, until stated otherwise, we assume that B ∝ L^2 ∝ M^{2/3}.

Example: How Long Can a Diving Mammal Stay under Water on One Breath? A diving mammal (e.g., a whale) stores oxygen in its blood before diving. When that oxygen is exhausted, it must surface for more air. The amount of oxygen stored can be assumed proportional to the lung volume or to the blood volume, i.e., ∝ L^3. The metabolic rate, the rate at which the mammal uses the stored oxygen, is, according to Rubner's law, ∝ L^2. Thus, the duration of a dive scales as L^3/L^2 = L, and the larger you are, the longer you can dive. In this simple model, we have ignored any specialisation that makes diving more efficient for the animal. For example, whales slow their heartbeat, so the blood flow to their muscles is reduced. These

factors enable whales to dive for longer. When we build our simple models, we keep them simple by ignoring such specialisations: we are interested in broad statements about how things typically vary with scale. Experimentally, however, B ∝ M^{2/3} is not observed, and Kleiber phenomenologically suggested another dependence for the metabolic rate, B = B0 M^{3/4}, known as Kleiber's law. This experimental dependence has been found by measuring the oxygen consumption of animals in a resting state after they have fasted for a sufficient period (see Fig. 3.8). A newer argument for the 3/4 power law was published by West, Brown, and Enquist in 1997 [1], known as the WBE model, under the following assumptions: mammalian energy distribution networks (circulatory system, lungs) are fractal-like in structure, and these systems have evolved to maximise their metabolic capacity while maintaining networks that occupy a fixed percentage of the volume of the body.
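To compare Rubner's exponent with Kleiber's over a range of body masses, one can simply tabulate both power laws; the masses and the normalisation at 1 kg below are arbitrary illustrative choices:

```python
# Compare Rubner's B ∝ M^(2/3) with Kleiber's B ∝ M^(3/4),
# normalised so both give B = 1 at M = 1 kg (arbitrary choice).
for mass_kg in [0.03, 1.0, 70.0, 4000.0]:  # mouse, 1 kg, human, elephant
    rubner = mass_kg ** (2.0 / 3.0)
    kleiber = mass_kg ** 0.75
    print(f"M = {mass_kg:7.2f} kg:  M^(2/3) = {rubner:8.2f},  M^(3/4) = {kleiber:8.2f}")
# The two laws differ little near 1 kg but diverge over the full range of Fig. 3.8.
```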

3.2.2 Example: Why Do Large Birds Find It Harder to Fly?

The flight of a bird can be modelled by a balance of the following forces (see Fig. 3.9): the bird needs lift to counteract its weight, and to maintain its flight speed it has to provide thrust that balances the drag force due to air friction. As we saw in the flea/human model (see Sect. 3.1.1), the drag force is ∝ L^2 v^2. Less obviously, the maximum lift during gliding and wing flapping can be assumed to be ∝ A_w v^2, where A_w = wing area ∝ L^2 and v is the flight velocity. This is a consequence of Bernoulli's law, which relates pressure to the square of the velocity (see Sect. 9.5): air particles moving around the wing profile start and end at the same time, so the particles passing over the top must move faster, and by Bernoulli this generates a lift ∝ A_w v^2, where v is the wind speed, i.e., the speed of the air relative to the wing. The metabolic rate ∝ L^2 is the rate at which energy is available. The maximum lift must just overcome gravity, so the minimum flying speed v is given by A_w v^2 ∝ Mg ∝ L^3; since A_w ∝ L^2, we get v ∝ L^{1/2}.


Fig. 3.8 The remarkable range of scales over which the 3/4 law holds

Fig. 3.9 Balance of forces enabling a bird to sustain its flight. Courtesy of Dr. S. Baigent

The required power (for flapping wings to get lift) can be estimated as the work done per unit time, hence equal to work done/time = force × distance/time:

flying power = drag × v ∝ L^2 v^3 ∝ L^2 (L^{1/2})^3 = L^{7/2}.

Metabolic power is, however, ∝ L^2, so that the required power exceeds the supplied power for larger L (L^{7/2} > L^2 for L large enough), and hence there is an upper limit on bird size.
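A minimal sketch of this crossover argument: with all prefactors set to 1 (purely illustrative, so only the existence of the crossover, not its location, is meaningful), the required power L^{7/2} overtakes the available power L^2 at some size:

```python
# Required power ∝ L^(7/2); available (metabolic) power ∝ L^2.
# With both prefactors set to 1, the crossover sits at L = 1 (illustrative only).
def required_power(L):  return L ** 3.5
def available_power(L): return L ** 2

for L in [0.25, 0.5, 1.0, 2.0]:
    ok = available_power(L) >= required_power(L)
    print(f"L = {L:4.2f}: required {required_power(L):7.3f}, "
          f"available {available_power(L):7.3f} -> {'can fly' if ok else 'too big'}")
```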

3.2.3 Ludwig von Bertalanffy's Growth Model

Ludwig von Bertalanffy was one of the founders of "General Systems Theory", a typical example of the systems sciences, in which one discusses and derives general laws applicable to all systems and not to some systems in particular. Using this approach, one can show how, under certain assumptions, a general law describing the mass of a growing organism can be derived. Here is a very simple model he developed in 1957 to study the growth of an organism. He assumed that an organism's available energy is channelled into

1. growth of the organism: building new cells, all taking the same energy to generate;
2. maintenance of the existing cells: keeping existing cells alive by supplying resources and removing waste products.

Following these assumptions, we have: incoming power (metabolic rate B) = number of cells in the body Nc(t) × metabolic rate of one cell (= Bc) + energy


Fig. 3.10 A universal dependence for the mass of the growing multicellular organism, see Eq. (3.8). Parameters are chosen so that m0 = 0.064 and α = β = 1.2

required to create a new cell (= Ec) × rate of increase in the number of cells Nc(t), or

B = Nc Bc + Ec dNc/dt,

where the first term describes maintenance and the second growth. Next, we can use that the body mass is m = Nc mc, where mc is the mass of one cell (assumed identical for all cells), and take B = B0 m^{2/3} (isometric scaling, i.e., ∝ L^2 according to Rubner's law). Thus,

B0 m^{2/3} = (Bc/mc) m + (Ec/mc) dm/dt,

which we rearrange to

dm/dt = α m^{2/3} − β m,  m(0) = m0

(m0 small, since the organism starts small!), where α = mc B0/Ec and β = Bc/Ec. Hence, we are to solve

dm/dt = α m^{2/3} − β m = m^{2/3} (α − β m^{1/3}),

and substitute u = m^{1/3}. Note that 3 du/dt = m^{−2/3} dm/dt, which gives

du/dt = (1/3)(α − β u),  u(0) = m0^{1/3}.

This has the general solution u(t) = α/β + A exp(−βt/3). To find A, we use the initial data:

m0^{1/3} = α/β + A.

Hence, we obtain

u(t) = (α/β)(1 − exp(−βt/3)) + m0^{1/3} exp(−βt/3),

and finally, in terms of m,

m(t) = [(α/β)(1 − exp(−βt/3)) + m0^{1/3} exp(−βt/3)]^3.  (3.8)

This universal dependence is plotted in Fig. 3.10.
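The closed-form solution (3.8) can be checked against a direct numerical integration of dm/dt = αm^{2/3} − βm. The sketch below uses the parameter values quoted in the caption of Fig. 3.10 (m0 = 0.064, α = β = 1.2) and a simple forward-Euler step:

```python
import math

alpha, beta, m0 = 1.2, 1.2, 0.064  # parameters from Fig. 3.10

def m_exact(t):
    """Closed-form solution, Eq. (3.8)."""
    e = math.exp(-beta * t / 3.0)
    return ((alpha / beta) * (1.0 - e) + m0 ** (1.0 / 3.0) * e) ** 3

# Forward-Euler integration of dm/dt = alpha*m^(2/3) - beta*m up to t = 10
m, dt = m0, 1e-3
for _ in range(int(10.0 / dt)):
    m += dt * (alpha * m ** (2.0 / 3.0) - beta * m)

print(f"numerical m(10) = {m:.5f}, exact m(10) = {m_exact(10.0):.5f}")
# Both approach the asymptotic mass (alpha/beta)^3 = 1 as t grows.
```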

3.3 Questions

Recalculate the growth model, deriving the dependence of the mass on time using Kleiber's law for the metabolic rate.

Reference

1. West GB, Brown JH, Enquist BJ. A general model for the origin of allometric scaling laws in biology. Science 1997; 276:122–126.

4 Modelling Resources
Shangbin Chen

If I have seen further than others, it is by standing on the shoulders of giants. —Isaac Newton

4.1 Open Courses

More and more universities have established courses on Quantitative Physiology. The exact titles vary: Quantitative Human Systems Physiology, Quantitative Physiology and Transport, Quantitative Engineering Physiology, Quantitative Human Physiology, and so on. The common aim of such courses is to promote a quantitative, systems approach to understanding physiology. They serve both undergraduates and graduates, mainly with a background in biomedical engineering. There are also several publicly available open courseware packages provided by different teaching groups (Table 4.1). We think these can be very useful resources for studying modelling, and an academic search will turn up even more open resources on Quantitative Physiology. Each open course offers rich teaching and learning resources. A very good example is the two-course sequence on Quantitative Physiology from the Massachusetts Institute of Technology (MIT) in the USA [2, 6]. The first part focuses on cellular biophysics and the second on the major human organ systems. On the MIT OpenCourseWare webpage, we can see a comprehensive list covering the syllabus, calendar, assignments, projects, tools, study materials, readings, exams, and related resources, and we can download the course materials. A screenshot for the part "Quantitative Physiology: Cells and Tissues" is shown in Fig. 4.1. Another example is from Zhejiang University in China, whose course on Quantitative Physiology is a national top-quality course in the Chinese language. This course addresses physical laws (especially thermodynamics), mathematics, and engineering techniques as the fundamental basis of physiology, and combines theory with practice. Coursera (https://www.coursera.org/) is an online course platform founded in 2012 at Stanford University [7]. There we may find related free courses, such as Introduction to Systems Biology, Introductory Human Physiology, and Computational Neuroscience. These free courses

address the question: Why do we need a systems approach to understand physiological systems? This is a very useful motivation for Quantitative Physiology.

4.2 Modelling Software

MATLAB There are various software packages that can be used for Quantitative Physiology. The most popular is probably MATLAB (MathWorks Inc.) [8]. So far, MATLAB has been the easiest and most productive software for both professional researchers and casual users. Integrated with a large number of high-level built-in functions and toolboxes, it is powerful for both computation and visualisation, and well suited to constructing, computing, simulating, and visualising the mathematical models of Quantitative Physiology. The above-mentioned online course on Quantitative Human Systems Physiology is demonstrated with MATLAB and its toolbox Simulink [9]. We can download both the free courseware and the MATLAB code for the models (Fig. 4.2). Although only the cardiovascular system, muscle dynamics, and the Hodgkin–Huxley model are simulated, these representative examples present systematic work on building, visualising, and validating models of major human systems. In fact, there is a MATLAB-based graphical simulator for the Hodgkin–Huxley equations (http://www.cs.cmu.edu/~dst/HHsim/). Recently, a more comprehensive MATLAB toolbox for neural modelling and simulation, DynaSim, has been developed [4]. There is also a series of MATLAB codes for the Luo–Rudy (LRd) model of a mammalian ventricular action potential (http://rudylab.wustl.edu/research/cell/code/AllCodes.html).

NEURON Another important piece of software is NEURON (https://www.neuron.yale.edu/neuron/) [3], an extensible nerve modelling and simulation programme (Fig. 4.3). It is widely


Table 4.1 Open courses on Quantitative Physiology

Name of university | Country | Name of course | Webpage
Massachusetts Institute of Technology | America | Quantitative Physiology: Cells and Tissues; Quantitative Physiology: Organ Transport Systems | https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-021j-quantitative-physiology-cells-and-tissues-fall-2004/#
University of Illinois, Urbana-Champaign | America | Quantitative Human Systems Physiology | https://cn.mathworks.com/academia/courseware/quantitative-human-systems-physiology.html
University of Washington | America | Quantitative Physiology and Transport | http://nsr.bioeng.washington.edu/Course/2015winBioen498e/
University of California, Irvine | America | Quantitative Physiology: Sensory Motor Systems | https://eee.uci.edu/08f/14030
Zhejiang University | China | Quantitative Physiology (in Chinese) | http://course.jingpinke.com/details?uuid=8a8339992152448b-0121-52448c17018d

Fig. 4.1 Screenshot of the webpage on Quantitative physiology from MIT

used in research and education around the world for building and using computational models of neurons and neural networks. In the NEURON environment, it is easy to construct complex models by connecting multiple one-dimensional sections into arbitrary neuron morphologies and by inserting multiple membrane properties (including channels, synapses, and ionic concentrations) into these sections. Models of this type are multi-compartment models.

The graphical user interface of NEURON offers neural modellers an intuitive environment and hides the details of the numerical methods used in the simulation. Many useful tutorials are available from the NEURON webpage. We can learn how to use the very simple NEURON language to implement a series of stimuli that induce a train of spikes based on the HH model; Figure 4.4 shows the code and simulation results. It is possible to write the code in a plain text file


Fig. 4.2 Screen shot of the webpage on MATLAB-based courseware of Quantitative Physiology

Fig. 4.3 Snapshot of the webpage on NEURON software

and save it as a hoc file for direct execution in NEURON. More detailed guides and tutorials can be found on the NEURON website (https://www.neuron.yale.edu/neuron/docs).

E-Cell E-Cell is a piece of software from the E-Cell Project, started in 1996 at Keio University in Japan (http://www.e-cell.org/) [5]. This project aims to develop general technologies and theoretical support to make precise whole-cell simulation at the molecular level possible. E-Cell has become a software platform (Fig. 4.5) for the modelling, simulation, and analysis of complex, heterogeneous, and multi-scale systems like the cell. The E-Cell integrated development environment provides some embedded projects; for example, the repressilator gene expression network is presented as an oscillation project [1]. The genetic regulatory network of the repressilator is an artificial clock and is thus recognised as a milestone of synthetic biology. In addition, E-Cell offers the opportunity to develop new cell-modelling projects. The latest version of E-Cell, released in 2016, can be downloaded from GitHub (https://github.com/ecell/ecell4/releases). In fact, we can find and download numerous codes and software packages on GitHub.


Fig. 4.4 Codes and results of the spike modelling

Fig. 4.5 Snapshot of E-Cell software

4.3 Model Repositories

On the website of the Physiome Project, we can find many classical and useful models at https://models.physiomeproject.org/welcome (Fig. 4.6). In total, there are over 600 models, covering everything from gene regulation to cardiovascular circulation. Most of the models come with MATLAB code. If the software OpenCOR (http://www.opencor.ws/) is installed on a personal computer, the CellML models can be launched from the webpage of the Physiome Repository. For computational neuroscience, ModelDB provides an accessible platform for storing and retrieving models (https://senselab.med.yale.edu/ModelDB/). The ModelDB

models are compatible with many simulators and simulation environments. All collected models can be downloaded easily, and some models can be run directly from the webpage by clicking the "Auto-launch" button. In addition, from the links page of the NSR Physiome Project (https://www.physiome.org/Links/), we can get access to the websites of many research groups, where we may find relevant open courses, model resources, and specific research projects. Here, we would like to recommend the "3M rule" for studying Quantitative Physiology: use open courses to study how to model, log into repository databases to find applicable models, and visit research group websites to learn from the modellers. We hope this "3M rule" will help guide readers to the critical resources (Fig. 4.7).


Fig. 4.6 Snapshot of the webpage of Physiome Repository

Fig. 4.7 The 3M rule is recommended for studying Quantitative Physiology


4.4 Questions

1. Make an academic search to find out how many universities have established a course on Quantitative Physiology.
2. The Lotka–Volterra model is the simplest model of predator–prey interactions [10]. It is based on linear per capita growth rates:

(1/x) dx/dt = b − p y,
(1/y) dy/dt = r x − d.

Here, the parameter b is the growth rate of the prey x in the absence of interaction with the predators y. Prey numbers decrease linearly with the increase of y. The parameter d is the death (or emigration) rate of y in the absence of interaction with x. The term rx denotes the net rate of growth (or immigration) of the predator population in response to the size of the prey population. Set possible numbers for the parameters and the initial x and y, and then


use MATLAB to solve the Lotka–Volterra model. Change the parameters and check the conditions for oscillation. (A Python sketch of this simulation is given after this list.)
3. Read and comment on the paper "A synthetic oscillatory network of transcriptional regulators" [1]. Try to find a computational model to implement the simulation.
4. Find a recent research paper on physiology modelling, and describe the modelling process and the mathematical model involved.
5. Use different software packages to construct the same computational model in physiology and show the results. You can select the model from a recently published article.
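As a starting point for Question 2, in Python rather than MATLAB and with arbitrarily chosen parameter values and initial conditions, the following sketch integrates the Lotka–Volterra equations with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

b, p, r, d = 1.0, 0.5, 0.2, 0.8   # illustrative parameter choices

def lotka_volterra(t, z):
    x, y = z                       # prey, predator
    return [x * (b - p * y), y * (r * x - d)]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [4.0, 2.0],
                t_eval=np.linspace(0.0, 50.0, 2000))
x, y = sol.y
print(f"prey range: {x.min():.2f}..{x.max():.2f}, "
      f"predator range: {y.min():.2f}..{y.max():.2f}")
# Plot x(t) and y(t) (e.g., with matplotlib) to see the phase-shifted oscillations.
```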

References
1. Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature 2000;403(6767):335–338.
2. Freeman D. 6.021J Quantitative Physiology: Cells and Tissues. Fall 2004. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
3. Hines ML, Carnevale NT. The NEURON simulation environment. Neural Comput. 1997;9(6):1179–1209.
4. Sherfey JS, Soplata AE, Ardid S, Roberts EA, Stanley DA, Pittman-Polletta BR, Kopell NJ. DynaSim: a MATLAB toolbox for neural modeling and simulation. Front Neuroinform. 2018;12:10.
5. Tomita M, Hashimoto K, Takahashi K, et al. E-CELL: software environment for whole-cell simulation. Bioinformatics 1999;15(1):72–84.
6. Venegas J, Mark R. HST.542J Quantitative Physiology: Organ Transport Systems. Spring 2004. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
7. https://www.coursera.org/.
8. https://www.mathworks.com/products/matlab.
9. https://cn.mathworks.com/academia/courseware/quantitative-human-systems-physiology.html.
10. http://www.scholarpedia.org/article/Predator-prey_model.

Part II: Basic Case Studies

In the second part, we will consider basic case studies on how to apply modelling to explain experimental observations in Quantitative Physiology. We will utilise a multi-scale approach: starting with gene expression inside the cell and intracellular calcium signalling, then moving to the cellular scale with metabolic networks and neuronal activity, and finally finishing at the macroscopic scale with blood flow and bone and body mechanics. In all these case studies, we will start with some observations, then discuss the assumptions, and, based on these assumptions, develop a mathematical model to explain the observations.

5 Modelling Gene Expression
Alexey Zaikin

We, and all other animals, are machines created by our genes. —Richard Dawkins

5.1 Modelling Transcriptional Regulation and Simple Networks

5.1.1 Basic Notions and Equations

Most cellular functions are performed by complex molecules: proteins. Proteins are produced from the genome (DNA) inherited from our parents, and by way of genes they organise gene-regulatory networks (GRNs). GRNs control the life and decisions of cells and are responsible, e.g., for circadian oscillations (the ability of cells to count time), for cellular differentiation in embryonic development, or for the reprogramming of cells into so-called induced pluripotent stem (iPS) cells. Most genes are able to control the function of other genes, activating or inhibiting them, and hence form a genetic network. In synthetic biology, such genetic networks can be of artificial origin, forming artificial genetic networks. Our aim is to discuss a general methodology: given a known topology of activating and repressing genetic interactions, how to model its time evolution with mathematical equations. The DNA, which we inherit from our parents, has the form of a double-stranded spiral, and during cell proliferation each strand is paired with a new one. Some parts of the DNA organise a gene. The central dogma of molecular biology, a kind of "Newton's law" for living systems, can be illustrated with Fig. 5.1. This process involves two steps: transcription, in which from a molecule built of four elements (nucleotides) an mRNA molecule, also containing four kinds of nucleotides, is formed; and translation, in which from a molecule of four elements (nucleotides) a protein molecule with an alphabet of 20 elements, called amino acids, is formed. Each amino acid is encoded by three nucleotides, together called a codon, and the exact code, called the genetic code, was cracked in the 1960s. The 1968 Nobel Prize in Physiology or Medicine was awarded to Marshall W. Nirenberg, Har Gobind Khorana, and Robert

W. Holley “for their interpretation of the genetic code and its function in protein synthesis”.

5.1.2 Equations for Transcriptional Regulation

Apart from basal expression, in which a gene is expressed at a constant rate irrespective of any regulation, transcriptional regulation may happen according to two possible scenarios, shown in Fig. 5.2. Here, we follow the approach developed in [1]. Scenario (a) describes the function of a transcriptional activator: the free, unbound gene does not express a protein and starts to express it only if the activator is bound. Scenario (b) corresponds to the case of a transcriptional repressor, which, if bound, represses the expression of a protein from a gene that expresses this protein in the free, unbound state. Transcriptional activators and repressors are together called transcription factors (TFs), and these TFs can be proteins transported to the gene from the extracellular space, from a different gene, or produced from the same gene in the case of transcriptional self-regulation. Most commonly, the role of the TF is played by a dimer, and we will start with a consideration of this case. Let us assume that P is the monomer concentration of the TF, P2 is the dimer concentration, Ou is the concentration of the unbound operator (i.e., gene molecule), and Ob that of the bound one. Then, the two scenarios shown in Fig. 5.2 can be modelled with two chemical reactions:

P + P ⇌ P2 (rates kd, k−d),  Ou + P2 ⇌ Ob (rates kb, k−b),  (5.1)

where kd and k−d are the rates of the dimerisation and dedimerisation reactions, and kb and k−b are the rates of the binding and unbinding reactions.


Fig. 5.1 A paradigm of molecular biology: the expression of a protein from a gene, occurring through an intermediate molecule of mRNA, is transcriptionally regulated by other proteins playing the role of an activator or a repressor

The next step is to rewrite these two chemical reactions in the form of mathematical equations. This can be done as follows:

kd P^2 = k−d P2,  kb Ou P2 = k−b Ob,  (5.2)

where we assumed chemical equilibrium and multiplied the concentrations on one side of the reaction by the reaction rate to find the reaction probability, which should equal the probability computed for the other side of the equation. Here, we used the fact that a concentration is, in fact, a probability to be located at a certain spatial point, and the joint probability of two molecules being located at the same point is given by the product of the concentrations. Our aim is to find Ou and Ob as functions of P, because the rate of gene expression is proportional to these concentrations, and this dependence describes how expression can be transcriptionally regulated. It is easy to note that the second equation can be rewritten as kb Ou (kd/k−d) P^2 = k−b Ob, but this equation has two unknown variables and, hence, cannot be solved. To include the additional condition Ou + Ob = N, we use the assumption that the total number of DNA molecules in the cell under consideration is fixed. Then, the two equations with the unknowns Ou and Ob can be easily solved to get

Ou = N/(1 + (P/K)^2),  Ob = N (P/K)^2/(1 + (P/K)^2),  (5.3)

where we have introduced the constant K = √(k−b k−d/(kb kd)).

Fig. 5.2 Two scenarios of gene expression: (a) transcriptional activator and (b) transcriptional repressor. Adapted from [1]

Fig. 5.3 The Hill-type functions, i.e., a dependence of gene expression product concentration on the concentration of TF functioning as (a) a transcriptional repressor and (b) a transcriptional activator

These two functions are called the Hill-type functions, and the number 2 the Hill coefficient, or coefficient of cooperativity. We are just one step from writing the rate of gene expression of the gene under consideration, which is proportional to Ou and Ob. If the gene promoter works with the coefficient αu when unbound, and with αb when bound, the total rate is

ν(P) = αu Ou + αb Ob = αu N/(1 + (P/K)^2) + αb N (P/K)^2/(1 + (P/K)^2).  (5.4)

Note that if the TF is a repressor (and, hence, αb = 0), ν(P) is a decreasing function, and if the TF is an activator (and, hence, αu = 0), ν(P) is an increasing function, as shown in Fig. 5.3.
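As a tiny numerical illustration of Eq. (5.4) (with arbitrary values N = K = 1 and the Hill coefficient n = 2), setting αb = 0 reproduces the decreasing repressor curve of Fig. 5.3 and αu = 0 the increasing activator curve:

```python
def expression_rate(P, alpha_u, alpha_b, N=1.0, K=1.0, n=2):
    """Hill-type rate, Eq. (5.4): nu(P) = alpha_u*Ou + alpha_b*Ob."""
    h = (P / K) ** n
    return alpha_u * N / (1.0 + h) + alpha_b * N * h / (1.0 + h)

for P in [0.0, 0.5, 1.0, 2.0, 4.0]:
    repressed = expression_rate(P, alpha_u=1.0, alpha_b=0.0)   # TF is a repressor
    activated = expression_rate(P, alpha_u=0.0, alpha_b=1.0)   # TF is an activator
    print(f"P = {P:3.1f}: repressor nu = {repressed:.3f}, activator nu = {activated:.3f}")
```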


The rate of gene expression is, in fact, the time derivative of the protein concentration; hence, taking into account the degradation rdeg(X) of the protein, we get

dX/dt = αu N/(1 + (P/K)^2) + αb N (P/K)^2/(1 + (P/K)^2) − rdeg(X).  (5.5)

This is an equation describing the time evolution of the protein X, produced in the course of gene expression transcriptionally regulated by the transcription factor P. Usually, since simultaneous activation and repression are impossible, either αu or αb is set to zero. If an intermediate step of mRNA production (including degradation) should be taken into account, one can introduce the mRNA concentration m to get a system of equations:

dm/dt = αu N/(1 + (P/K)^2) + αb N (P/K)^2/(1 + (P/K)^2) − rdeg(m),  (5.6)
dX/dt = rtrl m − rdeg(X),  (5.7)

where rtrl is the rate of translation and rdeg(X) stands for the degradation rate of protein X. Several comments should be made here about the derivation of a transcriptional equation. First, there are two main directions in modelling gene expression. One, using stochastic simulations such as the Gillespie algorithm, works directly with the chemical reactions and computes when the next reaction occurs and which reaction it will be. This is a good approach for the case of a small number of molecules, when fluctuations in the system are very high. If we deal with a sufficient number of molecules and can approximate the processes by averaging, we can apply the Hill-type ODEs described here. Second, the coefficient of cooperativity, equal to 2 here because the TF is represented by a dimer, ranges from 1 to 4 in the case of monomers, trimers, or tetramers, and can even take a non-integer value when we phenomenologically describe a cascade of chemical reactions behind the formation of the TF. Still, the main idea of the Hill approach remains the same.

5.1.3 Examples of Some Common Genetic Networks

5.1.3.1 Autorepressor
The autorepressor is a genetic network containing just one gene, which represses itself (Fig. 5.4). Despite its simplicity, this is a very common motif in genetic networks because, as we will see now, it provides stable production of the protein expressed from this gene. Following the methodology of the Hill-type equations, the dynamics can be represented by the following equation, where some constants have been eliminated by rescaling:

dx/dt = α/(1 + x^n) − x.

Fig. 5.4 A schematic representation of an autorepressor, a gene that represses itself

One can easily see that the equilibrium x0 satisfies x0^{n+1} + x0 − α = 0, x0 > 0; this equilibrium state is unique and stable, hence providing a constant and stable concentration of the protein.

5.1.3.2 Repressilator
Using the transcriptional equation, we can model any topology of a genetic network, provided we know the character of the link between each pair of genes. Let us illustrate this approach with several well-studied simple genetic networks. The most famous model of a genetic oscillator, or genetic clock, is the repressilator, synthetically designed by Elowitz et al. in 2000 [2]. Consider a system of three genes that express proteins A, B, and C (see Fig. 5.5).

Fig. 5.5 A schematic representation of a repressilator generating periodic genetic oscillations and, hence, being a model of a genetic clock

Assuming gene regulation according to the Hill equations, let us derive the differential equations for the concentrations of A, B, and C. In our derivation, we assume that A regulates B, B regulates C, and C regulates A in the form of the dimers A2, B2, and C2. All rates for the formation of A2, B2, and C2 are equal to 1, and they degrade into single molecules also with rate 1. All rates for binding to DNA molecules are 1, and for unbinding 1. The DNA molecule of gene A can be in two possible states: unbound, OAU, or bound with C: OAC. Additionally, we have the condition OAU + OAC = N. The expression of gene A occurs with the rate αAU OAU + αAC OAC but can occur only when the gene is not bound with its inhibitor C. The same holds for genes B and C. All proteins degrade with the rate γX, where X is the concentration of the protein. As the system is symmetric, we will get the same equations for all three genes. For gene A, we have OAU C2 = OAC (the reaction requires a joint event; hence, we multiply the concentrations to get probabilities). As transcriptional regulation is fulfilled by dimers, we get C^2 = C2, OAU C^2 = OAC, and OAU + OAC = N, where


we have used the fact that all reaction rates are assumed to be equal to 1. Solving this, we get

OAU = N/(1 + C^2),

and writing the rate of expression and adding degradation, we get

dA/dt = αAU N/(1 + C^2) − γA.

Similarly, for the other genes,

dB/dt = αBU N/(1 + A^2) − γB,
dC/dt = αCU N/(1 + B^2) − γC.

These equations describe periodic oscillations of the variables A, B, and C, i.e., the behaviour of a repressilator.
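These equations are straightforward to integrate numerically. In the sketch below, α, γ, N and the initial conditions are arbitrary illustrative choices, and the Hill exponent is scanned, because for this reduced, protein-only model the existence of sustained oscillations depends on the cooperativity and the parameters (the experimentally implemented repressilator of [2] also includes the mRNA step):

```python
import numpy as np
from scipy.integrate import solve_ivp

def repressilator(t, z, alpha, gamma, n):
    A, B, C = z
    return [alpha / (1.0 + C**n) - gamma * A,
            alpha / (1.0 + A**n) - gamma * B,
            alpha / (1.0 + B**n) - gamma * C]

for n in (2, 4):  # cooperativity: dimers (as derived above) vs. tetramers
    sol = solve_ivp(repressilator, (0.0, 200.0), [1.0, 1.5, 2.0],
                    args=(50.0, 1.0, n), t_eval=np.linspace(100.0, 200.0, 2000))
    A = sol.y[0]
    print(f"n = {n}: late-time A varies in [{A.min():.2f}, {A.max():.2f}]")
# A wide late-time range signals sustained oscillations; a collapsed one, a stable fixed point.
```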

5.2 Simultaneous Regulation by Inhibition and Activation

A slightly more complicated situation occurs in the case of simultaneous regulation by two genes. To model it, let us consider a system of three genes that express proteins A, B, and C (see Fig. 5.6). A activates B by means of the trimer A3, B activates C by means of the tetramer B4, and C inhibits B and A by means of the dimer C2. Assuming gene regulation according to the Hill equations, let us derive the differential equations for the concentrations of A, B, and C. In our derivation, let us assume that

• The DNA molecule of gene A can be in two possible states: unbound, OAU, or bound with C2: OAC. The expression of gene A occurs with the rate αAU OAU + αAC OAC but can occur only when gene A is not bound with the repressor C2.
• The DNA molecule of gene B can be in four possible states: unbound, OBU; bound with C2: OBC; bound with A3: OBA; or bound with A3 and C2: OBAC. The expression of gene B occurs with the rate αBU OBU + αBC OBC + αBA OBA + αBAC OBAC but can occur only when gene B is bound with the activator A3 and is not bound with the repressor C2.
• The DNA molecule of gene C can be unbound, OCU, or bound with B4: OCB. The expression of gene C occurs with the rate αCU OCU + αCB OCB but can occur only when gene C is bound with its activator B4.
• The total number of DNA molecules is fixed, i.e., OAU + OAC = N, OBU + OBC + OBA + OBAC = N, and OCU + OCB = N.
• All rates for the production of A3, B4, and C2 are equal to kd = 1, and they degrade with the rate k−d = 1. All rates for binding of DNA molecules with the transcription factors are kb = 1 and for unbinding k−b = 1.
• Proteins A, B, and C degrade with rate γ.

Fig. 5.6 A genetic network with a gene B simultaneously regulated by a repressor C and an activator A

Since all reaction rates are equal to 1, for the formation of the transcription factors we have

A + A + A ⇌ A3,  B + B + B + B ⇌ B4,  C + C ⇌ C2,

or A^3 = A3, B^4 = B4, C^2 = C2.

For gene A, we have OAU + C2 ⇌ OAC and OAU C2 = OAC (the reaction requires a joint event; hence, we multiply the concentrations to get the probabilities). As C^2 = C2, we have OAU C^2 = OAC and OAU + OAC = N. Solving this, we get

OAU = N/(1 + C^2).

Writing the rate of expression (= αAU OAU, as αAC = 0) and adding degradation, we get

dA/dt = αAU N/(1 + C^2) − γA.

The most interesting situation is with gene B:

OBU + A3 ⇌ OBA,  OBA + C2 ⇌ OBAC,  OBU + C2 ⇌ OBC,  OBC + A3 ⇌ OBAC,
OBU A3 = OBA,  OBA C2 = OBAC,  OBU C2 = OBC,  OBC A3 = OBAC.

So, dB/dt = αBA OBA − γB, and we need to find OBA. We can use OBU + OBC + OBA + OBAC = N. From OBA = OBU A^3 we have OBU = OBA/A^3, and using OBAC = C^2 OBA and OBC = C^2 OBU = C^2 OBA/A^3, we get

OBA = A^3 (N − C^2 OBA/A^3 − OBA − C^2 OBA),
OBA (1 + C^2 + A^3 + A^3 C^2) = A^3 N,

and, since 1 + C^2 + A^3 + A^3 C^2 = (1 + C^2)(1 + A^3),

OBA = A^3 N/((1 + C^2)(1 + A^3)),  dB/dt = αBA A^3 N/((1 + C^2)(1 + A^3)) − γB.


Note that the last equation could be obtained by a simple multiplication of the repressor and activator parts, without deriving the equations. This approach is called the Shea–Ackers formalism and is described in [3, 4]. For C, we get again dC/dt = αCB OCB − γC, so we need OCB. Using OCU + OCB = N and OCU + B4 ⇌ OCB, we get OCU B^4 = OCB, so (N − OCB) B^4 = OCB, and

OCB = B^4 N/(1 + B^4),  dC/dt = αCB B^4 N/(1 + B^4) − γC.

5.3 Autorepressor with Delay

The autorepressor with delay is a one-gene network that represses itself (Fig. 5.7). This genetic network produces very reliable oscillations [5]. Moreover, it was shown experimentally that one can couple many such oscillators to operate synchronously as a proper "genetic clock" [6]. Consider the dynamics of a genetic autorepressor with delay described by the following equation:

dx/dt = α/(1 + x^n(t − τ)) − x,

where x is the protein concentration, α > 0 is a constant, t is time, n is the (integer) Hill coefficient, and τ is the time delay.

Fig. 5.7 An autorepressor with a delay

Note that an ODE with delay has, in fact, infinite dimensionality, since as an initial condition we have to define the solution on a whole interval of duration τ. Let us find the necessary condition for genetic oscillations in terms of n, α, and x0, where x0 is the unique equilibrium state of the system. Starting with

dx/dt = α/(1 + x^n(t − τ)) − x,

let us linearise this system around the equilibrium x0, which satisfies x0^{n+1} + x0 − α = 0. Writing x = x0 + ξ, ξτ = ξ(t − τ), ξt = ξ(t), we get

dξ/dt = α/(1 + (x0 + ξ(t − τ))^n) − x0 − ξ(t)
      ≈ α/(1 + x0^n + n x0^{n−1} ξτ + ...) − x0 − ξt
      = α/((1 + x0^n)(1 + n x0^{n−1} ξτ/(1 + x0^n))) − x0 − ξt
      ≈ α/(1 + x0^n) − α n x0^{n−1} ξτ/(1 + x0^n)^2 − x0 − ξt.

Using x0^{n+1} + x0 − α = 0, we have

dξ/dt = −(n/α) x0^{n+1} ξτ − ξt = n(x0/α − 1) ξτ − ξt.

Assuming ξ ∝ exp(λt), we get

λ = n(x0/α − 1) exp(−τλ) − 1.

At the Hopf bifurcation (the condition of oscillations), λ = iω:

iω = n(x0/α − 1) exp(−iωτ) − 1.

Considering the real and imaginary parts:

Re(λ) = 0 ⟹ n(x0/α − 1) cos ωτ − 1 = 0 ⟹ cos ωτ = −1/(n(1 − x0/α)),
Im(λ): ω = −n(x0/α − 1) sin ωτ,

or

n(x0/α − 1) cos ωτ = 1,  n(x0/α − 1) sin ωτ = −ω.

Squaring and taking the sum,

n^2 (x0/α − 1)^2 = ω^2 + 1.

So we have

ω^2 = n^2 (1 − x0/α)^2 − 1,  cos ωτ = −1/(n(1 − x0/α)).

ω^2 should be positive. Taking into account that x0 < α (because x0^{n+1} = α − x0 > 0), we get the necessary condition for oscillations, n^2 (1 − x0/α)^2 − 1 > 0, which can be satisfied only if n ≥ 2.


Also, it is interesting to note that cos ωτ = −1/√(1 + ω^2); hence, for ω ≫ 1, cos ωτ ≈ 0, ωτ ≈ π/2, and the period T = 2π/ω ≈ 4τ, i.e., much larger than the delay τ itself.
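Delay equations like this are easy to explore numerically with a fixed-step Euler scheme and a history buffer. In the minimal sketch below, the values α = 10, n = 2, τ = 3 and the step size are arbitrary illustrative choices; for them x0 = 2 (since x0^3 + x0 = 10), n(1 − x0/α) = 1.6 > 1, and τ lies above the Hopf threshold, so sustained oscillations are expected:

```python
alpha, n, tau = 10.0, 2, 3.0     # illustrative parameters
dt = 1e-3
delay_steps = int(tau / dt)

# History buffer: x is held at its initial value on [-tau, 0].
history = [1.0] * (delay_steps + 1)
for _ in range(int(60.0 / dt)):
    x, x_delayed = history[-1], history[-1 - delay_steps]
    history.append(x + dt * (alpha / (1.0 + x_delayed**n) - x))

tail = history[-delay_steps * 5:]
print(f"late-time x in [{min(tail):.2f}, {max(tail):.2f}]")
# A wide late-time range indicates sustained oscillations around x0 = 2.
```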

5.4 Bistable Genetic Switch

Let us consider the model of a genetic switch which consists of two mutually inhibiting genes. This genetic network, called a toggle switch, was, together with the repressilator, one of the two pioneering artificial networks constructed in 2000 [7]. Suppose that a network contains two mutually inhibiting genes (see Fig. 5.8). Assume that the dynamics of this system is described by the equations

dX1/dt = K/(1 + X2^n) − X1,  dX2/dt = K/(1 + X1^n) − X2,

where X1 and X2 are the concentrations of the proteins, n is the Hill coefficient, and K is the constant of gene expression. Let us find the critical value of K at which the equilibrium state with X1 = X2 loses stability and is replaced by the two stable symmetric equilibria (X1, X2) and (X2, X1), so that the system functions as a bistable switch.

Because of the symmetry, the steady-state solutions are either x1 = x2 or a pair (x1, x2) and (x2, x1). If x1 = x2 = x0, then x0^{n+1} + x0 − K = 0. This equation has one positive root x0. Linearising around it, using x1,2 = x0 + ξ1,2, we get

dξ1/dt = K/(1 + (x0 + ξ2)^n) − ξ1 − x0,  dξ2/dt = K/(1 + (x0 + ξ1)^n) − ξ2 − x0.

Linearising,

dξ1/dt = −(n/K) x0^{n+1} ξ2 − ξ1 = f1,  dξ2/dt = −(n/K) x0^{n+1} ξ1 − ξ2 = f2.

To get the characteristic equation,

| ∂f1/∂ξ1 − λ   ∂f1/∂ξ2     |   | −1 − λ            −(n/K) x0^{n+1} |
| ∂f2/∂ξ1       ∂f2/∂ξ2 − λ | = | −(n/K) x0^{n+1}   −1 − λ          |
= (−1 − λ)^2 − n^2 (x0^{n+1})^2/K^2 = λ^2 + 2λ + 1 − n^2 (x0^{n+1})^2/K^2 = 0,

λ1,2 = (−2 ± √(4 − 4(1 − n^2 (x0^{n+1})^2/K^2)))/2 = −1 ± (n/K) x0^{n+1}.

λ crosses 0 when the equilibrium changes from a stable node to an unstable saddle. This happens when

−1 + (n/K) x0^{n+1} = 0,  i.e.,  x0^{n+1} = K/n,  x0 = (K/n)^{1/(n+1)}.

Substituting into x0^{n+1} + x0 − K = 0, we get

K/n + (K/n)^{1/(n+1)} − K = 0,  K(1 − 1/n) = (K/n)^{1/(n+1)},

so the answer is

K_crit = n/((n − 1) (n − 1)^{1/n}).

This system can function as a genetic switch only if K > K_crit. Note that, again, the coefficient of cooperativity n should be larger than or equal to 2 to make this possible; hence, transcriptional regulation via dimers seems more appropriate for a genetic switch. Indeed, regulation by dimers is the most common one in biology.
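The critical value is easy to verify numerically: starting from a slightly asymmetric initial condition, trajectories relax to the symmetric state for K < K_crit and to one of the two asymmetric stable equilibria for K > K_crit. A minimal sketch (n = 2, so K_crit = 2; the initial condition is chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 2
K_crit = n / ((n - 1) * (n - 1) ** (1.0 / n))   # = 2 for n = 2

def toggle(t, z, K):
    x1, x2 = z
    return [K / (1.0 + x2**n) - x1, K / (1.0 + x1**n) - x2]

for K in (0.5 * K_crit, 2.0 * K_crit):
    sol = solve_ivp(toggle, (0.0, 100.0), [0.9, 1.0], args=(K,))
    x1, x2 = sol.y[:, -1]
    print(f"K = {K:.1f}: final state X1 = {x1:.3f}, X2 = {x2:.3f}")
# Below K_crit the final state is symmetric (X1 = X2); above it, asymmetric.
```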

5.5 Questions

1. Consider a system of three genes that express proteins A, B, and C. A inhibits B, B inhibits C, and C inhibits A by monomers. Assuming gene regulation according to the Hill equations, derive the differential equations for the concentrations of A, B, and C.


Fig. 5.8 A bistable genetic switch used to control, e.g., cellular differentiation

2. How will the equations for a repressilator change if we have six genes instead of three as shown in the figure below?


3. Consider a system of three genes that express proteins A, B, and C (see the following figure). A activates B by means of the trimer A3. B activates C by means of the tetramer B4, and C inhibits A, B, and itself by means of the dimer C2. Assuming gene regulation according to the Hill equations, derive the differential equations for the concentrations of A, B, and C. In your derivation, assume that
a. The DNA molecule of gene A can be in two possible states: unbound, OAU, or bound with C2: OAC. The expression of gene A occurs with the rate αAU OAU + αAC OAC but can occur only when gene A is not bound with the repressor C2.
b. The DNA molecule of gene B can be in four possible states: unbound, OBU; bound with C2: OBC; bound with A3: OBA; or bound with A3 and C2: OBAC. The expression of gene B occurs with the rate αBU OBU + αBC OBC + αBA OBA + αBAC OBAC but can occur only when gene B is bound with the activator A3 and is not bound with the repressor C2.
c. The DNA molecule of gene C can be in four possible states: unbound, OCU; bound with C2: OCC; bound with B4: OCB; or bound with C2 and B4: OCCB. The expression of gene C occurs with the rate αCU OCU + αCB OCB + αCC OCC + αCCB OCCB but can occur only when gene C is bound with its activator B4.
d. The total number of DNA molecules is fixed, i.e., OAU + OAC = N, OBU + OBC + OBA + OBAC = N, and OCU + OCB + OCC + OCCB = N.
e. All rates for the production of A3, B4, and C2 are equal to kd = 1, and they degrade with the rate k−d = 1. All rates for binding of DNA molecules with the transcription factors are kb = 1 and for unbinding k−b = 1.
f. Proteins A, B, and C degrade with rate γ.
4. Consider the dynamics of a genetic autorepressor with delay described by the following equation:

dx/dt = α/(1 + x^n(t − τ)) − x,

where x is the protein concentration, α > 0 is a constant, t is time, n is the (integer) Hill coefficient, and τ is the time delay. Find the necessary condition for genetic oscillations in terms of n, α, and x0, where x0 is the unique equilibrium state of the system.
5. Consider the model of a genetic switch which consists of two mutually inhibiting genes. Assume that the dynamics of this system is described by the equations

dX1/dt = K/(1 + X2^n) − X1,  dX2/dt = K/(1 + X1^n) − X2,

where X1 and X2 are the concentrations of the proteins, n is the Hill coefficient, and K is the constant of gene expression. Find the critical value of K at which the equilibrium state with X1 = X2 loses stability and is replaced by the two stable symmetric equilibria (X1, X2) and (X2, X1), so that the system functions as a bistable switch.

References
1. O'Brien EL, Van Itallie E, Bennett MR. Modelling synthetic gene oscillators. Math Biosci. 2012;236:1–15.
2. Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403:335–338.
3. Ackers GK, Johnson AD, Shea MA. Quantitative model for gene regulation by λ phage repressor. Proc Natl Acad Sci. 1982;79:1129–1133.
4. Tamsir A, Tabor JJ, Voigt CA. Robust multicellular computing using genetically encoded NOR gates and chemical 'wires'. Nature. 2011;469:212–215.
5. Stricker J, Cookson S, Bennett MR, Mather WH, Tsimring LS, Hasty J. A fast, robust and tunable synthetic gene oscillator. Nature. 2008;456:516–519.
6. Danino T, Mondragón-Palomino O, Tsimring L, Hasty J. A synchronized quorum of genetic clocks. Nature. 2010;463:326–330.
7. Gardner TS, Cantor CR, Collins JJ. Construction of a genetic toggle switch in Escherichia coli. Nature. 2000;403:339–342.

6 Metabolic Network
Shangbin Chen and Ziwei Xie

Genomics and proteomics tell you what might happen, but metabolomics tells you what actually did happen. —Bill Lasley

6.1 Metabolism and Network

Metabolism is the sum of the biochemical reactions in a living organism [1]. Foodstuffs consisting of nutrients like carbohydrates, fats, and proteins are broken down into glucose, fatty acids, and amino acids in the digestive system. Metabolism provides cells with chemical energy in the form of adenosine triphosphate (ATP) and with building blocks for the synthesis of lipids, proteins, and nucleic acids. ATP is the "energy currency" of the cell. Metabolism is the essence of life.

Metabolic pathways can be classified into two major categories: anabolism and catabolism. Anabolism is the set of pathways that synthesise simple molecules or polymerise them into macromolecules by consuming energy. Pathways of catabolism degrade molecules to release energy [2]. In fact, waste disposal is also important: it helps to eliminate the toxic waste produced by anabolism and catabolism. Metabolism can also be divided into three types based on the functions it supports: central carbon metabolism, biosynthesis, and secondary and endogenous metabolism. Central carbon metabolism ensures the provision of energy and precursors for biosynthesis, and the balance of redox powers, e.g., carried by nicotinamide adenine dinucleotide (NADH) and nicotinamide adenine dinucleotide phosphate (NADPH) [3]. Biosynthesis converts the precursors into building blocks like amino acids, nucleotides, fatty acids, etc., required for cell growth. The secondary and endogenous metabolism is thought to be specialised, comprising species-specific pathways that do not directly aid growth.

Metabolism is not only material metabolism but also energy metabolism. Energy derived from the chemical energy stored in food is used to convert adenosine diphosphate (ADP) to ATP as a common energy currency. ATP is consumed by ubiquitous physiological processes [4]: (1) active transport of ions and molecules across cell membranes; (2) synthesis of hormones, cell membranes, and many other essential molecules of the body; (3) conduction of nerve impulses; (4) contraction of muscles and performance of mechanical work; (5) growth and division of cells; (6) many other physiological events that maintain and propagate life.

Disrupted metabolic reactions can lead to metabolic diseases [5], e.g., obesity, diabetes, cancer, and liver disorders, so understanding metabolism is of great value in medical research. Quantitative analysis of metabolism is crucial for evaluating an organism's physiological or pathological state. Metabolic engineering and related synthetic biology [6] have been widely applied in the areas of materials, energy, food, and health care. In particular, genome-scale metabolic models [7] bring a promising scaffold for omics data integration and analysis, which is essential for developing systems medicine and personal health care.

Metabolism is a network of highly coordinated chemical reactions organised into metabolic pathways. In the network, one chemical is transformed through a series of steps into another chemical by a sequence of enzymes. Enzymes are proteins acting as catalysts that speed up the rate of chemical reactions significantly (typically 10^6–10^12 times). Even for a simple sugar such as glucose, more than 10 kinds of catalytic enzymes may be needed during the metabolic reactions. Enzymes are crucial to metabolism because they lower the activation energy of reactions and allow organisms to drive desirable reactions. In addition, enzymes may regulate metabolic pathways in response to changes or signals at the cellular level.

Carbohydrates are used in human metabolism as the primary source of energy (about 50%–70%). Glucose has a central role in carbohydrate metabolism, which contains essential biochemical pathways, e.g., glycolysis, the pentose phosphate pathway, and the tricarboxylic acid (TCA) cycle, and consists of anaerobic and aerobic processes. The metabolic pathway of glycolysis (anaerobic) converts glucose to pyruvate via a series of intermediate metabolites (Fig. 6.1).



Fig. 6.1 The metabolic pathway of glycolysis. A 10-step chemical reaction sequence is involved, and a net 2 ATP are produced per glucose molecule

There are ten steps of chemical reactions in the metabolic pathway of glycolysis. Steps 1 and 3 each consume 1 ATP, and steps 7 and 10 each produce 1 ATP. Since steps 6–10 occur twice per glucose molecule, this leads to a net production of 2 ATP. The net glycolysis reaction per molecule of glucose is the following:

Glucose + 2 ADP + 2 PO4^{3−} → 2 Pyruvate + 2 ATP + 2 H2O + 2 H+.

It is necessary to explain the fate of pyruvate. If there is no oxygen supply, pyruvate is directly transformed into a metabolic end product; this end product of anaerobic metabolism differs between animal and plant cells. In animal and human cells, pyruvate is converted into lactic acid. During vigorous exercise, the oxygen provided by the circulatory system does not meet the needs, and the body performs part of its metabolism anaerobically to meet the energy demand on a temporary basis. Sudden, intense exercise is therefore often followed by muscle soreness, which is the result of lactic acid accumulation. If pyruvate continues to be used as an intermediate metabolite under aerobic conditions, it enters the next metabolic pathway: the TCA cycle. The TCA cycle was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson, for which the former received the Nobel Prize in Physiology or Medicine in 1953; thus, the TCA cycle is sometimes called the Krebs cycle. The TCA cycle is widely found in organisms and takes part in the oxidation and decomposition not only of carbohydrates but also of fat and protein.

The TCA cycle is the common metabolic pathway for all three major organic compounds, beginning with acetyl-CoA (Fig. 6.2). Pyruvic acid can be converted to acetyl coenzyme A (acetyl-CoA, AcCoA). The TCA cycle is a sequence of chemical reactions in which the acetyl portion of acetyl-CoA is degraded to carbon dioxide and hydrogen atoms (aerobic metabolism) in the matrix of mitochondria. The overall reaction is

Pyruvate + 3 H2O + GDP + Pi + FAD + 4 NAD+ → 3 CO2 + GTP + 4 NADH + FADH2 + 2 H+.

Oxygen is not explicitly shown in the above reaction, but it is required for the continued operation of the TCA cycle. The reduced NADH and FADH2 are then oxidised by a special system called the electron transport chain (ETC), which enables the synthesis of ATP. If 1 mole of glucose is completely oxidised, it releases a tremendous amount of energy to form ATP. The process of glycolysis contributes 2 moles of ATP and 2 moles of NADH; each mole of glycolytic NADH yields 1.5 moles of ATP (3 moles if no energy is spent on transfer). The TCA cycle directly contributes 2 moles of ATP and, in addition, 8 moles of NADH and 2 moles of FADH2. NADH produces 2.5 ATP per molecule (3 ATP if no transfer energy), and FADH2 produces 1.5 ATP per molecule (2 ATP if no transfer energy). Taken together, each mole of glucose may potentially produce 30 or 38 moles of ATP after complete oxidation [4].
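The ATP bookkeeping above is a simple sum, which the following sketch makes explicit, using the per-carrier yields quoted in the text (2.5 ATP per NADH and 1.5 per FADH2, or 3 and 2 if transfer costs are ignored; shuttled glycolytic NADH is counted at 1.5 in the 30-ATP case):

```python
def atp_yield(atp_per_nadh, atp_per_fadh2, glycolytic_nadh_yield):
    """Total ATP per glucose from the contributions listed in the text."""
    substrate_level = 2 + 2                    # glycolysis ATP + TCA-cycle GTP
    glycolysis_nadh = 2 * glycolytic_nadh_yield
    tca_nadh = 8 * atp_per_nadh                # 8 NADH from the TCA cycle
    fadh2 = 2 * atp_per_fadh2
    return substrate_level + glycolysis_nadh + tca_nadh + fadh2

print("with transfer cost:   ", atp_yield(2.5, 1.5, 1.5))   # -> 30
print("without transfer cost:", atp_yield(3.0, 2.0, 3.0))   # -> 38
```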


Fig. 6.2 The metabolic pathway of the TCA cycle in mitochondria

Complexity of metabolism is exhibited in its organisation and scale. Take central carbon metabolism, for example: it is closely integrated with cellular processes such as the flow of energy and the synthesis of biological macromolecules from elementary components like amino acids. To support this functional diversity, central carbon metabolism has to be highly connected with other biological processes and therefore forms a densely connected metabolic network. In addition to the high degree of connectivity, the scale of a comprehensive metabolic system can be enormous: more than 60,000 unique reactions have been recorded in KEGG (https://www.genome.jp/kegg/), BRENDA (https://www.brenda-enzymes.org/) [8], and MetaCyc (https://metacyc.org/).

6.2 Constructing Metabolic Network

As defined in Wikipedia, "a metabolic network is the complete set of metabolic and physical processes that determine the physiological and biochemical properties of a cell. As

such, these networks comprise the chemical reactions of metabolism, the metabolic pathways, as well as the regulatory interactions that guide these reactions". Like other biological networks, a metabolic network is an intuitive graphical representation of chemical reactions. The vertices represent metabolites and the edges represent chemical transformations. Vertices can carry weights to quantify concentrations, and directed edges carry weights to represent fluxes. In addition, edges can carry constraints concerning the biochemical or thermodynamic properties of the reactions, as well as kinetic parameters. As a result, a metabolic network can be assembled from many interconnected metabolites in order to elucidate the structure of a dynamic metabolic system. Metabolic network modelling allows for an in-depth insight into the molecular mechanisms of a particular organism. Constructing a metabolic network requires collecting all the metabolic pathways (including reactions and the corresponding enzymes) and building a proper mathematical model.


Quantitative analysis of the network can offer critical information on metabolism, such as energy yield, growth rate, drug targets, gene essentiality, and the robustness of the network. Many formalisms of metabolic network modelling have been studied. Generally, the methods can be categorised as kinetic or constraint-based. The kinetic approach models the temporal behaviour of metabolic networks via a system of nonlinear differential equations representing the kinetic expressions, such as Michaelis–Menten and Hill equations. Using differential equations to describe the dynamics of metabolic reactions is only feasible when the number of metabolites is very limited: to construct and solve the differential equations, we may need to know metabolite concentrations, enzyme concentrations, and enzyme activity rates. It is practically impossible to model large-scale metabolic networks in this way when many metabolites and critical enzymes are involved. The kinetic method is hindered by the inadequacy of precise mechanistic details, e.g., the functional forms of the rate equations with their associated parameters, and of experimental data to support mechanistic modelling. Constraint-based modelling relies on physico-chemical constraints such as mass balance, energy balance, and flux limitations to describe the potential behaviour of an organism at a steady state. Such a steady-state description of metabolic behaviour has been introduced for modelling metabolism [9]. With the assumption of steady state, there are no changes in metabolite concentrations: the metabolite production and consumption rates are equal. Flux balance analysis (FBA) is a constraint-based modelling approach which assumes that metabolic networks reach a steady state constrained by the stoichiometry [10–12]. Stoichiometry is the calculation of reactants and products in chemical reactions. It is founded on the law of conservation of mass, where the total mass of the reactants equals the total mass of the products, leading to the insight that the relations among quantities of reactants and products

typically form a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of the products can be empirically determined, then the amount of the other reactants can also be calculated.

6.3 Flux Balance Analysis

Mathematical Representation
In a simplified cell model, all the metabolites and metabolic reactions are included, together with three trans-membrane exchange fluxes. Supposing the model system is at steady state, we can derive a series of mass balance equations. All the metabolic reactions can be represented as a matrix, as in Fig. 6.3b: every row represents one unique metabolite and every column represents one reaction, and each entry is the stoichiometric coefficient of the metabolite in the reaction represented by its column. A negative coefficient means that the metabolite is consumed, or that the corresponding flux leaves the cell system; a positive one indicates that the metabolite is synthesised, or that the flux comes into the model cell. The stoichiometric coefficient matrix is thus a compact mathematical representation of the metabolic reactions. FBA assumes that the network is at its steady state, which means that fluxes entering a particular node balance the fluxes leaving it. Therefore, we have the mass constraint expressed as follows:

$$S \cdot V = 0,$$

where S is the stoichiometric matrix and V is the vector of fluxes.

Additional Constraints
Generally, metabolic networks have more edges than nodes, which means that the stoichiometric matrix has more columns than rows.

Fig. 6.3 Principle of flux balance analysis (the schematic drawing is adapted from [11]). (a) A model system consisting of three metabolites (A, B, and C) with four reactions (internal fluxes, v_i) and three exchange fluxes (b_i) crossing the system boundary. (b) For individual metabolites, the mass balance equations can be written by considering all reactions and transport mechanisms for each species:

dA/dt = −v1 + v2 − v3 + b1,  dB/dt = v1 + v4 − b2,  dC/dt = v3 − v4 − b3.

These equations can also be transformed into matrix form; at steady state this reduces to S · V = 0, where S is the stoichiometric matrix with rows A, B, C and columns v1, v2, v3, v4, b1, b2, b3:

S = ( −1  1 −1  0  1  0  0
       1  0  0  1  0 −1  0
       0  0  1 −1  0  0 −1 )


As a result, the stoichiometric constraint leads to an under-determined system; however, a bounded solution space of all feasible fluxes can be identified. This solution space can be further restricted by specifying maximum and minimum fluxes through any particular reaction and by specifying other physico-chemical constraints. The mass balance of all the metabolites in the metabolic network is defined in terms of the flux through each reaction and the stoichiometry of that reaction. In the second generation of FBA, other constraints can be added, such as thermodynamic and regulatory constraints [12].

Objective Functions
FBA can be applied to predict the optimal growth rate. In that case, the objective function is a biomass objective function, which takes into account the contributions that multiple biomass components and biomass precursors make to the cell's growth; the formulation of a biomass objective function is introduced in [13]. We can simply formulate the objective function as a linear function and use linear programming to get the optimal solution. Metabolic networks usually contain numerous reactions, so solving a linear programming problem with many free variables efficiently is not easy. It can happen that the optimal solution lies on an edge of the solution space instead of at a single point, when a limiting constraint is parallel to the objective function; in such cases mixed integer linear programming can be applied. To understand the optimisation techniques used in FBA, one could look into the book Optimization Methods in Metabolic Networks [14]. Utilising a biomass function in FBA relies on the assumption that microbes maximise their growth rate, as an approximation of overall fitness along evolution. There are many cases in which this premise does not hold. Microbes may not be exposed to long-term selective pressure, e.g., genetically engineered bacteria [15], and may utilise different growth strategies, reducing growth rate and biomass yield to improve efficiency, e.g., investing more in enzyme synthesis than in cellular growth. In addition, wild-type bacteria should be able to accommodate environmental fluctuations in order to maintain numerical dominance through long-term evolution, rather than optimising growth rates under a specific constant nutrient input. Thus, some other objective functions have been proposed. Minimisation of metabolic adjustment (MOMA) [3] is an extension of FBA applied to predict the behaviour of mutants with perturbed metabolism, whose growth performance has not been honed by natural selection and is generally suboptimal; its objective function is based on the assumption that a mutant's initial flux distribution approximates the wild-type's optimal solution. Dynamic FBA (DFBA) [15] is a more flexible framework for modelling the time-dependent properties of metabolic networks and for incorporating experimental data. It relaxes the strict steady-state assumption and is able to predict the concentrations of metabolites.


It replaces the terminal-type objective function with an instantaneous function to predict the biomass, growth rate, and metabolite concentrations over each time step, and it can integrate an end-point function to bring more power to the modelling.
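To make the optimisation step concrete, here is a minimal FBA sketch in MATLAB, posed as a linear programme on the toy network of Fig. 6.3. The choice of v4 as the flux to maximise and the bounds of ±10 are illustrative assumptions, not values from the text, and linprog requires the Optimization Toolbox:

```matlab
% Minimal FBA sketch for the toy network of Fig. 6.3: maximise one flux
% subject to the steady-state mass balance S*v = 0 and flux bounds.
S = [-1  1 -1  0  1  0  0;        % rows: metabolites A, B, C
      1  0  0  1  0 -1  0;        % columns: v1..v4, b1..b3
      0  0  1 -1  0  0 -1];
f  = [0 0 0 -1 0 0 0]';           % linprog minimises, so maximise v4 via -v4
lb = [zeros(4,1); -10*ones(3,1)]; % internal fluxes irreversible (assumed)
ub = 10*ones(7,1);                % illustrative upper bounds
v  = linprog(f, [], [], S, zeros(3,1), lb, ub)  % optimal flux distribution
```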

6.4 Myocardial Metabolic Network

As a worked example, a simplified model of the myocardial metabolic network is introduced [16]. Here, 8 pathways are considered in myocardial metabolism, including glycolysis, fatty acid oxidation, glycogen oxidation, phosphocreatine synthesis and breakdown, and the TCA cycle, and 7 metabolites are involved in the chemical reactions. Glucose, fatty acid, lactate, and oxygen are transported from the blood flow into myocardial cells as substrates of energetic metabolism; glycogen and phosphocreatine act as endogenous supplements. Although the real reactions in myocardia are very complex, we only sketch the major pathways associated with ATP production and consumption. According to previous work [15, 17], three rules are applied: (1) priority is placed on the pathways that contribute most to ATP production; (2) anaerobic metabolism, as a rescue mechanism, is also considered, even though it produces less ATP in normal myocardia than does aerobic metabolism; (3) metabolic intermediates, such as pyruvate and acetyl-CoA, are overlooked in the simplified metabolic network. Taken together, we build the current network with eight crucial pathways involving seven important metabolites (Fig. 6.4). Using this simple model, we can check the priority of glucose versus fatty acid metabolism under insufficient oxygen conditions, and the model can be developed further to simulate myocardia under normal and ischemic conditions. The computational results show that the myocardial metabolic network does not follow the optimal objective of ATP production maximisation under ischemic conditions, but instead reaches a suboptimal level of energy production with minimal adjustment of the metabolite concentration distribution. This work extends the MOMA hypothesis and implies evolutionary optimisation of metabolism [3]. These results may help us to understand the underlying mechanisms involved in the dynamic regulation of optimality of myocardial metabolic networks under ischemic conditions.
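As a small illustration grounded in the yields listed in Fig. 6.4b, the following MATLAB lines compare the oxygen efficiency (ATP produced per mole of O2) of the aerobic pathways, which is consistent with checking the priority of glucose versus fatty acid under insufficient oxygen:

```matlab
% Oxygen efficiency of the aerobic pathways of Fig. 6.4b:
% ATP yield per mole of O2 consumed (values quoted from the figure).
atp = [26.2 81.79 12.1 26.9];   % u1 (Gluc), u3 (FA), u4 (Lac), u6 (Gly)
o2  = [3.68 13.82 1.84 3.68];   % O2 consumed by each pathway
efficiency = atp./o2            % ~[7.1 5.9 6.6 7.3] ATP per O2
```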

6.5 Questions

1. How many ATP molecules are produced during glycolysis per mole of glucose? Does this ATP production require oxygen?
2. Suppose we have the reactions A → B (rate constant k1) and B ⇌ 2C + D (forward rate constant k2, backward rate constant k3). List all the reactions and use a stoichiometric matrix to describe them.


Fig. 6.4 A simplified myocardial metabolic network (adapted from [16]). (a) The myocardial model contains eight crucial pathways involving seven important metabolites (Gluc: glucose; FA: fatty acid; Gly: glycogen; Lac: lactate; PC: phosphocreatine; Pyr: pyruvate; AcCoA: acetyl-CoA). (b) The eight metabolic pathways with their ATP yield or consumption:

u1: Gluc + 3.68 O2 → 26.2 ATP
u2: Gluc → 2 Lac + 2 ATP
u3: FA + 13.82 O2 → 81.79 ATP
u4: Lac + 1.84 O2 → 12.1 ATP
u5: PC → 1 ATP
u6: Gly + 3.68 O2 → 26.9 ATP
u7: Gly → 2 Lac + 2 ATP
u8: Gluc + ATP → Gly

(c) The corresponding stoichiometric matrix of the myocardial metabolic network (rows: Gluc, FA, Lac, PC, Gly, O2, ATP; columns: u1–u8; each column is normalised so that the ATP coefficient is ±1):

Gluc: −0.0382  −0.5000   0        0        0        0        0       −1
FA:    0        0       −0.0122   0        0        0        0        0
Lac:   0        1        0       −0.0826   0        0        1        0
PC:    0        0        0        0       −1        0        0        0
Gly:   0        0        0        0        0       −0.0372  −0.5000   1
O2:   −0.1405   0       −0.1690  −0.1521   0       −0.1368   0        0
ATP:   1        1        1        1        1        1        1       −1

3. FBA is one kind of constraint-based reconstruction and analysis (COBRA) method [18]. There is a COBRA Toolbox which is freely available for FBA. Please download the COBRA Toolbox and work through an application example.
4. Linear programming is used in flux balance analysis. Try to master this technique by solving the following practice problem: suppose variables A and B satisfy the constraints A < 40, B < 50, 2A + B < 100; find the maximum of the objective function Z = 10A + 20B.
5. There is a myocardial metabolic network example in the paper [16]. Download the MATLAB codes from the supplementary information, then execute and explain the simulation.

References
1. Silverthorn DU, et al. Human physiology: an integrated approach. San Francisco: Pearson/Benjamin Cummings; 2009.
2. DeBerardinis RJ, Thompson CB. Cellular metabolism and disease: what do metabolic outliers teach us? Cell 2012; 148(6):1132–1144.
3. Segre D, Vitkup D, Church GM. Analysis of optimality in natural and perturbed metabolic networks. Proc Natl Acad Sci. 2002; 99(23):15112–15117.
4. Hall JE. Guyton and Hall textbook of medical physiology. Philadelphia: Saunders Elsevier; 2011.
5. Nielsen J. Systems biology of metabolism: a driver for developing personalized and precision medicine. Cell Metab. 2017; 25(3):572–579.
6. https://en.wikipedia.org/wiki/Synthetic_biology.
7. Oberhardt MA, Palsson BØ, Papin JA. Applications of genome-scale metabolic reconstructions. Mol Syst Biol. 2009; 5(1):320.
8. Placzek S, Schomburg I, Chang A, et al. BRENDA in 2017: new perspectives and new tools in BRENDA. Nucleic Acids Res. 2016; 45:D380–D388.
9. Stelling J, Klamt S, Bettenbrock K, et al. Metabolic network structure determines key aspects of functionality and regulation. Nature 2002; 420(6912):190.
10. Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010; 28(3):245.
11. Kauffman KJ, Prakash P, Edwards JS. Advances in flux balance analysis. Curr Opin Biotechnol. 2003; 14(5):491–496.
12. Raman K, Chandra N. Flux balance analysis of biological systems: applications and challenges. Brief Bioinform. 2009; 10(4):435–449.
13. Feist AM, Palsson BØ. The biomass objective function. Curr Opin Microbiol. 2010; 13(3):344–349.
14. Maranas CD, Zomorrodi AR. Optimization methods in metabolic networks. Hoboken: Wiley; 2016.
15. Mahadevan R, Edwards JS, Doyle FJ. Dynamic flux balance analysis of diauxic growth in Escherichia coli. Biophys J. 2002; 83(3):1331–1340.
16. Luo R, Liao S, Tao G, et al. Dynamic analysis of optimality in myocardial energy metabolism under normal and ischemic conditions. Mol Syst Biol. 2006; 2(1):208.
17. Cairns CB, Walther J, Harken AH, Banerjee A. Mitochondrial oxidative phosphorylation thermodynamic efficiencies reflect physiological organ roles. Am J Phys. 1998; 43:R1376–R1384.
18. http://cobramethods.wikidot.com/start.

7 Calcium Signalling

Shangbin Chen

We start off confused and end up confused on a higher level. —Alan Chalmers

7.1 Functions of Calcium

Calcium is one of the major essential elements of the human body [1]. It contributes about 1.5% of one's body mass and ranks fifth (oxygen 65%, carbon 18%, hydrogen 10%, nitrogen 3%, calcium 1.5%, phosphorus 1%, potassium 0.35%, sulphur 0.25%, sodium 0.15%, chlorine 0.15%, magnesium 0.05%). The details are shown in Fig. 7.1. In fact, 99% of the calcium in the body is found in the bones. For an adult of 75 kg, the calcium in the bones amounts to about 1.13 kg. It exists mainly as a calcium salt, i.e. calcium phosphate (hydroxyapatite, Ca10(PO4)6(OH)2), which makes the bones rigid and strong. The bones can be considered as a calcium source and sink, important for maintaining the level of non-bone calcium. The free calcium ions are crucial for many physiological functions. As is well known, the calcium ion (Ca2+) is involved in nearly every aspect of life [2]. “Almost everything that we do is

controlled by Ca2+—how we move, how our hearts beat and how our brains process information and store memories.” This is the first sentence of the Nature review paper entitled “Calcium—a life and death signal” [3]. Ca2+ impacts synaptic transmission, muscular contraction, cell division (mitosis), endocytosis, exocytosis, cellular motility, fertilisation, even necrosis and apoptosis. It is essential for the clotting of blood and the formation of bones and teeth. Ca2+ is an important signal molecule. To decipher a signalling pathway, we need to resort to two terms: the first messenger and the second messenger. The first messengers are the extracellular substances (such as hormones, neurotransmitters, cytokines, lymph factors, growth factors, and chemical inducing agents) that bind to cell-surface receptors and elicit intracellular responses. The second messengers are molecules or ions inside cells that transmit signals received at receptors on the cell membrane to the target molecules or the nucleus. There are 3 major classes of second messengers: (1) cyclic nucleotides (e.g., cAMP and cGMP); (2) inositol trisphosphate (IP3) and diacylglycerol (DAG); (3) calcium ions (Ca2+). Generally, the basic cell signalling pathway can be identified with the following steps [4]:
1. The first messenger molecules bind to and activate membrane receptors.
2. The activated receptors turn on the associated proteins to create the second messengers and activate protein kinases.
3. The second messengers regulate ion channels, increase intracellular Ca2+, and change the activity of enzymes (e.g., protein kinases).
4. The Ca2+-binding and phosphorylated proteins induce cell responses, including exocytosis, protein synthesis, and cellular motility.

Fig. 7.1 Sector graph showing the percentage of major essential elements in body mass. Potassium, sulphur, sodium, chlorine, and others are enlarged and shown as colour bars

In the abovementioned part, Ca2+ is a key second messenger. It is crucial during the action of the first messengers. Taking exocytosis as an example, the event usually begins with an increase of the intracellular Ca2+ concentration.


The Ca2+ interacts with a calcium-sensing protein, which in turn initiates secretory vesicle docking and fusion to the cell membrane. When the fused area of membrane opens, the vesicle contents diffuse into the extracellular fluid while the vesicle membrane stays behind and becomes part of the cell membrane. This process is also called “kiss and run”. In resting cells, the cytoplasmic Ca2+ level is low, about 100 nM, but the extracellular Ca2+ level is relatively high, about 2 mM. This corresponds to a Nernst potential of about 130 mV. Ca2+ enters the cell either through voltage-gated Ca2+ channels or through ligand-gated or mechanically gated channels. The blood Ca2+ concentration is stable within a narrow range (9–11 mg Ca2+ per 100 ml blood). What is the mechanism of this homeostasis? Blood Ca2+ concentrations affect the excitability of neurons: hypocalcaemia (low blood [Ca2+]) makes neurons hyperexcitable, while hypercalcaemia depresses the excitability of neurons. Calcium entry from the extracellular fluid plays an important role in both smooth muscle and cardiac muscle contraction. Excitation-contraction (E-C) coupling is the process in which muscle action potentials initiate calcium signals that in turn activate a contraction-relaxation cycle. Blocking Ca2+ entry through Ca2+ channels decreases the force of cardiac contraction and decreases the contractility of vascular smooth muscle; both of these effects lower blood pressure. Most intracellular Ca2+ is stored in the endoplasmic reticulum (ER). Calcium can also be released from intracellular compartments by the second messengers, such as IP3 and Ca2+ itself. Ca2+-induced Ca2+ release (CICR) is an autocatalytic mechanism by which cytoplasmic Ca2+ activates the release of Ca2+ from internal stores through channels such as inositol-1,4,5-trisphosphate receptors or ryanodine receptors; CICR is central to the mechanism of Ca2+ signalling. In many cell types, the concentration of cytosolic calcium shows transient increases. This process of Ca2+ signalling can be distinguished in two forms: calcium oscillations and calcium waves [5]. Calcium oscillations are defined as repetitive increases in [Ca2+] within single cells [6]. Calcium waves are identified as propagating increases of cytosolic [Ca2+] that originate from a single cell and then engage neighbouring cells, or originate from one part of a cell and then propagate to the other parts [7, 8].
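This figure is easy to verify with the Nernst formula E = (RT/zF) ln(c_out/c_in); in the MATLAB sketch below, body temperature (310 K) is an assumption:

```matlab
% Checking the quoted Nernst potential for Ca2+; T = 310 K is assumed.
R = 8.31; F = 96485; z = 2; T = 310;
Ca_out = 2e-3; Ca_in = 100e-9;        % mol/l, values quoted in the text
E_Ca = R*T/(z*F)*log(Ca_out/Ca_in)    % ~0.132 V, i.e. about 130 mV
```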

7.2 Calcium Oscillations

In many cell types, the concentration of free intracellular calcium oscillates, with a period ranging from a few seconds to a few minutes. These calcium oscillations are thought to control a wide variety of cellular processes. At rest, the concentration of calcium in the cell cytoplasm is low, around 100 nM, while outside the cell, and in internal compartments such as the ER, sarcoplasmic reticulum (SR), and mitochondria, it is orders of magnitude higher. These concentration gradients are maintained in part by calcium ATPase pumps, which use the energy of ATP to pump calcium against its gradient, either out of the cell or into internal compartments. These large concentration gradients allow for a very rapid increase in cytosolic calcium concentration when calcium channels are opened. The increased calcium concentration in an oscillation comes from one of two places [9]: internal stores (such as the ER, the SR, the mitochondria, or calcium buffers), or the extracellular space of the cell. Calcium channels in the plasma membrane can be controlled by a variety of things, including voltage, agonist stimulation, and the amount of calcium in the ER. Typically, in non-excitable cells, binding of an agonist such as a hormone or a neurotransmitter to cell-surface receptors initiates a series of reactions, linked through a G-protein, that ends in the production of the second messenger inositol trisphosphate (IP3), which diffuses through the cytoplasm of the cell and binds to IP3 receptors (IPR) located on the membrane of the ER. IPR are also calcium channels, and when they bind IP3 they open, allowing for the fast release of calcium from the ER into the cytoplasm (Fig. 7.2). Without some additional feedback mechanisms, simple release of calcium from the ER will not result in oscillations. Why? In fact, there are a number of positive and negative feedback pathways that operate in different cell types to mediate calcium oscillations. The two principal mechanisms are modulation by calcium of the IPR open probability, and modulation by calcium of IP3 production and degradation; even the rates of IP3 production and degradation are calcium-dependent. These complex feedback loops result in Ca2+ oscillations. There have been a lot of mathematical models of calcium oscillations [10].

Fig. 7.2 Schematic drawing of the Somogyi and Stucki model of Ca2+ oscillations (adapted from [12]). There are 5 different Ca2+ fluxes, shown by the arrows and described by different rate parameters


There are mainly two types of models of calcium oscillations: deterministic and stochastic [11]. We can start the deterministic modelling work from the example of the Somogyi and Stucki model (SS model) [12]. The SS model is a minimal model with only 2 independent variables: the concentration of Ca2+ in the cytosol, [Ca2+]_Cyt, and of Ca2+ in the endoplasmic reticulum, [Ca2+]_ER. As common knowledge, an inositol 1,4,5-trisphosphate (IP3) dependent calcium-induced calcium release (CICR) mechanism is involved in Ca2+ oscillations; CICR is central to the process in the intracellular space (ICS). The Ca2+ influx from the ECS into the cytosol (v1) may be taken at a constant rate, and the efflux rate out of the cell (v2) is linear in [Ca2+]_Cyt. Hormone binds to the receptor on the cell surface, leading to the cascade of G protein-mediated activation of phospholipase C (PLC) and the production of IP3. Both IP3 and the increased Ca2+ in the cytosol interact with the Ca2+ release channel of an intracellular store (endoplasmic or sarcoplasmic reticulum, i.e. ER or SR) and induce Ca2+ release (v3). The pumping rate of cytosolic Ca2+ into the ER or SR (v4) is linear in [Ca2+]_Cyt, and the leak rate out of the ER or SR (v5) is proportional to [Ca2+]_ER. Of the 5 different fluxes, the rate equations are linear except for the CICR equation of v3. The 5 rate equations are listed in Table 7.1. We can now write the ordinary differential equations for the two variables [Ca2+]_Cyt and [Ca2+]_ER:

$$\frac{d[Ca^{2+}]_{Cyt}}{dt} = v_1 - v_2 + v_3 - v_4 + v_5 \tag{7.1}$$

$$\frac{d[Ca^{2+}]_{ER}}{dt} = -v_3 + v_4 - v_5. \tag{7.2}$$

With the constant parameters, we can use a MATLAB program to generate the typical pattern of Ca2+ oscillations.

Table 7.1 The 5 different calcium fluxes and rate equations
- Influx from ECS into cytoplasm: v1 = k1 (k1 = 1)
- Efflux from cytoplasm into ECS: v2 = k2 [Ca2+]_Cyt (k2 = 1)
- Calcium-induced calcium release from the intracellular calcium store (ER or SR): v3 = k3 [Ca2+]_ER [Ca2+]_Cyt^4 / (K^4 + [Ca2+]_Cyt^4) (k3 = 5, K = 3.1)
- Pumping from cytoplasm into ER or SR: v4 = k4 [Ca2+]_Cyt (k4 = 2)
- Leak into cytoplasm from ER or SR: v5 = k5 [Ca2+]_ER (k5 = 0.01)

In fact, the exact mechanisms by which any particular cell type generates the oscillatory pattern are difficult to determine. For astrocytes, more and more experiments have suggested that
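A minimal MATLAB sketch of Eqs. (7.1)–(7.2) with the Table 7.1 constants might look as follows; the time span and initial conditions are illustrative (units are arbitrary, following the table):

```matlab
% SS model, Eqs. (7.1)-(7.2), with rates and constants from Table 7.1.
% Initial conditions are illustrative guesses, not values from the text.
k1 = 1; k2 = 1; k3 = 5; K = 3.1; k4 = 2; k5 = 0.01;
v3  = @(c, e) k3*e*c^4/(K^4 + c^4);                 % CICR flux
rhs = @(t, x) [k1 - k2*x(1) + v3(x(1),x(2)) - k4*x(1) + k5*x(2);
               -v3(x(1),x(2)) + k4*x(1) - k5*x(2)]; % x = [Ca_Cyt; Ca_ER]
[t, x] = ode45(rhs, [0 100], [0.1; 2]);
plot(t, x(:,1), t, x(:,2));
legend('[Ca^{2+}]_{Cyt}', '[Ca^{2+}]_{ER}');
```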

they have a more direct and active role in the dynamic regulation of neuronal activation, synaptic transmission, neuronal production, cerebral blood flow, and some neurological diseases. Spontaneous Ca2+ oscillations in astrocytes have been observed and implicated in the abovementioned important functions of the brain. However, the underlying mechanism of spontaneous Ca2+ oscillations is far from clear. Voltage-gated calcium channels (VGCCs) were found to be involved in Ca2+ oscillations in pharmacological trials. The putative notion of Ca2+ oscillations in astrocytes was that a small influx of Ca2+ into the cytoplasm via VGCCs induces CICR activated by IP3. In 2008, Lavrentovich and Hemkin (LH) developed a mathematical model [13] to simulate spontaneous astrocytic Ca2+ oscillations based on the IP3-dependent CICR process. A small and constant Ca2+ influx through the membrane was hypothesised to induce the Ca2+ oscillations, and the LH model was consistent with experiments. Since no model had investigated VGCCs during the spontaneous Ca2+ oscillations in astrocytes, the details of the Ca2+ influx were missing. So, we modified the LH model by integrating different types of VGCCs [14]. It is well known that the electrophysiological features of VGCCs can be described by Hodgkin-Huxley (HH) equations. Here, the Ca2+ influx was modulated by physiological conditions (extracellular Ca2+ concentration, membrane potential, temperature and blocking of specific channels) and quantified with HH equations, so a varying but tractable Ca2+ influx replaced the constant flow in our work [14]. In the proposed model, the HH equations and the LH model were combined to describe the biological processes: Ca2+ influx through VGCCs induces IP3-activated CICR, and the CICR and recovery mechanisms generate cytosolic Ca2+ oscillations. For simplicity, only three compartments were considered, i.e. the extracellular space (ECS), the intracellular space (ICS), and the intra-space of the endoplasmic reticulum (ER). Three variables, the Ca2+ concentrations in the ICS and the ER, and the IP3 concentration in the ICS, were modelled in this study. The dynamics of the Ca2+ oscillations (including the frequency, amplitude, onset and half-maximal duration) in the physiologically relevant range were simulated. Our results successfully reproduced the experimental observations of spontaneous astrocytic Ca2+ oscillations. In Fig. 7.3, an astrocyte is separated from the ECS by the membrane, with two compartments representing the ECS and the ICS. Different types of VGCCs distributed in the membrane form the pathway of Ca2+ influx. In fact, VGCCs are identified as low-voltage-activated channels (T-type) and the L, N, P, Q, and R types of high-voltage-activated channels. In the current model, the P, Q, and R types were grouped together as the R type. Four HH equations were used to model the VGCCs, with the same formulas but modified conductance parameters from Amini et al.'s work [15]. Correspondingly,

the four types of currents were indicated as the T-type current (I_Ca,T), the L-type current (I_Ca,L), the N-type current (I_Ca,N), and the combined current of the R, P, and Q types (I_Ca,R). All four types of Ca2+ ionic currents via VGCCs have the generalised HH form:

$$I = \bar{g}\, m^p h^q (V - E_{Ca}). \tag{7.3}$$

Here, $\bar{g}$ is the maximal membrane conductance, and m and h are dynamic variables quantifying channel activation and inactivation. p and q are integers that are experimentally fixed for each type of channel (both are 1 in the current model). m and h relax exponentially to their steady-state values $\bar{m}$ and $\bar{h}$ according to:

$$\frac{dm}{dt} = \frac{\bar{m} - m}{\tau_m}, \qquad \frac{dh}{dt} = \frac{\bar{h} - h}{\tau_h}. \tag{7.4}$$

V is the membrane potential, and E_Ca is the Nernst potential for the Ca2+ ion:

$$E_{Ca} = \frac{RT}{zF} \ln\frac{Ca_{out}}{Ca_{Cyt}}. \tag{7.5}$$

In Eq. (7.5), R is the ideal gas constant, F is the Faraday constant, T is the temperature, and z is the valence of the Ca2+ ion (see details in Table 7.2).

Fig. 7.3 An astrocyte model of voltage gated calcium channels (VGCCs) mediating Ca2+ oscillations (reprinted with the permission from [14]). Different types of VGCCs (T, L, N, R, P, Q) form the Ca2+ flow J_VGCC from the extracellular space (ECS) to the intracellular space (ICS). Cytoplasmic Ca2+ induces the production of inositol 1,4,5-trisphosphate (IP3) with the catalysation of phospholipase C (PLC). Cytoplasmic Ca2+ and IP3 modulate IP3 receptors (IP3R), facilitating the Ca2+ flow J_CICR out of the endoplasmic reticulum (ER). The sarcoplasmic/endoplasmic reticulum calcium ATPase (SERCA) fills the ER with the Ca2+ flow J_SERCA. Two “Leak” arrows represent the concentration-gradient-dependent fluxes.

Table 7.2 Variables and parameters used in the model (symbol: description, value/unit)
- I: ionic current (fA)
- g: membrane conductance (µS)
- m: channel activation variable
- h: channel inactivation variable
- V: membrane potential (mV)
- E_Ca: Nernst potential of Ca2+ (mV)
- p: integer for activation variable, 1
- q: integer for inactivation variable, 1
- t: time (s)
- z: valence of Ca2+ ion, 2
- T: temperature, 300 K
- R: ideal gas constant, 8.31 J/(mole·K)
- F: Faraday's constant, 96485 coul/mole
- ḡ_T: steady conductance of T-type channel, 0.0600 pS
- ḡ_L: steady conductance of L-type channel, 3.5000 pS
- ḡ_N: steady conductance of N-type channel, 0.3900 pS
- ḡ_R: steady conductance of R-type channel, 0.2225 pS
- I_VGCC: total Ca2+ current through all VGCCs (fA)
- V_ast: volume of an astrocyte, 5.233 × 10⁻¹³ l
- J_VGCC: influx of extracellular Ca2+ into cytosol via VGCCs, 0 µM/s (t=0)
- Ca_Cyt: Ca2+ concentration in cytosol, 0.1 µM (t=0)
- J_CICR: IP3-mediated CICR flux to the cytosol from the ER, 0 µM/s (t=0)
- J_SERCA: the filling with Ca2+ from the cytosol to the ER, 0 µM/s (t=0)
- P_out: rate of calcium efflux from the cytosol into ECS, 0.5 s⁻¹
- P_f: rate of leak flux from the ER into the cytosol, 0.5 s⁻¹
- Ca_ER: Ca2+ concentration in ER, 1.5 µM (t=0)
- IP_Cyt: IP3 concentration in cytosol, 0.1 µM (t=0)
- J_PLC: production of IP3, 0 µM/s (t=0)
- P_deg: rate of IP3 degradation, 0.08 s⁻¹
- M_CICR: maximal flux of calcium ions into the cytosol, 40 s⁻¹
- P_CaA: activating affinity, 0.15 µM
- P_CaI: inhibiting affinity, 0.15 µM
- n1: Hill coefficient, 2.02
- n2: Hill coefficient, 2.2
- P_IP3: half-saturation constant for IP3 activation of IP3R, 0.1 µM
- M_SERCA: maximal flux across SERCA, 15.0 µM/s
- P_SERCA: half-saturation constant for SERCA activation, 0.1 µM
- M_PLC: maximal production rate of PLC, 0.05 µM/s
- P_PCa: half-saturation constant for calcium activation of PLC, 0.3 µM

The detailed formula for every type of calcium current is given in Table 7.3. Since the period of spontaneous astrocytic Ca2+ oscillations was reported to be about 100 s, the steady-state current of the VGCCs was considered here. The steady-state activation fraction of the different channels vs. membrane potential is shown in Fig. 7.4. The amount of calcium current flowing through VGCCs in an individual cell can be calculated as follows:

$$I_{VGCC} = I_{Ca,T} + I_{Ca,L} + I_{Ca,N} + I_{Ca,R}. \tag{7.6}$$

In Eq. (7.6), the entry of Ca2+ current into an astrocyte was routinely defined as negative. In order to measure the current contribution to the increase of [Ca2+], the current was converted into the flux as follows:

$$J_{VGCC} = -\frac{I_{VGCC}}{zF V_{ast}}. \tag{7.7}$$

A model astrocyte is assumed to be a spherical soma with a radius of 5 μm, and V_ast is the corresponding volume, calculated to be 5.23 × 10⁻¹³ l. J_VGCC is the rate of change of [Ca2+], with the unit μM/s. Besides the total Ca2+ flux, each subpopulation influx via the different VGCCs can be recorded.

Table 7.3 Details of the voltage-gated calcium channels

T-type: I_Ca,T = ḡ_T m_T (h_Tf + 0.04 h_Ts)(V − E_Ca), with
  m̄_T = 1.0/(1.0 + e^{−(V+63.5)/1.5}), h̄_T = 1.0/(1.0 + e^{(V+76.2)/3.0}),
  τ_mT = 65.0 e^{−((V+68.0)/6.0)²} + 12.0,
  τ_hTf = 50.0 e^{−((V+72.0)/10.0)²} + 10.0, τ_hTs = 400.0 e^{−((V+100.0)/10.0)²} + 400.0.

L-type: I_Ca,L = ḡ_L m_L h_L (V − E_Ca), with
  m̄_L = 1.0/(1.0 + e^{−(V+50.0)/3.0}), h_L = 0.00045/(0.00045 + Ca_cyt),
  τ_mL = 18.0 e^{−((V+45.0)/20.0)²} + 1.5.

N-type: I_Ca,N = ḡ_N m_N h_N (V − E_Ca), with
  m̄_N = 1.0/(1.0 + e^{−(V+45.0)/7.0}), h_N = 0.00010/(0.00010 + Ca_cyt),
  τ_mN = 18.0 e^{−((V+70.0)/25.0)²} + 0.30.

R-type: I_Ca,R = ḡ_R m_R h_R (V − E_Ca), with
  m̄_R = 1.0/(1.0 + e^{−(V+10.0)/10.0}), h̄_R = 1.0/(1.0 + e^{(V+48.0)/5.0}),
  τ_mR = 0.1 e^{−((V+62.0)/13.0)²} + 0.05, τ_hR = 0.5 e^{−((V+55.6)/18.0)²} + 0.5.

Fig. 7.4 The normalized steady-state activation fraction of the T-, L-, R- and N-type voltage-gated calcium channels (reprinted with the permission from [14]). They show different dependences on membrane potential.
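The curves of Fig. 7.4 can be re-plotted directly from the steady-state activation expressions m̄(V) of Table 7.3; in the sketch below, the voltage range is an arbitrary choice:

```matlab
% Steady-state activation curves of the four VGCC types (Table 7.3).
V = -80:0.5:20;                          % mV, illustrative range
mT = 1./(1 + exp(-(V + 63.5)/1.5));
mL = 1./(1 + exp(-(V + 50.0)/3.0));
mN = 1./(1 + exp(-(V + 45.0)/7.0));
mR = 1./(1 + exp(-(V + 10.0)/10.0));
plot(V, mT, V, mL, V, mN, V, mR);
legend('T-type','L-type','N-type','R-type'); xlabel('V (mV)');
```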

Calcium-Induced Calcium Release
Previous work has suggested that astrocytes exhibit Ca2+ oscillations via an IP3-dependent CICR mechanism. CICR was modelled in the LH model, where the ER was considered as the calcium store; the LH model has been adopted in our work. There are three variables: the Ca2+ concentration in the cytosol ([Ca2+]_Cyt), the Ca2+ concentration in the ER ([Ca2+]_ER), and the IP3 concentration in the cell ([IP3]_Cyt). Each variable is described by an ordinary differential equation.

$$\frac{d[Ca^{2+}]_{Cyt}}{dt} = J_{VGCC} - P_{out}[Ca^{2+}]_{Cyt} + J_{CICR} - J_{SERCA} + P_f\left([Ca^{2+}]_{ER} - [Ca^{2+}]_{Cyt}\right) \tag{7.8}$$

Here J_VGCC is the same variable as in Eq. (7.7). P_out[Ca2+]_Cyt indicates the rate of Ca2+ efflux from the cytosol into the ECS. J_CICR represents the IP3-mediated CICR flux into the cytosol from the ER. J_SERCA is the flux by


the sarcoplasmic/endoplasmic reticulum calcium ATPase (SERCA), which fills the ER with Ca2+ from the cytosol. P_f([Ca2+]_ER − [Ca2+]_Cyt) describes the leakage flux from the ER into the ICS along the concentration gradient.

$$\frac{d[Ca^{2+}]_{ER}}{dt} = J_{SERCA} - J_{CICR} - P_f\left([Ca^{2+}]_{ER} - [Ca^{2+}]_{Cyt}\right) \tag{7.9}$$

In Eq. (7.9), all 3 terms on the right-hand side have the same meaning as in Eq. (7.8).

$$\frac{d[IP_3]_{Cyt}}{dt} = J_{PLC} - P_{deg}[IP_3]_{Cyt} \tag{7.10}$$

In Eq. (7.10), J_PLC denotes IP3 production, and P_deg[IP3]_Cyt represents IP3 degradation. In the above 3 equations, P_out, P_f, and P_deg are constants, while the other three terms J_CICR, J_SERCA, and J_PLC are determined by the following 3 equations:

$$J_{CICR} = \frac{4 M_{CICR}\, P_{CaA}^{n1}\, [Ca^{2+}]_{Cyt}^{n1}\, \left([Ca^{2+}]_{ER} - [Ca^{2+}]_{Cyt}\right) [IP_3]_{Cyt}^{n2}}{\left([Ca^{2+}]_{Cyt}^{n1} + P_{CaA}^{n1}\right)\left([Ca^{2+}]_{Cyt}^{n1} + P_{CaI}^{n1}\right)\left([IP_3]_{Cyt}^{n2} + P_{IP3}^{n2}\right)} \tag{7.11}$$

$$J_{SERCA} = \frac{M_{SERCA}\, [Ca^{2+}]_{Cyt}^{2}}{[Ca^{2+}]_{Cyt}^{2} + P_{SERCA}^{2}} \tag{7.12}$$

$$J_{PLC} = \frac{M_{PLC}\, [Ca^{2+}]_{Cyt}^{2}}{[Ca^{2+}]_{Cyt}^{2} + P_{PCa}^{2}} \tag{7.13}$$

Among them, M_CICR, M_SERCA, M_PLC, P_CaA, P_CaI, P_IP3, P_SERCA, P_PCa, n1, and n2 are all constants in the current model. All the variables and parameters are listed in Table 7.2. A majority of the parameters were taken from the LH model [13]. The initial values of some variables are indicated as t=0 in Table 7.2. The details of the VGCCs are listed in Table 7.3 (both Tables 7.2 and 7.3 reused with the permission from [14]). The proposed model has been implemented in the MATLAB environment (MATLAB 7.0, The MathWorks Inc., USA). The model system was discretised with a temporal precision of 10 ms, and the canonical explicit difference method was used to solve the three ordinary differential equations (Eqs. (7.8)–(7.10)). All the variables and intermediate values in the model were calculated and recorded with double precision. A specific script for the model has been shared in the supplementary resource of the online paper [14]. The current model can replicate the oscillatory phenomena under different experimental conditions, such as resting membrane potential (−75 to −60 mV), extracellular Ca2+ concentration (0.1–1500 μM), temperature (20–37 °C), and blocking of specific Ca2+ channels. Typical Ca2+ oscillations in astrocytes are shown in Fig. 7.5.
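The shared script of [14] is the authoritative implementation; as a heavily simplified sketch of the explicit difference scheme, the fragment below integrates Eqs. (7.8)–(7.10) with the Table 7.2 constants, replacing the full VGCC influx by a small constant (0.05 µM/s, an illustrative assumption in the spirit of the original LH model):

```matlab
% Explicit-Euler sketch of Eqs. (7.8)-(7.10), constants from Table 7.2.
% J_VGCC is held constant here (illustrative), unlike the full model [14].
Pout=0.5; Pf=0.5; Pdeg=0.08; Mcicr=40; PCaA=0.15; PCaI=0.15;
n1=2.02; n2=2.2; Pip3=0.1; Mserca=15; Pserca=0.1; Mplc=0.05; PPCa=0.3;
Jvgcc = 0.05;                        % uM/s, assumed constant influx
dt = 0.01; T = 600; n = round(T/dt); % 10 ms steps, as in the text
Ca = 0.1; Er = 1.5; Ip = 0.1;        % initial values (t=0) from Table 7.2
out = zeros(n, 3);
for k = 1:n
    Jcicr  = 4*Mcicr*PCaA^n1*Ca^n1*(Er - Ca)*Ip^n2 / ...
             ((Ca^n1 + PCaA^n1)*(Ca^n1 + PCaI^n1)*(Ip^n2 + Pip3^n2));
    Jserca = Mserca*Ca^2/(Ca^2 + Pserca^2);
    Jplc   = Mplc*Ca^2/(Ca^2 + PPCa^2);
    dCa = Jvgcc - Pout*Ca + Jcicr - Jserca + Pf*(Er - Ca);
    dEr = Jserca - Jcicr - Pf*(Er - Ca);
    dIp = Jplc - Pdeg*Ip;
    Ca = Ca + dt*dCa;  Er = Er + dt*dEr;  Ip = Ip + dt*dIp;
    out(k,:) = [Ca, Er, Ip];
end
plot((1:n)*dt, out); legend('Ca_{Cyt}','Ca_{ER}','IP_3'); xlabel('t (s)');
```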

Fig. 7.5 Representative spontaneous astrocytic Ca2+ oscillations from the simulation (reprinted with the permission from [14]). The oscillations in cytoplasmic Ca2+, ER Ca2+, and cytoplasmic IP3 are shown from top to bottom. Clearly, the three variables oscillate with the same frequency but different peak times (refer to the dashed line)


By varying the experimental conditions, the amplitude and duration of the Ca2+ oscillations changed by less than 25%, but the frequency changed by about 400%. This suggests that spontaneous astrocytic Ca2+ oscillations are an “all-or-none” process, which might represent frequency-encoded signalling. Further, the features of the Ca2+ oscillations were found to be associated with the dynamics of the Ca2+ influx rather than with a constant influx. Thus, calcium channel dynamics should be considered in studying Ca2+ oscillations. Taken together, our work has provided a tool to probe the still unclear mechanism of spontaneous astrocytic Ca2+ oscillations.

7.3 Calcium Waves

There are two types of calcium waves: intracellular waves and intercellular waves. The first type means that the propagating increase of cytosolic [Ca2+] spreads only within the individual cell (i.e., from one part of the cell to the other parts). The second type means the wave originates from a single cell and then engages neighbouring cells (Fig. 7.6). One example of intracellular Ca2+ waves occurs in immature Xenopus oocytes (which can have a diameter larger than 600 μm). In 1991, Lechleiter et al. observed Ca2+ waves showing both concentric circles and multiple spirals [7]. A cellular automaton model was used to simulate the Ca2+ wave; the computational model not only demonstrated the spatiotemporal patterns but also determined the absolute refractory period. There is a wealth of intercellular Ca2+ waves in cultures of different cell types, such as epithelial cells, glia, hepatocytes,


Fig. 7.6 Two types of Ca2+ waves: (a) intracellular Ca2+ wave; (b) intercellular Ca2+ wave

and pancreatic acinar cells. The diffusion of Ca2+, IP3, and ATP in both the extracellular and intracellular space is the basic mechanism. Partial differential equations with diffusion terms are usually used: the Ca2+ dynamics can be modelled as a reaction-diffusion equation. In some cases, a fire-diffuse-fire model is used to mimic Ca2+ waves. Astrocytes participate in brain functions through Ca2+ signals, including Ca2+ waves and Ca2+ oscillations. Ca2+ waves in astrocyte networks are considered to represent an effective form of intercellular signalling in the central nervous system. Ca2+ released from the ER is usually considered to be the key factor in the generation of Ca2+ waves induced by ATP or IP3, but this is not necessary when Ca2+ waves are triggered by cortical spreading depression (CSD) [16] (see more details on CSD in Sect. 13.1). A one-dimensional astrocyte network model (Fig. 7.7) has been used to investigate the contributions of different Ca2+ flows, including Ca2+ flows among the extracellular space, the cytoplasm, and the ER of astrocytes, to the generation of these Ca2+ oscillations and waves. The results suggested that the Ca2+ oscillations depend primarily on Ca2+ released from internal stores in astrocytes, whereas CSD-induced Ca2+ waves depend mainly on voltage-gated Ca2+ influx. Notification: a big part of the section “Calcium Oscillations” was originally published in the Biophysical Journal paper [14].
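To illustrate how such a reaction-diffusion description can be discretised, the sketch below couples one-dimensional diffusion of cytosolic Ca2+ to the SS-model reaction terms of Table 7.1; the diffusion coefficient, grid, stimulus and run time are illustrative assumptions, not values from the cited studies:

```matlab
% 1-D reaction-diffusion sketch: diffusing cytosolic Ca2+ plus the
% Table 7.1 (SS model) reaction terms; all numerical values illustrative.
k1=1; k2=1; k3=5; K=3.1; k4=2; k5=0.01; D=0.5;
N=200; dx=1; dt=0.005; steps=20000;
c = 0.1*ones(1,N); e = 2*ones(1,N);      % cytosolic and ER Ca2+ profiles
c(1:10) = 1.0;                           % local stimulus at one end
for k = 1:steps
    v3  = k3*e.*c.^4./(K^4 + c.^4);      % CICR along the grid
    lap = ([c(2:end) c(end)] - 2*c + [c(1) c(1:end-1)])/dx^2;  % no-flux ends
    dc  = D*lap + k1 - k2*c + v3 - k4*c + k5*e;
    de  = -v3 + k4*c - k5*e;             % ER compartment does not diffuse
    c = c + dt*dc;  e = e + dt*de;
end
plot(c); xlabel('grid point'); ylabel('Ca^{2+}_{Cyt}');  % final profile
```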

7.4 Questions

1. Nimodipine is a calcium channel blocker used for the treatment of high blood pressure. How does a calcium channel blocker lower blood pressure?
2. Visit the model repository (https://models.physiomeproject.org/welcome) of the Physiome Project, and find one model for Ca2+ oscillations and another for Ca2+ waves. Try to run the codes and show the results.
3. Use MATLAB to implement a computational model for the minimal SS model of Ca2+ oscillations [12].
4. Implement the cellular automaton model in the Science paper on the spiral Ca2+ wave [7].
5. Download the codes of the model on spontaneous Ca2+ oscillations in astrocytes [14] and run them.



Fig. 7.7 A schematic diagram of the one-dimensional astrocyte network model (reprinted with the permission from [16]). (a) In a single astrocyte, Ca2+ influx through voltage-gated calcium channels (VGCCs) triggers the fluctuation of Ca2+ in the intracellular space (ICS), activating the process of calcium-induced calcium release (CICR) via the inositol 1,4,5-trisphosphate (IP3) mechanisms. The endoplasmic reticulum (ER) is filled with Ca2+ by the sarcoplasmic/endoplasmic reticulum calcium ATPase (SERCA). A Ca2+ pump discharges Ca2+ from the ICS into the extracellular space (ECS). K+ in the ECS is partly pumped into the ICS during cortical spreading depression (CSD). J_VGCC, J_CICR, and J_SERCA represent the Ca2+ flows through the VGCCs, CICR, and SERCA, respectively. J_pump represents the Ca2+ flow through the Ca2+ pump, and J_leak represents the leak Ca2+ flow. J_upt represents the K+ flow taken up into the ICS, and J_dis represents the K+ flow discharged into the ECS. (b) The individual astrocytes are coupled to adjacent ones by the transfer of IP3 from cytosol to cytosol through gap junctions, forming a one-dimensional astrocyte network

References

1. Bootman MD. Calcium signaling. Cold Spring Harb Perspect Biol. 2012; 4(7):a011171.
2. Clapham DE. Calcium signaling. Cell 2007; 131(6):1047–1058.
3. Berridge MJ, Bootman MD, Lipp P. Calcium—a life and death signal. Nature 1998; 395(6703):645–648.
4. Silverthorn DU. Human physiology: an integrated approach. San Francisco: Pearson/Benjamin Cummings; 2009.
5. Dupont G, Falcke M, Kirk V, Sneyd J. Models of calcium signalling. Switzerland: Springer; 2016.
6. Nedergaard M, Ransom R, Goldman SA. New roles for astrocytes: redefining the functional architecture of the brain. Trends Neurosci. 2003; 26(10):523–530.
7. Lechleiter J, Girard S, Peralta E, Clapham D. Spiral calcium wave propagation and annihilation in Xenopus laevis oocytes. Science 1991; 252(5002):123–126.
8. Zhao Y, Zhang Y, Liu X, et al. Photostimulation of astrocytes with femtosecond laser pulses. Opt Express 2009; 17(3):1291–1298.
9. Sneyd J. Models of calcium dynamics. Scholarpedia 2007; 2(3):1576.
10. Goldbeter A. Computational approaches to cellular rhythms. Nature 2002; 420(6912):238–245.
11. Falcke M. Deterministic and stochastic models of intracellular Ca2+ waves. New J Phys. 2003; 5(1):96.
12. Somogyi R, Stucki JW. Hormone-induced calcium oscillations in liver cells can be explained by a simple one pool model. J Biol Chem. 1991; 266(17):11068–11077.
13. Lavrentovich M, Hemkin S. A mathematical model of spontaneous calcium(II) oscillations in astrocytes. J Theor Biol. 2008; 251(4):553–560.
14. Zeng S, Li B, Zeng S, Chen S. Simulation of spontaneous Ca2+ oscillations in astrocytes mediated by voltage-gated calcium channels. Biophys J. 2009; 97(9):2429–2437.
15. Amini B, Clark J, Canavier C. Calcium dynamics underlying pacemaker-like and burst firing oscillations in midbrain dopaminergic neurons: a computational study. J Neurophysiol. 1999; 82:2249–2261.
16. Li B, Chen S, Zeng S, Luo Q, Li P. Modeling the contributions of Ca2+ flows to spontaneous Ca2+ oscillations and cortical spreading depression-triggered Ca2+ waves in astrocyte networks. PLoS One 2012; 7(10):e48534.

8 Modelling Neural Activity

Alexey Zaikin

There is no scientific study more vital to man than the study of his own brain. Our entire view of the universe depends on it. —Francis Crick

8.1 Introduction to Brain Research

Brain research is now a very hot topic: if sequencing the human genome was probably the most important research question of the twentieth century, then understanding the mammalian brain, its intelligence and consciousness, is the most important question of the twenty-first century and still remains an unsolved mystery. A lot of information is now known about the human brain. The human brain contains a tremendous number of neurons, ≈ 100,000,000,000, and each of these neurons can be linked with up to 10,000 synaptic connections. We thus have a very large network with complex connections but, in addition to this, we have approximately the same number of glial cells covering the whole brain volume and overlapping with the neurons. Glial cells can be classified into four main groups, among which astrocytes are the best known. Initially it was believed that astrocytes, generating calcium events, serve just as support cells, helping neurons to get energy. Later it was found that astrocytes play an important role in information processing, because appearing, propagating, and disappearing calcium events mediate neuron activity. Taking into account that the average duration of a calcium event is much larger than the duration of a neuron spike, the human brain can be described as a system of two overlapping networks (i.e., a multiplex network) with very different characteristic time scales. Here it is important to note that each cell also contains a network of interacting genes with approximately 20,000 nodes, which makes the system even more complex; hence, the mammalian brain is a complex network of networks. In addition to astrocytes, the mammalian brain contains microglial cells, which perform the role of immune cells cleaning the neurons of garbage. A theory has been developed that ageing is linked with the propagation of inflammation caused by this garbage; this theory, called inflammageing, was developed by Claudio Franceschi [1]. The human brain

occupies just 2% of the body's mass but takes around 20% of the energy generated in metabolic reactions. Such a high metabolism leads to the production of a significant amount of garbage, and this garbage should be properly removed, otherwise inflammation occurs. Probably this cleaning process occurs during sleep. Until recently it was believed that neurons cannot be born in adult age and that after the age of 25 we just start to lose our neurons. As was recently found, this is not completely true, because due to neurogenesis new neurons can also be born in the human brain. The human brain is responsible for the appearance of intelligence and consciousness. Whereas it is now more or less clear that intelligence could appear as a result of connections, as in artificial neural networks, the appearance of consciousness is a complete mystery. Recently the US neuroscientist Giulio Tononi suggested the theory of integrated information (II) as a theory of consciousness (see Sect. 13.4.6), but this approach remains controversial. The core idea of this approach is that a system generates positive II if it generates more information as a whole than any sum of its parts or partitions. It was speculated that II can serve as a measure of consciousness and, indeed, some preliminary results were very promising. However, this research has encountered several difficulties, one of which is the tremendous numerical complexity of II estimation. Another difficulty is that even relatively simple systems of coupled oscillators are able to generate positive II without any consciousness. This research is developing fast, and soon we will know new and interesting results. We will consider very simple models of neuron activity; for more detail see [2]. In experiments it was observed that some neurons emit action potentials periodically (e.g., 100 per second) and some irregularly, in an excitable regime. Our aim is to construct and analyse a minimal model of neuron activity demonstrating these two regimes. In Fig. 8.1 one can see a simplified scheme of neuron topology, consisting of a soma,


Fig. 8.1 A scheme of one neuron

dendrites to collect inputs from other neurons, and an axon with synaptic terminals to distribute the neural spike. If we scaled the soma's size up to a human height, the length of the axon would be about 1 mile; i.e., the axon is really long and can cover large areas in the brain. If the collected input exceeds some threshold, a neuron generates a spike, an action potential of duration approximately 0.001 s, which propagates along the axon with a propagation speed of approximately 1–100 m/s.

8.2 The Hodgkin-Huxley Model of Neuron Firing

In 1952 Hodgkin and Huxley developed a simple model of neural activity for the nerve axon of a giant squid. These neural cells are so large that it was physically possible to put electrodes inside the axon and measure its voltage-current characteristics. They assumed that an electrical pulse arises because the membrane is preferentially permeable to various chemical ions, affected by the current and potential present. There are two main ion types, potassium K+ and sodium Na+, and a much smaller current is supported by all other types of ions, which were called "leakage" ions. Let us take the positive direction for the membrane current I outward from the axon. Then the current I(t) is composed of the individual ion currents I_i (I_Na sodium, I_K potassium, and I_L "leakage" due to all other ions) and the contribution from the time variation of the transmembrane potential, i.e., the membrane capacitance contribution C dV/dt. So if I_a is applied to the axon, the current is described by the equation

$$I_a = I(t) = C\frac{dV}{dt} + \sum_i I_i,$$

where, based on experimental observations, the following phenomenological functions were fitted:

$$\sum_i I_i = I_{Na} + I_K + I_L = g_{Na} m^3 h (V - V_{Na}) + g_K n^4 (V - V_K) + g_L (V - V_L),$$

where g_Na m^3 h is the sodium conductance, g_K n^4 the potassium conductance, and g_L the leakage conductance; V_Na, V_K, V_L are constants representing the equilibrium potentials; and m, n, h are variables bounded between 0 and 1, responsible for the opening and closing of the ion channels. These variables are described by the equations:

$$\frac{dm}{dt} = \alpha_m(V)(1-m) - \beta_m(V)m$$
$$\frac{dn}{dt} = \alpha_n(V)(1-n) - \beta_n(V)n$$
$$\frac{dh}{dt} = \alpha_h(V)(1-h) - \beta_h(V)h,$$

where the α functions are responsible for the opening of the corresponding ion channel and the β functions for its closing. Together with the equation

$$C\frac{dV}{dt} = -g_{Na} m^3 h(V-V_{Na}) - g_K n^4 (V-V_K) - g_L (V-V_L) + I_a,$$

these equations describe the Hodgkin-Huxley model of neural firing. One can show by numerical simulations that for I_a = 0 the model demonstrates an excitable regime, while for I_a ≠ 0 a limit cycle can appear, in good agreement with experimental measurements.
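A compact numerical sketch of these equations is given below; it uses the classic squid-axon parameters and α, β rate functions from the original 1952 paper (with V measured in mV relative to the resting potential), quoted here from common usage rather than from this text. Setting Ia = 0 gives the excitable regime; Ia around 10 gives periodic firing:

```matlab
% Hodgkin-Huxley equations with the classic squid-axon parameterisation.
C = 1; gNa = 120; gK = 36; gL = 0.3;        % uF/cm^2, mS/cm^2
VNa = 115; VK = -12; VL = 10.6;             % mV, relative to rest
am = @(V) 0.1*(25-V)./(exp((25-V)/10)-1);  bm = @(V) 4*exp(-V/18);
an = @(V) 0.01*(10-V)./(exp((10-V)/10)-1); bn = @(V) 0.125*exp(-V/80);
ah = @(V) 0.07*exp(-V/20);                 bh = @(V) 1./(exp((30-V)/10)+1);
Ia = 10;                                    % try 0 (excitable) vs 10 (firing)
rhs = @(t, x) [ (Ia - gNa*x(2)^3*x(4)*(x(1)-VNa) ...
                    - gK*x(3)^4*(x(1)-VK) - gL*(x(1)-VL))/C;
                am(x(1))*(1-x(2)) - bm(x(1))*x(2);
                an(x(1))*(1-x(3)) - bn(x(1))*x(3);
                ah(x(1))*(1-x(4)) - bh(x(1))*x(4) ];  % x = [V; m; n; h]
[t, x] = ode45(rhs, [0 100], [0; 0.05; 0.32; 0.6]);   % resting gate values
plot(t, x(:,1)); xlabel('t (ms)'); ylabel('V (mV)');
```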

8.3 The FitzHugh-Nagumo Model: A Model of the HH Model

Richard FitzHugh, who suggested the model in 1961, and J. Nagumo, who created the equivalent circuit in the following year, described a model which can be considered as an approximation of the Hodgkin-Huxley model that still catches its main dynamical features. Following their approach, the neuron dynamics can be qualitatively approximated by v for V and w for m, n, h, where v plays the role of the membrane potential V, and w is responsible for the opening and closing of ion channels. Then the Hodgkin-Huxley equations can be approximated as

$$\frac{dv}{dt} = f(v) - w + I_a \tag{8.1}$$
$$\frac{dw}{dt} = bv - \gamma w, \tag{8.2}$$

where f(v) = v(a − v)(v − 1) and 0 < a < 1. Let us consider a methodology of analysis in the phase plane for a dynamical system. Suppose we consider a set of equations:

$$\dot{x}_1 = f_1(x_1, x_2) \tag{8.3}$$
$$\dot{x}_2 = f_2(x_1, x_2). \tag{8.4}$$

Since the system is two-dimensional, the phase plane is a plane with axes (x1, x2). A first step is to draw the nullclines f1(x1, x2) = 0 and f2(x1, x2) = 0. Now we note that the nullcline f1(x1, x2) = 0 splits the total phase space into subspaces with ẋ1 > 0 and ẋ1 < 0, whereas the nullcline f2(x1, x2) = 0 splits the space into subspaces with ẋ2 > 0 and ẋ2 < 0. Hence, for each nullcline it is enough to find just one point with a known sign of the derivative, and all signs will be known. These signs provide information about the differential field, and we can identify equilibrium points (where the nullclines cross each other) and analyse their stability, or analyse the evolution of trajectories from the most important points.

8.3.1 Analysis of Phase Plane with Case Ia = 0

Using the methodology of a phase plane, we find that the two nullclines are described by w = f(v) + Ia, with f(v) = v(a − v)(v − 1), for the v variable, and by w = (b/γ)v for the w variable. Let us first consider the case Ia = 0. Two possible regimes correspond to the two scenarios shown in Fig. 8.2, which differ just by the value of b/γ, hence by the slope of the straight nullcline. The nullclines w = f(v) and w = (b/γ)v can cross either in one point (the excitable regime, in which this equilibrium point is stable) or in three points (the bistable regime). The excitable regime corresponds, e.g., to the parameters Ia = 0, a = 0.25, b = γ = 0.002. In this regime there is one stable equilibrium point, with coordinates (0, 0), and the system behaves qualitatively differently depending on whether a perturbation is larger or smaller than some threshold. A small perturbation does not push the trajectory from the vicinity of the equilibrium point, whereas larger perturbations lead to a large excursion from the stable point, which corresponds to the v variable producing a spike in the time series; this may serve as a good model of a neural spike (see Fig. 8.3). Note that the size and shape of the spike do not depend on the perturbation; in contrast, they are completely defined by the form of the nullclines in the phase space. As one can see, for Ia = 0 there can be no periodic solutions: all that can differ is the shape of the cubic nullcline, defined by the parameter a, and the slope of the straight nullcline, adjusted by the ratio b/γ.

Fig. 8.2 Two regimes: either excitable (left) or bistable (right)
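The excitable regime just described is easy to reproduce numerically; the sketch below uses the quoted parameters (a = 0.25, b = γ = 0.002, Ia = 0) with an illustrative super-threshold initial condition:

```matlab
% FHN system, Eqs. (8.1)-(8.2), in the excitable regime quoted in the
% text. A super-threshold perturbation v(0) = 0.4 produces a single
% spike before the trajectory returns to the stable point (0, 0).
a = 0.25; b = 0.002; gam = 0.002; Ia = 0;
f   = @(v) v.*(a - v).*(v - 1);
rhs = @(t, x) [f(x(1)) - x(2) + Ia;  b*x(1) - gam*x(2)];
[t, x] = ode45(rhs, [0 1000], [0.4; 0]);   % time span chosen for illustration
plot(t, x(:,1)); xlabel('t'); ylabel('v'); % the spike in v
```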


Fig. 8.3 Excitable regime: behaviour in the phase space (left) and time series (right). From P a fast return to O is observed, whereas starting from A leads to a large excursion ABCDO


Fig. 8.4 Bottom panel: 0 < Ia < I1 , middle: I1 < Ia < I2 , upper panel: Ia > I2 . The parameters are a = 0.5, left: b/γ = 0.15, right: b/γ = 0.5

8.3.2 Case Ia > 0 and Conditions to Observe a Limit Cycle

An oscillatory regime can be obtained if we change other parameters. What will happen if Ia > 0? As follows from the equation for the v-nullcline, w = f(v) + Ia, the effect is the same as moving the v-nullcline in the upward direction. This is demonstrated in Fig. 8.4. Depending on the ratio b/γ, two scenarios can be observed: either the system is excitable, then bistable, and again excitable, or (this corresponds to the right part of Fig. 8.4) the system has no stable equilibrium points for some range I1 < Ia < I2. Since the system attracts trajectories from infinity, as follows from the signs of the derivatives, according to the Poincare-Bendixson theorem there should be a limit cycle in the phase space. And, indeed, this is what is observed for these parameter values: the system generates periodic spikes. As follows from the figure, this will be observed only for a certain ratio b/γ, responsible for the slope of the straight nullcline. Let us obtain this condition. For this we approximate the v-nullcline by a piecewise linear approximation, as shown in Fig. 8.5. We draw a piecewise linear function through the points (0, 0), (v1, w1), (v2, w2), and (1, 0), where (v1, w1) is the point of

8.4 Questions

1. Suppose that the temporal activity of the space-clamped nerve axon is modelled with the two-variable FitzHugh-Nagumo model with an external applied current Ia:

$$\frac{dv}{dt} = f(v) - w + I_a, \qquad \frac{dw}{dt} = bv - w,$$

where the function f(v) is the piecewise linear function

$$f(v) = \begin{cases} -v, & v \le 1 \\ \tfrac{1}{2}v - \tfrac{3}{2}, & 1 \le v \le 5 \\ 6 - v, & v \ge 5 \end{cases}$$

and Ia is a positive constant.
a. Find the range of values of the parameters b and Ia for which the model exhibits periodic behaviour.
b. Assume now that b = 1, and Ia = 0, 3.5, or 7. Using the phase plane, show the equilibrium points and analyse their stability, noting which points are stable, asymptotically stable or unstable. For each of these three values of Ia sketch the trajectory in the phase plane (v, w) for the initial conditions v = 4, w = −2.
c. Assume that b = 1/2 and Ia = 1.5. Using the phase plane, show the equilibrium points and analyse their stability, noting which points are stable, asymptotically stable or unstable. Sketch a typical trajectory in the phase plane.


2. Consider a different model described by the equations

$$\frac{dv}{dt} = v - \frac{v^3}{3} - w, \qquad \frac{dw}{dt} = v + b,$$

where b is a constant. Find the values of b for which the only attractor is:
a. a stable fixed point;
b. a limit cycle.

References

1. Franceschi C, Zaikin A, Gordleeva S, Ivanchenko M, Bonifazi F, Storci G, Bonafè M. Inflammaging 2018: An update and a model. Semin Immunol. 2018;40:1.
2. Murray JD. Mathematical biology: I. An introduction. New York: Springer; 2002.

9 Blood Dynamics

Alexey Zaikin

All we know is still infinitely less than all that remains unknown. —William Harvey

9.1 Blood Hydrodynamics

In our considerations, we accept several assumptions. We follow a macroscopic approach, which means that any small volume V under consideration still contains a very large number of molecules, and we consider only a continuous medium. At any point (x, y, z), the liquid is characterised by the values of the pressure p(x, y, z), the density ρ(x, y, z), and the velocity u = (u, v, w), a vector whose components are, in their turn, also functions of (x, y, z). If ρ = const, the liquid is incompressible; otherwise it is compressible. Fluids are "sticky" because of shearing stress, and this property is viscosity. Finally, we will consider, if not otherwise stated, only Newtonian fluids, i.e., liquids for which the stress is proportional to the rate of strain. In this chapter, we will use the examples described in more detail in [1].

9.1.1 Basic Equations

Following classical hydrodynamics, for a small volume of a liquid, its acceleration is given by

$$\frac{du}{dt} = \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x}\frac{\partial x}{\partial t} = \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}, \qquad (9.1)$$

where we have taken into account that the volume is moving with the velocity u, assuming that only u ≠ 0. Using Newton's second law, the equation of motion, called the Navier–Stokes equation, is written as

$$\rho\left(\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}\right) = -\frac{\partial p}{\partial x} + \mu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right) + f_{\mathrm{external}}, \qquad (9.2)$$

where on the right-hand side (rhs) we have the sum of the forces due to the pressure (−∂p/∂x), the shearing stress with the viscosity coefficient μ, and all other external forces, modelled by the term f_external, e.g., the force of gravity.

Another equation, the equation of continuity, can be derived as follows. Let us consider the mass flux into and out of some arbitrary volume ΔxΔyΔz per unit of time, as shown in Fig. 9.1. Then the local change of mass per unit of time is

$$\frac{\partial\rho}{\partial t}\,\Delta x\,\Delta y\,\Delta z = \rho u\,\Delta y\,\Delta z - \left[\rho u + \frac{\partial}{\partial x}(\rho u)\,\Delta x\right]\Delta y\,\Delta z,$$

which gives, by cancelling ΔxΔyΔz, the equation of continuity:

$$\frac{\partial\rho}{\partial t} + \frac{\partial}{\partial x}(\rho u) = 0. \qquad (9.3)$$

These two basic equations, with added equations taken from thermodynamics or from the mechanics of elastic media, are usually used to find the variables characterising the liquid motion. In this case, on rigid walls one can use the following boundary conditions: V_n = 0 for the normal component of the velocity, and V_tan = 0 for the tangential one near the wall.

9.1.2 Poiseuille's Law

Let us consider the motion of a fluid through a long cylindrical tube of length L, radius a, and two pressures p1 > p2, as shown in Fig. 9.2. In the following we assume steady flow, hence all ∂/∂t = 0. Let us also assume an incompressible fluid, i.e., ρ = const, so from the equation of continuity ∂u/∂x = 0. From the equation of motion we have 0 = −∂p/∂x + μΔu. The solution is symmetrical in (independent of) Θ. In cylindrical coordinates we then get

$$\mu\,\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) = \frac{\partial p}{\partial x} = \frac{dp}{dx},$$

because u ≠ 0 only in one direction, and u = u(r) because ∂u/∂x = 0. Hence, differentiating with respect to x:

$$\frac{d^2p}{dx^2} = 0, \qquad \frac{dp}{dx} = \mathrm{const}, \qquad p = \mathrm{const}\times x + K.$$

Fig. 9.1 In- and out-flux to derive the equation of continuity

Fig. 9.2 A scheme to derive Poiseuille's law

Fig. 9.3 Left: parabolic velocity profile obtained as a consequence of Poiseuille's law. Right: a linear dependence of the volume flux on the pressure difference, as it follows from Poiseuille's law

Utilising p = p1 at x = 0 and p = p2 at x = L, we get

$$p = p_1 + \frac{p_2 - p_1}{L}x, \qquad \text{so} \qquad \frac{dp}{dx} = -\frac{p_1 - p_2}{L}.$$

But

$$\mu\,\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) = \frac{dp}{dx} = -\frac{p_1 - p_2}{L},$$

so

$$\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) = \frac{r}{\mu}\frac{dp}{dx}, \qquad r\frac{\partial u}{\partial r} = \frac{r^2}{2\mu}\frac{dp}{dx} + A.$$

And we get

$$u = \frac{r^2}{4\mu}\frac{dp}{dx} + A\log r + B,$$

with A = 0, since otherwise u → −∞ as r → 0. Since u = 0 at r = a, we have B = −(a²/4μ)(dp/dx), and

$$u = -\frac{1}{4\mu}\frac{dp}{dx}\left(a^2 - r^2\right) = \frac{1}{4\mu}\frac{p_1 - p_2}{L}\left(a^2 - r^2\right).$$

Poiseuille's law can be used to find the volume flux Q through the pipe (see Fig. 9.3). Considering that, per unit time, the flux through an annular element is u·2πr dr, we get

$$Q = \int_0^a u\,2\pi r\,dr = \int_0^a \frac{1}{4\mu}\frac{p_1 - p_2}{L}\left(a^2 - r^2\right)2\pi r\,dr = \frac{\pi}{8}\frac{p_1 - p_2}{\mu L}a^4.$$
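As a quick numerical illustration of Poiseuille's law (our own sketch; the vessel dimensions below are made-up, physiologically plausible values, not taken from the text), the fragment below evaluates the parabolic profile and the fourth-power dependence of Q on the radius.

```python
# Sketch: Poiseuille flow through a cylindrical tube.
# All parameter values are illustrative assumptions.
import numpy as np

mu = 4.0e-3        # viscosity, Pa*s (a rough value for whole blood)
L = 0.1            # tube length, m
a = 2.0e-3         # tube radius, m
dp = 400.0         # pressure difference p1 - p2, Pa

r = np.linspace(0.0, a, 5)
u = dp * (a**2 - r**2) / (4.0 * mu * L)    # parabolic velocity profile
print("u(r) [m/s]:", np.round(u, 4))        # maximal on the axis, zero at the wall

Q = np.pi * dp * a**4 / (8.0 * mu * L)      # volume flux, m^3/s
print(f"Q = {Q:.3e} m^3/s")
# Doubling the radius multiplies Q by 2**4 = 16:
print(np.isclose(np.pi * dp * (2 * a)**4 / (8.0 * mu * L), 16.0 * Q))
```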

9.2 Properties of Blood and ESR

Poiseuille's law is so well established experimentally that it is often used to find the viscosity μ. μ_blood ≈ 5 μ_water at 37 °C, but in capillaries μ_blood ≈ 1.5 μ_water. Blood is a non-Newtonian fluid, so μ ≠ const. Blood contains plasma and blood cells (red: erythrocytes; white: lymphocytes), with N_red ≈ 600 N_white, so it is a suspension with plasma as the suspending medium. The density of red cells is ρe = 1.09 g/cm³ and that of plasma ρp = 1.03 g/cm³. Consequently, if blood is allowed to stand in a container, the red cells will settle out of suspension at a definite rate, the erythrocyte sedimentation rate (ESR) (see Fig. 9.4). The mathematics (Stokes 1851) is simple for slow motion. The flow regime is determined by the Reynolds number R = ρva/μ, where a is the radius and v the velocity; if R > 1, it is turbulent flow.

Fig. 9.4 Erythrocytes will sediment in standing blood

For noise intensities above about 0.1, the spoiled perceptron has better accuracy in classification than the unspoiled one. This could lead to speculation that, since some noise is certainly present in gene expression, classifying genetic networks might evolve towards shifting the classification threshold to compensate for the effect of noise.

Resonance in the Learning Algorithm

The design presented allows for the "contextualisation" of classification, whereby the same network implemented in a different scenario could perform an entirely different separation task depending on the stimuli present. While this allows for a great deal of flexibility, we would like to explore the idea of implementing some kind of perceptron learning in the same vein as the delta learning rule, a simple version of back-propagation learning. In this learning method, weights are updated by comparing the perceptron output to some ideal result for the given inputs. Mathematically, we consider it as follows. Let w = {w_1, ..., w_n} denote the desired weight set and w̄^l = {w_1^l, ..., w_n^l} denote the learning algorithm's l-th iterative attempt at learning the desired weights. Also let χ = {x^j = (x_1^j, ..., x_n^j)} be the learning set. Each x^j is a set of input values for which we have the desired output given by O(x^j). Then we update our weights as follows:

$$w_i^{l+1} = w_i^l + \alpha\left(O(x^l) - D(x^l)\right)x_i^l, \quad \text{where} \quad D(x^l) = \begin{cases} 1, & \sum_i x_i w_i^l > T \\ 0, & \sum_i x_i w_i^l \le T \end{cases} \qquad (12.46)$$

While this is more computationally intensive than simply setting our weights as desired, as in the current implementation, it has the major advantage of being results-driven. With this kind of learning, we are guaranteed that our system conforms to the desired outputs provided in the test sets; when setting weights directly, we are not guaranteed to obtain the results we desire, only the weights we have set. The thinking behind back-propagation learning is that the system compares the result it has arrived at with some ideal result; by considering discrepancies between the two, it adjusts the process by which it arrived at its result in order to move closer to the ideal. This kind of learning requires a large degree of reflexivity on behalf of the network, and as such it is difficult to imagine a biological implementation of such a learning technique. Despite this, we feel it is worth considering while on the topic of perceptrons, as it is an extremely effective technique. The procedure is simple: for each element j of some training set O, where we have a set of inputs along with their desired output, we compute the following:

$$X_j = \sum_{i=1}^{n} x_i w_i, \qquad F(X_j) = \begin{cases} 1, & X_j > T \\ 0, & X_j \le T \end{cases} \qquad (12.47)$$

$$\delta(j) = O_j - F(X_j), \qquad w_{i,\mathrm{new}} = w_{i,\mathrm{old}} + \alpha\,\delta(j)\,x_i.$$

Here α is a learning rate: if we choose α too large, we may overcorrect and adjust the weights too far in each update, while the smaller it is chosen, the longer it takes to arrive at the desired weights. This correction to w_i is performed for each i and looped through for each entry in the training set. The whole process is then repeated until the error in the system is sufficiently small or a maximum number of iterations is reached. For a training set of N input/output pairs we have

$$\mathrm{error} = \frac{1}{N}\sum_{j=1}^{N}\delta(j).$$

For an error threshold of 0.001, α = 0.01 was found to be suitable for optimal learning speeds; in what follows, the unspoiled threshold is T = 1. Previously we examined only the output-assignment functionality of the algorithm, but the ideas of stochastic resonance can also be applied to the learning algorithm. As in the previous case, we must spoil the system in some way and then examine some measure of accuracy for increasing intensity of noise. This is implemented by spoiling the value of T in F(X), as defined in the previous section, and adding noise to each x1 and x2 from the test set before it is fed into F(X). The accuracy is measured by the error with respect to the intended weights, found as the Euclidean distance of the algorithm's attempt from the correct values:

$$\mathrm{error} = \sqrt{(w_1 - w_{1,\mathrm{new}})^2 + (w_2 - w_{2,\mathrm{new}})^2}.$$

The algorithm was allowed to run through the test set 2000 times before arriving at its final attempt at the correct weights w_{1,new} and w_{2,new}. As is clear from Fig. 12.32b, we again find that the system performs optimally under some non-zero amount of noise, shown here by a minimised error (at noise intensity σ² ≈ 0.12) rather than a maximised accuracy rate as before.
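The learning rule and the noise experiment described above can be sketched in a few lines of Python. This is our own minimal reconstruction rather than the authors' code: the two-input weight vector, the training set, the spoiled threshold value, and the Gaussian noise model are illustrative assumptions, and a shorter training run is used than the 2000 passes quoted in the text.

```python
# Sketch: delta-rule learning of a two-input threshold unit (Eq. 12.47),
# with the threshold "spoiled" and noise added to the inputs, to probe
# whether a non-zero noise intensity minimises the final weight error.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.7, 0.9])     # desired weights (illustrative)
T_true = 1.0                      # unspoiled threshold
X = rng.uniform(0.0, 2.0, size=(200, 2))        # illustrative training inputs
O = (X @ w_true > T_true).astype(float)         # desired outputs

def train(T_spoiled, sigma, alpha=0.01, epochs=500):
    w = np.zeros(2)
    for _ in range(epochs):       # the text used 2000 passes; fewer here for brevity
        for x, o in zip(X, O):
            x_noisy = x + rng.normal(0.0, sigma, 2)       # noise on x1, x2
            F = 1.0 if x_noisy @ w > T_spoiled else 0.0   # spoiled threshold in F(X)
            w += alpha * (o - F) * x_noisy                # delta-rule update
    return np.sqrt(np.sum((w_true - w) ** 2))             # Euclidean weight error

for sigma in (0.0, 0.2, 0.35, 0.6):
    err = train(T_spoiled=1.3, sigma=sigma)
    print(f"sigma = {sigma:.2f}: weight error = {err:.3f}")
# With the threshold mis-set (T = 1.3 instead of 1.0), a moderate input noise
# typically yields a smaller final error than the noise-free case.
```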

Another simulation, featuring independently changed weights, shows that the resonance still applies in this case. It would be of interest to perform a more thorough investigation of this effect and to try to describe it analytically if possible. A better understanding of this effect, and of the circumstances in which it can be found, would be of great interest for applications in learning algorithms in general. Of additional interest would be to investigate the possibility of a genetic implementation of learning of this kind. It is not clear whether such an implementation would be possible; if it were, it would undoubtedly be much more complex than our linear classification network, but its construction and simulation would certainly be of great interest. Finally, the construction of intelligent intracellular gene-regulating networks is a hot topic of synthetic biology (see [84] for a review), and here we have shown that unavoidable noise can be used constructively in such designs.

Notification: This chapter is mainly edited from Zaikin and colleagues' papers [11, 12, 15, 105]. The related contents are reused with permission.

12.8 Questions

1. Discuss which design of a genetic network is necessary to construct:
(a) a genetic clock (suggest several possible designs);
(b) a genetic network with dynamical memory;
(c) a genetic network responsible for differentiation, such that the result of the differentiation depends on the speed of cellular decision making.
2. Explain the mechanism of oscillation death in a system of repressilators with coupling.
3. What is the nature of the noise that is always present in gene expression? What effect can such noise have? Could the role of noise be constructive?
4. What is genetic intelligence? How could it be constructed?

References

1. Ajo-Franklin CM, Drubin DA, Eskin JA, Gee EPS, Landgraf D, Phillips I, Silver PA. Rational design of memory in eukaryotic cells. Genes Dev. 2007; 21 18:2271–2276.
2. Alberts B. Molecular biology of the cell. New York: Garland Science; 2008.
3. Anderson JC, Voigt CA, Arkin AP. Environmental signal integration by a modular AND gate. Mol Syst Biol. 2007; 3 1:133.
4. Annaluru N, Muller H, Mitchell LA, Ramalingam S, Stracquadanio G, Richardson SM, Dymond JS, Kuang Z, Scheifele LZ, Cooper EM, et al. Total synthesis of a functional designer eukaryotic chromosome. Science 2014; 344 6179:55–58.
5. Atkinson MR, Savageau MA, Myers JT, Ninfa AJ. Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell 2003; 113 5:597–607.

6. Balázsi G, Cornell-Bell A, Neiman AB, Moss F. Synchronization of hyperexcitable systems with phase-repulsive coupling. Phys Rev E 2001; 64 4:041912.
7. Balázsi G, van Oudenaarden A, Collins JJ. Cellular decision making and biological noise: from microbes to mammals. Cell 2011; 144 6:910–925.
8. Balsalobre A, Damiola F, Schibler U. A serum shock induces circadian gene expression in mammalian tissue culture cells. Cell 1998; 93 6:929–937.
9. Baştanlar Y, Özuysal M. Introduction to machine learning. In: Yousef M, Allmer J, editors. miRNomics: MicroRNA biology and computational analysis. Methods in molecular biology (methods and protocols), vol 1107. Totowa: Humana Press; 2014.
10. Basu S, Gerchman Y, Collins CH, Arnold FH, Weiss R. A synthetic multicellular system for programmed pattern formation. Nature 2005; 434 7037:1130.
11. Bates R, Blyuss O, Zaikin A. Stochastic resonance in an intracellular genetic perceptron. Phys Rev E 2014; 89:032716. https://doi.org/10.1103/PhysRevE.89.032716
12. Bates R, Blyuss O, Alsaedi A, Zaikin A. Effect of noise in intelligent cellular decision making. PloS One 2015; 10 5:e0125079.
13. Benzi R, Sutera A, Vulpiani A. The mechanism of stochastic resonance. J Phys A 1981; 14:L453.
14. Bonnet J, Yin P, Ortiz ME, Subsoontorn P, Endy D. Amplifying genetic logic gates. Science 2013; 340 6132:599–603.
15. Borg Y, Ullner E, Alagha A, Alsaedi A, Nesbeth D, Zaikin A. Complex and unexpected dynamics in simple genetic regulatory networks. Int J Mod Phys B 2014; 28 14:1430006.
16. Bratsun D, Volfson D, Tsimring L, Hasty J. Delay-induced stochastic oscillations in gene regulation. Proc Natl Acad Sci. 2005; 102 41:14593–14598.
17. Bray D. Protein molecules as computational elements in living cells. Nature 1995; 376 6538:307–312.
18. Bray D, Lay S. Computer simulated evolution of a network of cell-signaling molecules. Biophys J. 1994; 66 4:972–7.
19. Cambras T, Weller JR, Anglès-Pujoràs M, Lee ML, Christopher A, Díez-Noguera A, Krueger JM, Horacio O. Circadian desynchronization of core body temperature and sleep stages in the rat. Proc Natl Acad Sci. 2007; 104 18:7634–7639.
20. Carr AJF, Whitmore D. Imaging of single light-responsive clock cells reveals fluctuating free-running periods. Nat Cell Biol. 2005; 7 3:319.
21. Carrillo O, Santos MA, García-Ojalvo J, Sancho JM. Spatial coherence resonance near pattern-forming instabilities. Europhys Lett. 2004; 65 4:452.
22. Cichocka D, Claxton J, Economidis I, Hogel J, Venturi P, Aguilar A. European Union research and innovation perspectives on biotechnology. J Biotechnol. 2011; 156 4:382–391.
23. Cohen M, Georgiou M, Stevenson NL, Miodownik M, Baum B. Dynamic filopodia transmit intermittent delta-notch signaling to drive pattern refinement during lateral inhibition. Dev Cell 2010; 19 1:78–89.
24. Crowley MF, Epstein IR. Experimental and theoretical studies of a coupled chemical oscillator: phase death, multistability and in-phase and out-of-phase entrainment. J Phys Chem. 1989; 93 6:2496–2502.
25. Danino T, Mondragón-Palomino O, Tsimring L, Hasty J. A synchronized quorum of genetic clocks. Nature 2010; 463 7279:326.
26. Didovyk A, Kanakov OI, Ivanchenko MV, Hasty J, Huerta R, Tsimring L. Distributed classifier based on genetically engineered bacterial cell cultures. ACS Synth Biol. 2015; 4 1:72–82.
27. Diez-Noguera A. A functional model of the circadian system based on the degree of intercommunication in a complex system. Am J Physiol-Regul Integr Comp Physiol. 1994; 267 4:R1118–R1135.

28. Eldar A, Elowitz MB. Functional roles for noise in genetic circuits. Nature 2010; 467 7312:167.
29. Elowitz MB. Stochastic gene expression in a single cell. Science 2002; 297 5584:1183–1186.
30. Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature 2000; 403 6767:335.
31. Fernando CT, Liekens AML, Bingle LEH, Beck C, Lenser T, Stekel DJ, Rowe JE. Molecular circuits for associative learning in single-celled organisms. J R Soc Interface 2008; 6 34:463–469.
32. Fraser A, Tiwari J. Genetical feedback-repression: II. Cyclic genetic systems. J Theor Biol. 1974; 47 2:397–412.
33. Friedland AE, Lu TK, Wang X, Shi D, Church G, Collins JJ. Synthetic gene networks that count. Science 2009; 324 5931:1199–1202.
34. Fung E, Wong WW, Suen JK, Bulter T, Lee S, Liao JC. A synthetic gene–metabolic oscillator. Nature 2005; 435 7038:118.
35. Gammaitoni L, Hänggi P, Jung P, Marchesoni F. Stochastic resonance. Rev Mod Phys. 1998; 70 1:223.
36. Gandhi N, Ashkenasy G, Tannenbaum E. Associative learning in biochemical networks. J Theor Biol. 2007; 249 1:58–66.
37. Garcia-Ojalvo J, Elowitz MB, Strogatz SH. Modeling a synthetic multicellular clock: repressilators coupled by quorum sensing. Proc Natl Acad Sci. 2004; 101 30:10955–10960.
38. Gardner TS, Cantor CR, Collins JJ. Construction of a genetic toggle switch in Escherichia coli. Nature 2000; 403:339–342.
39. Ghahramani Z. Probabilistic machine learning and artificial intelligence. Nature 2015; 521 7553:452.
40. Gibson DG, Young L, Chuang RY, Venter JC, Hutchison CA, Smith HO. Enzymatic assembly of DNA molecules up to several hundred kilobases. Nat Methods 2009; 6:343–345.
41. Gillespie DT. The chemical Langevin equation. J Chem Phys. 2000; 113 1:297.
42. Ginsburg S, Jablonka E. The evolution of associative learning: a factor in the Cambrian explosion. J Theor Biol. 2010; 266 1:11–20.
43. Giuraniuc CV, MacPherson M, Saka Y. Gateway vectors for efficient artificial gene assembly in vitro and expression in yeast Saccharomyces cerevisiae. PLoS One 2013; 8 5:e64419.
44. Glass L, Mackey MC. From clocks to chaos: the rhythms of life. Princeton: Princeton University Press; 1988.
45. Golomb D, Hansel D, Shraiman B, Sompolinsky H. Clustering in globally coupled phase oscillators. Phys Rev A 1992; 45 6:3516.
46. Goñi-Moreno A, Amos M. A reconfigurable NAND/NOR genetic logic gate. BMC Syst Biol. 2012; 6 1:126.
47. Gonze D, Halloy J, Goldbeter A. Robustness of circadian rhythms with respect to molecular noise. Proc Natl Acad Sci. 2002; 99 2:673–678.
48. Gonze D, Bernard S, Waltermann C, Kramer A, Herzel H. Spontaneous synchronization of coupled circadian oscillators. Biophys J. 2005; 89 1:120–129.
49. Goodwin BC. Temporal organization in cells; a dynamic theory of cellular control processes. Waltham: Academic Press; 1963.
50. Goodwin BC. Oscillatory behavior in enzymatic control processes. Adv Enzyme Regul. 1965; 3:425–437.
51. Granados-Fuentes D, Prolo LM, Abraham U, Herzog ED. The suprachiasmatic nucleus entrains, but does not sustain, circadian rhythmicity in the olfactory bulb. J Neurosci. 2004; 24 3:615–619.
52. Gurdon JB, Lemaire P, Kato K. Community effects and related phenomena in development. Cell 1993; 75 5:831–834.
53. Haken H. Synergetics. Phys Bull. 1977; 28 9:412.
54. Hänggi P. Stochastic resonance in biology: how noise can enhance detection of weak signals and help improve biological information processing. ChemPhysChem 2002; 3 3:285–290.

55. Hastings MH, Herzog ED. Clock genes, oscillators, and cellular networks in the suprachiasmatic nuclei. J Biol Rhythms 2004; 19 5:400–413.
56. Hastings MH, Reddy AB, Maywood ES. A clockwork web: circadian timing in brain and periphery, in health and disease. Nat Rev Neurosci. 2003; 4 8:649.
57. Herzog ED, Schwartz WJ. Invited review: a neural clockwork for encoding circadian time. J Appl Physiol. 2002; 92 1:401–408.
58. Hjelmfelt A, Ross J. Implementation of logic functions and computations by chemical kinetics. Physica D Nonlinear Phenom 1995; 84 1–2:180–19.
59. Hjelmfelt A, Weinberger ED, Ross J. Chemical implementation of neural networks and Turing machines. Proc Natl Acad Sci. 1991; 88 24:10983–10987.
60. Hogenesch JB, Ueda HR. Understanding systems-level properties: timely stories from the study of clocks. Nat Rev Genet. 2011; 12 6:407.
61. Horsthemke W, Lefever R. Noise-induced transitions: theory and applications in physics, chemistry, and biology. Springer Series in Synergetics. Berlin: Springer; 2006.
62. Hutchison CA, Chuang R, Noskov VN, Assad-Garcia N, Deerinck TJ, Ellisman MH, Gill J, Kannan K, Karas BJ, Ma L, et al. Design and synthesis of a minimal bacterial genome. Science 2016; 351 6280:aad6253.
63. Inagaki N, Honma S, Ono D, Tanahashi Y, Honma K. Separate oscillating cell groups in mouse suprachiasmatic nucleus couple photoperiodically to the onset and end of daily activity. Proc Natl Acad Sci. 2007; 104 18:7664–7669.
64. Jones B, Stekel DJ, Rowe JE, Fernando CT. Is there a liquid state machine in the bacterium Escherichia coli? In: Proceedings of IEEE symposium on artificial life; 2007. pp. 187–191.
65. Kanakov O, Kotelnikov R, Alsaedi A, Tsimring L, Huerta R, Zaikin A. Multi-input distributed classifiers for synthetic genetic circuits. PLoS One 2015; 10:e0125144.
66. Kanakov O, Laptyeva T, Tsimring L, Ivanchenko M. Spatiotemporal dynamics of distributed synthetic genetic circuits. Physica D Nonlinear Phenom. 2016; 318:116–123.
67. Kaneko K. Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements. Physica D Nonlinear Phenom. 1990; 41 2:137–172.
68. Kaneko K, Yomo T. Cell division, differentiation and dynamic clustering. Physica D Nonlinear Phenom. 1994; 75:89–102.
69. Kashiwagi A, Urabe I, Kaneko K, Yomo T. Adaptive response of a gene network to environmental changes by fitness-induced attractor selection. PloS One 2006; 1 1:e49.
70. Kim J, Winfree E. Synthetic in vitro transcriptional oscillators. Mol Syst Biol. 2011; 7 1:465.
71. King DP, Zhao Y, Sangoram AM, Wilsbacher LD, Tanaka M, Antoch MP, Steeves TDL, Vitaterna MH, Kornhauser JM, Lowrey PL, et al. Positional cloning of the mouse circadian clock gene. Cell 1997; 89 4:641–653.
72. Kiss IZ, Hudson JL. Chaotic cluster itinerancy and hierarchical cluster trees in electrochemical experiments. Chaos 2003; 13 3:999–1009.
73. Ko CH, Yamada YR, Welsh DK, Buhr ED, Liu AC, Zhang EE, Ralph MR, Kay SA, Forger DB, Takahashi JS. Emergence of noise-induced oscillations in the central circadian pacemaker. PLoS Biology 2010; 8 10:e1000513.
74. Kobayashi H, Kaern M, Araki M, Chung K, Gardner TS, Cantor CR, Collins JJ. Programmable cells: interfacing natural and engineered gene networks. Proc Natl Acad Sci. 2004; 101 22:8414–8419.
75. Kok S, Stanton LH, Slaby T, Durot M, Holmes VF, Patel KG, Platt D, Shapland EB, Serber Z, Dean J, et al. Rapid and reliable DNA assembly via ligase cycling reaction. ACS Synth Biol. 2014; 3 2:97–106.

76. Koseska A, Volkov E, Zaikin A, Kurths J. Inherent multistability in arrays of autoinducer coupled genetic oscillators. Phys Rev E 2007; 75 3:031916.
77. Koseska A, Volkov E, Kurths J. Parameter mismatches and oscillation death in coupled oscillators. Chaos 2010; 20 2:023132.
78. Kuznetsov AS, Kurths J. Stable heteroclinic cycles for ensembles of chaotic oscillators. Phys Rev E 2002; 66 2:026201.
79. Laje R, Mindlin GB. Diversity within a birdsong. Phys Rev Lett. 2002; 89 28:288102.
80. Levskaya A, Chevalier AA, Tabor JJ, Simpson ZB, Lavery LA, Levy M, Davidson EA, Scouras A, Ellington AD, Marcotte EM, et al. Synthetic biology: engineering Escherichia coli to see light. Nature 2005; 438 7067:441.
81. Lewis J. Autoinhibition with transcriptional delay: a simple mechanism for the zebrafish somitogenesis oscillator. Curr Biol. 2003; 13 16:1398–1408.
82. Lindner B, García-Ojalvo J, Neiman A, Schimansky-Geier L. Effects of noise in excitable systems. Phys Rep. 2004; 392 6:321–424.
83. Liu T, Borjigin J. Reentrainment of the circadian pacemaker through three distinct stages. J Biol Rhythms 2005; 20 5:441–450.
84. Lu TK, Khalil AS, Collins JJ. Next-generation synthetic gene networks. Nat Biotechnol. 2009; 27 12:1139–50.
85. Lutz R, Bujard H. Independent and tight regulation of transcriptional units in Escherichia coli via the LacR/O, the TetR/O and AraC/I1-I2 regulatory elements. Nucleic Acids Res. 1997; 25 6:1203–1210.
86. Maheshri N, O'Shea EK. Living with noisy genes: how cells function reliably with inherent variability in gene expression. Annu Rev Biophys Biomolecular Structure 2007; 36:413–434.
87. Manrubia SC, Mikhailov AS. Mutual synchronization and clustering in randomly coupled chaotic dynamical networks. Phys Rev E 1999; 60 2:1579.
88. Mariño IP, Ullner E, Zaikin A. Parameter estimation methods for chaotic intercellular networks. PloS One 2013; 8 11:e79892.
89. Maywood ES, Reddy AB, Wong GKY, O'Neill JS, O'Brien JA, McMahon DG, Harmar AJ, Okamura H, Hastings MH. Synchronization and maintenance of timekeeping in suprachiasmatic circadian clock cells by neuropeptidergic signaling. Curr Biol. 2006; 16 6:599–605.
90. McAdams HH, Arkin A. It's a noisy business! Genetic regulation at the nanomolar scale. Trends Genet. 1999; 15 2:65–69.
91. McClung CR. Plant circadian rhythms. Plant Cell 2006; 18 4:792–803.
92. McMillen D, Kopell N, Hasty J, Collins JJ. Synchronizing genetic relaxation oscillators by intercell signaling. Proc Natl Acad Sci. 2002; 99 2:679–684.
93. Meinhardt H. Models of biological pattern formation. London: Academic Press; 1982.
94. Miller MB, Bassler BL. Quorum sensing in bacteria. Annu Rev Microbiol. 2001; 55:165–199.
95. Miyakawa K, Yamada K. Synchronization and clustering in globally coupled salt-water oscillators. Physica D Nonlinear Phenom. 2001; 151 2–4:217–227.
96. Mori T, Kai S. Noise-induced entrainment and stochastic resonance in human brain waves. Phys Rev Lett. 2002; 88:218101.
97. Munsky B, Neuert G, Van Oudenaarden A. Using gene expression noise to understand gene regulation. Science 2012; 336 6078:183–187.
98. Murray-Zmijewski F, Slee EA, Lu X. A complex barcode underlies the heterogeneous response of p53 to stress. Nat Rev Mol Cell Biol. 2008; 9 9:702.
99. Nakagaki T, Yamada H, Tóth A. Intelligence: maze-solving by an amoeboid organism. Nature 2000; 407 6803:470.

100. Neiman A, Saparin PI, Stone L. Coherence resonance at noisy precursors of bifurcations in nonlinear dynamical systems. Phys Rev E 1997; 56 1:270.
101. Nené NR, Zaikin A. Interplay between path and speed in decision making by high-dimensional stochastic gene regulatory networks. PLoS One 2012; 7 7:e40085.
102. Nené NR, Zaikin A. Decision making in noisy bistable systems with time-dependent asymmetry. Phys Rev E 2013; 87 1:012715.
103. Nené NR, García-Ojalvo J, Zaikin A. Speed-dependent cellular decision making in nonequilibrium genetic circuits. PloS One 2012; 7 3:e32779.
104. Nené NR, Rivington J, Zaikin A. Sensitivity of asymmetric rate-dependent critical systems to initial conditions: insights into cellular decision making. Phys Rev E 2018; 98 2:022317.
105. Nesbeth DN, Zaikin A, Saka Y, Romano C, Giuraniuc C, Kanakov O, Laptyeva T. Synthetic biology routes to bio-artificial intelligence. Essays Biochem. 2016; 60:381–391.
106. Nicolis G, Prigogine I. Self-organization in nonequilibrium systems. New York: Wiley; 1977.
107. Nishimura K, Fukagawa T, Takisawa H, Kakimoto T, Kanemaki M. An auxin-based degron system for the rapid depletion of proteins in nonplant cells. Nat Methods 2009; 6 12:917.
108. Noskov VN, Karas BJ, Young L, Chuang R, Gibson DG, Lin Y, Stam J, Yonemoto IT, Suzuki Y, Andrews-Pfannkoch C, et al. Assembly of large, high G+C bacterial DNA fragments in yeast. ACS Synth Biol. 2012; 1 7:267–273.
109. Ohta H, Yamazaki S, McMahon DG. Constant light desynchronizes mammalian clock neurons. Nat Neurosci. 2005; 8 3:267.
110. Okuda K. Variety and generality of clustering in globally coupled oscillators. Physica D Nonlinear Phenom. 1993; 63 3–4:424–436.
111. Osipov GV, Zhou C, Kurths J. Synchronization in oscillatory networks. Berlin: Springer; 2007.
112. Pais-Vieira M, Chiuffa G, Lebedev M, Yadav A, Nicolelis MA. Building an organic computing device with multiple interconnected brains. Sci Rep. 2015; 5:11869.
113. Partridge L, Barrie B, Fowler K, French V. Evolution and development of body size and cell size in Drosophila melanogaster in response to temperature. Evolution 1994; 48 4:1269–1276.
114. Paulsson J, Ehrenberg M. Noise in a minimal regulatory network: plasmid copy number control. Q Rev Biophys. 2001; 34 1:1–59.
115. Pavlov IP. Conditional reflexes: an investigation of the physiological activity of the cerebral cortex. Nature 1928; 121:662–664.
116. Peltier J, Schaffer DV. Systems biology approaches to understanding stem cell fate choice. IET Syst Biol. 2010; 4 1:1–11.
117. Peplow M. Synthetic biology's first malaria drug meets market resistance. Nature 2016; 530 7591:389.
118. Pikovsky AS, Kurths J. Coherence resonance in a noise-driven excitable system. Phys Rev Lett. 1997; 78 5:775.
119. Pokhilko A, Fernández AP, Edwards KD, Southern MM, Halliday KJ, Millar AJ. The clock gene circuit in Arabidopsis includes a repressilator with additional feedback loops. Mol Syst Biol. 2012; 8 1:574.
120. Priplata AA, Niemi JB, Harry JD, Lipsitz LA, Collins JJ. Vibrating insoles and balance control in elderly people. The Lancet 2003; 362 9390:1123–1124.
121. Purcell O, Savery NJ, Grierson CS, Di Bernardo M. A comparative analysis of synthetic genetic oscillators. J R Soc Interface 2010; 7 52:1503–1524.
122. Qian L, Winfree E, Bruck J. Neural network computation with DNA strand displacement cascades. Nature 2011; 475 7356:368–372.
123. Quintero JE, Kuhlman SJ, McMahon DG. The biological clock nucleus: a multiphasic oscillator network regulated by light. J Neurosci. 2003; 23 22:8070–8076.
124. Raser JM, O’Shea EK. Noise in gene expression: origins, consequences, and control. Science 2005; 309 5743:2010–2013.

125. Reimann P. Brownian motors: noisy transport far from equilibrium. Phys Rep. 2002; 361 2–4:57–265.
126. Rosen-Zvi M. On-line learning in the Ising perceptron. J Phys A Math Gen. 2000; 33 41:7277.
127. Russell DF, Wilkens LA, Moss F. Use of behavioural stochastic resonance by paddle fish for feeding. Nature 1999; 402 6759:291–294.
128. Russell S, Hauert S, Altman R, Veloso M. Ethics of artificial intelligence. Nature 2015; 521 7553:415–416.
129. Sadooghi-Alvandi SM, Nematollahi AR, Habibi R. On the distribution of the sum of independent uniform random variables. Stat Pap. 2007; 50 1:171–175.
130. Sagués F, Sancho J, García-Ojalvo J. Spatiotemporal order out of noise. Rev Mod Phys. 2007; 79 3:829.
131. Saigusa T, Tero A, Nakagaki T, Kuramoto Y. Amoebae anticipate periodic events. Phys Rev Lett. 2008; 100 1:018101.
132. Saka Y, Lhoussaine C, Kuttler C, Ullner E, Thiel M. Theoretical basis of the community effect in development. BMC Syst Biol. 2011; 5 1:54.
133. Samborska V, Gordleeva S, Ullner E, Lebedeva A, Kazantzev V, Ivancheno M, Zaikin A. Mammalian brain as networks of networks. Opera Medica Physiol. 2016; 1:23–38.
134. Sanchez A, Golding I. Genetic determinants and cellular constraints in noisy gene expression. Science 2013; 342 6163:1188–1193.
135. Simonotto E, Riani M, Seife C, Roberts M, Twitty J, Moss F. Visual perception of stochastic resonance. Phys Rev Lett. 1997; 78:1186.
136. Siuti P, Yazbek J, Lu TK. Synthetic circuits integrating logic and memory in living cells. Nat Biotechnol. 2013; 31 5:448.
137. Slipantschuk J, Ullner E, Baptista MS, Zeineddine M, Thiel M. Abundance of stable periodic behavior in a red grouse population model with delay: a consequence of homoclinicity. Chaos 2010; 20 4:045117.
138. Stormo GD, Schneider TD, Gold L, Ehrenfeucht A. Use of the 'perceptron' algorithm to distinguish translational initiation sites in E. coli. Nucleic Acids Res. 1982; 10 9:2997–3011.
139. Stricker J, Cookson S, Bennett MR, Mather WH, Tsimring L, Hasty J. A fast, robust and tunable synthetic gene oscillator. Nature 2008; 456 7221:516.
140. Suzuki N, Furusawa C, Kaneko K. Oscillatory protein expression dynamics endows stem cells with robust differentiation potential. PloS One 2011; 6 11:e27232.
141. Swain PS, Elowitz MB, Siggia ED. Intrinsic and extrinsic contributions to stochasticity in gene expression. Proc Natl Acad Sci. 2002; 99 20:12795–12800.
142. Swain PS, Longtin A. Noise in genetic and neural networks. Chaos 2006; 16 2:026101.
143. Tabor JJ, Levskaya A, Voigt CA. Multichromatic control of gene expression in Escherichia coli. J Mol Biol. 2011; 405 2:315–324.
144. Tamsir A, Tabor JJ, Voigt CA. Robust multicellular computing using genetically encoded NOR gates and chemical 'wires'. Nature 2011; 469 7329:212.
145. Tero A, Takagi S, Saigusa T, Ito K, Bebbe DP, Fricker MD, Yumiki K, Kobayashi R, Nakagaki T. Rules for biologically inspired adaptive network design. Science 2010; 327 5964:439–442.
146. Terrell JL, Wu H, Tsao C, Barber NB, Servinsky MD, Payne GF, Bentley WE. Nano-guided cell networks as conveyors of molecular communication. Nat Commun. 2015; 6:8500.
147. Thomas P, Straube AV, Timmer J, Fleck C, Grima R. Signatures of nonlinearity in single cell noise-induced oscillations. J Theor Biol. 2013; 335:222–234.
148. Tigges M, Marquez-Lago TT, Stelling J, Fussenegger M. A tunable synthetic mammalian oscillator. Nature 2009; 457 7227:309.
149. Tiwari J, Fraser A, Beckman R. Genetical feedback repression: I. Single locus models. J Theor Biol. 1974; 45 2:311–326.

150. Ullner E, Buceta J, Díez-Noguera A, García-Ojalvo J. Noise-induced coherence in multicellular circadian clocks. Biophys J. 2009; 96 9:3573–3581.
151. Ullner E, Zaikin A, Volkov EI, García-Ojalvo J. Multistability and clustering in a population of synthetic genetic oscillators via phase-repulsive cell-to-cell communication. Phys Rev Lett. 2007; 99 14:148103.
152. Usher M, Feingold M. Stochastic resonance in the speed of memory retrieval. Biol Cybern. 2000; 83 6:L011–L016.
153. Volkov EI, Stolyarov MN. Birhythmicity in a system of two coupled identical oscillators. Physics Letters A 1991; 159 1–2:61–66.
154. Volkov EI, Stolyarov MN. Temporal variability in a system of coupled mitotic timers. Biol Cybern. 1994; 71 5:451–459.
155. Volkov EI, Ullner E, Kurths J. Stochastic multiresonance in the coupled relaxation oscillators. Chaos 2005; 15 2:023105.
156. Wang W, Kiss IZ, Hudson JL. Experiments on arrays of globally coupled chaotic electrochemical oscillators: synchronization and clustering. Chaos 2000; 10 1:248–256.
157. Wang W, Kiss IZ, Hudson JL. Clustering of arrays of chaotic chemical oscillators by feedback and forcing. Phys Rev Lett. 2001; 86 21:4954.
158. Weiss JN. The Hill equation revisited: uses and misuses. FASEB J. 1997; 11 11:835–41.

159. Westermark PO, Welsh DK, Okamura H, Herzel H. Quantification of circadian rhythms in single cells. PLoS Comput Biol. 2009; 5 11:e1000580.
160. Wiesenfeld K. Noisy precursors of nonlinear instabilities. J Stat Phys. 1985; 38 5–6:1071–1097.
161. Yamaguchi S, Isejima H, Matsuo T, Okura R, Yagita K, Kobayashi M, Okamura H. Synchronization of cellular clocks in the suprachiasmatic nucleus. Science 2003; 302 5649:1408–1412.
162. Yamazaki S, Numano R, Abe M, Hida A, Takahashi R, Ueda M, Block GD, Sakaki Y, Menaker M, Tei H. Resetting central and peripheral circadian oscillators in transgenic rats. Science 2000; 288 5466:682–685.
163. Ye H, Baba MDI, Peng R, Fussenegger M. A synthetic optogenetic transcription device enhances blood-glucose homeostasis in mice. Science 2011; 332 6037:1565–1568.
164. You L, Cox RS, Weiss R, Arnold FH. Programmed population control by cell–cell communication and regulated killing. Nature 2004; 428 6985:868.
165. Zopf CJ, Quinn K, Zeidman J, Maheshri N. Cell-cycle dependence of transcription dominates noise in gene expression. PLoS Comput Biol. 2013; 9 7:e1003161.

13 Modelling Complex Phenomena in Physiology

Shangbin Chen, Olga Sosnovtseva, and Alexey Zaikin

We can only see a short distance ahead, but we can see plenty there that needs to be done. —Alan Turing

13.1 Cortical Spreading Depression (CSD)

13.1.1 What Is CSD

Cortical spreading depression (CSD) is a complex pathophysiological phenomenon [1], first observed in the cerebral cortex of rabbits in 1944 by Aristides A. P. Leão [2]. It is characterised as an expanding but slow wave (2–5 mm/min) that propagates across the cerebral cortex, accompanied by the suppression of neuronal activity, a disturbance of ion homeostasis, a negative direct-current potential shift, and the depression of the electrocorticogram (EEG). Significant changes of blood flow and metabolism also occur during CSD (Fig. 13.1). This enables functional optical imaging of CSD with light wavelengths around 550 nm [3, 4]. Since the EEG depression may happen in neuronal tissues other than the cortex (such as retina, cerebellum, hippocampus, and striatum), the phenomenon may be called spreading depression for generality; the term spreading depolarisation is also used. In the past seven decades, the clinical relevance of CSD has attracted much attention [5]. CSD has been considered to be involved in migraine with aura, concussion, epilepsy, and ischaemic stroke [1]. Both electrophysiological [6] and imaging [7] recordings indicate that CSD does occur in the human cortex. Recently, a study has revealed that CSD may trigger headache through the activation of certain neuronal channels [8]. In the ischaemic brain, the propagation of CSD waves can induce secondary damage by deteriorating the damaged brain tissue, and a significant correlation has been found between the number of CSD waves and the evolving infarct volume. Thus, the spread of CSD waves has been proposed as a way to monitor the evolution of stroke [4].

Although there have been numerous studies of CSD, the mechanisms of CSD propagation and initiation are still not completely understood. A putative reaction–diffusion (RD) model of CSD propagation has been introduced [1] (Fig. 13.2). CSD can be triggered by a variety of stimuli, such as pinprick, electrical stimulation, and high potassium application. The stimulus depolarises the local neurons and causes the extracellular potassium concentration ([K+]o) to increase. The massive rise in [K+]o may be sufficient to depolarise neighbouring cells by diffusion, and the depolarisation of neurons then forms a contiguous spreading wave. This is primarily a positive feedback driving the self-regenerative processes during CSD. In addition, the activation of the Na,K-ATPase and glial buffering may provide negative feedback for the recovery from CSD. A pioneering researcher of CSD, Charles Nicholson, once remarked [5]: "No matter how many channel proteins we sequence, how many neuromodulators we identify and how many neural networks we construct, if we cannot explain spreading depression, we do not understand how the brain works". Many experimental and computational studies have been performed to explore CSD. It is really difficult to perform detailed experiments that modulate and record the various variables involved in CSD; a mathematical modelling approach makes it easier to handle numerous parameters and facilitates our understanding of CSD. Here, we will focus on computational simulations.

13.1.2 Models of CSD

So far, many computational models of CSD have been built. The first mathematical model was suggested by Alan L. Hodgkin, Andrew F. Huxley, and Bernice Grafstein [9]. CSD was described by a reaction-diffusion equation (RDE) consisting of a single diffusion equation and a cubic reaction term, Eq. 13.1. It is also called the bistable RDE. Why is it called a bistable equation? Because the local kinetics dK/dt = f(K) has two stable steady states, the resting concentration K0 and the maximal concentration K2, separated by the unstable threshold K1:

$$\frac{\partial K}{\partial t} = D\frac{\partial^2 K}{\partial x^2} + f(K), \qquad f(K) = -A(K - K_0)(K - K_1)(K - K_2). \qquad (13.1)$$


Fig. 13.1 Depression of EEG (a) and optical imaging (b) of CSD recorded from rat brain cortex. In the 2D optical recording, CSD shows as a moving wave

Fig. 13.2 A putative reaction-diffusion model of CSD initiation and propagation (adapted from [1]). Both positive feedback (e.g., K+ and glutamate release and diffusion) and negative feedback (glia buffering) are believed to be involved during CSD

Here, the potassium ions are taken as the only driver of CSD. The putative mechanism of CSD is the reaction-diffusion hypothesis involving potassium ions. Equation 13.1 can be solved analytically. However, the travelling wavefront is not a wave pulse. What is the pattern of the wavefront? A rational description of CSD needs the inclusion of a recovery process. In James A. Reggia and David Montgomery's work, an RDE with recovery was introduced [10]. The recovery variable R was described by a differential equation; it contained a fourth-order polynomial and parameters with only indirect physiological meaning. Here, it is slightly modified to a new form, Eq. 13.2:

$$\frac{\partial K}{\partial t} = D\frac{\partial^2 K}{\partial x^2} + f(K, R), \qquad f(K, R) = -A(K - K_0)(K - K_1)(K - K_2) - RK, \qquad \frac{dR}{dt} = B(K - K_0 - CR). \qquad (13.2)$$

Further, Henry C. Tuckwell and Robert M. Miura developed a system of RDEs [11]. The concentrations of the various ions in both extracellular and intracellular spaces have been taken into consideration, and the functions of transmitters, osmotic forces, and spatial buffering are involved in the model system. There are two general equations:

$$\frac{\partial C_j^e}{\partial t} = D_j\frac{\partial^2 C_j^e}{\partial x^2} + g_j(V - V_j) + P_j, \qquad \frac{\partial C_j^i}{\partial t} = -\frac{\alpha}{1-\alpha}\left[g_j(V - V_j) + P_j\right]. \qquad (13.3)$$

13.1 Cortical Spreading Depression (CSD)

191

Fig. 13.3 Schematic drawing of Shapiro’s model [12]. Different kinds of ion channels and pumps distributed in both neuron and glia membrane. Neurons are connected by gap junctions. The endoplasmic reticTable 13.1 Classification of mathematical models for CSD

ulum is treated as a Ca 2+ buffer. Water may enter or leave the cell in response to osmotic gradients

Class of model Reaction-diffusion model

Single neuron model

Cellular automata model

Magnetic dipole model

used for simulating CSD phenomena [11]. In extracellular space, the model demonstrates the travelling solitary waves of increased potassium and decreased calcium. Also, the annihilation of colliding CSD waves can be simulated. Obviously, the above-mentioned models are all based on potassium diffusion in extracellular space. In 2001, Bruce Shapiro provided a novel model integrating the effects of gap junctions, specific ion channels, cytosolic voltage gradients, and osmotic volume changes [12]. Similarly, cortical tissue is simplified as a one-dimensional continuum with both extracellular and intracellular spaces. Here, the intracellular compartment represents a syncytium of neurons connected by gap junctions (Fig. 13.3). Besides the standard reactiondiffusion equations are applied to describe the interstitial ionic concentrations, the electrodiffusion equations are used to delineate the cytosolic ionic concentrations due to the addition of gap junctions. The whole model system has 29 state variables and 18 Hodgkin-Huxley (HH) variables. Of course, the most important mechanism associated with CSD is the ion currents especially that play significant roles in neuron

Primary feature of model Driven by extracellular (and intracellular) diffusion of ions and transmitters Triggered by transmembrane ion fluxes and concentration changes between extracellular and intracellular spaces Presenting CSD via finite state automaton transmission Modelling magnetic field of brain surface evoked by CSD

References Tuckwell [11], Shapiro [12] Kager et al. [13], Makarova et al. [14] Reshodko and Bures [15], Chen et al. [16] Tepley and Wijesinghe [17]

activities. A simple mathematical model that describes these channel currents for each ion is the HH formula. A cellular level schematic drawing of the model is shown in Fig. 13.3. So far, we have introduced a series of reaction-diffusion models for CSD. In fact, there are some other types of CSD models (Table 13.1), including the single neuron model [13, 14], cellular automata model [15, 16], and magnetic dipole model [17]. In the single cell model of CSD, a hippocampal pyramidal cell in the CA1 region is routinely used [13, 14]. The reconstructed morphology of the neuron is subdivided into segments for canonical computations in the NEURON and GENESIS modelling environment. Rich ion channels and pumps are distributed in the membrane. Hodgkin and Huxley equation and Goldman-Hodgkin-Katz current equation are usually used for computing membrane current and voltage. Even the volume of the cell may be considered with osmotic forces. In addition, thousands of interacted neurons can be modelled with this kind of approach. However, the RDE is not necessarily used in the single cell model.

192

13 Modelling Complex Phenomena in Physiology

Finite state automaton (i.e., cellular automaton) has been used to model CSD in the 2D homogeneous network of cortex [15]. In this descriptive model, the cortex is simulated as a planar array of mutually interconnected cells. Each cell is in one of three states: quiescent, depressed, or refractory. The state of each cellular automaton at next time point is determined by its own state and the states of the 4 or 8 adjacent cells at current time. In another word, each cell forming a 2D lattice is connected to 4 (von Neumann neighbourhood) or 8 surrounding cells (Moore neighbourhood). A similar study employs the lattice-cellular automaton to simulate potassium buffering mechanisms in a two-dimensional brain-cell system [18]. In another aspect, since ionic current exists during CSD, magnetic dipoles are applied to simulate the magnetic fields [17]. This would be helpful to guide the detection of CSD with magnetic signals. As we have known, magnetoencephalography is a neuroimaging technique for mapping brain activity by recording the very weak magnetic fields produced by electrical currents occurring in the brain. In some other models, neurovascular coupling has been modelled [19]. The interaction between metabolic state and CSD can be studied in this framework.

13.1.3 Applications of CSD Models For the reaction-diffusion type mathematical model of CSD, we usually need to apply numerical approach to solve the

Kjn+1 − Kjn Δt

=



K1n n ⎢ Δt DΔt ⎢ K2 n k= , r= , K = ⎢ . ⎣ .. 2 (Δx)2 KJn 2 −2





f1n ⎥ ⎢fn ⎥ ⎢ 2 ⎥ , Fn = k ⎢ .. ⎦ ⎣ . fJn

0 ... ... 2 0 ...

1 −2 .. .. . . ... 0 ... ...

equations, except for Eq. 13.1 in this chapter. The CrankNicolson method has been used effectively. Here, we take Eq. 13.2 as an example and introduce the Crank-Nicolson method in Fig. 13.4 briefly. In the discretised space, Δx and Δt are the spatial and temporal step lengths, respectively. The state variable of potassium concentration at time tn+1 and location xj is related to the values at the two surrounding grid points xj ±1 at tn+1 , and the other neighbouring three points at the previous time step tn .

D 1 n+1 n+1 n n n (Kjn+1 + Kjn+1 − fjn ). +1 − 2Kj −1 + Kj +1 − 2Kj + Kj −1 ) + (fj 2 2(Δx) 2

Here, f is used to present the value of reaction term. For the boundary spatial endpoints (j = 1 and J ), Eq. 13.4 has a different formula by taking mirror symmetry values. After matrix transformation from Eq. 13.4 by defining

⎧ −2 ⎪ ⎪ ⎪ ⎪ 1 ⎪ ⎪ ⎪ ⎪ r⎨ 0 Dn = . 2⎪ ⎪ ⎪ .. ⎪ ⎪ ⎪ ⎪ 0 ⎪ ⎩ 0

Fig. 13.4 Schematic drawing for the Crank-Nicolson method. The 2D spatiotemporal space is discretised into individual grids. The implicit relationship of state variables among the boxed six different grid points (four grid points at boundary) is investigated. Reprinted with permission from Ref. [20]

1 0 .. .. . . 1 −2 0 2

⎫ 0⎪ ⎪ ⎪ 0⎪ ⎪ ⎪ ⎪ .. ⎪ ⎬ . , .. ⎪ .⎪ ⎪ ⎪ ⎪ ⎪ 1⎪ ⎪ ⎭ −2

⎤ ⎥ ⎥ ⎥, ⎦

(13.5)

(13.4)

we can get the matrix equation as below: Kn+1 − Kn = Dn+1 Kn+1 + Dn Kn + Fn+1 + Fn .

(13.6)

Then, Eq. 13.6 can be transformed into Kn+1 = (I − Dn+1 )−1 [(I + Dn )Kn + Fn+1 + Fn ], (13.7) where I is the eye matrix and “−1” indicates matrix inversion. Since Eq. 13.7 was implicit in F n+1 , the iteration method was used until the sum of the potassium concentration at all spatial points converged with small difference between two successive iteration steps. All the computation and visualisation of the model were implemented in the MATLAB environment (the MathWorks Inc., USA). And all the parameters used in the model are listed in Table 13.2.

13.1 Cortical Spreading Depression (CSD) Table 13.2 Basic parameters for the RD model of CSD

193 Symbol D K0 K1 K2 A B C Δx Δt J N

Description Diffusion constant Resting K+ concentration Threshold K+ concentration for CSD Maximal K+ concentration during CSD Rate constant of K+ dynamics Constant for recovery Constant for recovery Spatial step of discretisation Time step of discretisation Integer for spatial steps Integer for time steps

Value (unit) 1960 μm2 /s 3 mM 20 mM 60 mM -0.0006 mM−2 · s−1 0.003 mM−2 ·s−1 5 mM· s 10 μm 0.05 s 100 1000

Fig. 13.5 Spatiotemporal evolution of CSD wave. (a) CSD wave simulated from Eq. 13.1. (b) The waveforms from two different spatial points 0.1 mm and 0.4 mm. (c) CSD wave from Eq. 13.2. (d) The waveforms from two different points 0.1 mm and 0.4 mm

In fact, both Eqs. 13.1 and 13.2 can be solved numerically with the Crank-Nicolson method. To make easy comparison, we set the same stimulus conditions for the both equations. The initiation of CSD is started at time point t = 0 s around the distance −0.05 ∼ 0.05 mm with a 30 mM potassium concentration. The spatiotemporal evolution of CSD without recovery is shown in Fig. 13.5a. It propagated symmetrically to the left and right place, so the edges formed a “∧” pattern. Both sides of the “∧”pattern have the same constant slope, suggesting the uniform speed of propagation. In fact, the speed calculated from the reciprocal of the slope was 3.19 mm/min. In the two different spatial points 0.1 mm and 0.4 mm, the temporal profiles of potassium concentration are

plotted in Fig. 13.5b. The waveforms were similar except for the latency. From the delay, the speed was also calculated as 3.19 mm/min. Although the media tissue (i.e., cortex) of CSD is assumed as one-dimensional continuum, the computation results can be extended into 2D spatial array by symmetry in Fig. 13.6. In this case, each individual CSD is visualised as a concentric wave initiating from the stimulation point. The mergence but not annihilation of two colliding waves of CSD is demonstrated. This is consistent with our results of optical imaging [20]. In addition, the dynamics of CSD depending on the parameters is studied. We have validated that the speed of CSD

194

13 Modelling Complex Phenomena in Physiology

Fig. 13.6 Spatiotemporal evolution of two colliding CSD waves visualised in 3D graphs. The initiation condition is shown as 30 mM potassium around (5 mm, 2.5 mm) and (5 mm, 7.5 mm) at time point t = 0 s. Reprinted with permission from Ref. [20]

is constrained by the following equation: V =

K0 + K2 − 2K1 √ AD. √ 2

(13.8)

With the parameters provided in Table 13.2, we can get a speed around 3.35 mm/min. This means the speed of CSD is about 50–60 μm/s. It is quite slower than the speed of nerve transmission. The simulation work allies CSD with the emerging concepts of volume transmission rather than synaptic transmission [21]. What is more, the recovery term related to refractory period has been addressed recently. Our model has mimicked that the continuous injection of KCl solution can induce repetitive CSD waves [22]. Usually, the first wave of CSD has a faster speed and a larger amplitude than those of the following secondary waves. Since the relative refractory period lasts much longer than the recovery time of ions turbulence, an early stimulation might lead to a late initiation of CSD, i.e., “haste makes waste” (see Fig. 13.7). If the induction interval is long enough for recovery, a series of CSD waves may present the same profile as the first one. The amplitude and velocity of CSD waves are observed increasing with the initiation interval and asymptotic to those of the first CSD wave. This simulation indicates that the spatiotemporal evolution of CSD waves is modulated by the

relative refractory period. The state of the tissue’s recovery is found related with the variation of successive CSD waves. It suggests that the refractory period is critical for preventing undesirable CSD waves. If we check the 2D spatial pattern of CSD, we will see that the refractory period modulates the spatiotemporal evolution of CSD [23]. L.V. Reshodko and J. Bures ever applied the technique of cellular automata (CA) to model a wave of CSD propagating in a circular pathway around an obstruction [15]. The simplicity of CA provides a useful tool to study CSD. With a purposed generalised cellular automata (CA) model, we have repeated various properties of CSD [16]. In the model, we exploit a generalised neighbourhood connection rule: a central CA cell is connected with a group of surrounding CAs with different weight coefficients. The new CA model simulates several properties of CSD: (1) expanding from the origin site as a circular wave; (2) annihilation of two colliding waves travelling in opposite directions; and (3) the wavefront of CSD breaking and recombining when and after encountering an obstacle (Fig. 13.8). In fact, inhomogeneous propagation of CSD is simulated with high fidelity by setting different refractory periods in the different CA lattice fields, or different connection coefficients in different directions within the defined neighbourhood. The simulation results are analogous to the time-varying CSD waves observed by optical imaging. With


Fig. 13.7 The time interval of stimulation modulates the propagation dynamics of CSD waves in the simulation study. For all three simulated trials (a)–(c), each stimulation is a 20 mM KCl injection lasting a 1 s transient, but with different time intervals after the first stimulation. Top row: no CSD is induced after the second injection. In the middle and bottom rows, a later injection can induce an earlier appearance of CSD, indicating that haste makes waste. Each up arrow represents one injection of KCl. All variables are normalised in arbitrary units. Reprinted with permission from Ref. [22]

Fig. 13.8 Reverberating CSD wave simulated with the proposed generalised CA model. The blue circle is set as an obstacle. Reprinted with permission from Ref. [16]

reaction-diffusion based modelling, we have shown that the refractory period modulates the spatiotemporal evolution of CSD [22, 23]. Temporarily assigning some part of the CA cells to the rest state is similar to the experimental condition of "anodal polarization block". When the anodal block is removed, the ongoing wavefront starts to spread into the previously blocked area, gradually turning around and penetrating into the region recovering from the original CSD wave [16]. In this way, a spiral CSD wave is simulated (Fig. 13.9). This is consistent


Fig. 13.9 Spiral CSD wave simulated with the proposed generalised CA model. The blue area is set as the condition of “anodal polarization block”. Reprinted with permission from Ref. [16]

with the computational results from the PDE model, but with much less computation. We can also use the model to study CSD and calcium dynamics simultaneously [24]. We have presented a computational model to quantify the relative contributions of the different Ca2+ flows from the extracellular space, the cytoplasm, and the endoplasmic reticulum to the generation of spontaneous Ca2+ oscillations and CSD-triggered Ca2+ waves in a one-dimensional astrocyte network. This model has revealed that the spontaneous Ca2+ oscillations depend mainly on Ca2+ released from the endoplasmic reticulum of astrocytes, while the CSD-triggered Ca2+ waves depend primarily on voltage-gated Ca2+ influx. It predicts that voltage-gated Ca2+ influx is sufficient to produce Ca2+ waves during CSD even after depleting the internal Ca2+ stores. So far, there has been a wealth of mathematical models of CSD. The primary objective in CSD modelling is to determine which possible mechanisms involved in CSD are essential for the phenomenon to occur, and to what extent they interact to generate the emergent phenomenon of wave propagation. Reviewing the different types of models, we can conclude that many of them have provided qualitatively or quantitatively consistent solutions with the experimental observations, and different models may give similar insights on CSD. Modelling of CSD will certainly become a key tool to explore its mechanisms.
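Before turning to the questions, here is a minimal MATLAB sketch making the generalised neighbourhood connection rule concrete. The weight matrix, threshold, and refractory length are illustrative choices, not the calibrated parameters of Ref. [16]; anisotropy can be introduced through W, and an obstacle by permanently pinning a region in the refractory state.

% Minimal sketch of a generalised-neighbourhood CA for CSD-like waves.
% States: 0 = resting, 1 = excited, 2..(1+R) = refractory countdown.
N  = 100;                               % lattice size
R  = 12;                                % refractory period [steps]
th = 0.3;                               % excitation threshold
W  = [0.5 1 0.5; 1 0 1; 0.5 1 0.5];     % neighbourhood weight coefficients
W  = W / sum(W(:));                     % normalise the weights
S  = zeros(N);                          % all cells resting
S(50, 48:52) = 1;                       % excited seed (stimulation site)
for t = 1:300
    E     = double(S == 1);             % map of excited cells
    drive = conv2(E, W, 'same');        % weighted input from the neighbourhood
    newS  = S;
    newS(S == 0 & drive >= th) = 1;     % resting -> excited
    newS(S == 1) = 1 + R;               % excited -> start refractory
    newS(S > 2)  = S(S > 2) - 1;        % refractory countdown
    newS(S == 2) = 0;                   % refractory over -> resting
    % An obstacle could be imposed here, e.g. newS(40:60, 20) = 1 + R;
    S = newS;
    imagesc(S, [0 1+R]); axis image; drawnow;   % watch the wavefront spread
end

Colliding fronts annihilate in this sketch because each wave leaves a refractory zone behind it, matching property (2) listed above.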

13.1.4 Questions

1. Play the "Game of Life" in MATLAB with the command "life". Read the rules below and fill in the grid for the next 4 steps; a black square denotes a live cell, and the "neighbours" are defined as the 8 cells around the target cell. For a space that is "populated":
   a. Each cell with one or no neighbours dies, as if by loneliness.
   b. Each cell with two or three neighbours survives.
   c. Each cell with four or more neighbours dies, as if by overpopulation.
   For a space that is "empty" or "unpopulated", each cell with exactly three neighbours becomes populated. (A minimal MATLAB sketch of one update step is given after this question list.)


2. Design and implement a cellular automaton-based model of CSD.
3. Compare the two concepts: synaptic transmission and volume transmission.
4. Equation 13.1 in this chapter is called the bistable reaction-diffusion (RD) equation. It has 3 equilibria. Identify the stable equilibria and explain why they are stable.
5. Equation 13.2 in this chapter has two variables. Use the Crank–Nicolson method to solve Eq. 13.2 and obtain the time courses of the two variables, paying attention to their peak times.
6. Equations 13.2 and 13.3 form a system of RD equations. List the possible physiological processes and introduce some basic ion models involved in the system.
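For question 1, one synchronous update step of the Game of Life can be written in a few lines of MATLAB; lifeStep is a hypothetical helper name, and iterating it four times from the printed pattern fills in the requested steps.

function G = lifeStep(G)
% One synchronous update of Conway's Game of Life.
% G is a 0/1 matrix: 1 = live ("black") cell, 0 = empty cell.
G = double(G);                        % ensure numeric input for conv2
K = [1 1 1; 1 0 1; 1 1 1];            % kernel counting the 8 Moore neighbours
n = conv2(G, K, 'same');              % live-neighbour count per cell
G = double((G == 1 & (n == 2 | n == 3)) ...  % survival with 2 or 3 neighbours
         | (G == 0 & n == 3));               % birth on exactly 3 neighbours
end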

13.2 Heart Physiome

13.2.1 Cardiovascular System

The cardiovascular system (from the Latin words for "heart" and "vessel"), also called the circulatory system or the vascular system, consists of the heart, the blood vessels, and the blood [25]. A physical model of the heart is shown in Fig. 13.10. The total volume of blood in the cardiovascular system is about 5 litres. William Harvey once wrote: "The Heart of all creatures is the foundation of life, …from whence all strength and vigour flows." The cardiovascular system is an important organ system that mainly functions to [26]: (1) circulate blood and transport oxygen, nutrients, and metabolic wastes; (2) send diverse hormones to different target organs and modulate the whole organism's activity; (3) maintain the normal body

Fig. 13.10 A physical model of human heart. The labelled numbers are used to indicate different parts of the heart: for example, 15 for right ventricle and 22 for left ventricle


temperature by distributing the heat to the skin; and (4) perform immunoprotection by delivering white blood cells and antibodies. The cardiovascular system is considered as two parts: the systemic circulation and the pulmonary circulation. The systemic circulation sends oxygenated blood from the left ventricle to all tissues of the body except the heart and lungs, evacuates wastes from body tissues, and returns deoxygenated blood to the right side of the heart. The left atrium and left ventricle (see Fig. 13.10) together form the pumping chambers for the systemic circulation loop. The pulmonary circulation transports deoxygenated blood from the right ventricle to the lungs, where the blood binds oxygen and returns to the left atrium. The pumping chambers of the pulmonary circulation loop are the right atrium and right ventricle. The heart is a four-chambered "double pump" that drives the blood flow in the closed vascular network of arteries, veins, and capillaries. Blood is a fluid comprising plasma, red blood cells, white blood cells, and platelets. Each side of the heart (left or right) operates as a separate pump, separated from the other by a muscular wall of tissue known as the septum of the heart. Each heartbeat results in the coordinated pumping of both sides of the heart, making the heart a very efficient pump. In a healthy human the heart beats continuously, at approximately 1 beat per second. Each heartbeat consists of a cascade of depolarisation (and repolarisation). It starts in the sinoatrial node (the heart's pacemaker), spreads through the walls of the atria, and passes through the atrioventricular node, the bundle of His, and the Purkinje fibres, spreading down and to the left throughout the ventricles. Electrocardiography (ECG or EKG) is the method of recording the electrical activity of the heart over a period of time by using electrodes placed on the skin. These electrodes detect the tiny electrical changes generated by the heart muscle as it depolarises and repolarises during each heartbeat. Willem Einthoven invented the first practical ECG in 1903 and received the Nobel Prize in Medicine in 1924 "for the discovery of the mechanism of the electrocardiogram". Today, the ECG is a very commonly performed cardiology test. Figure 13.11 shows an example of an ECG recording. To quantify the performance of the heart, the cardiac output is routinely used. It is defined as the amount of blood pumped per ventricle per unit time. For an adult, the cardiac output is about 5 L/min at rest. Obviously, both the heart rate and the stroke volume affect the cardiac output. In fact, the stroke volume of the heart increases in response to an increase in the end-diastolic volume of the ventricle; this is the Frank–Starling law. Many diseases affect the circulatory system. In this chapter, we will take familial hypertrophic cardiomyopathy (FHC) as an example. FHC is an inherited disease that is


Fig. 13.11 An example of ECG recording

caused by sarcomeric protein gene mutations [27]. It is a heart condition characterised by thickening (hypertrophy) of the heart muscle. FHC carries an increased risk of sudden cardiac death.
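As a quick worked example of the cardiac output relation described above (using typical resting values of about 70 beats/min and a 70 mL stroke volume, which of course vary between individuals):

$$CO = HR \times SV \approx 70\ \mathrm{min^{-1}} \times 0.07\ \mathrm{L} \approx 4.9\ \mathrm{L/min},$$

consistent with the ∼5 L/min resting cardiac output quoted above; over a day this amounts to roughly 4.9 × 60 × 24 ≈ 7000 L (cf. question 1 in Sect. 13.2.4).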

13.2.2 Heart Physiome

The heart occupies a central place in physiology, and the heart physiome [28] has become an active project within the whole IUPS Physiome Project (http://physiomeproject.org/research/cardiovascular). It is listed as the first research project among the 12 organ systems. It is designed as "an integration of large-scale computer modelling with experimental studies to understand the mechanisms that underlie reentrant arrhythmia and fibrillation in the heart". For instance, the Wellcome Trust Heart Physiome Project is a 5-year international collaboration between the universities of Auckland and Oxford to develop a multi-scale modelling framework of the heart for addressing scientific and clinical questions. The three linked goals and the participants of the project are shown on the webpage (Fig. 13.12):

Project Goals
1. To understand how cardiac arrhythmias at the whole heart level can occur as a result of ion channel mutations, ischaemia, and drug toxicity.
2. To demonstrate the use of integrative multi-scale modelling (at the levels of proteins, biochemical pathways, cells, tissues, and whole organs) as "proof of concept" for further such physiome projects.
3. To provide electro-cardiologists with web-accessible software tools and databases for use in generating new hypotheses and interpreting complex experimental data.

Project Participants
• Professor David Paterson, University Laboratory of Physiology, Oxford
• Professor Peter Hunter, Bioengineering Institute, University of Auckland, New Zealand
• Professor Denis Noble, University Laboratory of Physiology, Oxford
• Professor Richard Vaughan-Jones, University Laboratory of Physiology, Oxford
• Professor Mark Sansom, Dept Molecular Biophysics, University of Oxford
• Dr David Nickerson, Bioengineering Institute, University of Auckland, New Zealand
• Dr Chris Bradley, University Laboratory of Physiology, Oxford

On the other hand, there is the Cardiac Physiome Society (https://www.cardiacphysiome.org/). Its mission is "to promote integrative multi-scale simulation and analysis of cardiac physiology in health and disease, spanning the full breadth of cardiac functions and all scales of biological organization from molecule to patient". It "aims to encourage and facilitate international collaboration, cooperation,


Fig. 13.12 Snapshot of the heart physiome project

sharing, exchange, and interoperability in basic, translational and clinical research data, models and technology development by various means including the organization of an annual Cardiac Physiome Workshop". Over the last two decades, heart modelling has advanced considerably. At the cellular level, electrophysiological, metabolic, and excitation–contraction coupling processes have been combined to build cardiac myocyte models. At the tissue level, structural details and mechanical conduction properties are integrated to construct continuum models. At the whole heart level, anatomically detailed information has been incorporated with the cellular models to study the electrical and mechanical behaviours of the whole organ.

13.2.3 Multi-Level Modelling

Different computational models have been developed to study the cardiovascular system. At the single cell level, the pioneer Denis Noble has built a series of models; he proposed the first model describing the action potential of cardiac Purkinje fibre cells in 1962 [29]. The Luo–Rudy model is also extensively used in heart studies (http://rudylab.wustl.edu/). Some researchers have developed multi-scale simulation models of the human circulation. In 2006, a comprehensive ventricular cell model was combined with a lumped circulation model [30]. This work was used to investigate the relationship between circulatory blood pressure dynamics and myocardial cellular events. In 2012, another multi-level model was developed [27]. The model was used to address the inherited disease familial hypertrophic cardiomyopathy (FHC). FHC is related to sarcomeric protein gene mutations, but the mechanism by which these mutant proteins cause disease is unclear. Previous studies had revealed that cardiac troponin I (CTnI) gene mutations mainly alter myocardial performance via increases in the Ca2+ sensitivity of cardiac contractility. How does the genetic profile affect FHC? Bo Wu et al. implemented an integrated simulation to investigate the alterations in myocardial contractile function and energy metabolism regulation as a result of the increased Ca2+ sensitivity of CTnI mutations (see Fig. 13.13). The model was based on experimental data of guinea pig and rat at body temperature (∼37 °C). Firstly, six typical mutations of CTnI (R145G, R145Q, R162W, G203S, K206Q, and ΔK183) showing enhanced Ca2+ sensitivity of force generation were imported into the model. The apparent pCa50 values were used to quantify the CTnI mutation-induced changes in the binding affinities of Ca2+ to CTnC. The pCa50 values of both the wild type (WT) and the mutants were obtained from the fibre mechanics data sets of Takahashi-Yanaga et al. [31] and converted to an apparent binding constant Km,TRPN.
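For orientation, converting a fibre-mechanics pCa50 into a Ca2+ concentration is a one-liner. The MATLAB sketch below assumes the standard definition pCa = −log10([Ca2+] in mol/L); the pCa50 values are illustrative numbers, not the data of Ref. [31], and the exact mapping to Km,TRPN used in the model is described in Ref. [27].

% Converting a measured pCa50 into a half-activation Ca2+ concentration.
% Assumes pCa = -log10([Ca2+] in mol/L); values below are illustrative.
pCa50_WT  = 5.6;                    % wild type (illustrative)
pCa50_mut = 5.9;                    % mutant with enhanced sensitivity (illustrative)

Ca50_WT  = 10^(-pCa50_WT);          % [Ca2+] at half-maximal force [mol/L]
Ca50_mut = 10^(-pCa50_mut);
fprintf('Half-activation [Ca2+]: WT %.2e M, mutant %.2e M\n', Ca50_WT, Ca50_mut);
% A larger pCa50 (the mutant) means half activation at a LOWER [Ca2+],
% i.e., a higher apparent binding affinity, which is what enters Km,TRPN.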


Fig. 13.13 An integrated model for myocardial contraction and metabolic network is implemented in MATLAB [27]. A simplified myocardial metabolic network, containing eight crucial pathways, is embedded in the graphic user interface (GUI). Here, the blood vessel enables glucose, lactate, fatty acids, and O2 to transfer between blood and the neighbouring myocytes. PC, phosphocreatine; ETC, electron transport chain; TCA Cycle, tricarboxylic acid cycle; NSR, network sarcoplasmic reticulum; JSR, junctional sarcoplasmic reticulum. This snapshot is reprinted with permission from Prof. Liu

Secondly, the Luo–Rudy electrophysiological model [32] was coupled to the Rice four-state cross-bridge cycling cardiac muscle model [33]. The calculated binding constant Km,TRPN is associated with the Ca2+ buffering of the sarcoplasm and the activation of the cross-bridge. In this way, the model could be applied to investigate the multiple effects of the increased Ca2+ sensitivity of different CTnI mutations on the Ca2+ transient, cardiac muscle force generation, and ATP consumption of the cross-bridge. Thirdly, the three primary ATP utilisation pathways were included: actomyosin-ATPase for contraction, the sarcolemmal Na+,K+-ATPase, and the sarcoplasmic reticulum Ca2+-ATPase (SERCA) for the transport of ions. They were modelled to reflect the energy flow. The ATP-related metabolic network in the myocardium, including eight crucial metabolic pathways, was reconstructed based on our previous work [34]. Then, the ATP-producing and ATP-consuming processes were linked by using the M-DFBA approach (see Chap. 6) to examine the systemic behaviours of myocardial energy demand and supply in the case of increased myofilament Ca2+ sensitivity. More details on the modelling can be found in the supplementary materials of Ref. [27]. What is more, a GUI-based

MATLAB toolbox for the simulation can be downloaded from the link.1 This model links the electrophysiology, contractile activity, and energy metabolism of the myocardium, in particular the electromechanical ([Ca2+]i transients, isometric twitches, and half-relaxation times) and energetic metabolic functions (ATP consumption and metabolic networks) of the mammalian cardiac myocyte in response to the enhanced Ca2+ sensitivity resulting from CTnI mutations (Fig. 13.14). This simulation study reproduced several typical features of FHC: "(1) diastolic dysfunction (i.e., slower relaxation) caused by prolonged [Ca2+]i and force transients; (2) higher energy consumption with increased Ca2+ sensitivity; (3) enhanced glucose utilisation and reduced fatty acid oxidation in hypertrophied heart metabolism. Furthermore, the simulation demonstrated that in the case of high energy consumption (more than an 18.3% increase in total

1 http://onlinelibrary.wiley.com/store/10.1113/expphysiol.2011.059956/asset/supinfo/Matlab_Programm.rar?v=1&s=ac143d7f10a2751feb191f965dfd9514e16cbeb4


Fig. 13.14 Snapshot of the simulation on myocardial contraction. This snapshot is reprinted with permission from Prof. Liu

energy consumption), the myocardial energetic metabolic network switches from a net consumer to a net producer of lactate. It results in a low coupling of glucose oxidation to glycolysis, which is a common feature of hypertrophied hearts." This study provides a novel systematic analysis of myocardial contractile and metabolic processes. It suggests that alterations in resting heart energy supply and demand could contribute to disease progression, and it is helpful for investigating and elucidating the pathogenesis of FHC. In fact, the aforementioned modelling has been developed further by combining a lumped parameter model of the circulation with a 3D structural model of the heart, the anatomical model being constructed from the Dataset of the Visible Chinese Human. This is a representative work of the heart physiome.

13.2.4 Questions

1. The heart is important to everyone. Have you paid attention to the quantities of your own cardiovascular system, such as heart rate, blood pressure, and cardiac output? Roughly calculate the total output of your heart within a day.
2. Cellular automata (CA) have been used for heart modelling. Can you build a CA model to simulate the ECG?


3. The ECG is commonly used in cardiology. What can you read from the curves of an ECG?
4. Describe the cardiac cycle and draw a pressure–volume loop for the left ventricle.
5. Read a recent paper on cardiovascular system modelling and introduce the model's objective, qualitative model, core equations, and validation.

13.3 Modelling of Kidney Autoregulation

13.3.1 Renal Physiology

The kidneys serve multiple functions: regulation of water and electrolyte balance, excretion of metabolic products, secretion of hormones, and regulation of arterial pressure. The kidneys work as a purifying filter for the blood. Adults have around 5–6 litres of blood, which constantly flows through the kidneys, as much as 400 times per day, and 19% of the renal plasma flow is filtered. The kidneys reabsorb and redistribute 99% of the ultrafiltrate; only 1% of it becomes urine. The kidneys provide important long-term regulation for the cardiovascular system. They maintain the volume and composition of the body fluids within a narrow range and produce hormones that affect blood vessels. Disruption of kidney function can cause hypertension—a prevalent disease in modern society. Since the kidneys are perfused with blood, they are exposed to fluctuations of the blood pressure at two different time scales. One periodicity is at the frequency of the heart rate, and the other has a period of 24 h. Over the intermediate band between these periodicities, the blood pressure displays 1/f fluctuations2 caused by the independent actions of arterioles in the body. The kidneys protect their own function against short-term variations in the blood pressure and reduce the severity of the 1/f pattern. The process of autoregulation takes place at the level of the individual nephron.
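As a rough arithmetic check of the figures just quoted (all values approximate): 5.5 L of blood passing through the kidneys 400 times per day corresponds to a renal blood flow of about 2200 L/day; with plasma making up roughly 55% of the blood volume, this is about 1200 L/day of renal plasma flow; filtering 19% of it yields on the order of 230 L/day of ultrafiltrate, of which 1%, i.e., roughly 2 L/day, leaves the body as urine.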

13.3.1.1 Nephron as Functional Unit

Figure 13.15 (left panel) illustrates the main structure of a nephron. Blood enters the system through the afferent arteriole, whose diameter changes to regulate the blood flow. At the glomerulus, the blood passes a system of capillary loops, where 25–35% of the water, together with blood constituents of low molecular weight, is filtered out into the tubules, while the blood cells and proteins are retained. When the colloid osmotic pressure balances the hydrostatic pressure difference between the blood and the filtrate in the tubule, the filtration process saturates, maintaining homeostasis. The blood leaves the glomerulus through the efferent arteriole.

The other components of a single nephron are the proximal tubule, the loop of Henle, the distal tubule, and the collecting duct. The proximal tubule is located within the outer layer of the kidney—the cortex—and is accessible for pressure measurements. From the proximal tubule, the nephron traverses down through the renal medulla, forming the descending and ascending limbs of the loop of Henle. On re-entering the cortex, the ascending limb becomes the distal tubule. The terminal part of the ascending limb passes immediately by the afferent arteriole of the same nephron and forms the basis for the tubuloglomerular feedback (TGF). This is a striking anatomical feature. The composition of the filtrate in the proximal tubule is similar to the composition of the blood plasma but lacks large molecules. Without changing the composition much, the proximal tubule reabsorbs approximately 2/3 of the water and salts. As the filtrate goes through the descending limb of the loop of Henle, the concentration of NaCl in the interstitial fluid surrounding the tubule increases significantly, and osmotic processes trigger reabsorption of water. The ascending limb, on the other hand, is practically impermeable to water; its epithelial cells contain molecular pumps that transport sodium and chloride from the tubular fluid into the surrounding interstitium. In this way, the NaCl concentration of the tubular fluid is decreased. Near the terminal part of the Henle loop, the macula densa cells monitor the NaCl concentration. The signal produced in response to variations of the NaCl concentration affects the Ca2+ conductance of vascular smooth muscle cells and activates vasoconstriction/dilation mechanisms; this affects the glomerular filtration rate, providing negative feedback.

13.3.1.2 Mechanisms of Autoregulation

The Tubuloglomerular Feedback
The tubuloglomerular feedback is a regulation mechanism specific to the kidney. It leads to vasoconstriction of the afferent arteriole in response to an increase of the NaCl concentration at the macula densa. The salt reabsorption from the ascending limb of the loop of Henle is an active and more rate-limited process than the passive diffusion of water out of the descending limb. The NaCl concentration at the macula densa therefore correlates with the rate of tubular flow: the larger the flow, the higher the concentration. Thus, an increase in arterial pressure will increase tubular flow due to enhanced glomerular filtration and reduced tubular reabsorption. This will raise the NaCl concentration at the macula densa and initiate afferent arteriolar vasoconstriction, providing restoration of filtration and autoregulation of renal blood flow. The TGF system causes a delayed response due to:
• transmission of the signal through the tubular system from the glomerulus to the macula densa;

2 1/f fluctuation is also called pink noise. Its power spectral density is proportional to the reciprocal of the frequency.



Fig. 13.15 Left panel: Schematic representation of the main components of a functional unit of the kidney, the nephron. Right panel: Cartoon of an interlobular artery with branching afferent arterioles. Approximately 50% of nephrons are arranged in pairs [35]

• transmission of the signal across the macula densa to the afferent arteriole.
This mechanism causes self-sustained oscillations in the proximal hydrostatic pressure, the NaCl concentration, and the renal blood flow observed in anaesthetised rats. The oscillations have a period of about 30 s (0.033 Hz) (Fig. 13.16).

The Myogenic Response
The myogenic response is the ability of smooth muscle to react to an external stretching force so as to keep the blood flow within the blood vessel constant. A rise in intraluminal pressure induces a vasoconstriction that reduces the vascular diameter and provides autoregulation of the flow. One of the events following myogenic activation is depolarisation of the cell membrane, which leads to an influx of Ca2+ through voltage-gated Ca2+ channels. The main signalling pathway following the rise in Ca2+ involves calmodulin and myosin light chain phosphorylation, responsible for the constrictor response. The frequency of the vasomotor oscillations depends on the size and type of the blood vessel. The oscillations in the radius of afferent arterioles have a period of about 5–10 s.

Interaction Between the Mechanisms
The balance of the mechanisms contributing to renal blood flow autoregulation is important for kidney function. It is determined not only by the algebraic sum of their influences, but also by their interaction. Note that the TGF system and the vasomotor oscillator operate in different frequency bands: 0.02–0.04 Hz and 0.1–0.25 Hz, respectively. Since both mechanisms act on the afferent arteriole to control its haemodynamic resistance, the activation of one mechanism modifies the response of the other. In this way, TGF oscillations modulate the faster myogenic oscillations of the arteriole radius. The magnitude of the TGF stimulus decreases with distance from the glomerulus along the afferent arteriole.

13.3.1.3 Nephron-Vascular Tree

Nephrons are arranged in a tree structure with the afferent arterioles branching off from a common interlobular artery (Fig. 13.15 (right panel)). With this structure, a change of the blood flow to one nephron influences the blood flow to all other nephrons in the tree. Neighbouring nephrons communicate via (1) vascular propagation of electrical signals mediated by TGF and (2) haemodynamic interaction by



Fig. 13.16 Left: Single nephron measurements. Right: Typical recording of the distal tubular chloride concentration (top trace) and the proximal tubular pressure (bottom trace) (reproduced from [36] with permission). The oscillations have the same frequency but are phase shifted

which an increased flow resistance in the afferent arteriole of one nephron forces a higher blood flow to the neighbouring nephrons. A possible mechanism of nephron-nephron synchronisation is TGF-induced depolarisation of vascular smooth muscle cells that propagates from one nephron to the other.

13.3.2 Experimental Observations

Experiments were performed on anaesthetised normotensive rats (rats with normal blood pressure) and spontaneously hypertensive rats (rats with high blood pressure). Experimental protocols included simultaneous paired measurements of tubular pressure from one or two surface nephrons of the kidney with the servo-nulling technique and monitoring of the distal chloride concentration with special microelectrodes. Let us discuss the main experimental observations relevant to the model validation.

Oscillatory Nephron Dynamics
Figure 13.16 schematically displays an experiment on a single nephron. Measurements of the distal Cl− concentration and the proximal tubular pressure reveal oscillations with the same frequency, but the Cl− oscillations are phase shifted by about 10 s with respect to the proximal pressure (and flow) oscillations. This time is needed for the filtrate to pass through the loop of Henle. This is in accordance with the hypothesis that the oscillations are mediated by TGF: the high Cl− concentration would be followed by a minimum of the proximal tubular pressure (flow) due to the TGF-mediated constriction of the afferent arteriole. Since half a period is about 15 s, a delay of 10 s between the proximal pressure and the distal Cl− oscillations indicates that the

delay in propagating the signal across the macula densa and constricting the afferent arteriole is about 5 s.

Chaotic Dynamics
While for normotensive rats the oscillations are regular self-sustained oscillations, highly irregular oscillations are observed in hypertensive rats (Fig. 13.17 (bottom panel)). Normotensive rats made hypertensive by clipping one renal artery have similarly irregular tubular oscillations in the unclipped kidney. The most compelling evidence for a key role of the kidney in the development of hypertension comes from renal cross-transplantation studies [37, 38]. Transplantation of the kidneys from genetically hypertensive to normotensive rats results in hypertension in the renal recipients. This was the case even if the kidney was taken from a hypertensive donor that had been on chronic antihypertensive treatment since birth and, therefore, had a normal blood pressure by the time the kidney was extracted. Since the two different experiments on induced hypertension show similar dynamics, one can infer that a transition from regular oscillatory behaviour to a chaotic state is a common feature of renal vascular control during the development of the disease.

Cooperative Dynamics
Nephrons can communicate with each other through the vascular network [39]. Nephron pairs whose afferent arterioles originate from the same interlobular artery synchronise their oscillations: 29 out of 33 pairs (i.e., ∼88%). The interaction occurs through a common arterial segment. In contrast, nephron pairs not fulfilling this anatomical criterion showed synchronous oscillations in only one case out of 23 investigated pairs (i.e., ∼4%). Figure 13.17 displays the tubular pressure variations in pairs of neighbouring nephrons for normotensive rats (regular oscillations in the left panel) and for hypertensive rats (irregular oscillations



Fig. 13.17 Top: Paired nephron experiment. Bottom: Examples of the tubular pressure variation in adjacent nephrons of normotensive and hypertensive rats. (Experimental data are used with permission from N.-H. Holstein-Rathlou and D. Marsh.)

in the right panel). One can easily see a certain degree of synchronisation between the interacting nephrons.


13.3.3 Model of Nephron Autoregulation

13.3.3.1 Formulation of the Model

The tubuloglomerular feedback system is a mechanism that operates in each nephron to stabilise the nephron blood flow, the single nephron glomerular filtration rate, and the tubular flow rate. We focus on the tubuloglomerular feedback mechanism rather than the myogenic response. Figure 13.18 shows a simplified causal loop diagram. The feedback is negative: increasing arterial pressure leads to a higher filtration rate and higher tubular flow, which result in a higher NaCl concentration at the macula densa; the afferent arteriole then constricts, reducing the filtration rate. Due to the delay, the regulation becomes unstable and produces self-sustained oscillations. We focus on the computational implementation of the model; the physiological justification for all equations and expressions is given in Ref. [40]. Figure 13.19 represents the structure of the model. The system consists of three sub-models: the tubule model (blue box), the vascular model (red box), and the connecting feedback (yellow box). The figure indicates state and auxiliary variables as well as the logical links between them.

Fig. 13.18 A causal loop for tubuloglomerular feedback. Plus or minus indicates a positive or negative effect in the arrow direction; altogether the loop represents a negative feedback. (Diagram nodes: arteriolar pressure, glomerular pressure, glomerular filtration rate, proximal tubule pressure, flow into the loop of Henle, NaCl concentration at the macula densa, afferent arteriole resistance.)

13.3.3.2 Equations

The tubular pressure $P_t$ changes in response to differences between the in- and outflows:

$$\frac{dP_t}{dt} = \frac{F_{filt} - F_{reab} - F_{Hen}}{C_{tub}}. \qquad (13.9)$$


Fig. 13.19 Forrester diagram of the model, organised into three blocks: the vascular model (afferent arteriole radius r, contraction rate v_r, arteriolar resistance R_a, average pressure P_av, and equilibrium pressure P_eq), the tubuloglomerular feedback (delay chain X_1, X_2, X_3 and vascular smooth muscle activation Ψ), and the tubular model (proximal tubular pressure P_t with glomerular filtration F_filt, Henle flow F_Hen, and proximal reabsorption F_reab). Grey squares represent state variables, and circles represent auxiliary variables (functions). Solid lines indicate fluxes, and dotted curves indicate a functional dependency between two variables. For definitions of individual symbols, see the text and the table of parameters.

Experiments have shown that arterioles tend to perform damped, oscillatory contractions in response to external stimuli. This may be described by means of a second-order differential equation:

$$\frac{d^2 r}{dt^2} + K\frac{dr}{dt} - \frac{P_{av} - P_{eq}}{\omega} = 0, \quad \text{i.e.,} \quad \frac{dr}{dt} = v_r \ \ \text{and} \ \ \frac{dv_r}{dt} = \frac{P_{av} - P_{eq}}{\omega} - K v_r. \qquad (13.10)$$

The TGF delay—resulting both from the transit time through the Henle loop and from the cascaded processes between the macula densa cells and the smooth muscle cells in the arteriolar wall—is modelled by means of three first-order coupled ordinary differential equations:

$$\frac{dX_1}{dt} = F_{Hen} - \frac{3}{T} X_1, \qquad (13.11)$$

$$\frac{dX_2}{dt} = \frac{3}{T}\,(X_1 - X_2), \qquad (13.12)$$

$$\frac{dX_3}{dt} = \frac{3}{T}\,(X_2 - X_3). \qquad (13.13)$$
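The three-stage chain acts as a low-pass cascade whose output lags the input by roughly T. A minimal MATLAB sketch (step input and timing chosen purely for illustration) makes this visible:

% The delay chain (Eqs. 13.11-13.13) responding to a step in Henle flow.
T    = 16;                        % loop of Henle delay [s]
FHen = @(t) double(t >= 10);      % unit step in FHen at t = 10 s (illustrative)
rhs  = @(t, X) [FHen(t) - (3/T)*X(1);
                (3/T)*(X(1) - X(2));
                (3/T)*(X(2) - X(3))];
[t, X] = ode45(rhs, [0 100], [0; 0; 0]);
plot(t, 3*X(:,3)/T); xlabel('t, s'); ylabel('3X_3/T');
% At steady state X3 = T*FHen/3, so the combination 3*X3/(T*FHen0) used in
% Eq. (13.24) is simply the normalised, delayed Henle flow.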

13.3.3.3 Functions

All functions used to calculate the derivatives have physiological meaning.


• Flow in the loop of Henle is determined by the difference between the proximal ($P_t$) and the distal ($P_d$) tubular pressures and by the flow resistance $R_{Hen}$ in the loop of Henle:

$$F_{Hen} = \frac{P_t - P_d}{R_{Hen}}. \qquad (13.14)$$

• Resistance of the afferent arteriole. The afferent arteriole is divided into two coupled sections. The first (representing a fraction β of the total length) is assumed to have a fixed haemodynamic resistance, while the second (closer to the glomerulus) can vary its diameter (and hence the flow resistance) in response to activation of the tubuloglomerular feedback:

$$R_a = R_{a0}\left(\beta + \frac{1-\beta}{r^4}\right). \qquad (13.15)$$

• Protein concentration in the efferent arteriole is used to calculate the glomerular filtration rate $F_{filt}$, which depends on arterial, hydrostatic, and osmotic pressures. $C_e$ is obtained from the equation

$$A\,C_e^3 + B\,C_e^2 + C\,C_e + D = 0, \qquad (13.16)$$

where

$$A = b + \frac{R_e}{R_a}\,b\,H_a, \qquad (13.17)$$

$$B = a + \frac{R_e}{R_a}\,b\,C_a\,(1 - H_a) + \frac{R_e}{R_a}\,a\,H_a, \qquad (13.18)$$

$$C = P_t - P_v + \frac{R_e}{R_a}\,a\,C_a\,(1 - H_a) + \frac{R_e}{R_a}\,(P_t - P_a)\,H_a, \qquad (13.19)$$

$$D = (P_t - P_a)\,\frac{R_e}{R_a}\,C_a\,(1 - H_a). \qquad (13.20)$$

For physiologically relevant values of the parameters, this equation has only one positive root (the concentration must be positive).

• Glomerular pressure is determined by the distribution of the arterial–venous pressure drop between the afferent and efferent arteriolar resistances:

$$P_g = b\,C_e^2 + a\,C_e + P_t. \qquad (13.21)$$

• Average pressure in the active part of the afferent arteriole:

$$P_{av} = \frac{1}{2}\left(P_a - (P_a - P_g)\,\beta\,\frac{R_{a0}}{R_a} + P_g\right). \qquad (13.22)$$

• Glomerular filtration depends on arterial, hydrostatic, and osmotic pressures:

$$F_{filt} = (1 - H_a)\left(1 - \frac{C_a}{C_e}\right)\frac{P_a - P_g}{R_a}. \qquad (13.23)$$

• Activation of the afferent arteriole. The glomerular feedback is introduced as a sigmoidal relation between the muscle activation Ψ of the afferent arteriole and the delayed Henle flow $X_3$:

$$\Psi = \Psi_{max} - \frac{\Psi_{max} - \Psi_{min}}{1 + \dfrac{\Psi_{eq} - \Psi_{min}}{\Psi_{max} - \Psi_{eq}}\,\exp\!\left(\alpha\left(\dfrac{3X_3}{T\,F_{Hen0}} - 1\right)\right)}. \qquad (13.24)$$

• Pressure in the passive part of the afferent arteriole:

$$P_{pas} = 1.6\,(r - 1) + 2.4\,e^{10(r - 1.4)}. \qquad (13.25)$$

• Pressure in the active part of the afferent arteriole:

$$P_{act} = \frac{4.7}{1 + e^{13(0.4 - r)}} + 7.2\,r + 6.3. \qquad (13.26)$$

• Equilibrium pressure is the sum of the passive and active responses of the afferent arteriole:

$$P_{eq} = P_{pas} + \Psi\,P_{act}. \qquad (13.27)$$

• Electrical coupling is modelled as an additive contribution of the activation level of one nephron to the activation level of the neighbouring nephron:

$$\Psi_{1,2} = \Psi_{1,2} + \gamma\,\Psi_{2,1}. \qquad (13.28)$$

13.3.3.4 Parameters

All parameters are listed in Table 13.3. The initial conditions are Pt(0) = 2.0, r(0) = 1.2, vr(0) = −0.2, X1(0) = 1.4, X2(0) = 1.3, and X3(0) = 1.0.

13.3.3.5 Simulations

The model reproduces the experimental observations well [41]. It captures the main features of autoregulation: (1)


Table 13.3 Constant and control parameters (the control parameters are varied in the simulations)

| Parameter | Meaning | Value |
| C_tub | Compliance of nephron tubule [nL/kPa] | 3.0 |
| H_a | Arterial haematocrit | 0.5 |
| P_v | Efferent arterial pressure [kPa] | 1.3 |
| P_d | Distal tubular hydrostatic pressure [kPa] | 0.6 |
| F_Hen0 | Henle's loop equilibrium flow [nL/s] | 0.2 |
| F_reab | Proximal tubule reabsorption [nL/s] | 0.3 |
| R_Hen | Hydrodynamic resistance of the loop of Henle [kPa·s/nL] | 5.3 |
| R_a0 | Afferent arteriolar equilibrium resistance [kPa·s/nL] | 2.4 |
| R_e | Efferent arteriolar resistance [kPa·s/nL] | 1.9 |
| ω | Damped oscillator parameter [kPa·s²] | 20.0 |
| K | Damped oscillator parameter [1/s] | 0.04 |
| β | Non-variable fraction of the afferent arteriole | 0.67 |
| Ψ_min | Lower activation limit | 0.20 |
| Ψ_max | Upper activation limit | 0.44 |
| Ψ_eq | Equilibrium activation | 0.38 |
| C_a | Afferent plasma protein concentration [g/L] | 54.0 |
| a | Protein concentration parameter [kPa·L/g] | 21.7e−3 |
| b | Protein concentration parameter [kPa·L²/g²] | 0.39e−3 |
| P_a | Arterial blood pressure [kPa] | Control parameter |
| α | Tubuloglomerular feedback strength | Control parameter |
| T | Delay of the loop of Henle [s] | Control parameter |
| γ | Strength of electrical coupling | Control parameter |


Fig. 13.20 Left panel: An increase in pressure is balanced by a reduction of the radius. Right panel: Constant nephron flow within a wide range of arterial pressures (α = 10.0, T = 16.0 s)

myogenic response: the arteriole reacts to an increase of blood pressure so as to keep the blood flow within the blood vessel constant (Fig. 13.20 (left panel)); and (2) tubuloglomerular feedback: it stabilises the single nephron glomerular filtration rate (plateau in Fig. 13.20 (right panel)). With physiologically realistic mechanisms and with independently determined parameters, the model explains how pressure oscillations can appear via a Hopf bifurcation (Fig. 13.21 (left panel)) and how complex chaotic dynamics can arise as the feedback strength increases (Fig. 13.21 (right panel)). Cross-talk between nephrons is modelled by the additive contribution of the muscular activation of one nephron to the activation of the other (Eq. 13.28). If the uncoupled nephrons are not identical and differ, for instance, in the length of the loop of Henle (delay parameters T1 = 15.7 s and T2 = 16 s), the oscillations in each subsystem are periodic although the system as a whole behaves quasi-periodically: the phase portrait shows complex oscillations with different frequencies and phases (if the frequency and phase angle of the two curves were identical, the resultant would be a straight line diagonal to the coordinate axes), and the power spectrum indicates two independent frequencies (Fig. 13.22 (left panel)). When coupling is introduced, the nephrons synchronise their oscillations (Fig. 13.22 (right panel)) through a frequency/phase-locking mechanism. If the afferent arteriole of one nephron is made to contract by the TGF mechanism, the vascularly propagating signals almost immediately reach the neighbouring nephron and cause it to contract as well, leading to in-phase synchronisation.


Fig. 13.21 Temporal variation of the proximal tubular pressure Pt and the corresponding tubular pressure–radius phase plot for periodic dynamics (left panel) at α = 12.0 and chaotic dynamics (right panel) at α = 35.0 (T = 16.0 s, Pa = 13.3 kPa, 1 kPa = 7.5 mmHg). These results are in good agreement with experimentally observed behaviour

13.3.4 Questions

These exercises should be performed with computer simulations (a minimal MATLAB implementation sketch is given after this list):
• implement the model (Eqs. 13.9–13.13);
• investigate how the TGF parameter α causes a transition from periodic to chaotic Pt oscillations;
• investigate how the delay of the loop of Henle T changes the period of the Pt oscillations;
• investigate the autoregulation properties of the model and how the myogenic response (change of radius) and the filtration rate depend on the arterial pressure Pa;
• introduce coupling between nephrons (Eq. 13.28) and investigate how the nephron dynamics changes with increasing coupling strength γ.
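As a starting point for these exercises, the following MATLAB sketch integrates the single-nephron model (Eqs. 13.9–13.27) with the Table 13.3 values and the initial conditions given above. It is a minimal implementation for experimentation, not a validated reproduction of Ref. [40]; for two coupled nephrons, duplicate the state vector and add the neighbour's activation to Ψ with weight γ (Eq. 13.28).

function nephron_demo
% Minimal sketch of the single-nephron model (Eqs. 13.9-13.27) with the
% parameter values of Table 13.3. States: y = [Pt; r; vr; X1; X2; X3].
% Control parameters as in Fig. 13.21 (left panel): alpha = 12, T = 16 s,
% Pa = 13.3 kPa; raising alpha towards 35 should produce the chaotic regime.
p.Ctub = 3.0;    p.Ha = 0.5;      p.Pv = 1.3;     p.Pd = 0.6;
p.FHen0 = 0.2;   p.Freab = 0.3;   p.RHen = 5.3;   p.Ra0 = 2.4;  p.Re = 1.9;
p.w = 20.0;      p.K = 0.04;      p.beta = 0.67;
p.Psimin = 0.20; p.Psimax = 0.44; p.Psieq = 0.38;
p.Ca = 54.0;     p.a = 21.7e-3;   p.b = 0.39e-3;
p.Pa = 13.3;     p.alpha = 12.0;  p.T = 16.0;

y0 = [2.0; 1.2; -0.2; 1.4; 1.3; 1.0];            % initial conditions
[t, y] = ode15s(@(t, y) rhs(t, y, p), [0 2200], y0);
plot(t, y(:,1)); xlabel('t, s'); ylabel('P_t, kPa');
end

function dy = rhs(~, y, p)
Pt = y(1); r = y(2); vr = y(3); X1 = y(4); X2 = y(5); X3 = y(6);
Ra = p.Ra0*(p.beta + (1 - p.beta)/r^4);                     % Eq. 13.15
% Efferent protein concentration: positive root of the cubic, Eq. 13.16
A = p.b + (p.Re/Ra)*p.b*p.Ha;
B = p.a + (p.Re/Ra)*(p.b*p.Ca*(1 - p.Ha) + p.a*p.Ha);
C = Pt - p.Pv + (p.Re/Ra)*(p.a*p.Ca*(1 - p.Ha) + (Pt - p.Pa)*p.Ha);
D = (Pt - p.Pa)*(p.Re/Ra)*p.Ca*(1 - p.Ha);
z  = roots([A B C D]);
Ce = real(z(abs(imag(z)) < 1e-9 & real(z) > 0));            % unique positive root
Ce = Ce(1);
Pg    = p.b*Ce^2 + p.a*Ce + Pt;                             % Eq. 13.21
Ffilt = (1 - p.Ha)*(1 - p.Ca/Ce)*(p.Pa - Pg)/Ra;            % Eq. 13.23
FHen  = (Pt - p.Pd)/p.RHen;                                 % Eq. 13.14
Pav   = 0.5*(p.Pa - (p.Pa - Pg)*p.beta*p.Ra0/Ra + Pg);      % Eq. 13.22
Psi   = p.Psimax - (p.Psimax - p.Psimin) / ...              % Eq. 13.24
        (1 + (p.Psieq - p.Psimin)/(p.Psimax - p.Psieq) * ...
         exp(p.alpha*(3*X3/(p.T*p.FHen0) - 1)));
Ppas  = 1.6*(r - 1) + 2.4*exp(10*(r - 1.4));                % Eq. 13.25
Pact  = 4.7/(1 + exp(13*(0.4 - r))) + 7.2*r + 6.3;          % Eq. 13.26
Peq   = Ppas + Psi*Pact;                                    % Eq. 13.27
dy = [ (Ffilt - p.Freab - FHen)/p.Ctub;                     % Eq. 13.9
       vr;                                                  % Eq. 13.10
       (Pav - Peq)/p.w - p.K*vr;
       FHen - (3/p.T)*X1;                                   % Eqs. 13.11-13.13
       (3/p.T)*(X1 - X2);
       (3/p.T)*(X2 - X3) ];
end

With α = 12 the tubular pressure should settle into regular self-sustained oscillations comparable to Fig. 13.21 (left panel); sweeping α and T then addresses the first three exercises directly.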

13.4 Brain Project

13.4.1 Mystery of Brain

In physiology, we often talk about the nervous system, which includes the central nervous system (CNS) and the peripheral nervous system (PNS). The brain is the command centre of the nervous system. The term Neuroscience denotes the scientific study of the nervous system or, put simply, the science of the brain. The human brain has been considered the most complex among all the objects in the universe. An adult human brain contains about 86 billion neurons and a greater number of glial cells, as well as blood vessels forming complex neurovascular networks. Brains are unique not only in the total number of cells but also in the neural network connected via numerous synapses (each neuron has on the order of 1000–10,000 synaptically coupled partner neurons). The roughly three-pound mass of grey and white matter is the source of our perceptions, thoughts, emotions, and actions, and it defines who we are. In neuroscience, the detailed structural information of the mouse brain has been investigated (Fig. 13.23, reprinted from [42]). Despite recent advances in both science and technology, the structure and functions of the human brain remain largely unknown. Francis S. Collins, NIH Director, says: "The human brain is the most complicated biological structure in the known universe. We've only just scratched the surface in understanding how it works". We know the basic functions of the brain: receiving sensory input, processing the input and making decisions, and outputting signals to the effector organs. However, we do not know how neural activity gives rise to human thought. In addition, we have already mentioned the grand challenge questions in Science: What is the biological basis


Fig. 13.22 Temporal variation of the proximal tubular pressure Pt of two neighbouring nephrons, the corresponding phase plots, and the Fourier power spectra. Left panel: Uncoupled nephrons (γ = 0.0). The time series are not identical, and the phase portrait on the plane (Pt1, Pt2) shows quasi-periodic behaviour with different frequencies and phases; the spectral peaks are well separated. Right panel: Coupled nephrons (γ = 0.05) synchronise their oscillations (T1 = 15.7 s, T2 = 16 s, α1,2 = 12.0, Pa = 13.3 kPa). These results are in good agreement with experimental observations


of consciousness? How are memories stored and retrieved? All these questions remain largely unanswered. Michael Tarr once said: "The study of the mind and brain is the last frontier in science".

13.4.2 Brain Projects

Although recent advances have enriched our knowledge of brain structure and functions, we are still far from a comprehensive understanding of the fundamental principles of brain function and their translation into treatments for brain disorders. No doubt, brain science will benefit basic science, public health, and industrial applications [44], such as artificial intelligence and machine learning. The inherent complexity of the brain demands a multidisciplinary endeavour and global collaboration [43]. Neuroscience research has become more collaborative and interdisciplinary, with partnerships between industry and academia. Both governments and private organisations worldwide have initiated several large-scale brain projects to further understand how the brain works at the systems level. We introduce some of them below.

Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative
The BRAIN Initiative® is a collaborative, public-private research initiative announced by the Obama administration on April 2, 2013, which is aimed at revolutionising our understanding of the human brain. A brief introduction is presented on the official website (https://braininitiative.nih.gov/):

By accelerating the development and application of innovative technologies, researchers will be able to produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space. Long desired by researchers seeking new ways to treat, cure, and even prevent brain disorders, this picture will fill major gaps in our current knowledge and provide unprecedented opportunities for exploring exactly how the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.


Fig. 13.23 Anatomy of the mouse brain with individual cells and capillary vessels. (a) 3D rendering of the distribution of blood vessels in the mouse brain; grey for brain contours and red for blood vessels. (b) The distribution of blood vessels in a coronal section of the mouse brain. (c) The distribution of blood vessels and nerve cells in the barrel cortex, where the green dots mark the central points of the nerve cells. (d) Spatial distribution of nerve cells (grey) and blood vessels (red) in layer 4 of the cortex. Reprinted with permission from the work of Prof. Luo's group [42]

Human Brain Project
The Human Brain Project (HBP) is an EU-funded Future and Emerging Technologies (FET) Flagship Initiative started on October 1, 2013. A short overview is given on the official website (https://www.humanbrainproject.eu/en/):

HBP is building a research infrastructure to help advance neuroscience, medicine and computing. Six ICT research Platforms form the heart of the HBP infrastructure: Neuroinformatics (access to shared brain data), Brain Simulation (replication of brain architecture and activity on computers), High Performance Analytics and Computing (providing the required computing and analytics capabilities), Medical Informatics (access to patient data, identification of disease signatures), Neuromorphic Computing (development of brain-inspired computing) and Neurorobotics (use of robots to test brain simulations). The HBP also undertakes targeted research and theoretical studies, and explores brain structure and function in humans, rodents and other species. In addition, the Project studies the ethical and societal implications of HBP's work.

Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS)
Brain/MINDS is a national brain project launched by Japan in June 2014. Its key goal is mapping the brain of the common marmoset to integrate new technologies and clinical research. On the official website, it announces three objectives:

... to focus on studies of non-human primate brains that will directly link to better understanding of the human brain; to elucidate the neural networks involved in such brain disorders as dementia and depression; and to promote close cooperation between basic and clinical research related to the brain.

Australian Brain Initiative
The Australian Brain Initiative (ABI) was proposed with the aim of "cracking the brain's code" in 2016. On the


Fig. 13.24 Framework of the China Brain Project, featuring brain science and brain-inspired intelligence. Reprinted with permission from the Neuron paper [47]

official website (https://www.brainalliance.org.au/), the ABI states its mission:

Make major advances in understanding healthy, optimal brain function. Create advanced industries based on this unique understanding of the brain. Identify causes and develop novel treatments for debilitating brain disorders. Produce sustainable, collaborative networks of frontline brain researchers with the capacity to unlock the mysteries of the brain and ensure the social, health and economic benefit of all Australians.

Brain Canada
Brain Canada is a national non-profit organisation founded in 1999, which supports innovative, paradigm-changing brain research across Canada (https://braincanada.ca/). The mission of Brain Canada has been introduced in a recent Neuron paper [45]:

Brain Canada's vision is to understand the brain, in health and illness, to improve lives and achieve societal impact. Brain Canada is achieving its vision by: increasing the scale and scope of funding to accelerate the pace of Canadian brain research; creating a collective commitment to brain research across the public, private, and voluntary sectors; delivering transformative, original, and outstanding research programs.

Korea Brain Initiative
The Korea Brain Initiative was announced on May 30, 2016; it is centred on deciphering the brain functions and mechanisms that mediate the integration and control of brain functions [46]. The vision and goals are summarised on the webpage (http://www.kbri.re.kr/new/pages-eng/main/):

The Korea Brain Initiative includes an expected role of brain science to drive the fourth industrial revolution and aims at understanding the principles of high brain function, producing a new dynamic picture of healthy and diseased brains, developing personalised treatment for mental and neurological disorders by extrapolating the concept of precision medicine, and stimulating the interaction between scientific institutes, academia, and industry.

China Brain Project
The China Brain Project was introduced in 2016; it covers both basic research on the neural mechanisms underlying cognition and translational research for the diagnosis and intervention of brain diseases, as well as brain-inspired intelligence technology [47] (see Fig. 13.24). Chinese President Xi Jinping said [49]: "Research on the brain atlas based on connectional architecture is at the cutting-edge of work on human brain function and probes into the essence of consciousness. Besides scientific significance, such exploration will guide the prevention and treatment of brain diseases and the development of intelligent technology." All the above-mentioned national and international brain projects have been accelerating progress in understanding the brain. Last but not least, the Allen Institute for Brain Science is a unique non-profit private institution dedicated to basic brain science (https://alleninstitute.org/what-we-do/brain-science/). "The Allen Institute for Brain Science was established to answer some of the most pressing questions in neuroscience, grounded in an understanding of the brain and inspired by our quest to uncover the essence of what makes us human".

13.4.3 Brain Simulation

There is an interesting quote from Emerson M. Pugh: "If the human brain were so simple that we could understand it, we would be so simple that we couldn't". A true understanding of the human brain is still very far away, and we need a practical and rational way to approach the ultimate goal of brain research. In BRAIN 2025 [48], several areas are identified as high priorities of the BRAIN Initiative: cell types, circuit diagrams, monitoring neural activity, interventional tools, theory and data analysis tools, human neuroscience, and integrated approaches. The No. 5 priority aims to identify


Fig. 13.25 Workflow of the reconstruction and simulation of neocortical microcircuitry. Reprinted with permission from the Cell paper [50]

fundamental principles of brain function by integrating theory, modelling, statistics, and computation with experimentation. One of the six platforms of the Human Brain Project, the Brain Simulation Platform (BSP), aims to replicate a brain on a computer. For another example, the Center for Brains, Minds and Machines (https://cbmm.mit.edu/) is a multi-institutional NSF Science and Technology Center dedicated to answering the fundamental questions of intelligence: how does the brain produce intelligent behaviour, and how may we be able to replicate intelligence in machines? Brain simulation means creating a computational model of a brain or part of a brain, with the aim of replicating human intelligence. In 2015, Henry Markram and his colleagues presented the first draft, but most detailed, digital reconstruction and simulation of the microcircuitry of the somatosensory cortex of the juvenile rat [50] (see Fig. 13.25). The digital model consists of a neocortical volume of 0.29 ± 0.01 mm³ containing about 31,000 neurons, with 55 layer-specific morphological and 207 morpho-electrical neuron subtypes, and 8 million connections with about 37 million synapses. The simulations reproduced


a wealth of in vitro and in vivo experimental findings, such as spontaneous activity, neuronal responses to whisker stimulation, and the sharp transition from synchronous to asynchronous activity. More than 100 years ago, Ramon y Cajal suggested interpreting the construction plan of the brain by observing the morphology of individual neurons. Indeed, detailed morphology is important for building a functional neuron model. In 2018, researchers from the Allen Institute for Brain Science reported a systematic effort to generate biophysically detailed models of 170 individual neurons in the Allen Cell Types Database [51]. The models were built from 3D morphologies and somatic electrophysiological responses measured in the same cells, by fitting the electrophysiological features with a genetic algorithm to optimise the densities of active somatic conductances and additional parameters. The optimised models, together with the experimental data and the code for optimisation and simulation, are distributed for open access. To implement brain simulation, it is helpful to know some popular simulators: NEURON, GENESIS, Brian, and NEST [52].
1. The NEURON simulation environment (https://www.neuron.yale.edu/neuron/) is widely used around the world for building and using computational models of neurons and networks of neurons.
2. GENESIS (the GEneral NEural SImulation System) is a general purpose simulation platform. It was developed to facilitate the simulation of neural systems ranging from subcellular components and biochemical reactions to complex models of single neurons, simulations of large networks, and even system-level models (http://genesis-sim.org/).
3. Brian is an open source Python package for developing simulations of networks of spiking neurons (http://briansimulator.org/).
4. NEST is a simulation software for spiking neural network models, including large-scale neuronal networks (https://www.nest-initiative.org/).
Recently, Prof. Luo has coined Brainsmatics as a new subject to bridge brain science and brain-inspired artificial intelligence [53]. "Brainsmatics refers to brain-spatial information science, which is the integrated, systematic approach of tracing, measuring, analyzing, managing and displaying cross-level brain spatial data with multi-scale resolution. Based on big data of three-dimensional fine structural and functional imaging of neuron types, neural circuits and networks, vascular network et al, with definite temporal-spatial resolution and specific spatial locations, brainsmatics makes it possible to better decipher the brain function and disease and promote the brain-inspired artificial intelligence by extracting cross-level and multi-scale temporal-spatial characteristics of brain connectivity." Aided by the unique technique of micro-optical sectioning tomography [54], Prof. Luo's group has provided very detailed structural information for the mouse brain (Fig. 13.26). In short, multi-level brain modelling and simulation are crucial for understanding the brain.

Fig. 13.26 Detailed information of neuron morphology, neural circuit, cytoarchitecture, and vasculature in a mouse brain acquired by MOST. Reprinted with permission from the work of Prof. Luo’s group [42]



13.4.4 Mammalian Brain as a Network of Networks

The human brain is certainly the most complex system in the known universe. It is now recognised as a network of communicating compartments, functioning in an integrated way as a whole [55].³ From the reductionist perspective, the brain consists of neurons, glial cells, ion channels, chemical pumps, and proteins. In the integrative view, neurons are connected via synaptic junctions to compose networks of carefully arranged circuits. Although the reductionist approach has produced some of the most important contributions to our understanding of the brain [56], it remains to be discovered how the brain performs integrated operations so complex that no artificial system yet rivals humans in recognising faces, understanding languages, or learning from experience [57]. The goal of computational neuroscience is to fill this gap by understanding the brain sufficiently well to be able to mimic its complex functions, in other words, to construct it. Because of the tremendous complexity of the brain, understanding its function would be impossible without a systems approach translated from the theory of complex systems. Here, we propose that this systems approach will only work if it is based on the correct understanding of the brain as an integrated structure comprising an overlap and interaction of multiple networks. In a previous review [55], we discussed the hypothesis that the brain is a system of communicating neural and glial networks, with a neural network inside each neuron, and that all of these overlapping and interacting networks play a crucial role in encoding the vast and continuous flow of information processed by mammals every second of their conscious life.

³ Reprinted whole article with permission.

13.4.4.1 Brain as a Neural Attractor Network

Since ancient times, people have tried to understand how behaviour is produced. Some of the oldest hypotheses proposed to explain behaviour date back as far as ca. 500 BC (e.g., Alcmaeon of Croton and Empedocles of Acragas). Aristotle (384–322 BC), the Greek philosopher, was the first person to advance a formal theory of behaviour. He proposed that a nonmaterial psyche, or soul, was responsible for human thoughts, perceptions, and emotions, as well as desires, pleasure, pain, memory, and reason. In Aristotle’s theory, the psyche was independent of the body but worked through the heart to generate behaviour, while the brain’s main function was simply to cool the blood. This philosophical position that human behaviour is governed by a person’s mind is called mentalism. Aristotle’s mentalistic explanation remained almost entirely unquestioned until the 1500s, when Descartes (1596–1650)

pioneered the idea that the brain might be involved in the control of behaviour. He suggested a theory that the soul controls movement through its influence on the pineal body, and also believed that the fluid in the ventricles was put under different pressures by the soul to produce movement. Although his hypotheses were consistently falsified later, and gave rise to the well-known mind–body problem (how can a nonmaterial mind produce movements in a material body?), Descartes was one of the first to claim that changes in brain dynamics may be followed by changes in behaviour. After Descartes, the study of behaviour through the brain went a long way, from early ideas such as phrenology (the study of the shape and size of the skull as a supposed indication of character and mental abilities) developed by Gall (1758–1828) to the first efforts to build behaviour-like processes using artificial networks [58, 59]. At the beginning, classical engineering and computing approaches to artificial intelligence tried to mimic essential processes such as speech synthesis and face recognition. Some of these ideas and attempts led to the invention of software that allowed people with disabling diseases, like Stephen Hawking, to communicate their ideas and thoughts using speech synthesisers. However, even the best commercially available speech systems cannot yet convey emotion in their speech through the range of tricks that human brains use essentially effortlessly [59].

The modern study of neural networks was greatly advanced by the discovery of the biological small-scale hardware of the brain, i.e., the specialised nerve cells: neurons. The very basic notion that computational power can be used to simulate brains came from appreciating the vast number, structure, and complex connectivity of neurons. There are about 86 billion neurons in the human brain. An individual neuron has a number of dendrites receiving information, a soma integrating the received impulses and making a decision, and one axon sending the output to the next neuron. The input and output signals of neurons are electrochemical flows: physically, they are carried by moving ions and can be stimulated by environmental stimuli such as touch or sound. The changes in ion flux result from a short-lasting event in which the membrane potential of the cell suddenly reverses relative to its resting state (the action potential), which then propagates along the cell. In practice, there is no intermediate state of this activation: the electrical threshold is either reached, allowing the propagation of the signal, or it is not. This feature allows for a straightforward formulation of a computational output in terms of binaries, that is, 1 or 0. Further insight from neuroscience for artificial intelligence came from the idea that information can be stored by modifying the connections between communicating brain cells, causing the formation of associations. Donald Hebb was the first one

Fig. 13.27 The architecture of an attractor network (external input, synaptic weights, output firing). Reprinted with permission from [55]

to develop this idea, suggesting that modifications to the connections between brain cells happen only if both neurons are simultaneously active [60]. That is the famous Hebb’s rule: “Cells that fire together, wire together” [61]. Neurobiologists have in fact discovered that some neurons in the brain have a modifiable structure [62], steering neural network research towards developing the theoretical properties of ideal neural networks and the actual properties of information storage and manipulation in the brain. The first attempt to provide a model of artificial neurons that are linked together was made by McCulloch and Pitts in 1943. They developed their binary threshold unit based on the previously mentioned on–off property of neurons and their signal propagation characteristics, suggesting that the activation of the output neuron is determined by summing the weighted activations of the active inputs and checking whether this total activation is enough to exceed the threshold for firing an electrical signal [58, 59]. Later neuronal models based on the McCulloch–Pitts neuron adopted Hebb’s assumption and included the feature that connections can change their strength with experience, so that the threshold for activation can either increase or decrease. By creating artificial neurons, researchers gain the advantage of reducing biological neurons, which are very complex, to less complex component parts, allowing us to explore how individual neurons represent information and also to examine the highly complex behaviours of neural networks. Furthermore, creating artificial neural networks and presenting them with the problems that a biological neural network might face may provide some understanding of how function arises from neural connectivity in the brain.
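As a concrete illustration of the ideas above, here is a minimal sketch of a McCulloch–Pitts threshold unit combined with a Hebbian weight update; the weights, inputs, threshold, and learning rate are illustrative assumptions, not values from the literature.

# Hypothetical McCulloch-Pitts unit: fires (1) iff the weighted sum of its
# binary inputs reaches the threshold; otherwise stays silent (0).
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Hebb's rule: strengthen a connection when input and output are co-active.
def hebb_update(weights, inputs, output, rate=0.1):
    return [w + rate * x * output for w, x in zip(weights, inputs)]

w = [0.5, 0.5, 0.0]
x = [1, 1, 0]
y = mp_neuron(x, w, threshold=1.0)   # 0.5 + 0.5 = 1.0 -> the unit fires
w = hebb_update(w, x, y)             # co-active connections grow: [0.6, 0.6, 0.0]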

Recently, an inspiring computational concept, the attractor network, has been developed and has contributed to the understanding of how space is encoded in mammalian brains. The concept of attractor networks originates from the mathematics of dynamical systems. When a system consisting of interacting units (e.g., neurons) receives a fixed input over time, it tends to evolve towards a stable state. Such a stable state is called an attractor, because small perturbations have only a limited effect on the system, which shortly afterwards converges back to the stable coherent state. Therefore, when the term “attractor” is applied to neural circuits, it refers to dynamical states of neural populations that are self-sustained and stable against small perturbations. As opposed to the more commonly used perceptron network [63, 64], where information is transferred strictly via feed-forward processing, the attractor network involves associative nodes with feedback connections, meaning that the flow of information is recurrent and thus allows modification of the strength of connections. The architecture of an attractor network is shown in Fig. 13.27. Owing to the recent discoveries of cells specialised in encoding and representing space, spatial navigation is now widely regarded as a useful model system for studying the formation and architecture of cognitive knowledge structures, a function of the brain that no machine is yet able to simulate. The hippocampal CA3 network, widely accepted to be involved in episodic memory and spatial navigation, has been suggested to operate as a single attractor network [65]. The anatomical basis for this is that the recurrent collateral connections between CA3 neurons are very widespread and have a chance of contacting any other CA3 neuron in the network. The theory is that this widespread interconnectivity allows for mutual excitation of neurons within the network and thus enables the system to gravitate towards a stable state, as suggested by the attractor hypothesis. Moreover, since attractor properties have been suggested to follow from the pattern of recurrent synaptic connections, attractor networks are assumed to learn new information through Hebb’s rule.
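The stability against small perturbations described above can be illustrated with a tiny Hopfield-type network, a classical attractor model with Hebbian (outer-product) storage; this is a generic sketch with illustrative sizes, not a model of CA3 itself.

# Hopfield-type attractor sketch: store patterns with a Hebbian rule, perturb
# one of them, and let the recurrent dynamics relax back to the stored state.
import numpy as np

rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))      # three stored "memories"
W = sum(np.outer(p, p) for p in patterns) / N    # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                         # no self-connections

state = patterns[0].copy()
state[:8] *= -1                                  # small perturbation: 8 flipped bits
for _ in range(10):                              # synchronous recurrent updates
    state = np.sign(W @ state)
    state[state == 0] = 1
print(np.array_equal(state, patterns[0]))        # typically True: back in the attractor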


Hebb’s rule was proposed by Donald Hebb in 1949 in his famous book The Organization of Behavior, where he claimed that the persistence of activity tends to induce long-lasting cellular changes such that “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased”. The hippocampus is in fact highly plastic, as evidenced by the tendency of its synapses to change their strength, either downwards (long-term depression) or upwards (long-term potentiation), in response to activity patterns in its afferents [66]. In 1971, John O’Keefe and Jonathan Dostrovsky recorded the activity of individual cells in the hippocampus while an animal moved freely around a local environment and found that certain neurons fired at a high rate only when the rat was in a particular location. As the firing of these cells correlated so strongly with the rat’s location in the environment, they were named “place cells”. May-Britt Moser and Edvard Moser further investigated the place cells’ inputs from the cells one synapse upstream in the entorhinal cortex and discovered a type of neuron that was active in multiple places, which together formed a regular array of firing fields. They suggested that these cells serve to maintain metric information and convey it to the place cells, allowing the latter to locate their firing fields correctly in space. Physiological observations of place and grid cells suggest two kinds of dynamics: discrete and continuous [67]. A discrete attractor network enables an animal to respond to large changes in the environment but resist small ones, whereas continuous dynamics enables the system to move steadily from one state to the next as the animal moves around the space. Since a discrete system can account for the collective firing of cells at a given moment and a continuous system allows for a progression of activity from one state to another as the animal moves, Jeffery suggested that the two attractor systems must either be localised on the same neurons or be separate but interacting [67]. The possibility discussed here is that the source of the discrete dynamics lies in the place cell network, whereas the continuous dynamics originates upstream in the entorhinal grid cells. In a discrete attractor network, the system moves from one state to another abruptly, and seems to do so only when a large perturbation is present. Experimental evidence for the existence of discrete networks was originally provided by the phenomenon of remapping. Remapping manifests as a modulation of the spatial activity of place cells following an abrupt change in the environment; when the whole observed place cell population alters its activity simultaneously in response to a highly salient change, this is referred to as “complete” or “global” remapping.


Wills et al. manipulated the geometry of the enclosure in which the rat was placed by altering its squareness or circularity [68]. Global remapping occurred only when the cumulative changes were sufficiently great, and no incremental changes in place cell firing were observed at the intermediate stages, suggesting that the place cell system required significantly salient input to change its state. These findings constitute evidence for discrete attractor dynamics in the system, as the system can resist small changes because the perturbation is not large enough to move it from one state to another. On the other hand, Leutgeb et al. [69] used a similar procedure and found that a gradual transformation of the enclosure from circular to square and vice versa resulted in gradual transitions of the place cell firing pattern from one shape to the other. The existence of these transitional representations does not, however, disprove the existence of attractor networks, but suggests that place cells can incorporate new information that is incongruent with previous experience into a well-learned spatial representation. Furthermore, a different attractor dynamics is able to represent this property of the place cells: the continuous attractor. A continuous attractor can be conceptualised as an imaginary ball rolling across a smooth surface rather than a hilly landscape. The “attractors” in this network are the activity patterns that pertain across the active cells when the animal is at one single place in the environment, and any neuron that is supposed to be part of this state, in that particular place, will be pulled into it and held by the activity of the other neurons to which it is related [67]. A challenge for the attractor hypotheses comes from the notion of partial remapping. Partial remapping, analogously to global remapping, occurs when an environmental change is presented, but, contrary to global remapping, only some cells change their firing patterns, whereas others maintain their firing pattern unchanged. The difficulty that partial remapping introduces to the attractor model lies in the fact that the defining feature of the attractor network is the coherence of the network function, whereas partial remapping represents a certain degree of disorder. Touretzky and Muller [70] suggested a resolution to this problem by supposing that under certain circumstances attractors can break into subunits, so that some of the neurons belong to one attractor system and others to a second one. However, Hayman et al. [71] varied the contextual environment by manipulating both colour and odour and found place cells responding to different combinations of these contexts essentially arbitrarily. The five cells that they recorded in their study clearly responded as individuals, and thus at least five attractor subsystems were needed to explain this behaviour. Nevertheless, Jeffery noted that once there is a need to propose as many attractors as there are cells, the attractor hypothesis starts to lose its explanatory power [67]. Jeffery


proposed that one solution is to suppose that attractors are normally created in the place cell system, but that the setup of the experiment, where no two contextual elements ever occurred together, created a pathological situation that undermined the ability to form attractor states [67]. Thus, the fragmented nature of the environment in this experiment could have disrupted the ability of the neurons to act coherently, leaving them to act independently, which suggests that partial remapping reflects a broken attractor system. Nevertheless, partial remapping reflects a disruption of the discrete system only, whereas the continuous dynamics seems to be intact: the activity of the cells remains able to move smoothly from one set of neurons to the other, despite the fact that some of the neurons seem to belong to one network and some to another. Therefore, it appears that the continuous and discrete attractor dynamics might originate in different networks. One potential hub for the origin of the continuous attractor dynamics has been suggested to be the grid cells of the entorhinal cortex [67]. Grid cells have certain characteristics that make them perfect nominees for the continuous attractor system. Firstly, grid cells are continuously active in any environment and, as far as we know, preserve their specific firing regardless of location with respect to the outside world. This suggests a dynamics in which the activity of these cells is modulated by movement but is also simultaneously reinforced by inherent connections in the grid cell network itself. Additionally, the evidence suggests that each place cell receives about 1200 synapses from grid cells [72], and the spatial nature of grid cells makes them ideal candidates to underlie place field formation and to provide the continuous attractor dynamics behind the subsequent behavioural outcomes. Like place cells, grid cells also tend to remap, although the nature of this remapping is very different. For instance, they do not switch their fields on and off as place cells do. Rather, remapping in grid cells is characterised by a rotation of the field. Interestingly, experiments have shown that grid cell remapping occurs only when large changes are made to the environment and is accompanied by global place cell remapping, whereas small changes cause no remapping at all in grid cells and only rate remapping (changes in firing intensity) in the place cell system [73]. These results highlight some of the problems of the theory that place fields are generated from the activity of grid cells. Firstly, if grid cells are the basis for place cell generation, the coherency of grid cell remapping should be accompanied by homogeneity of place field remapping. Furthermore, a rotation or translation of the grid array should cause an analogous rotation or translation of the global place field population, which clearly does not happen. Also, the partial remapping problem discussed earlier in this section seems hard to explain by grid cells. If place cell activity is generated from grid cells, then there should be evidence


for partial remapping in the grid cell population. Nonetheless, no published data yet support the existence of partial remapping in the grid cell system [74]. Jeffery and Hayman proposed a possible solution to these problems in 2008 [75]. They presented a “contextual gating” model of grid cell/place cell connectivity that may explain how the heterogeneous behaviour of place cells can arise from the coherent activity of grid cells. In this model, in addition to the grid cell projections to a place cell, a set of contextual inputs converges on the same cell. The function of these inputs is to interact with the spatial inputs from the grid cells and determine which inputs project further onto the place cell. Thus, when the animal is presented with a new set of spatial inputs, a different set of context inputs becomes active and gates a different set of spatial inputs, whereas when a small change is made, some spatial inputs alter while others remain the same, producing partial remapping. The most important feature of this model is thus that it explains how partial remapping may occur in the place cell population while no change is seen in grid cell activity. The model thereby allows for contextual tuning of individual cells. Hayman and Jeffery tested this proposal by simulating networks of grid cells projecting to place cells and altering the connection to each place cell in a context-dependent manner; they found that it was indeed possible to shift activity slowly from one place field to another as the context is gradually altered (a toy version of this idea is sketched below). To conclude, it appears that attractor dynamics is present in at least two systems responsible for aspects of the encoding of space: the place cell and grid cell systems. The “contextual gating” model explains how partial remapping occurs in the place cell system without a coinciding remapping in the grid cell system. It thus supports the hypothesis that two different attractor dynamics are present in the place and grid cell populations. The grid system preserves the continuous dynamics by maintaining its relative firing locations, and although it directly influences place cell activity, this influence is proposed to be further modulated by contextual inputs, resulting in discrete attractor dynamics in the place cell system. Here we have focused on the level of description of attractor networks, which may underlie possible technical abilities of computer simulations. Nevertheless, further research is needed in which such networks are trained on specific examples, enabling them to implement memories of the specific action patterns that an animal in an experimental room would perform.
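The following toy sketch conveys one reading of the contextual gating idea (it is not the authors' actual simulation): binary context gates select which grid-cell inputs reach each place cell, so a contextual change remaps some place cells while the grid activity itself stays unchanged. All sizes and thresholds are illustrative assumptions.

# Toy contextual gating: context-dependent gates select which grid inputs
# drive each place cell; changing the gates for some cells yields partial
# remapping even though the grid-cell activity is untouched.
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_place = 100, 20
weights = rng.random((n_place, n_grid))          # grid -> place projections

def place_rates(grid_rates, gates):
    """Gated linear summation: only gate-selected grid inputs reach a cell."""
    return (weights * gates) @ grid_rates

grid_rates = rng.random(n_grid)                  # grid activity (kept unchanged)
gates_ctx1 = rng.random((n_place, n_grid)) < 0.5 # gating in context 1
gates_ctx2 = gates_ctx1.copy()
affected = rng.random(n_place) < 0.4             # contextual change hits some cells
gates_ctx2[affected] = rng.random((int(affected.sum()), n_grid)) < 0.5

r1 = place_rates(grid_rates, gates_ctx1)
r2 = place_rates(grid_rates, gates_ctx2)
remapped = np.abs(r1 - r2) > 1.0
print(f"{remapped.sum()} of {n_place} place cells remap; grid input unchanged")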

13.4.4.2 Glial Network and Glial–Neural Interactions

Similar to neurons, glial cells also organise networks and generate calcium oscillations and waves. Neural and glial networks overlap and interact in both directions. The mechanisms


and functional role of calcium signalling in astrocytes are not yet well understood. It is hypothesised, in particular, that their correlated or synchronised activity may coordinate neuronal firing at different spatial sites through the release of neuroactive chemicals. Brain astrocytes were traditionally considered cells that merely support neuronal activity. However, recent experimental findings have demonstrated that astrocytes may also actively participate in information transmission in the nervous system. In contrast to neurons, these cells are not electrically excitable, but they can generate chemical signals regulating synaptic transmission in neighbouring synapses. Such regulation is associated with calcium pulses (or calcium spikes) inducing the release of neuroactive chemicals. The idea that astrocytes matter in addition to the pre- and postsynaptic components of the synapse has led to the concept of the tripartite synapse [76, 77]. A part of the neurotransmitter released from the presynaptic terminal (i.e., glutamate) can diffuse out of the synaptic cleft and bind to metabotropic glutamate receptors (mGluRs) on the astrocytic processes located near the neuronal synaptic compartments. The neurotransmitter activates G-protein-mediated signalling cascades that result in phospholipase C (PLC) activation and inositol-1,4,5-trisphosphate (IP3) production. IP3 binds to IP3 receptors on the intracellular stores and triggers Ca2+ release into the cytoplasm. Such an increase in intracellular Ca2+ can trigger the release of gliotransmitters [78] (e.g., glutamate, ATP, D-serine, and GABA) into the extracellular space. A gliotransmitter can affect both the pre- and postsynaptic parts of the neuron. By binding to presynaptic receptors, it can either potentiate or depress the presynaptic release probability. In addition to presynaptic feedback signalling through the activation of astrocytes, there is feed-forward signalling that targets the postsynaptic neuron: astrocytic glutamate induces slow inward postsynaptic currents (SICs) [79–81]. Thus, astrocytes may play a significant role in the regulation of neuronal network signalling by forming local elevations of gliotransmitters that can guide the flow of excitation [82, 83]. By integrating neuronal synaptic signals, astrocytes provide coordinated release of gliotransmitters affecting local groups of synapses from different neurons. This action may control the level of coherence of synaptic transmission in neuronal groups (e.g., by means of the above-mentioned SICs). Moreover, astrocytes organise networks by means of gap junctions providing intercellular diffusion of active chemicals [84] and may be able to propagate such effects even further. In this extended case, the intercellular propagation of calcium signals (calcium waves) has also attracted great interest [85] in the study of many physiological phenomena, including calcium-induced cell death and epileptic seizures [86]. In some cases, calcium waves can also be mediated by the extracellular diffusion of ATP between cells [87]. Gap junctions between


astrocytes are formed by specific proteins (connexin 43) that are selectively permeable to IP3 [84]. Thus, theoretically, astrocytes may contribute to the regulation of neuronal activity between distant network sites. Several mathematical models have been proposed to understand the functional role of astrocytes in neuronal dynamics: a model of the “dressed neuron”, which describes astrocyte-mediated changes in neural excitability [86, 88]; a model of the astrocyte serving as a frequency-selective “gatekeeper” [89]; and a model of the astrocyte regulating presynaptic functions [90]. It has been demonstrated that gliotransmitters can effectively control presynaptic facilitation and depression. In [91], the researchers considered the basic physiological functions of tripartite synapses and investigated astrocytic regulation at the level of neural network activity. They illustrated how the activation of local astrocytes may effectively control a network through a combination of different actions of gliotransmitters (presynaptic depression and postsynaptic enhancement). A bi-directional, frequency-dependent modulation of spike transmission in a network neuron was found. A network function of the neuron implied the presence of a correlation between neuron input and output, reflecting the feedback formed by the synaptic transmission pathways. The model of the tripartite synapse has recently been employed to demonstrate the functions of astrocytes in the coordination of neuronal network signalling, in particular spike-timing-dependent plasticity and learning [92, 93]. In models of astrocytic networks, communication between astrocytes has been described as Ca2+ wave propagation and the synchronisation of Ca2+ waves [94, 95]. However, due to the variety of potential actions, which may be specific to brain regions and neuronal subtypes, the functional roles of astrocytes in network dynamics are still a subject of debate. Moreover, a single astrocyte represents a spatially distributed system of processes and cell soma. Many experiments have shown that calcium transients that appear spontaneously in cell processes propagate over very short distances, comparable with the spatial size of the event itself [96]. This is not wave propagation in its classical sense; in other words, calcium events in astrocytes appear spatially localised. However, from time to time, global events occur that synchronously involve most of the astrocyte compartments. Interestingly, the statistics of calcium events follow a power law, indicating that the system dynamics develops in a regime of self-organised criticality [97, 98]. Still, the origin and functional role of such spatially distributed calcium self-oscillations are not yet clearly understood in terms of dynamical models.
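To make the kind of dynamical model referred to above concrete, here is a schematic sketch of IP3-triggered Ca2+ oscillations using a classical two-pool (Goldbeter-type) formulation; it is not one of the specific astrocyte models cited, and the parameter values are illustrative choices for this model class.

# Schematic two-pool Ca2+ oscillator (Goldbeter-type), integrated with Euler:
# Z is cytosolic Ca2+, Y is Ca2+ in the IP3-sensitive store; beta stands for
# the IP3 (stimulus) level. All parameter values are illustrative.
def derivatives(Z, Y, beta):
    v0, v1 = 1.0, 7.3                  # influx (uM/min)
    VM2, K2 = 65.0, 1.0                # pumping into the store
    VM3, KR, KA = 500.0, 2.0, 0.9      # Ca2+-induced Ca2+ release
    kf, k = 1.0, 10.0                  # leak and extrusion rates (1/min)
    v2 = VM2 * Z**2 / (K2**2 + Z**2)
    v3 = VM3 * (Y**2 / (KR**2 + Y**2)) * (Z**4 / (KA**4 + Z**4))
    dZ = v0 + v1 * beta - v2 + v3 + kf * Y - k * Z
    dY = v2 - v3 - kf * Y
    return dZ, dY

Z, Y, dt = 0.1, 1.0, 1e-4              # uM, uM, minutes
trace = []
for _ in range(200_000):               # 20 minutes of model time
    dZ, dY = derivatives(Z, Y, beta=0.5)
    Z, Y = Z + dt * dZ, Y + dt * dY
    trace.append(Z)                    # Z(t) exhibits repetitive Ca2+ spikes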

13.4.4.3 Possible Intracellular Intelligence in the Brain

Neural networks in brains demonstrate intelligence, the principles of which have been mathematically formulated in the


study of artificial intelligence, starting from the basic summating and associative perceptrons. In this sense, the construction of networks with artificial intelligence has mimicked the function of neural networks. The principles of brain functioning are not yet fully understood, but it is beyond doubt that perceptron-like intelligence, based on the plasticity of intercellular connections, is in one way or another implemented in the structure of the mammalian brain. But could this intelligence be implemented at the genetic level inside one neuron, functioning as an intracellular perceptron? Are the principles of artificial intelligence used by nature on a new, much smaller scale inside brain cells? As a proof of principle, one can refer to early works showing that a neural network can be built on the basis of chemical reactions at the molecular level [99]. Both simple networks and Turing machines can be implemented on this scale. Notably, properties of intelligence, such as the ability to learn, cannot be considered separately from the evolutionary learning provided by genetic evolution. Following this paradigm, Bray demonstrated that a cellular receptor can be considered a perceptron with weights learned via genetic evolution [100]. Later, Bray also discussed the use of proteins as computational elements in living cells [101]. So the question arises: could neural network perception and computation be organised at the genetic level? Qian et al. have shown experimentally that it is possible: e.g., a Hopfield-type memory can be implemented with the DNA gate architecture based on DNA strand displacement cascades [102]. Finally, in Chap. 12.6, we have discussed how a genetic intelligence can be implemented inside the cell. But does the human brain use such an intracellular intelligence? This remains an open question.

13.4.4.4 Medical Applications

Neurodegenerative diseases are chronic diseases, specific only to members of Homo sapiens, that cause a progressive loss of function, structure, and number of neural cells, resulting in brain atrophy and cognitive impairment. The development of neurodegenerative diseases may vary greatly depending on their causes, be it an injury (arising from physical, chemical, or infectious exposure), genetic background, a metabolic disorder, a combination of these factors, or an unknown cause. The situation with the multiple cellular and molecular mechanisms of neurodegeneration appears to be similar; their complexity often makes a single key mechanism impossible to identify. At the cellular level, neurodegenerative processes often involve anomalies in protein processing, which trigger the accumulation of atypical proteins (both extra- and intracellularly), such as amyloid beta, tau, and alpha-synuclein [103]. Neurodegenerative processes bring about a loss of homeostasis in the brain, which leads to the deterioration of structural and functional connectivity in neuronal networks, further aggravated by


degradation in signal processing. Neurodegeneration begins with attenuation of and imbalance in synaptic transmission, which influence the flow of information through the neural network. Functional disorders build up as neurodegeneration progresses, making synaptic contacts collapse, connections between cells change, and neuronal subpopulations in the brain disappear. These structural and functional changes reflect general brain atrophy accompanied by cognitive deficiency [104–107]. The traditional view of neurodegenerative diseases considers neurons the main element responsible for disease progression. Since the brain is a highly organised structure based on interconnected networks of neurons and glia, there is increasing evidence that glial cells, in particular astrocytes as the main type of glial cell, participate not only in providing important signalling functions of the brain but also in its pathogenesis. Astrocytes, like neurons, form a network, called a syncytium, connected by intercellular gap junctions. Communication between cells is predominantly local (with the nearest neighbours), and the cells, according to experimental research, occupy specific territories that almost do not overlap. Such a spatial arrangement and the unique morphology of astrocytes allow them not only to receive signals but also to have a significant impact on the activity of neural networks, both under physiological conditions and in the pathogenesis of the brain [104–107]. Disturbance of this functional activity of astrocytes and of their interaction with neuronal cells, as shown by recent studies, forms the basis of a large spectrum of brain disorders (ischaemia, epilepsy, schizophrenia, Alzheimer’s disease, Parkinson’s disease, Alexander’s disease, etc.) [108]. Pathological changes in astroglia occurring in neurodegeneration include astroglial atrophy, with morphological and functional changes, and astrogliosis [109]. These two mechanisms of pathological reaction are expressed differently at different stages of the neurodegenerative process; astroglia often loses its functions at the early stages of the disease, whereas specific damage (such as senile plaques) and neuronal death cause astrogliosis to develop at the advanced stages. Thus, when developing new treatments to correct pathological conditions and neurodegenerative disorders of the brain, the role of glial networks interacting closely with neural networks must be taken into account, as the brain of higher vertebrates is an elaborate internetwork structure. Such an approach should allow more accurate treatments for neurodegenerative diseases to be developed, and this will contribute directly to the development of medicine in general in the near future.

13.4.4.5 Open Questions in Dynamics of the Brain Super-Network

The traditional view of brain structure considers neurons the main element responsible for its functionality. The neuronal network operates as a whole in an integrated way, and thus, the


Fig. 13.28 The mammalian brain as a network of networks. The top neural layer (individual neuron shown in brown) interacts bi-directionally with a layer of astrocytes (green), which are coupled by a kind of diffusive connectivity. Each neuron may contain a network at the molecular level operating on the same principles as the neural network itself. Reprinted with permission from [55]

mammalian brain is an integrated neural network. However, in our review, we have also argued that this network cannot be considered separately from the underlying and overlapping network of glial cells, or from the molecular network inside each neuron. Hence, the mammalian brain appears to be a network of networks (Fig. 13.28) and should be investigated within an integrated approach that considers the interaction of all these interconnected networks. The function and structure of the complicated mammalian brain network can be understood only if investigated using the principles recently discovered in the theory of complex systems. This means following several concepts: (1) using an integrated approach and integrated measures, e.g., integrated information, to characterise the system; (2) investigating collective dynamics and properties emerging from the interaction of different elements; and (3) considering complex and counterintuitive dynamical regimes in which the system demonstrates unexpected behaviour, e.g., noise-induced ordering or delayed bifurcations.

Concept of Integrated Information
First of all, one should use an integrated approach to substantiate the claim that the complicated structure of the brain was motivated by the necessity of providing evolutionary advantages. We believe in the hypothesis that the additional spatial encoding of information, provided by the astrocyte network and by intelligent networks inside each neuron, maximises the amount of integrated information generated by the interaction of the neural and glial networks. From this point of view, the development of the neuro-glial interaction has increased fitness by improving information processing and, hence, provided an evolutionary advantage


for higher mammals. Indeed, the role of astrocytes in the processing of signals in the brain is not completely understood. We know that astrocytes add a kind of spatial synchronisation to the time series generated by neurons. This spatial encoding of information processing occurs due to calcium events whose complex form is determined by the astrocyte morphology. On the one hand, the time scale of these events is large, because calcium events develop much more slowly than the transmission of information between neurons. On the other hand, once a calcium event has occurred, it can involve the affected neurons almost instantaneously, because it interacts simultaneously with all the neurons linked to that particular astrocyte. Additionally, we know that the distribution of calcium events follows a power law, with a certain exponent or range of exponents, as observed in experiments. We assume that such an evolutionary system of two interacting networks appeared because there was a need to maximise the integrated information generated. Importantly, the theory of integrated information was developed to formalise a measure of consciousness, a property of higher mammals. Despite the long-standing interest in the nature of self-consciousness [110] and the information processing behind it, until recently no formal measure had been introduced to quantify the level of consciousness. Recently, the psychiatrist and neuroscientist Giulio Tononi from the University of Wisconsin–Madison formulated Integrated Information theory, a framework intended to formalise and measure the level of consciousness. In his pioneering paper [111], Tononi considered the brains of higher mammals as an extraordinary integrative device and introduced the notion of integrated information. Later, this concept was mathematically formalised in [112–115] and expanded to discrete systems [116]. Other authors have suggested newly developed measures of integrated information applicable to practical measurements from time series [117]. The theory of integrated information is a principled theoretical approach: it claims that consciousness corresponds to a system’s capacity to integrate information and proposes a way to measure that capacity [112]. The integrated information theory can account for several neurobiological observations concerning consciousness, including: (1) the association of consciousness with certain neural systems rather than others; (2) the fact that neural processes underlying consciousness can influence or be influenced by neural processes that remain unconscious; (3) the reduction of consciousness during dreamless sleep and generalised seizures; and (4) the timing requirements on the neural interactions that support consciousness [112]. The measure of integrated information captures the information generated by the interactions among the elements of the system, beyond the sum of the information generated independently by its parts. This can be assessed by the simultaneous observation of


two conditions: (1) there is a large ensemble of different states, corresponding to different conscious experiences, and (2) each state is integrated, i.e., it appears as a whole and cannot be decomposed into independent subunits. Hence, to understand the functioning of the distributed networks representing the brain of higher mammals, it is important to characterise their ability to integrate information. To our knowledge, nobody has applied the measures of integrated information to neural networks with both temporal and spatial encoding of information, and hence, nobody has utilised this approach to illustrate the role of astrocytes in the processing of information by neural–astrocyte networks.

Emergence of Collective Dynamics
Complex systems represented by a network of simple interacting elements can demonstrate dynamical and multistable regimes with properties not possessed by the single elements. Without any doubt, the mammalian brain as a network of networks may demonstrate plenty of these unexpected regimes, and it would be impossible to understand its function without the methodology recently developed in nonlinear dynamics. We saw this point with a system of repressilators with cell-to-cell communication as a prototype for complex behaviour emerging only through interaction (see Chap. 12.2).

Complex and Counterintuitive Dynamics as a Result of Nonlinearity
Recent advances in nonlinear dynamics have revealed surprising and unexpected complex behaviour arising from the nonlinearity of dynamical systems. One should certainly note such effects as the appearance of deterministic chaos [118, 119] and a huge variety of different manifestations of synchronisation [120, 121]. Here, however, as more relevant to neural and genetic dynamics, we discuss the phenomena of noise-induced ordering and delayed bifurcations. It has recently been demonstrated that gene expression is a genuinely noisy process [122]. The stochasticity of gene expression, both intrinsic and extrinsic, has been measured experimentally, e.g., in [123], and modelled either with stochastic Langevin-type differential equations or with Gillespie-type algorithms that simulate the single chemical reactions underlying this stochasticity [124]. Naturally, the question arises as to the fundamental role of noise in intracellular intelligence. Can stochastic fluctuations only corrupt information processing in the course of learning, or can they also help cells to “think”? Indeed, it has been shown that, counterintuitively, under certain conditions noise can lead to ordering in nonlinear systems, e.g., in the effect of stochastic resonance (SR) [125], which has found many manifestations in biological systems: improving the hunting abilities of the paddlefish [126], enhancing human balance control [127, 128], helping the brain’s visual processing [129], and increasing the speed of memory retrieval [130].
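A minimal numerical sketch of stochastic resonance in the standard double-well system dx/dt = x − x³ + A cos(ωt) + √(2D) ξ(t) is given below; the function name, the values of A and ω, and the noise levels are illustrative assumptions, and the response amplitude typically peaks at an intermediate noise intensity D.

# Stochastic resonance sketch: the response at the driving frequency is weak
# for small and large noise and maximal in between (Euler-Maruyama scheme).
import numpy as np

def response_amplitude(D, A=0.1, w=0.05, dt=0.01, T=2000.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    x = np.empty(n)
    x[0] = -1.0                                  # start in the left well
    for i in range(1, n):
        drift = x[i-1] - x[i-1]**3 + A * np.cos(w * t[i-1])
        x[i] = x[i-1] + dt * drift + np.sqrt(2 * D * dt) * rng.standard_normal()
    # Fourier amplitude of x(t) at the driving frequency
    return 2 * abs(np.mean(x * np.exp(-1j * w * t)))

for D in (0.02, 0.1, 0.3, 0.8):
    print(D, round(response_amplitude(D), 3))    # typically peaks at intermediate D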


Our investigations into the role of noise in the functioning of the associative perceptron have shown a significant noise-induced improvement in two different measures of functionality [131]. First, we saw a marked improvement in the likelihood of eliciting a response from an input outside the memory range of the noise-free system. Second, we noticed an increase in effectiveness also when considering the ability to respond repeatedly to inputs. In both cases, there was a stochastic resonance bell curve demonstrating an optimal level of noise for the task. The same principles appeared to apply to summating perceptrons. In [132], we studied the role of genetic noise in intelligent decision making at the genetic level and showed that noise can play a constructive role, helping cells to make a proper decision. To do this, we outlined a design by which linear separation can be performed genetically on the concentrations of certain input proteins. Considering this system in the presence of noise, we observed, demonstrated, and quantified the effect of stochastic resonance in linear classification systems with input noise and threshold spoiling. Another surprising effect was reported by us in [133] and discussed in [134]. It is particularly relevant to the genetic dynamics inside the neuron, because a neuron’s ability to classify inputs can be linked to the function of the genetic switch. When the external signals are sufficiently symmetric, the circuit may exhibit bistability, which is associated with two distinct cell fates chosen with equal probability because of the noise involved in gene expression. If, however, the input signals provide a transient asymmetry, the switch will be biased by the rate of change of the external signals. An effect of speed-dependent cellular decision making can be observed [134], in which slow and fast decisions result in different probabilities of choosing the corresponding cell fate. The speed at which the system crosses a critical region strongly influences the sensitivity to transient asymmetry of the external signals. For fast changes, the system may not notice a transient asymmetry, but for slow changes, bifurcation delay may increase the probability of one of the states being selected [135]. In [136], these effects were also extended to pattern formation. How these different dynamical regimes are responsible for specific brain functionality is an open question [137]. It is also not clear how these different dynamical regimes are linked to the information flow dynamics in the brain [138]. It is important to note that in recent works on attractor networks, synaptic dynamics (synaptic depression) was seen to convert the attractor dynamics of a neural network into heteroclinic dynamics, with solutions passing from the vicinity of one “attractor ghost” to another and showing various time scales [139]. This has been suggested as a mechanism for a number of functional effects: from transient memories to long-term integration to slow–fast pseudo-periodic oscillations. It is also worth mentioning that the recently


discovered chimera-type dynamics may be found in the brain and be responsible for certain functionalities [140]. It is an intriguing question what role glia might play in controlling the passage of network dynamics between different dynamical regimes: we may speculate that this happens because glia control transient changes of the network topology. In this section, we have emphasised that the mammalian brain works as a network of interconnected subnetworks. We have further discussed that neurons interact with glial cells in a network-like manner, potentially giving rise to some of the complicated behaviours of higher mammals. One piece of supporting neurobiological evidence for our claim comes from the recent discovery that the myelin sheath, which provides faster signal transmission between neurons and is formed by the actions of oligodendrocytes, a type of glial cell, is not uniform across brain structures: much less myelin is found in the higher levels of the cerebral cortex, the most recently evolved part [141]. The researchers suggested that as neuronal diversity increases in the more evolved structures of the cerebral cortex, the neurons use myelin to achieve the most efficient and complex behaviour, present only in the higher mammals. Thus, our speculations are consistent with some of the recent findings, suggesting that glial cells and neurons conduct a complex conversation to achieve the most efficient function. Furthermore, we suggest here that an intracellular network exists at the genetic level inside each neuron, and that its function is based on the principles of artificial intelligence. Hence, we propose that the mammalian brain is, in fact, a network of networks and suggest that future research should aim to incorporate the genetic, neuronal, and glial networks to achieve a more comprehensive understanding of how the brain performs the complex operations that give rise to the most enigmatic thing of all: the human mind.

13.4.5 Calculation of Integrated Information

Let us review how to compute integrated information (II) for binary channels. Note that by applying a threshold we can convert any time series into a set of binary channels. Consider a random event x and assume that x consists of N_x random bits: x = 100110…01. The entropy of x is

$$ H(x) = -\sum_x p(x)\log p(x). \qquad (13.29) $$

The sum here contains 2^{N_x} terms. Consider another random event y and the joint event xy. Assume that y also consists of N_x random bits. Then xy contains N_{xy} = 2N_x bits. The entropy of the joint event xy is

$$ H(xy) = -\sum_{xy} p(xy)\log p(xy), \qquad (13.30) $$

where the sum contains 2^{N_{xy}} terms. The mutual information between x and y is

$$ I(x, y) = H(x) + H(y) - H(xy). \qquad (13.31) $$

Now consider a bipartition AB of x into two parts x_A and x_B defined by bit masks A_mask and B_mask = ¬A_mask, where ¬ is the bitwise logical complement:

x      = 10011001010011101
A_mask = 11110000011111000  →  x_A = 100110011
B_mask = 00001111100000111  →  x_B = 10010101

Here, the number of bits in x_A is equal to the number of ones in A_mask, which we denote |A|, and the number of bits in x_B is equal to the number of ones in B_mask, which is |B| = N_x − |A|. The same partition is applied to y, yielding parts y_A and y_B. Entropies and mutual information for the parts of x and xy are defined according to (13.29) and (13.31):

$$ H(x_A) = -\sum_{x_A} p(x_A)\log p(x_A), \qquad (13.32) $$
$$ H(x_B) = -\sum_{x_B} p(x_B)\log p(x_B), \qquad (13.33) $$
$$ H(x_A y_A) = -\sum_{x_A y_A} p(x_A y_A)\log p(x_A y_A), \qquad (13.34) $$
$$ H(x_B y_B) = -\sum_{x_B y_B} p(x_B y_B)\log p(x_B y_B), \qquad (13.35) $$
$$ I(x_A, y_A) = H(x_A) + H(y_A) - H(x_A y_A), \qquad (13.36) $$
$$ I(x_B, y_B) = H(x_B) + H(y_B) - H(x_B y_B). \qquad (13.37) $$

Here, the notations x_A y_A (x_B y_B) mean aggregates of x_A and y_A (x_B and y_B), with the total number of bits in the aggregate being 2|A| (2|B|). The total numbers of terms in the sums over x_A, x_B, x_A y_A, and x_B y_B are, respectively, 2^{|A|}, 2^{|B|}, 2^{2|A|}, and 2^{2|B|}. The “effective information” between x and y for the specific bipartition AB is defined as

$$ I_{\mathrm{eff}}(x, y; AB) = I(x, y) - \big( I(x_A, y_A) + I(x_B, y_B) \big). \qquad (13.38) $$


Then II between x and y is defined as the effective information

$$ I_{\mathrm{int}}(x, y) = I_{\mathrm{eff}}(x, y; AB^*) \qquad (13.39) $$

calculated for the specific bipartition AB* (the “minimum information bipartition”) that minimises the normalised effective information:

$$ AB^* = \operatorname{argmin}_{AB}\left\{ \frac{I_{\mathrm{eff}}(x, y; AB)}{\min\{H(x_A), H(x_B)\}} \right\}. \qquad (13.40) $$

Note that II itself is not normalised. The normalised effective information is used only as a target function, which is minimised by the “minimum information bipartition” AB*. Typically, x and y are the states of the same stochastic process ξ(t) taken with a certain time interval τ:

$$ x = \xi(t), \qquad y = \xi(t - \tau). \qquad (13.41) $$

If the process is stationary (which is typically assumed), then x and y have the same probability distribution. This implies that all probabilistic characteristics calculated only for x are equal to the same characteristics of y. In particular, this applies to the entropies:

$$ H(y) = H(x), \quad H(y_A) = H(x_A), \quad H(y_B) = H(x_B). \qquad (13.42) $$

In order to calculate II directly using the above definitions, it is necessary to specify all the probabilities that appear in the right-hand sides. The probabilities p(x) and p(xy) can be estimated from frequency tables:

$$ p(y) = p(x) = \frac{FR(x)}{M_x}, \qquad p(xy) = \frac{FR(xy)}{M_{xy}}, \qquad (13.43) $$

where FR(x) is the actual number of occurrences of the specific state x (defined by the specific set of N_x bits) observed during the simulation of the stochastic process ξ(t), and FR(xy) is the same for a pair of states x and y sampled from the process with a time interval τ according to Eq. 13.41 for all available values of time t. M_x and M_{xy} are the total numbers of samples, M_x = Σ_x FR(x) and M_{xy} = Σ_{xy} FR(xy). The sizes of the frequency tables FR(x) and FR(xy) are 2^{N_x} and 2^{2N_x} entries, respectively. In order to estimate the frequencies from Eq. 13.43, it is necessary to fill the tables with samples whose total numbers M_x and M_{xy} are at least significantly greater than the numbers of table entries. Moreover, in order to estimate the smallest probabilities p(x) and p(xy) accurately, the necessary number of samples grows as 1/p and may become intractable (the well-known problem of the numerical estimation of entropies). From this perspective, a tractable case is when the majority of states have approximately equal probabilities (a “stochastic background”) and other states have a higher probability (“bursts”). Since x can be viewed as a composition of the parts x_A and x_B, x = x_A x_B, and similarly xy = x_A y_A x_B y_B, the probabilities for the parts, p(x_A), p(x_B), p(x_A y_A), and p(x_B y_B), can be obtained from the probability tables p(x), p(xy) by proper summation:

$$ p(x_A) = \sum_{x_B} p(x), \qquad (13.44a) $$
$$ p(x_B) = \sum_{x_A} p(x), \qquad (13.44b) $$
$$ p(x_A y_A) = \sum_{x_B y_B} p(xy), \qquad (13.44c) $$
$$ p(x_B y_B) = \sum_{x_A y_A} p(xy). \qquad (13.44d) $$

Technically, a sum over x (e.g., in (13.29)) is calculated by a cycle with a counter consisting of N_x bits running through 2^{N_x} values from 00…0 to 11…1, incremented by one. Likewise, a sum over xy is implemented as a cycle with a counter consisting of N_{xy} = 2N_x bits. In the same sense, a sum over, e.g., x_A is implemented as a cycle where only the specific bits of x (selected by A_mask) act as an incremented counter, while the other bits (which constitute x_B) remain unchanged. A dedicated function has been designed to implement this “increment by mask”. Summation over x_B, x_A y_A, and x_B y_B is implemented in the same way. Various toolboxes for II are available on GitHub (https://github.com/topics/integrated-information).

13.4.6 Astrocytes and Integrated Information Theory of Consciousness

13.4.6.1 Integrated Information Theory
The concept of integrated information (II) introduced in [142] marked a milestone in the ongoing effort to describe the activity of neural ensembles and the brain by means of information theory.⁴ II was proposed as a quantitative measure of how tightly all parts of a system are interconnected in terms of information exchange (e.g., a combination of two non-interacting subsystems implies zero II). The ambitious aim of the II concept was to quantify consciousness [113], in particular for medical applications such as detecting consciousness in a completely immobilised patient from electroencephalographic data. Several mathematical definitions of II [117, 143–145] have been proposed since the original work, all in line with the initial idea. The perturbational complexity index, linked to II as its proxy, has reliably discriminated the level of consciousness in patients during wakefulness, sleep, and anaesthesia, and even in patients who have emerged from coma with a minimal level of consciousness [146]. Although the relation of II to consciousness has been debated [147–149], II itself is by now widely adopted as a quantitative measure of complex dynamics [150–152]. Accordingly, understanding the particular mechanisms that produce positive II in neural ensembles is of topical interest. Experiments have shown that astrocytes play an important role in regulating cellular functions and information transmission in the nervous system [153, 154]. It was proposed that an astrocyte wrapping a synapse implements a feedback control circuit, which maximises information transmission through the synapse by regulating neurotransmitter release [155]. The involvement of astrocytes in neuro-glial network dynamics was quantified by estimating the functional connectivity between neurons and astrocytes from time-lapse Ca2+ imaging data [156]. In contrast to neuronal cells, astrocytes do not generate electrical excitations (action potentials); however, their intracellular calcium dynamics show similar excitable properties [157]. These signals can remarkably affect neuronal excitability and the efficiency of synaptic transmission between neurons through the Ca2+-dependent release of neuroactive chemicals (e.g., glutamate, ATP, D-serine, and GABA) [78]. Networks of astrocytes accompanying neuronal cells generate collective activity patterns that can regulate neuronal signalling by facilitating or suppressing synaptic transmission [153, 154, 158]. In this study, we show that astrocytes may contribute to positive II in neuronal ensembles. We calculate II in a small neuro-astrocytic network with random topology by numerical simulation and find that positive II is conditioned by the coupling of neurons to astrocytes and increases with spontaneous neuronal spiking activity. We explain this behaviour using simplified spiking-bursting dynamics, which we

⁴ Reprinted whole article with permission from Kanakov O, Gordleeva S, Ermolaeva A, Jalan S, Zaikin A. Astrocyte-induced positive integrated information in neuron-astrocyte ensembles. Physical Review E 99: 012418, 2019. https://doi.org/10.1103/PhysRevE.99.012418. Copyright 2019 by the American Physical Society.

Fig. 13.29 Schemes of the neuro-astrocytic networks under study: (a) an instance of random neuronal network topology; (b) the all-to-all neuronal network. The inhibitory neuron is shown with an enlarged symbol and highlighted in red. Connections without arrows are bi-directional. Each astrocyte is coupled to one corresponding neuron and acts by modulating the outgoing connections of that neuron

We implement this dynamics both in the neuro-astrocytic network model with all-to-all connectivity between neurons, showing astrocyte-induced coordinated bursting, and in a specially defined stochastic process allowing an analytical calculation of II. The analytical and simulation results for the all-to-all network are in good agreement. Notably, the nontrivial dynamics of the random version of the network, although not directly compatible with our analytical treatment, turns out to be even more favourable for positive II than the spiking-bursting dynamics of the all-to-all network. We speculate that the presence of astrocytic regulation of neurotransmission may be essential for generating positive II in larger neuro-astrocytic ensembles.

13.4.6.2 Neuron-Astrocyte Model

The neural network under study consists of six synaptically coupled Hodgkin-Huxley neurons [159]. We use two variants of the network architecture: (1) a network of one inhibitory and five excitatory neurons with coupling topology obtained by randomly picking 1/3 of the total number of connections out of the full directed graph, excluding self-connections (the particular instance of random topology for which the presented data have been obtained is shown in Fig. 13.29a); (2) an all-to-all network of six excitatory neurons (Fig. 13.29b). The membrane potential of a single neuron evolves according to the following ionic current balance equation:

$$C\,\frac{dV^{(i)}}{dt} = I_{channel}^{(i)} + I_{app}^{(i)} + \sum_j I_{syn}^{(ij)} + I_P^{(i)}, \qquad (13.45)$$

where the superscript ($i = 1, \dots, 6$) corresponds to a neuronal index and $(j)$ corresponds to an index of input connection. Ionic currents (i.e., sodium, potassium, and leak currents) are expressed as follows:

$$I_{channel} = -g_{Na}\, m^3 h\, (V - E_{Na}) - g_K\, n^4\, (V - E_K) - g_{leak}\, (V - E_{leak}),$$
$$\frac{dx}{dt} = \alpha_x (1 - x) - \beta_x x, \qquad x = m, n, h. \qquad (13.46)$$
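For readers who want to experiment numerically, here is a minimal Python sketch of the right-hand side of Eqs. 13.45/13.46 for one neuron (a sketch only: the rate functions are the standard Hodgkin-Huxley ones with $V$ shifted by 65 mV, as stated below; the synaptic and Poisson inputs enter as plain numbers, and all names are ours):

    import numpy as np

    # parameter values from the following paragraph (mV, mS/cm^2, uF/cm^2)
    E_Na, E_K, E_leak = 55.0, -77.0, -54.4
    g_Na, g_K, g_leak = 120.0, 36.0, 0.3
    C, I_app = 1.0, -5.0          # I_app chosen for the excitable regime

    def gate_rates(V):
        """Standard HH rate functions with V shifted by 65 mV (Eq. 13.46)."""
        a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
        a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
        return (a_m, b_m), (a_h, b_h), (a_n, b_n)

    def neuron_rhs(V, m, h, n, I_syn, I_P):
        """Right-hand side of Eqs. 13.45/13.46 for one neuron (time in ms)."""
        I_channel = (-g_Na * m**3 * h * (V - E_Na)
                     - g_K * n**4 * (V - E_K)
                     - g_leak * (V - E_leak))
        (a_m, b_m), (a_h, b_h), (a_n, b_n) = gate_rates(V)
        dV = (I_channel + I_app + I_syn + I_P) / C
        return (dV, a_m * (1 - m) - b_m * m,
                a_h * (1 - h) - b_h * h, a_n * (1 - n) - b_n * n)

A simple Euler or Runge-Kutta loop with a time step well below 1 ms can then advance $(V, m, h, n)$ for each of the six neurons.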


The nonlinear functions $\alpha_x$ and $\beta_x$ for the gating variables are taken as in the original Hodgkin-Huxley model, with the membrane potential $V$ shifted by 65 mV. Throughout this section, we use the following parameter values: $E_{Na} = 55$ mV, $E_K = -77$ mV, $E_{leak} = -54.4$ mV, $g_{Na} = 120$ mS/cm$^2$, $g_K = 36$ mS/cm$^2$, $g_{leak} = 0.3$ mS/cm$^2$, $C = 1\ \mu$F/cm$^2$. The applied currents $I_{app}^{(i)}$ are fixed at a constant value controlling the depolarisation level and the dynamical regime, which can be either excitable, oscillatory, or bistable [160]. We use $I_{app}^{(i)} = -5.0\ \mu$A/cm$^2$, which corresponds to the excitable regime. The synaptic current $I_{syn}^{(ij)}$ simulating interactions between the neurons obeys the equation

$$I_{syn}^{(ij)} = \frac{g_{syneff}\,\left(V^{(j)} - E_{syn}\right)}{1 + \exp\!\left(-\left(V^{(i)} - \theta_{syn}\right)/k_{syn}\right)}, \qquad (13.47)$$

where $E_{syn} = -90$ mV for the inhibitory synapse and $E_{syn} = 0$ mV for the excitatory one. The network composition of one inhibitory and five excitatory neurons is in line with experimental data showing that the fraction of inhibitory neurons is about 20% [161]. The variable $g_{syneff}$ describes the synaptic weight in mS/cm$^2$ modulated by an astrocyte (as defined by Eq. 13.52 below), and the parameters $\theta_{syn} = 0$ mV and $k_{syn} = 0.2$ mV describe the midpoint of the synaptic activation and the slope of its threshold, respectively. Each neuron is stimulated by a Poisson pulse train $I_P^{(i)}$ mimicking external spiking inputs with a certain average rate $\lambda$. Each Poisson pulse has a constant duration of 10 ms and a constant amplitude, which is sampled independently for each pulse from a uniform random distribution on the interval $[-1.8, 1.8]$. The sequences of Poisson pulses applied to different neurons are independent. Note that the time unit in the neuronal model (see Eqs. 13.45/13.46) is one millisecond. Due to a slower time scale, in the astrocytic model (see below) the empirical constants are given using seconds as time units. When integrating the joint system of differential equations, the astrocytic model time is rescaled so that the units in both models match up.
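The Poisson pulse train $I_P^{(i)}$ just described can be generated, for example, as follows (a simplified Python sketch in which overlapping pulses simply overwrite one another; names are ours):

    import numpy as np

    def poisson_pulse_train(rate_hz, t_total_ms, dt_ms, rng):
        """External input I_P(t): Poisson events with mean rate rate_hz,
        each a 10 ms pulse whose amplitude is drawn uniformly from
        [-1.8, 1.8], as described in the text."""
        n_steps = int(t_total_ms / dt_ms)
        I_P = np.zeros(n_steps)
        p_event = rate_hz * dt_ms * 1e-3      # event probability per time step
        starts = np.nonzero(rng.random(n_steps) < p_event)[0]
        width = int(10.0 / dt_ms)             # constant 10 ms pulse duration
        for s in starts:
            I_P[s:s + width] = rng.uniform(-1.8, 1.8)
        return I_P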

We consider an astrocytic network in the form of a two-dimensional square lattice with only nearest-neighbour connections [162]. Such a topology for the Ca2+- and IP3-diffusion model is justified by experimental findings stating that astrocytes occupy "nonoverlapping" territories [163]. The neuro-astrocytic network of the real brain has a 3D structure, with one astrocyte interacting with several neurons and vice versa. However, in our modelling we use a simplified approach. The latter reflects the fact that, throughout the area CA1 of the hippocampus, pyramidal (excitatory) cells are arranged in a regular layer and surrounded by a relatively uniform scatter of astrocytes [164]. According to the experimental data [164, 165], the modelled astrocytes are distributed evenly across the neural network, with the total cell number equal to the number of neurons (due to the small size of the networks, the astrocytic network is modelled as a 2-D lattice in our case study). Astrocytes and neurons communicate via a special mechanism modulated by neurotransmitters from both sides. The model is designed so that, when the calcium level inside an astrocyte exceeds a threshold, the astrocyte releases a neuromodulator (e.g., glutamate) that may affect the release probability (and thus the synaptic strength) at neighbouring connections in a tissue volume [166]. A single astrocyte can regulate the synaptic strength of several neighbouring synapses that belong to one neuron or to several different neurons, but since we do not take into account the complex morphological structure of the astrocyte, we assume for simplicity that one astrocyte interacts with one neuron. In a number of previous studies, the biophysical mechanism underlying the calcium dynamics of astrocytes has been extensively investigated [167, 168]. Calcium is released from internal stores, mostly from the endoplasmic reticulum (ER). This process is regulated by inositol 1,4,5-trisphosphate (IP3), which activates IP3 channels in the ER membrane, resulting in a Ca2+ influx from the ER. IP3, acting as a second messenger, is produced when neurotransmitter (e.g., glutamate) molecules are bound by metabotropic receptors of the astrocyte. In turn, IP3 can be regenerated, depending on the level of calcium, by phospholipase C-δ (PLC-δ). The state variables of each cell include the IP3 concentration $IP_3$, the Ca2+ concentration $Ca$, and the fraction of activated IP3 receptors $h$. They evolve according to the following equations [167, 168]:

$$\frac{dCa^{(m,n)}}{dt} = J_{ER}^{(m,n)} - J_{pump}^{(m,n)} + J_{leak}^{(m,n)} + J_{in}^{(m,n)} - J_{out}^{(m,n)} + J_{Cadiff}^{(m,n)};$$
$$\frac{dIP_3^{(m,n)}}{dt} = \frac{IP_3^{*} - IP_3^{(m,n)}}{\tau_{IP3}} + J_{PLC}^{(m,n)} + J_{IP3diff}^{(m,n)};$$
$$\frac{dh^{(m,n)}}{dt} = a_2 \left( d_2\, \frac{IP_3^{(m,n)} + d_1}{IP_3^{(m,n)} + d_3}\, \left(1 - h^{(m,n)}\right) - Ca^{(m,n)}\, h^{(m,n)} \right), \qquad (13.48)$$


with $m = 1, \dots, 3$, $n = 1, 2$. The current $J_{ER}$ is the Ca2+ current from the ER to the cytoplasm, $J_{pump}$ is the ATP-dependent pumping current, $J_{leak}$ is the leak current, $J_{in}$ and $J_{out}$ describe calcium exchanges with the extracellular space, and $J_{PLC}$ is the calcium-dependent PLC-δ current. They are expressed as follows:

$$J_{ER} = c_1 v_1\, Ca^3 h^3 IP_3^3\, \frac{c_0/c_1 - (1 + 1/c_1)\,Ca}{\left((IP_3 + d_1)(IP_3 + d_5)\right)^3};$$
$$J_{pump} = \frac{v_3\, Ca^2}{k_3^2 + Ca^2};$$
$$J_{leak} = c_1 v_2\, \left(c_0/c_1 - (1 + 1/c_1)\,Ca\right);$$
$$J_{in} = v_5 + \frac{v_6\, IP_3^2}{k_2^2 + IP_3^2};$$
$$J_{out} = k_1\, Ca;$$
$$J_{PLC} = \frac{v_4\,\left(Ca + (1 - \alpha)\,k_4\right)}{Ca + k_4}. \qquad (13.49)$$
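A compact Python transcription of the single-cell fluxes of Eqs. 13.48/13.49 may be sketched as follows (parameter values as listed in the next paragraph, with $v_4$ set to the oscillatory regime; the diffusive couplings of Eq. 13.50 enter through the two optional arguments; names are ours):

    import numpy as np

    # parameter values quoted below Eq. 13.49 (concentrations in uM, rates in 1/s)
    c0, c1 = 2.0, 0.185
    v1, v2, v3, v4 = 6.0, 0.11, 2.2, 0.5      # v4 = 0.5 -> oscillatory regime
    v5, v6 = 0.025, 0.2
    k1, k2, k3, k4 = 0.5, 1.0, 0.1, 1.1
    a2, d1, d2, d3, d5 = 0.14, 0.13, 1.049, 0.9434, 0.082
    alpha, tau_IP3, IP3_star = 0.8, 7.143, 0.16

    def astro_rhs(Ca, IP3, h, J_Ca_diff=0.0, J_IP3_diff=0.0):
        """Right-hand side of Eqs. 13.48/13.49 for one astrocyte (time in s)."""
        J_ER = (c1 * v1 * Ca**3 * h**3 * IP3**3
                * (c0/c1 - (1 + 1/c1) * Ca) / ((IP3 + d1) * (IP3 + d5))**3)
        J_pump = v3 * Ca**2 / (k3**2 + Ca**2)
        J_leak = c1 * v2 * (c0/c1 - (1 + 1/c1) * Ca)
        J_in = v5 + v6 * IP3**2 / (k2**2 + IP3**2)
        J_out = k1 * Ca
        J_PLC = v4 * (Ca + (1 - alpha) * k4) / (Ca + k4)
        dCa = J_ER - J_pump + J_leak + J_in - J_out + J_Ca_diff
        dIP3 = (IP3_star - IP3) / tau_IP3 + J_PLC + J_IP3_diff
        dh = a2 * (d2 * (IP3 + d1) / (IP3 + d3) * (1 - h) - Ca * h)
        return dCa, dIP3, dh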

The biophysical meaning of all parameters in Eqs. 13.48 and 13.49, together with their experimentally determined values, can be found in [167, 168]. For our purpose, we fix $c_0 = 2.0\ \mu$M, $c_1 = 0.185$, $v_1 = 6$ s$^{-1}$, $v_2 = 0.11$ s$^{-1}$, $v_3 = 2.2\ \mu$Ms$^{-1}$, $v_5 = 0.025\ \mu$Ms$^{-1}$, $v_6 = 0.2\ \mu$Ms$^{-1}$, $k_1 = 0.5$ s$^{-1}$, $k_2 = 1.0\ \mu$M, $k_3 = 0.1\ \mu$M, $k_4 = 1.1\ \mu$M, $a_2 = 0.14\ \mu$M$^{-1}$s$^{-1}$, $d_1 = 0.13\ \mu$M, $d_2 = 1.049\ \mu$M, $d_3 = 0.9434\ \mu$M, $d_5 = 0.082\ \mu$M, $\alpha = 0.8$, $\tau_{IP3} = 7.143$ s, and $IP_3^{*} = 0.16\ \mu$M. (For aligning the time units of the neuronal and astrocytic parts of the model, it is sufficient to re-express the numerical values of all dimensional constants using a time unit of 1 ms.) The parameter $v_4$ describes the rate of $IP_3$ regeneration and controls the dynamical regime of the model (see Eqs. 13.48/13.49), which can be excitable at $v_4 = 0.3\ \mu$Ms$^{-1}$ or oscillatory at $v_4 = 0.5\ \mu$Ms$^{-1}$ [168]. Here, we limit ourselves to the oscillatory case. The currents $J_{Cadiff}$ and $J_{IP3diff}$ describe the diffusion of Ca2+ ions and IP3 molecules via gap junctions between astrocytes in the network and can be expressed as follows [162]:

$$J_{Cadiff}^{(m,n)} = d_{Ca}\, (\Delta Ca)^{(m,n)}; \qquad J_{IP3diff}^{(m,n)} = d_{IP3}\, (\Delta IP_3)^{(m,n)}, \qquad (13.50)$$

where the parameters $d_{Ca} = 0.001$ s$^{-1}$ and $d_{IP3} = 0.12$ s$^{-1}$ describe the Ca2+ and IP3 diffusion rates, respectively, and $(\Delta Ca)^{(m,n)}$ and $(\Delta IP_3)^{(m,n)}$ are discrete Laplace operators:

$$(\Delta Ca)^{(m,n)} = \frac{1}{2}\left( Ca^{(m+1,n)} + Ca^{(m-1,n)} + Ca^{(m,n+1)} + Ca^{(m,n-1)} - 4\, Ca^{(m,n)} \right). \qquad (13.51)$$

Astrocytes can modify the release probability of nearby synapses in a tissue volume [153], likely by releasing signalling molecules ("gliotransmitters") in a Ca2+-dependent manner [154]. We propose that each astrocyte from the network interacts with one neuron from the neural network by modulating the synaptic weight. For the sake of simplicity, the effect of the astrocytic calcium concentration $Ca$ upon the synaptic weight $g_{syneff}$ of the affected synapses (which appears in Eq. 13.47) is described with a simple formalism based on earlier suggestions [169–171]:

$$g_{syneff} = \begin{cases} g_{syn}\,\left(1 + g_{astro}\, Ca^{(m,n)}\right), & \text{if } Ca^{(m,n)} > 0.2, \\ g_{syn}, & \text{otherwise}, \end{cases} \qquad (13.52)$$

where $g_{syn} = 0.04$ mS/cm$^2$ is the baseline synaptic weight, the parameter $g_{astro} > 0$ controls the strength of the synaptic weight modulation, and $Ca^{(m,n)}$ is the intracellular calcium concentration in the astrocyte (Eq. 13.48). In general, the phenomena of astrocytic neuromodulation are highly versatile and depend upon the actual gliotransmitter and its target [154], which in particular may lead to the inhibition of synaptic transmission instead of its potentiation. In this sense, the model of Eq. 13.52 is not universal, but we anticipate at least its qualitative applicability to cases where synaptic strength potentiation by astrocytes has been confirmed experimentally. These include the impact of astrocytic glutamate upon presynaptic terminals, leading to the potentiation of excitatory transmission in the hippocampal dentate gyrus [172], and of both excitatory [166, 173, 174] and inhibitory [175, 176] synaptic transmission in the CA1 area of the hippocampus. In addition, glutamate action on postsynaptic terminals was also shown to improve neuronal synchrony [177].

The time series of the neuron membrane potentials $V^{(i)}(t)$ are converted into binary-valued discrete-time processes according to [178] as follows. Time is split into windows of duration $T$, which become the units of the discrete time. If the inequality $V^{(i)}(t) > V_{thr} = -40.0$ mV is satisfied for at least some $t$ within a particular time window (essentially, if there was a spike in this time window), then the corresponding binary value (bit) is assigned 1, and 0 otherwise. The size of the time window is chosen so that spontaneous spiking activity produces time-uncorrelated spatial patterns, while a burst shows up as a train of successive 1's in the corresponding bit.

We use the definition of II according to [117] as follows. Consider a stationary stochastic process $\xi(t)$ (a binary vector process) whose instantaneous state is described by $N = 6$ bits. The full set of $N$ bits (the "system") can be split into two non-overlapping, non-empty subsets of bits ("subsystems") $A$ and $B$; such a splitting is further referred to as a bipartition $AB$. Denote by $x = \xi(t)$ and $y = \xi(t + \tau)$ two states of the process separated by a specified time interval $\tau \neq 0$. The states of the subsystems are denoted as $x_A$, $x_B$, $y_A$, and $y_B$.
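Before turning to the formal definitions, note that the binarisation step above is easy to express in code; a minimal Python sketch (names are ours) reads:

    import numpy as np

    def binarise(V, dt_ms, T_ms, V_thr=-40.0):
        """Convert membrane-potential traces V of shape (n_steps, N) into
        binary windows: bit = 1 if V exceeds V_thr anywhere inside a window
        of duration T, and 0 otherwise."""
        w = int(T_ms / dt_ms)
        n_win = V.shape[0] // w
        chunks = V[:n_win * w].reshape(n_win, w, V.shape[1])
        return (chunks.max(axis=1) > V_thr).astype(np.uint8)  # (n_win, N) bits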


Mutual information between $x$ and $y$ is defined as

$$I_{xy} = H_x + H_y - H_{xy}, \qquad (13.53)$$

where $H_x = -\sum_x p_x \log_2 p_x$ is entropy (the base-2 logarithm gives the result in bits), and $H_y = H_x$ due to the assumed stationarity. Next, a bipartition $AB$ is considered, and the "effective information" as a function of the particular bipartition is defined as

$$I_{eff}(AB) = I_{xy} - I_{x_A, y_A} - I_{x_B, y_B}. \qquad (13.54)$$

II is then defined as the effective information calculated for a specific bipartition $AB^{MIB}$ (the "minimum information bipartition"), which minimises a specifically normalised effective information:

$$II = I_{eff}(AB^{MIB}), \qquad (13.55a)$$
$$AB^{MIB} = \operatorname{argmin}_{AB} \left\{ \frac{I_{eff}(AB)}{\min\{H(x_A), H(x_B)\}} \right\}. \qquad (13.55b)$$

Note that this definition prohibits positive II whenever $I_{eff}$ turns out to be zero or negative for at least one bipartition $AB$. In essence, the entropy $H_x$ generalises the idea of measuring the effective number of independent bits in $x$. For example, if all $N$ bits in $x$ are independent and are "fair coins" (have equal probability 1/2 of getting 0 or 1), then $H_x = N$. If $x$ consists of $m$ independent groups of bits that are fully synchronised within each group (or all bits in $x$ are uniquely expressed in terms of $m < N$ independent fair coins), then $H_x = m$. In the same conceptual sense, the mutual information $I_{xy}$ in Eq. 13.53 measures the degree of dependence (the effective number of dependent bits) between two random events $x$ and $y$. In the case of causality, when the dependence is unidirectional, one can speak of the degree of predictability instead. For example, if $y$ exactly repeats $x$ (full predictability) with all bits in $x$ being independent fair coins, then in Eq. 13.53 $H_x = H_y = H_{xy} = N$, which gives $I_{xy} = N$. If instead $y$ and $x$ are totally independent (absence of predictability), then $H_x = H_y = N$, $H_{xy} = 2N$, and $I_{xy} = 0$. In an intermediate situation, when only $m$ bits in $y$ exactly repeat the corresponding bits in $x$ (like perfect transmission lines), with the other $N - m$ bits in $y$ acting as randomly failing transmission lines and all bits in $x$ being independent fair coins, then again $H_x = H_y = N$, but now $H_{xy} = 2N - m$, because $m$ out of the total $2N$ bits in the combination $xy$ are expressed in terms of the others, which leaves $2N - m$ independent bits. According to Eq. 13.53, this yields $I_{xy} = m$. The definition of mutual information in Eq. 13.53 generalises this idea, retaining its applicability in the case of arbitrary dependence between two random events, even when this dependence cannot be attributed to specific bits.

In turn, the effective information of Eq. 13.54 measures how much more predictable the system is as a whole than when trying to predict the subsystems separately. Trivial cases when $I_{eff}$ is zero are (1) independent subsystems (then the system as a whole is exactly as predictable as the combination of its parts) and (2) a complete absence of predictability (when all mutual informations are zero). When the system is fully synchronised (all bits are equal at any instance of time), for any bipartition we get $I_{xy} = I_{x_A,y_A} = I_{x_B,y_B}$, which implies $I_{eff} < 0$ according to Eq. 13.54. From Eqs. 13.55a/b, we conclude that II is zero or negative in the mentioned cases. The idea behind the choice of the "minimum information bipartition" $AB^{MIB}$ in Eq. 13.55, according to [117], is to identify the worst-case partition in terms of information interconnection, but with preference to nontrivial partitions with roughly similarly sized subsystems, which is achieved by the normalisation in Eq. 13.55b. For more detail on the rationale behind the used definition of II, we refer the reader to the original paper [117], and for the general concept of II, to the papers cited in the introduction of our paper [179].
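For $N = 6$ bits, the definition of Eqs. 13.53-13.55 can be evaluated by brute force, enumerating all bipartitions with empirical plug-in probabilities and using the stationarity assumption $H_y = H_x$ of Eq. 13.53; a Python sketch (names are ours) is:

    import numpy as np
    from itertools import combinations

    def entropy(p):
        """Plug-in entropy (bits) of a probability vector."""
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def integrated_information(bits, tau):
        """Direct calculation of II (Eqs. 13.53-13.55) from a (T, N) binary
        array; the cost grows exponentially with N, so only small ensembles
        (here N = 6) are tractable."""
        bits = np.asarray(bits, dtype=np.int64)
        T, N = bits.shape
        weights = 1 << np.arange(N)

        def mutual_info(idx):
            idx = list(idx)
            x = bits[:-tau][:, idx] @ weights[:len(idx)]
            y = bits[tau:][:, idx] @ weights[:len(idx)]
            k = 2 ** len(idx)
            px = np.bincount(x, minlength=k) / len(x)
            pxy = np.bincount(x * k + y, minlength=k * k) / len(x)
            Hx = entropy(px)
            return 2 * Hx - entropy(pxy), Hx   # uses Hy = Hx (stationarity)

        I_whole, _ = mutual_info(range(N))
        best_norm, II = np.inf, None
        for r in range(1, N // 2 + 1):
            for A in combinations(range(N), r):
                B = [i for i in range(N) if i not in A]
                IA, HA = mutual_info(A)
                IB, HB = mutual_info(B)
                I_eff = I_whole - IA - IB
                denom = min(HA, HB)
                norm = I_eff / denom if denom > 0 else np.inf
                if norm < best_norm:           # minimum information bipartition
                    best_norm, II = norm, I_eff
        return II

Since the joint state tables grow as $2^{2N}$, this direct approach is feasible only for small ensembles, which is exactly the computational limitation discussed at the end of this section.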

13.4.6.3 Integrated Information Generated by Astrocytes

We calculated II directly, according to the definition above, using empirical probabilities from the binarised time series of the simulated neuro-astrocytic networks of both architectures in Fig. 13.29a, b. For each architecture, we performed two series of simulation runs: (i) with a constant Poissonian stimulation rate $\lambda$ (equal to 15.0 Hz for the random network and 30.0 Hz for the all-to-all network) and the neuro-astrocytic interaction $g_{astro}$ varied; (ii) with constant $g_{astro} = 6.0$ and $\lambda$ varied, all other model parameters as indicated above. The time window $T$ used in the binarisation and the time delay $\tau$ used in the computation of II are $\tau = T = 0.1$ s for the random network and $\tau = T = 0.2$ s for the all-to-all network. The length of the time series used to calculate each point is $5 \times 10^5$ s, taken after a transient time of $2 \times 10^3$ s. The estimate of II shows convergence as the length of the time series is increased. The error due to finite data (shown as the half-height of the error bars in the graphs) is estimated as the maximal absolute difference between the result for the whole observation time and for each of its halves taken separately.

The obtained dependencies of II upon $g_{astro}$ and $\lambda$ are shown in Fig. 13.30. For the random topology (Fig. 13.30a), we observe that (1) positive II is greatly facilitated by nonzero $g_{astro}$ (i.e., by the effect of astrocytes), although small positive quantities, still exceeding the error estimate, are observed even at $g_{astro} = 0$; (2) II generally increases with the average stimulation frequency $\lambda$ that determines the spontaneous activity in the network. (The abrupt drop of II at high $\lambda$ is associated with a change of the minimum information bipartition and currently has no analytical explanation.)

Fig. 13.30 Dependence of II upon the neuro-astrocytic interaction $g_{astro}$ and upon the average stimulation rate $\lambda$: (a) in the random network (instance shown in Fig. 13.29a); (b) in the all-to-all network (Fig. 13.29b). Blue solid lines with dot marks: direct calculation by definition from simulation data; red dashed lines: error estimation; green lines with cross marks: analytical calculation for the spiking-bursting process with parameters estimated from simulation data

The visible impact of astrocytes on the network dynamics consists in the stimulation of space-time patterns of neuronal activity due to the astrocyte-controlled increase in neuronal synaptic connectivity on the astrocytic time scale. An instance of such a pattern of activation for the random network is shown as a raster plot in Fig. 13.31a. The pattern is rather complex, and we only assume that II must be determined by the properties of this pattern, which in turn is controlled by the astrocytic interaction (as well as by the network topology and by the external inputs to the neurons, represented by Poissonian processes in the model). We currently do not identify specific properties of the activation patterns linked to the behaviour of II in the random network; however, we do so (see below) for the all-to-all network of identical (all excitatory) neurons, due to its simpler "spiking-bursting" type of spatiotemporal dynamics, consisting of coordinated system-wide bursts overlaid upon background spiking activity; see the raster plot in Fig. 13.31b. As seen in Fig. 13.30b, this network retains the generally increasing dependence of II upon $g_{astro}$ and $\lambda$, with the most notable difference being that II is negative until $\lambda$ exceeds a certain threshold.

To confirm the capacity of II as a quantitative indicator of the properties of complex dynamics in application to the system under study, we additionally consider graphs of the mutual information $I_{xy}$ in the same settings; see Fig. 13.32 (note the greater range over $\lambda$ in Fig. 13.32b as compared to Fig. 13.30b). Comparing Fig. 13.30 to Fig. 13.32, we observe a qualitative difference in the dependencies upon $\lambda$ in the case of the all-to-all network (Figs. 13.30b, 13.32b): while the mutual information decreases with the increase of $\lambda$, II is found to grow and transits from negative to positive values before reaching its maximum. This means that, even while the overall predictability of the system is waning, the system becomes more integrated, in the sense that the advantage in predictability when the system is taken as a whole, over considering it by parts, is found to grow. This confirms the capability of II to capture features of complex dynamics that are not seen when using only mutual information.

Our analytical consideration is based upon mimicking the spiking-bursting dynamics by a model stochastic process, which admits an analytical calculation of effective information.


Fig. 13.31 Raster plots of neuronal dynamics at $g_{astro} = 6$, $\lambda = 20$: (a) in the random network (instance shown in Fig. 13.29a); (b) in the all-to-all network (Fig. 13.29b). White and black correspond to 0 and 1 in the binarised time series


Fig. 13.32 Dependence of the mutual information $I_{xy}$ upon the neuro-astrocytic interaction $g_{astro}$ and upon the average stimulation rate $\lambda$: (a) in the random network (instance shown in Fig. 13.29a); (b) in the all-to-all network (Fig. 13.29b). Legend as in Fig. 13.30

We define this process $\xi(t)$ as a superposition of a time-correlated dichotomous component, which turns system-wide bursting on and off, and a time-uncorrelated component describing the spontaneous activity that occurs in the absence of a burst, in the following way. At each instant of time, the state of the dichotomous component can be either "bursting", with probability $p_b$, or "spontaneous" (or "spiking"), with probability $p_s = 1 - p_b$. While in the bursting mode, the instantaneous state of the resulting process $x = \xi(t)$ is given by all ones: $x = 11\ldots1$ (further abbreviated as $x = \mathbf{1}$). In the case of spiking, the state $x$ is a random variate described by a discrete probability distribution $s_x$, so that the resulting one-time state probabilities read

$$p(x) = p_s s_x \quad (x \neq \mathbf{1}), \qquad (13.56a)$$
$$p(x = \mathbf{1}) = p_1, \qquad p_1 = p_s s_1 + p_b, \qquad (13.56b)$$

where $s_1$ is the probability of the spontaneous occurrence of $x = \mathbf{1}$ in the absence of a burst (all neurons spontaneously spiking within the same time discretisation window). To describe two-time joint probabilities for $x = \xi(t)$ and $y = \xi(t + \tau)$, we consider a joint state $xy$, which is a concatenation of the bits in $x$ and $y$. The spontaneous activity is assumed to be uncorrelated in time: $s_{xy} = s_x s_y$. The time correlations of the dichotomous component are described by a $2 \times 2$ matrix of probabilities $p_{ss}$, $p_{sb}$, $p_{bs}$, $p_{bb}$, which denote the joint probabilities to observe the respective spiking and/or bursting states in $x$ and $y$. The probabilities obey $p_{sb} = p_{bs}$ (due to stationarity), $p_b = p_{bb} + p_{sb}$, and $p_s = p_{ss} + p_{sb}$, thereby allowing us to express all one- and two-time probabilities describing the dichotomous component in terms of two quantities, for which we choose $p_b$ and the correlation coefficient $\varphi$ defined by

$$p_{sb} = p_s p_b (1 - \varphi). \qquad (13.57)$$
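For illustration, the process can be sampled by driving the dichotomous component with a two-state Markov chain whose transition probabilities reproduce the stationary burst probability $p_b$ and the correlation coefficient $\varphi$ of Eq. 13.57 (a Python sketch under the simplifying assumption that the lag $\tau$ equals one discrete time step; names are ours):

    import numpy as np

    def spiking_bursting(T, N, pb, phi, P_spike, rng):
        """Sample the spiking-bursting process: a two-state Markov chain for
        the dichotomous component (stationary burst probability pb, one-step
        correlation phi, Eq. 13.57) superposed with independent spontaneous
        spiking with per-bit probabilities P_spike (Eq. 13.61)."""
        ps = 1.0 - pb
        p_bb = pb + ps * phi          # P(burst at t+1 | burst at t)
        p_sb = pb * (1.0 - phi)       # P(burst at t+1 | spiking at t)
        burst = np.empty(T, dtype=bool)
        burst[0] = rng.random() < pb
        for t in range(1, T):
            burst[t] = rng.random() < (p_bb if burst[t - 1] else p_sb)
        bits = rng.random((T, N)) < np.asarray(P_spike)   # spontaneous spiking
        bits[burst] = 1                                    # system-wide bursts
        return bits.astype(np.uint8)

One can check directly that these transition probabilities leave the burst probability $p_b$ stationary and give the joint probability $p_{sb}$ of Eq. 13.57.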

The two-time joint probabilities for the resulting process are then expressed as

$$p(x \neq \mathbf{1},\, y \neq \mathbf{1}) = p_{ss}\, s_x s_y, \qquad (13.58a)$$
$$p(x \neq \mathbf{1},\, y = \mathbf{1}) = \pi s_x, \qquad p(x = \mathbf{1},\, y \neq \mathbf{1}) = \pi s_y, \qquad (13.58b)$$
$$p(x = \mathbf{1},\, y = \mathbf{1}) = p_{11}, \qquad (13.58c)$$
$$\pi = p_{ss} s_1 + p_{sb}, \qquad p_{11} = p_{ss} s_1^2 + 2 p_{sb} s_1 + p_{bb}. \qquad (13.58d)$$

Note that the above notations can be applied to any subsystem instead of the whole system (with the same dichotomous component, as it is system-wide anyway). For this spiking-bursting process, the expression for the mutual information of $x$ and $y$ (Eq. 13.53), after substitution of the probabilities of Eqs. 13.56 and 13.58 and algebraic simplifications, reduces to

$$I_{xy} = 2(1 - s_1)\{p_s\} + 2\{p_1\} - (1 - s_1)^2\{p_{ss}\} - 2(1 - s_1)\{\pi\} - \{p_{11}\} = J(s_1; p_b, \varphi), \qquad (13.59)$$

where we denote $\{q\} = -q \log_2 q$ for compactness. With the expressions for $p_1$, $p_{11}$, and $\pi$ from Eqs. 13.56b and 13.58d taken into account, $I_{xy}$ can be viewed as a function of $s_1$, denoted in Eq. 13.59 as $J(\cdot)$, with two parameters $p_b$ and $\varphi$ characterising the dichotomous (bursting) component. A typical family of plots of $J(s_1; p_b, \varphi)$ versus $s_1$ at $p_b = 0.2$ and $\varphi$ varied from 0.1 to 0.9 is shown in Fig. 13.33. Important particular cases are $J(s_1 = 0) = 2\{p_s\} + 2\{p_b\} - \{p_{ss}\} - 2\{p_{sb}\} - \{p_{bb}\} > 0$, which is the information of the dichotomous component alone; $J(s_1 = 1) = 0$ (the degenerate case of an "always on" deterministic state); and $J(s_1) \equiv 0$ for any $s_1$ when $p_b = 0$ or $\varphi = 0$ (absent or time-uncorrelated bursting). Otherwise, $J(s_1)$ is a positive decreasing function on $s_1 \in [0, 1)$.

Fig. 13.33 Family of plots of $J(s_1; p_b, \varphi)$ at $p_b = 0.2$ and $\varphi$ varied from 0.1 to 0.9 with step 0.1

The derivation of Eq. 13.59 does not impose any assumptions on the specific type of the spiking probability distribution $s_x$. In particular, spikes can be correlated across the system (but not in time). Note that Eq. 13.59 is applicable as well to any subsystem $A$ ($B$), with $s_1$ replaced by $s_A$ ($s_B$), which denotes the probability of a subsystem-wide simultaneous (within the same time discretisation window) spike $x_A = \mathbf{1}$ ($x_B = \mathbf{1}$) in the absence of a burst, and with the same parameters of the dichotomous component (here $p_b$, $\varphi$). The effective information of Eq. 13.54 is then written as

$$I_{eff}(AB) = J(s_1) - J(s_A) - J(s_B). \qquad (13.60)$$

Since, as mentioned above, $p_b = 0$ or $\varphi = 0$ implies $J(s_1) = 0$ for any $s_1$, this leads to $I_{eff} = 0$ for any bipartition and, accordingly, to zero II, which agrees with our simulation results (left panels in Fig. 13.30a, b), where this case corresponds to the absence of coordinated activity induced by astrocytes ($g_{astro} = 0$).

Consider the case of independent spiking with

$$s_1 = \prod_{i=1}^{N} P_i, \qquad (13.61)$$

where $P_i$ is the spontaneous spiking probability for an individual bit (neuron). Then $s_A = \prod_{i \in A} P_i$, $s_B = \prod_{i \in B} P_i$, and $s_1 = s_A s_B$. Denoting $s_A = s_1^{\nu}$, $s_B = s_1^{1-\nu}$, we rewrite Eq. 13.60 as

$$I_{eff}(s_1; \nu) = J(s_1) - J(s_1^{\nu}) - J(s_1^{1-\nu}), \qquad (13.62)$$

where $\nu$ is determined by the particular bipartition $AB$. Figure 13.34 shows typical families of plots of $I_{eff}(s_1; \nu = 0.5)$: at $p_b = 0.2$ with $\varphi$ varied from 0.1 to 0.9 in panel (a) (with the increase of $\varphi$, the maximum of $I_{eff}(s_1)$ grows), and at $\varphi = 0.2$ with $p_b$ varied from 0.02 to 0.2 in panel (b) (with the increase of $p_b$, the root and the maximum of $I_{eff}(s_1)$ shift to the right).

Fig. 13.34 Families of plots of $I_{eff}(s_1; \nu = 0.5)$: (a) at $p_b = 0.2$ and $\varphi$ varied from 0.1 to 0.9 with step 0.1; (b) at $\varphi = 0.2$ and $p_b$ varied from 0.02 to 0.2 with step 0.02

Hereinafter assuming $\varphi \neq 0$ and $p_b \neq 0$, we notice the following: firstly, $I_{eff}(s_1 = 0) = -J(0) < 0$, which implies II $< 0$; secondly, $I_{eff}(s_1 = 1) = 0$; thirdly, at $\varphi > 0$ the function $I_{eff}(s_1)$ has a root and a positive maximum in the interval $s_1 \in (0, 1)$. This implies that absent or insufficient spontaneous spiking activity leads to negative II, while an increase in spiking turns II positive. This is exactly what is observed in the all-to-all network simulation results, where spiking is determined by $\lambda$; see Fig. 13.30b (right panel). It can additionally be noticed in Fig. 13.34 that the root of $I_{eff}(s_1)$ (which is essentially the threshold in $s_1$ for positive II) shows a stronger dependence upon the burst probability $p_b$ than upon the correlation coefficient of the bursting activity $\varphi$. Furthermore, expanding the last term of Eq. 13.62 in powers of $\nu$ yields

$$I_{eff} = -J(s_1^{\nu}) + \nu\, s_1 \log s_1\, J'(s_1) + \dots \qquad (13.63)$$
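Equations 13.57-13.62 are straightforward to evaluate numerically; the sketch below (names are ours) implements $J(s_1; p_b, \varphi)$ from Eq. 13.59 and locates, on a grid, the root of $I_{eff}(s_1; \nu = 0.5)$ from Eq. 13.62, i.e., the threshold in $s_1$ above which positive II becomes possible:

    import numpy as np

    def H2(q):
        # {q} = -q log2 q, with {0} = 0 by continuity
        return 0.0 if q <= 0.0 else -q * np.log2(q)

    def J(s1, pb, phi):
        """Mutual information of the spiking-bursting process, Eq. 13.59."""
        ps = 1.0 - pb
        psb = ps * pb * (1.0 - phi)          # Eq. 13.57
        pss, pbb = ps - psb, pb - psb
        p1 = ps * s1 + pb                    # Eq. 13.56b
        pi = pss * s1 + psb                  # Eq. 13.58d
        p11 = pss * s1**2 + 2.0 * psb * s1 + pbb
        return (2.0 * (1.0 - s1) * H2(ps) + 2.0 * H2(p1)
                - (1.0 - s1)**2 * H2(pss)
                - 2.0 * (1.0 - s1) * H2(pi) - H2(p11))

    def I_eff(s1, nu, pb, phi):
        """Effective information for independent spiking, Eq. 13.62."""
        return J(s1, pb, phi) - J(s1**nu, pb, phi) - J(s1**(1.0 - nu), pb, phi)

    # threshold in s1 for positive I_eff at the symmetric bipartition nu = 0.5
    pb, phi = 0.2, 0.2
    grid = np.linspace(1e-4, 0.999, 2000)
    vals = np.array([I_eff(s, 0.5, pb, phi) for s in grid])
    pos = np.nonzero(vals > 0)[0]
    if pos.size:
        print(f"I_eff(s1; nu=0.5) turns positive near s1 = {grid[pos[0]]:.3f}")

One can verify against the particular cases quoted above that this implementation gives $J(1) = 0$ and $J \equiv 0$ whenever $p_b = 0$ or $\varphi = 0$.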

Consider the limit of a large system $N \to \infty$ and a special bipartition with subsystem $A$ consisting of only one bit (neuron).


Assuming that the individual spontaneous spike probabilities of the neurons $P_i$ in Eq. 13.61 retain their order of magnitude (in particular, do not tend to 0 or 1), we get

$$s_1 \to +0, \qquad s_1^{\nu} = s_A = O(1), \qquad \nu \to +0, \qquad (13.64)$$

and finally $I_{eff} < 0$ from Eq. 13.63, which essentially prohibits positive II in the spiking-bursting model for large systems. The mentioned properties of the dependence of $I_{eff}$ upon the parameters can also be deduced from purely qualitative considerations, in the sense of the reasoning at the end of Sect. 13.4.6.2. The absence of time-correlated bursting ($p_b = 0$ or $\varphi = 0$), with only (time-uncorrelated) spiking present, implies an absence of predictability and thus zero II. The absence of spontaneous spiking ($s_1 = 0$ in Eq. 13.62) implies complete synchronisation (in terms of the binary process) and, consequently, the highest overall predictability (mutual information), but negative II. The presence of spontaneous activity decreases the predictability of the system as a whole, as well as that of any subsystem. According to Eq. 13.54, favourable for positive $I_{eff}$ (and thus for positive II) is the case when the predictability of the subsystems is hindered more than that of the whole system. Hence the increasing dependence upon $s_1$: since in a system with independent spiking we have $s_1 = s_A s_B < \min\{s_A, s_B\}$, spontaneous activity indeed has a greater impact upon the predictability of the subsystems than of the whole system, thus leading to an increasing dependence of $I_{eff}$ upon $s_1$. This may eventually turn $I_{eff}$ positive for all bipartitions, which implies positive II.

In order to apply our analytical results to the networks under study, we fitted the parameters of the spiking-bursting process, under the assumption of independent spiking (Eq. 13.61), to the empirical probabilities from each simulation time series. The calculated values of $s_1$, $p_b$, and $\varphi$ in the case of the all-to-all neuronal network are plotted in Fig. 13.35 versus $g_{astro}$ and $\lambda$ (results for the random network are not shown due to the inferior adequacy of the model in that case; see below). As expected, the spontaneous activity (here measured by $s_1$) increases with the rate of the Poissonian stimulation $\lambda$ (Fig. 13.35a, right panel),

Fig. 13.35 Parameters of the spiking-bursting model $s_1$, $p_b$, $\varphi$ fitted to the simulated time series in the case of the all-to-all neuronal network

and the time-correlated component becomes more pronounced (quantified by a saturating increase in $p_b$ and $\varphi$) with the increase of the astrocytic impact $g_{astro}$ (Fig. 13.35b, left panel). In Figs. 13.30 and 13.32, we plot the (semi-analytical) result of Eqs. 13.59 and 13.60 with these estimates substituted for $s_1$, $p_b$, $\varphi$, and with the bipartition $AB$ set to the actual minimum information bipartition found in the simulation. For the all-to-all network (Figs. 13.30b, 13.32b), this result is in good agreement with the direct calculation of $I_{xy}$ and II (failing only in the region $\lambda < 20$; see Fig. 13.32b), unlike in the case of the random network (Figs. 13.30a, 13.32a), where the spiking-bursting model significantly underestimates both $I_{xy}$ and II, in particular giving negative values of II where they are actually positive.

We have demonstrated the generation of positive II in neuro-astrocytic ensembles as a result of the interplay between spontaneous (time-uncorrelated) spiking activity and astrocyte-induced coordinated dynamics of neurons. The analytic result for the spiking-bursting stochastic model qualitatively and quantitatively reproduces the behaviour of II in the all-to-all network with all excitatory neurons (Fig. 13.30b). In particular, the analytically predicted threshold in spontaneous activity for positive II is indeed observed. Moreover, the spiking-bursting process introduced here may be viewed as a simplistic but generic mechanism for generating positive II in arbitrary ensembles, and a complete analytic characterisation of this mechanism has been provided. In particular, it is shown that time-correlated system-wide bursting and time-uncorrelated spiking are both necessary ingredients for this mechanism to produce positive II. Due to the simple and formal construction of the process, the obtained positive II must have no connection to consciousness in the underlying system, which may be seen as a counterexample to the original intent of II. That said, it was also shown that the II of the spiking-bursting process is expected to turn negative when the system size is increased. Aside from consciousness considerations, this means at least that positive II in a large system requires a less trivial type of spatiotemporal pattern than the one provided by the spiking-bursting model.
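The text does not spell out the fitting procedure in detail, but a simple moment-matching estimate consistent with Eqs. 13.56-13.58 and 13.61 can be sketched as follows (a rough illustration only, which ignores the small bias from estimating $P_i$ in windows with $x \neq \mathbf{1}$; names are ours):

    import numpy as np

    def fit_spiking_bursting(bits, tau):
        """Moment-matching estimates of (s1, pb, phi) from a (T, N) binary
        array, assuming independent spontaneous spiking (Eq. 13.61)."""
        all_ones = bits.all(axis=1)
        p1 = all_ones.mean()                    # empirical p(x = 1), Eq. 13.56b
        P_i = bits[~all_ones].mean(axis=0)      # per-bit spiking probabilities
        s1 = float(np.prod(P_i))                # Eq. 13.61
        pb = (p1 - s1) / (1.0 - s1)             # invert Eq. 13.56b
        ps = 1.0 - pb
        p11 = (all_ones[:-tau] & all_ones[tau:]).mean()
        # Eq. 13.58d rearranged: p11 = ps*s1**2 + pb - psb*(1 - s1)**2
        psb = (ps * s1**2 + pb - p11) / (1.0 - s1)**2
        phi = 1.0 - psb / (ps * pb)             # invert Eq. 13.57
        return s1, pb, phi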



The increasing dependence of II upon the neuro-astrocytic interaction $g_{astro}$ and upon the intensity of spiking activity determined by $\lambda$ is, in a range of parameters, also observed in a more realistic random network model containing both excitatory and inhibitory synapses (Fig. 13.30a), although our analysis is not directly applicable to it. Remarkably, the decrease of $\lambda$ in the random network, in contrast to the all-to-all network, does not lead to negative II. In this sense, the less trivial dynamics of the random network appears to be even more favourable for positive II than the spiking-bursting dynamics of the all-to-all network. This may be attributed to more complex astrocyte-induced space-time behaviour, as compared to coordinated bursting alone, although we have not established specific connections of II with the properties of the activation patterns in the random network. Nonetheless, based on this observation, we also speculate that the limitation on the network size, which was predicted above for spiking-bursting dynamics, may be lifted, thus allowing astrocyte-induced positive II in large neuro-astrocytic networks. This is in line with the hypothesis that the presence of astrocytic regulation of neurotransmission may be crucial in producing complex collective dynamics in the brain. Note, however, that our conclusions are based upon (and thus limited to) the assumption of a positive impact of astrocytic calcium upon synaptic interactions (Eq. 13.52), which is not universal but was found to hold in certain areas of the brain and for specific gliotransmitters [154]. The extension of our study to large systems is currently constrained by the computational complexity of the direct calculation of II, which grows exponentially with the system size. Methods of entropy estimation from insufficient data [149, 178] may prove useful for this challenge but will require specific validation for this task.

Notification The section "Astrocytes and Integrated Information Theory of Consciousness" was originally published in Physical Review E [179]. The section "Mammalian Brain as Network of Networks" was originally published in Opera Medica & Physiologica [55]. The related contents are reused with permission.

13.4.7 Questions

1. In 1950, Alan Turing posed a famous question: "Can machines think?" What do you think about this question?
2. Look up the reported maximum length of an axon from an individual neuron.
3. Using any available software, construct a simulation model of a neural network.
4. Read the paper "A large-scale model of the functioning brain" [180] and share your ideas on brain simulation.


References

1. Ayata C, Lauritzen M. Spreading depression, spreading depolarizations, and the cerebral vasculature. Physiol Rev. 2015;95(3):953–93.
2. Leao AAP. Spreading depression of activity in the cerebral cortex. J Neurophysiol. 1944;7:359–90.
3. Chen S, Li P, Luo W, Gong H, Zeng S, Luo Q. Time-varying spreading depression waves in rat cortex revealed by optical intrinsic signal imaging. Neurosci Lett. 2006;396(2):132–36.
4. Chen S, Li P, Luo W, Gong H, Zeng S, Luo Q. Origin sites of spontaneous cortical spreading depression migrated during focal cerebral ischemia in rats. Neurosci Lett. 2006;403(3):266–70.
5. Gorji A. Spreading depression: a review of the clinical relevance. Brain Res Rev. 2001;38(1–2):33–60.
6. Strong AJ, Fabricius M, Boutelle MG, et al. Spreading and synchronous depressions of cortical activity in acutely injured human brain. Stroke. 2002;33:2738–43.
7. Hadjikhani N, Sanchez Del Rio M, Wu O, et al. Mechanisms of migraine aura revealed by functional MRI in human visual cortex. Proc Natl Acad Sci. 2001;98(8):4687–92.
8. Karatas H, Erdener SE, Gursoy-Ozdemir Y, et al. Spreading depression triggers headache by activating neuronal Panx1 channels. Science. 2013;339(6123):1092–95.
9. Bures J, Buresova O, Krivanek J. The mechanism and applications of Leao's spreading depression of electroencephalographic activity. New York: Academic; 1974.
10. Reggia JA, Montgomery D. Modeling cortical spreading depression. In: Proceedings of symposium on computer applications in medical care; 1994. p. 873–7.
11. Tuckwell HC, Miura RM. A mathematical model for spreading cortical depression. Biophys J. 1978;23(2):257–76.
12. Shapiro BE. Osmotic forces and gap junctions in spreading depression: a computational model. J Comput Neurosci. 2001;10(1):99–120.
13. Kager H, Wadman WJ, Somjen GG, et al. Conditions for the triggering of spreading depression studied with computer simulations. J Neurophysiol. 2002;88(5):2700–12.
14. Makarova J, Makarov VA, Herreras O, et al. Generation of sustained field potentials by gradients of polarization within single neurons: a macroscopic model of spreading depression. J Neurophysiol. 2010;103(5):2446–57.
15. Reshodko LV, Bures J. Computer simulation of reverberating spreading depression in a network of cell automata. Biol Cybernet. 1975:181–189.
16. Chen S, Hu L, Li B, Xu C, Liu Q. Computational study on cortical spreading depression based on a generalized cellular automaton model. Proc SPIE. 2009;7186:71860H.
17. Tepley N, Wijesinghe RS. A dipole model for spreading cortical depression. Brain Topogr. 1996;8:345–53.
18. Monteiro LH, Paiva DC, Piqueira JR, et al. Spreading depression in mainly locally connected cellular automaton. J Biol Syst. 2006;14(04):617–29.
19. Chang JC, Brennan KC, He D, Huang H, Miura RM, et al. A mathematical model of the metabolic and perfusion effects on cortical spreading depression. PLoS One. 2013;8(8):e70469.
20. Ding H, Chen S, Zeng S, Zeng S, Liu Q, Luo Q. Computation and visualization of spreading depression based on reaction-diffusion equation with recovery. In: Proceedings of SPIE, vol. 7280. Seventh international conference on photonics and imaging in biology and medicine; 2009.
21. Martins-Ferreira H, Nedergaard M, Nicholson C. Perspectives on spreading depression. Brain Res Rev. 2000;32(1):215–34.
22. Li B, Chen S, Yu D, Li P. Variation of repetitive cortical spreading depression waves is related with relative refractory period: a computational study. Quant Biol. 2015;3:145–56.

23. Li B, Chen S, Li P, et al. Refractory period modulates the spatiotemporal evolution of cortical spreading depression: a computational study. PLoS One. 2014;9(1):e84609.
24. Li B, Chen S, Zeng S, et al. Modeling the contributions of Ca2+ flows to spontaneous Ca2+ oscillations and cortical spreading depression-triggered Ca2+ waves in astrocyte networks. PLoS One. 2012;7(10):e48534.
25. Silverthorn DU, Ober WC, Garrison CW, et al. Human physiology: an integrated approach. San Francisco: Pearson/Benjamin Cummings; 2009.
26. Zheng X. Quantitative physiology (in Chinese). Hangzhou: Zhejiang University Press; 2013.
27. Wu B, Wang L, Liu Q, et al. Myocardial contractile and metabolic properties of familial hypertrophic cardiomyopathy caused by cardiac troponin I gene mutations: a simulation study. Exp Physiol. 2012;97(1):155–169.
28. Hunter PJ, Borg TK. Integration from proteins to organs: the Physiome Project. Nat Rev Mol Cell Biol. 2003;4(3):237–43.
29. Noble D. A modification of the Hodgkin-Huxley equations applicable to Purkinje fibre action and pace-maker potentials. J Physiol. 1962;160(2):317–52.
30. Shim EB, Leem CH, Abe Y, et al. A new multi-scale simulation model of the circulation: from cells to system. Philos Trans R Soc Lond A: Math Phys Eng Sci. 2006;364(1843):1483–1500.
31. Takahashi-Yanaga F, Morimoto S, Harada K, Minakami R, Shiraishi F, Ohta M, Lu QW, Sasaguri T, Ohtsuki I. Functional consequences of the mutations in human cardiac troponin I gene found in familial hypertrophic cardiomyopathy. J Mol Cell Cardiol. 2001;33:2095–107.
32. Livshitz LM, Rudy Y. Regulation of Ca2+ and electrical alternans in cardiac myocytes: role of CaMKII and repolarizing currents. Am J Physiol-Heart Circ Physiol. 2007;292(6):H2854–66.
33. Rice JJ, Wang F, Bers DM, et al. Approximate model of cooperative activation and crossbridge cycling in cardiac muscle using ordinary differential equations. Biophys J. 2008;95(5):2368–2390.
34. Luo R, Liao S, Tao G, et al. Dynamic analysis of optimality in myocardial energy metabolism under normal and ischemic conditions. Mol Syst Biol. 2006;2:2006.0031.
35. Casellas D, DuPont M, Bouriquet N, Moore LC, Artuso A, Mimran A. Anatomic pairing of afferent arterioles and renin cell distribution in rat kidneys. Am J Physiol-Renal Physiol. 1994;267:F931–36.
36. Holstein-Rathlou NH, Marsh DJ. Oscillations of tubular pressure, flow, and distal chloride concentration in rats. Am J Physiol-Renal Physiol. 1989;256:F1007–14.
37. Rettig R, Folberth CG, Stauss H, Kopf D, Waldherr R, Unger T. Role of the kidney in primary hypertension: a renal transplantation study in rats. Am J Physiol-Renal Physiol. 1990;258:F606–11.
38. Rettig R, Folberth CG, Stauss H, Kopf D, Waldherr R, Baldauf G, Unger T. Hypertension in rats induced by renal grafts from renovascular hypertensive donors. Hypertension. 1990;15:429–35.
39. Holstein-Rathlou NH. Synchronization of proximal intratubular pressure oscillations: evidence for interaction between nephrons. Pflügers Arch. 1987;408:438–43.
40. Barfred M, Mosekilde E, Holstein-Rathlou NH. Bifurcation analysis of nephron pressure and flow regulation. Chaos. 1996;6:280–7.
41. Sosnovtseva O, Postnov DE, Mosekilde E, Holstein-Rathlou NH. Synchronization of tubular pressure oscillations in interacting nephrons. Chaos, Solitons and Fractals. 2003;15:343–69.

42. Gong H, Li X, Yuan J, Lv X, Li A, Chen S, Yang X, Zeng S, Luo Q. 3D imaging and visualizing the fine structure of mammals' whole-brain neurovascular network based on direct measurement. In: Annuals 2014 of new biology (in Chinese). Beijing: Science Press; 2015.
43. Fairhall A, Svoboda K, Nobre AC, et al. Global collaboration, learning from other fields. Neuron. 2016;92(3):561–3.
44. Lo CC, Chiang AS. Toward whole-body connectomics. J Neurosci. 2016;36(45):11375–83.
45. Jabalpurwala I. Brain Canada: one brain one community. Neuron. 2016;92(3):601–6.
46. Jeong SJ, Lee H, Hur EM, et al. Korea Brain Initiative: integration and control of brain functions. Neuron. 2016;92(3):607–11.
47. Poo M, Du J, Ip NY, et al. China Brain Project: basic neuroscience, brain diseases, and brain-inspired computing. Neuron. 2016;92(3):591–6.
48. Bargmann C, Newsome W, Anderson A, et al. BRAIN 2025: a scientific vision. Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Working Group Report to the Advisory Committee to the Director, NIH; 2014.
49. Xi J. Xi Jinping: The Governance of China II. Beijing: Foreign Languages Press; 2017.
50. Markram H, Muller E, Ramaswamy S, et al. Reconstruction and simulation of neocortical microcircuitry. Cell. 2015;163(2):456–92.
51. Gouwens NW, Berg J, Feng D, et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat Commun. 2018;9:710.
52. Tikidji-Hamburyan RA, Narayana V, Bozkus Z, et al. Software for brain network simulations: a comparative study. Front Neuroinformat. 2017;11:46.
53. Luo QM. Brainsmatics: bridging the brain science and brain-inspired artificial intelligence. Sci Sin Vit. 2017;47(10):1015–24.
54. Li A, Gong H, Zhang B, Wang Q, Yan C, Wu J, Liu Q, Zeng S, Luo Q. Micro-optical sectioning tomography to obtain a high-resolution atlas of the mouse brain. Science. 2010;330(6009):1404–08.
55. Samborska V, Gordleeva S, Ullner E, Lebedeva A, Kazantsev V, Ivanchenko M, Zaikin A. Mammalian brain as networks of networks. Opera Med Physiol. 2016;1:23–38.
56. Kandel ER, Schwartz JH, Jessell TM. Principles of neural science, vol. 4. New York: McGraw-Hill; 2000.
57. Granger R. How brains are built: principles of computational neuroscience; 2017. arXiv:1704.03855.
58. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115–33.
59. Mimms C. Why synthesized speech sounds so awful. 2010. http://www.technologyreview.com/view/420354/why-synthesized-speech-sounds-so-awful/.
60. Hebb DO. The organization of behavior: a neuropsychological theory. Science Editions; 1962.
61. Shatz CJ. The developing brain. Sci Am. 1992;267(3):60–7.
62. Bliss TVP, Lømo T. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J Physiol. 1973;232(2):331–56.
63. Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958;65(6):386.
64. Rosenblatt F. Principles of neurodynamics: perceptrons and the theory of brain mechanisms. Technical report, Cornell Aeronautical Lab Inc, Buffalo NY; 1961.
65. Rolls ET. An attractor network in the hippocampus: theory and neurophysiology. Learn Mem. 2007;14(11):714–31.
66. Malenka RC, Bear MF. LTP and LTD: an embarrassment of riches. Neuron. 2004;44(1):5–21.

67. Jeffery KJ. Place cells, grid cells, attractors, and remapping. Neural Plast. 2011:182602.
68. Wills TJ, Lever C, Cacucci F, Burgess N, O'Keefe J. Attractor dynamics in the hippocampal representation of the local environment. Science. 2005;308(5723):873–6.
69. Leutgeb JK, Leutgeb S, Treves A, Meyer R, Barnes CA, McNaughton BL, Moser MB, Moser EI. Progressive transformation of hippocampal neuronal representations in "morphed" environments. Neuron. 2005;48(2):345–58.
70. Touretzky DS, Muller RU. Place field dissociation and multiple maps in hippocampus. Neurocomputing. 2006;69(10–12):1260–3.
71. Hayman RMA, Chakraborty S, Anderson MI, Jeffery KJ. Context-specific acquisition of location discrimination by hippocampal place cells. Eur J Neurosci. 2003;18(10):2825–34.
72. de Almeida L, Idiart M, Lisman JE. The input–output transformation of the hippocampal granule cells: from grid cells to place fields. J Neurosci. 2009;29(23):7504–12.
73. Fyhn M, Hafting T, Treves A, Moser MB, Moser EI. Hippocampal remapping and grid realignment in entorhinal cortex. Nature. 2007;446(7132):190.
74. Hafting T, Fyhn M, Molden S, Moser MB, Moser EI. Microstructure of a spatial map in the entorhinal cortex. Nature. 2005;436(7052):801.
75. Hayman RM, Jeffery KJ. How heterogeneous place cell responding arises from homogeneous grids—a contextual gating hypothesis. Hippocampus. 2008;18(12):1301–13.
76. Mazzanti M, Sul JY, Haydon PG. Book review: glutamate on demand: astrocytes as a ready source. Neuroscientist. 2001;7(5):396–405.
77. Araque A, Parpura V, Sanzgiri RP, Haydon PG. Tripartite synapses: glia, the unacknowledged partner. Trends Neurosci. 1999;22(5):208–15.
78. Parpura V, Zorec R. Gliotransmission: exocytotic release from astrocytes. Brain Res Rev. 2010;63:83–92.
79. Parpura V, Haydon PG. Physiological astrocytic calcium levels stimulate glutamate release to modulate adjacent neurons. Proc Natl Acad Sci. 2000;97(15):8629–34.
80. Parri HR, Gould TM, Crunelli V. Spontaneous astrocytic Ca2+ oscillations in situ drive NMDAR-mediated neuronal excitation. Nat Neurosci. 2001;4(8):803.
81. Fellin T, Pascual O, Haydon PG. Astrocytes coordinate synaptic networks: balanced excitation and inhibition. Physiology. 2006;21(3):208–15.
82. Semyanov A. Can diffuse extrasynaptic signaling form a guiding template? Neurochem Int. 2008;52(1–2):31–33.
83. Giaume C, Koulakoff A, Roux L, Holcman D, Rouach N. Astroglial networks: a step further in neuroglial and gliovascular interactions. Nat Rev Neurosci. 2010;11(2):87.
84. Bennett MVL, Contreras JE, Bukauskas FF, Sáez JC. New roles for astrocytes: gap junction hemichannels have something to communicate. Trends Neurosci. 2003;26(11):610–17.
85. Cornell-Bell AH, Finkbeiner SM, Cooper MS, Smith SJ. Glutamate induces calcium waves in cultured astrocytes: long-range glial signaling. Science. 1990;247(4941):470–3.
86. Nadkarni S, Jung P. Spontaneous oscillations of dressed neurons: a new mechanism for epilepsy? Phys Rev Lett. 2003;91(26):268101.
87. Bennett MR, Farnell L, Gibson WG. A quantitative model of purinergic junctional transmission of calcium waves in astrocyte networks. Biophys J. 2005;89(4):2235–50.
88. Nadkarni S, Jung P. Modeling synaptic transmission of the tripartite synapse. Phys Biol. 2007;4(1):1.
89. Volman V, Ben-Jacob E, Levine H. The astrocyte as a gatekeeper of synaptic information transfer. Neural Comput. 2007;19(2):303–26.

90. De Pittà M, Volman V, Berry H, Ben-Jacob E. A tale of two stories: astrocyte regulation of synaptic depression and facilitation. PLoS Comput Biol. 2011;7(12):e1002293.
91. Gordleeva SY, Stasenko SV, Semyanov AV, Dityatev AE, Kazantsev VB. Bi-directional astrocytic regulation of neuronal activity within a network. Front Comput Neurosci. 2012;6:92.
92. Postnov DE, Ryazanova LS, Sosnovtseva OV. Functional modeling of neural–glial interaction. BioSystems. 2007;89(1–3):84–91.
93. Wade JJ, McDaid LJ, Harkin J, Crunelli V, Kelso JAS. Bidirectional coupling between astrocytes and neurons mediates learning and dynamic coordination in the brain: a multiple modeling approach. PLoS One. 2011;6(12):e29445.
94. Ullah G, Jung P, Cornell-Bell AH. Anti-phase calcium oscillations in astrocytes via inositol (1,4,5)-trisphosphate regeneration. Cell Calcium. 2006;39(3):197–208.
95. Kazantsev VB. Spontaneous calcium signals induced by gap junctions in a network model of astrocytes. Phys Rev E. 2009;79(1):010901.
96. Grosche J, Matyash V, Möller T, Verkhratsky A, Reichenbach A, Kettenmann H. Microdomains for neuron–glia interaction: parallel fiber signaling to Bergmann glial cells. Nat Neurosci. 1999;2(2):139.
97. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. J Neurosci. 2003;23(35):11167–77.
98. Wu Y, Tang X, Arizono M, Bannai H, Shih P, Dembitskaya Y, Kazantsev V, Tanaka M, Itohara S, Mikoshiba S, et al. Spatiotemporal calcium dynamics in single astrocytes and its modulation by neuronal activity. Cell Calcium. 2014;55(2):119–29.
99. Hjelmfelt A, Weinberger ED, Ross J. Chemical implementation of neural networks and Turing machines. Proc Natl Acad Sci. 1991;88(24):10983–7.
100. Bray D, Lay S. Computer simulated evolution of a network of cell-signaling molecules. Biophys J. 1994;66(4):972–7.
101. Bray D. Protein molecules as computational elements in living cells. Nature. 1995;376(6538):307.
102. Qian L, Winfree E, Bruck J. Neural network computation with DNA strand displacement cascades. Nature. 2011;475(7356):368.
103. Jellinger KA. Neuropathological aspects of Alzheimer disease, Parkinson disease and frontotemporal dementia. Neurodegenerat Dis. 2008;5(3–4):118–121.
104. Terry RD. Cell death or synaptic loss in Alzheimer disease. J Neuropathol Exp Neurol. 2000;59(12):1118–9.
105. Selkoe DJ. Alzheimer's disease: genes, proteins, and therapy. Physiol Rev. 2001;81(2):741–66.
106. Knight RA, Verkhratsky A. Neurodegenerative diseases: failures in brain connectivity. Cell Death Differ. 2010;17(7):1069–70.
107. Palop JJ, Mucke L. Amyloid-β–induced neuronal dysfunction in Alzheimer's disease: from synapses toward neural networks. Nat Neurosci. 2010;13(7):812.
108. Verkhratsky A, Steardo L, Parpura V, Montana V. Translational potential of astrocytes in brain disorders. Progr Neurobiol. 2016;144:188–205.
109. Phatnani H, Maniatis T. Astrocytes in neurodegenerative disease. Cold Spring Harbor Perspect Biol. 2015;7(6):a020628.
110. Sturm T, Wunderlich F. Kant and the scientific study of consciousness. Hist Hum Sci. 2010;23(3):48–71.
111. Tononi G, Edelman GM, Sporns O. Complexity and coherency: integrating information in the brain. Trends Cogn Sci. 1998;2(12):474–84.
112. Tononi G. Consciousness, information integration, and the brain. Progr Brain Res. 2005;150:109–26.
113. Tononi G. Consciousness as integrated information: a provisional manifesto. Biol Bull. 2008;215(3):216–42.
114. Balduzzi D, Tononi G. Qualia: the geometry of integrated information. PLoS Comput Biol. 2009;5(8):e1000462.

115. Tononi G. The integrated information theory of consciousness: an updated account. Arch Ital Biol. 2011;150(2/3):56–90.
116. Balduzzi D, Tononi G. Integrated information in discrete dynamical systems: motivation and theoretical framework. PLoS Comput Biol. 2008;4(6):e1000091.
117. Barrett AB, Seth AK. Practical measures of integrated information for time-series data. PLoS Comput Biol. 2011;7:e1001052.
118. Lorenz EN. Deterministic nonperiodic flow. J Atmos Sci. 1963;20(2):130–41.
119. Sklar L, Kellert SH. In the wake of chaos: unpredictable order in dynamic systems. Chicago: The University of Chicago Press; 1997.
120. Rosenblum MG, Pikovsky AS, Kurths J. Phase synchronization of chaotic oscillators. Phys Rev Lett. 1996;76(11):1804.
121. Pikovsky A, Rosenblum M, Kurths J. Synchronization: a universal concept in nonlinear sciences. London: Cambridge University Press; 2003.
122. McAdams HH, Arkin A. It's a noisy business! Genetic regulation at the nanomolar scale. Trends Genet. 1999;15(2):65–9.
123. Elowitz MB, Levine AJ, Siggia ED, Swain PS. Stochastic gene expression in a single cell. Science. 2002;297(5584):1183–86.
124. Gillespie DT, Hellander A, Petzold LR. Perspective: stochastic algorithms for chemical kinetics. J Chem Phys. 2013;138(17):05B201_1.
125. Gammaitoni L, Hänggi P, Jung P, Marchesoni F. Stochastic resonance. Rev Modern Phys. 1998;70(1):223.
126. Russell DF, Wilkens LA, Moss F. Use of behavioural stochastic resonance by paddle fish for feeding. Nature. 1999;402(6759):291.
127. Priplata A, Niemi J, Salen M, Harry J, Lipsitz LA, Collins JJ. Noise-enhanced human balance control. Phys Rev Lett. 2002;89(23):238101.
128. Priplata AA, Niemi JB, Harry JD, Lipsitz LA, Collins JJ. Vibrating insoles and balance control in elderly people. Lancet. 2003;362(9390):1123–24.
129. Mori T, Kai S. Noise-induced entrainment and stochastic resonance in human brain waves. Phys Rev Lett. 2002;88(21):218101.
130. Usher M, Feingold M. Stochastic resonance in the speed of memory retrieval. Biol Cybernet. 2000;83(6):L011–6.
131. Bates R, Blyuss O, Alsaedi A, Zaikin A. Stochastic resonance in an intracellular genetic perceptron. Phys Rev E. 2014;89(3):032716.
132. Bates R, Blyuss O, Alsaedi A, Zaikin A. Effect of noise in intelligent cellular decision making. PLoS One. 2015;10(5):e0125079.
133. Nene NR, García-Ojalvo J, Zaikin A. Speed-dependent cellular decision making in nonequilibrium genetic circuits. PLoS One. 2012;7(3):e32779.
134. Ashwin P, Zaikin A. Pattern selection: the importance of "how you get there". Biophys J. 2015;108(6):1307.
135. Ashwin P, Wieczorek S, Vitolo R, Cox P. Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system. Philos Trans R Soc A: Math Phys Eng Sci. 2012;370(1962):1166–84.
136. Palau-Ortin D, Formosa-Jordan P, Sancho J, Ibañes M. Pattern selection by dynamical biochemical signals. Biophys J. 2015;108(6):1555–65.
137. Rabinovich MI, Varona P, Selverston AI, Abarbanel HDI. Dynamical principles in neuroscience. Rev Modern Phys. 2006;78(4):1213.
138. Rabinovich MI, Afraimovich VS, Bick C, Varona P. Information flow dynamics in the brain. Phys Life Rev. 2012;9(1):51–73.
139. Rabinovich MI, Varona P, Tristan I, Afraimovich VS. Chunking dynamics: heteroclinics in mind. Front Comput Neurosci. 2014;8:22.

140. Panaggio MJ, Abrams DM. Chimera states: coexistence of coherence and incoherence in networks of coupled oscillators. Nonlinearity. 2015;28(3):R67.
141. Fields RD. Myelin—more than insulation. Science. 2014;344(6181):264–266.
142. Tononi G. An information integration theory of consciousness. BMC Neurosci. 2004;5:42.
143. Tononi G. The integrated information theory of consciousness: an updated account. Arch Ital Biol. 2012;150:293.
144. Oizumi M, Albantakis L, Tononi G. From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0. PLoS Comput Biol. 2014;10(5).
145. Tegmark M. Improved measures of integrated information. PLoS Comput Biol. 2016;12(11):e1005123.
146. Casali AG, Gosseries O, Rosanova M, Boly M, Sarasso S, Casali KR, Casarotto S, Bruno MA, Laureys S, Tononi G, Massimini M. A theoretically based index of consciousness independent of sensory processing and behavior. Sci Transl Med. 2013;5(198):198ra105.
147. Peressini A. Consciousness as integrated information: a provisional philosophical critique. J Consci Stud. 2013;20:180.
148. Tsuchiya N, Taguchi S, Saigo H. Using category theory to assess the relationship between consciousness and integrated information theory. Neurosci Res. 2016;107:1.
149. Toker D, Sommer FT. Information integration in large brain networks; 2017. arXiv:1708.02967.
150. Tononi G, Boly M, Massimini M, Koch C. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci. 2016;17:450.
151. Engel D, Malone TW. Integrated information as a metric for group interaction: analyzing human and computer groups using a technique developed to measure consciousness; 2017. arXiv:1702.02462.
152. Norman RL, Tamulis A. Quantum entangled prebiotic evolutionary process analysis as integrated information: from the origins of life to the phenomenon of consciousness. J Comput Theor Nanosci. 2017;14:2255.
153. Perea G, Araque A. GLIA modulates synaptic transmission. Brain Res Rev. 2010;63:93.
154. Araque A, Carmignoto G, Haydon PG, Oliet SH, Robitaille R, Volterra A. Gliotransmitters travel in time and space. Neuron. 2014;81:728.
155. Nadkarni S, Jung P, Levine H. Astrocytes optimize the synaptic transmission of information. PLoS Comput Biol. 2008;4:e1000088.
156. Nakae K, Ikegaya Y, Ishikawa T, Oba S, Urakubo H, Koyama M, Ishii S. A statistical method of identifying interactions in neuron–glia systems based on functional multicell Ca2+ imaging. PLoS Comput Biol. 2014;10:e1003949.
157. Nadkarni S, Jung P. Spontaneous oscillations of dressed neurons: a new mechanism for epilepsy? Phys Rev Lett. 2003;91:268101.
158. Pitta MD, Brunel N, Volterra A. Astrocytes: orchestrating synaptic plasticity? Neuroscience. 2016;323:43.
159. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500.
160. Kazantsev VB, Asatryan SY. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks. Phys Rev E. 2011;84:031913.
161. Braitenberg V, Schüz A. Anatomy of the cortex. Berlin/Heidelberg: Springer; 1991.
162. Kazantsev VB. Spontaneous calcium signals induced by gap junctions in a network model of astrocytes. Phys Rev E. 2009;79:010901.

163. Halassa MM, Fellin T, Takano H, Dong JH, Haydon PG. Synaptic islands defined by the territory of a single astrocyte. J Neurosci. 2007;27:6473.
164. Ferrante M, Ascoli GA. Distinct and synergistic feedforward inhibition of pyramidal cells by basket and bistratified interneurons. Front Cell Neurosci. 2015;9:439.
165. Savtchenko LP, Rusakov DA. Moderate AMPA receptor clustering on the nanoscale can efficiently potentiate synaptic current. Philos Trans R Soc B: Biol Sci. 2014;369.
166. Navarrete M, Araque A. Endocannabinoids potentiate synaptic transmission through stimulation of astrocytes. Neuron. 2010;68:113.
167. De Young GW, Keizer J. A single-pool inositol 1,4,5-trisphosphate-receptor-based model for agonist-stimulated oscillations in Ca2+ concentration. Proc Natl Acad Sci. 1992;89:9895.
168. Ullah G, Jung P, Cornell-Bell A. Anti-phase calcium oscillations in astrocytes via inositol (1,4,5)-trisphosphate regeneration. Cell Calcium. 2006;39:197.
169. Volman V, Ben-Jacob E, Levine H. The astrocyte as a gatekeeper of synaptic information transfer. Neural Comput. 2007;19:303.
170. De Pittà M, Volman V, Berry H, Ben-Jacob E. A tale of two stories: astrocyte regulation of synaptic depression and facilitation. PLoS Comput Biol. 2011;7(12):e1002293.
171. Gordleeva SY, Stasenko SV, Semyanov AV, Dityatev AE, Kazantsev VB. Bi-directional astrocytic regulation of neuronal activity within a network. Front Comput Neurosci. 2012;6:92.
172. Jourdain P, Bergersen LH, Bhaukaurally K, Bezzi P, Santello M, Domercq M, Matute C, Tonello F, Gundersen V, Volterra A. Glutamate exocytosis from astrocytes controls synaptic strength. Nat Neurosci. 2007;10:331.

173. Navarrete M, Perea G, de Sevilla DF, Gómez-Gonzalo M, Núñez A, Martín ED, Araque A. Astrocytes mediate in vivo cholinergic-induced synaptic plasticity. PLoS Biology. 2012;10:e1001259.
174. Perea G, Araque A. Astrocytes potentiate transmitter release at single hippocampal synapses. Science. 2007;317:1083.
175. Kang J, Jiang L, Goldman SA, Nedergaard M. Astrocyte-mediated potentiation of inhibitory synaptic transmission. Nat Neurosci. 1998;1:683.
176. Liu Q, Xu Q, Arcuino G, Kang J, Nedergaard M. Astrocyte-mediated activation of neuronal kainate receptors. Proc Natl Acad Sci. 2004;101:3172.
177. Fellin T, Pascual O, Gobbo S, Pozzan T, Haydon PG, Carmignoto G. Neuronal synchrony mediated by astrocytic glutamate through activation of extrasynaptic NMDA receptors. Neuron. 2004;43:729.
178. Archer EW, Park IM, Pillow JW. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. Adv Neural Inf Process Syst. 2013:1700–1708.
179. Kanakov O, Gordleeva S, Ermolaeva A, Jalan S, Zaikin A. Astrocyte-induced positive integrated information in neuron-astrocyte ensembles. Phys Rev E. 2019;99:012418.
180. Eliasmith C, Stewart TC, Choo X, et al. A large-scale model of the functioning brain. Science. 2012;338(6111):1202–05.