Introduction to Mathematics for Computational Biology (Techniques in Life Science and Biomedicine for the Non-Expert) 3031365658, 9783031365652

This introductory guide provides a thorough explanation of the mathematics and algorithms used in standard data analysis.


English · 274 pages · 2023


Table of contents:
Preface
Contents
Part I Biological Networks and Graph Theory
1 Introduction to Graph Theory
1.1 Definitions and Examples
1.2 Spectral Graph Theory
1.3 Centrality Measures
1.3.1 Geometric Centralities
1.3.1.1 Clustering Coefficient
1.3.2 Closeness
1.3.2.1 Lin's Index
1.3.2.2 Harmonic Centrality
1.3.3 Path-Based Centralities
1.3.3.1 Betweenness Centrality
1.3.3.2 Subgraph Centrality
1.3.3.3 Information Centrality
1.3.4 Spectral Measures
1.3.4.1 Eigenvector Centrality
1.3.4.2 Hub Centrality
1.3.4.3 Katz's Index
1.3.4.4 Vibrational Centrality
1.4 Axioms for Centrality
1.4.1 The Size Axiom
1.4.2 The Density Axiom
1.4.3 The Score-Monotonicity Axiom
2 Biological Networks
2.1 Networks: The Representation of a System at the Basis of Systems Biology
2.2 Biochemical Networks
2.2.1 Metabolic Networks
2.2.2 Protein–Protein Interaction Networks
2.2.3 Genetic Regulatory Networks
2.2.4 Neural Networks
2.3 Phylogenetic Networks
2.4 Signalling Networks
2.5 Ecological Networks
2.6 Challenges in Computational Network Biology
3 Network Inference for Drug Discovery
3.1 How Network Biology Helps Drug Discovery
3.2 Computational Methods
3.2.1 Classifier-Based Methods
3.2.2 Reverse Engineering Methods
3.2.3 Integrating Static and Dynamic Data: A Promising Venue
Part II Calculus and Chemical Reactions
4 An Introduction to Differential and Integral Calculus
4.1 Derivative of a Real Function
4.2 Examples of Derivatives
4.3 Geometric Interpretation of the Derivative
4.4 The Algebra of Derivatives
4.5 Definition of Integral
4.6 Relation Between Integral and Derivative
4.7 Methods of Integration
4.7.1 Integration by Parts
4.7.2 Integration by Substitution
4.7.3 Integration by Partial Fraction Decomposition
4.7.4 The Reverse Chain Rule
4.7.5 Using Combinations of Methods
4.8 Ordinary Differential Equations
4.8.1 First-Order Linear Equations
4.8.2 Initial Value Problems
4.9 Partial Differential Equations
4.10 Discretization of Differential Equations
4.10.1 The Implicit or Backward Euler Method
4.10.2 The Runge–Kutta Method
4.11 Systems of Differential Equations
5 Modelling Chemical Reactions
5.1 Modelling in Systems Biology
5.2 The Different Types of Mathematical Models
5.3 Chemical Kinetics: From Diagrams to Mathematical Equations
5.4 Kinetics of Chemical Reactions
5.4.1 The Law of Mass Action
5.4.2 Example 1: the Lotka–Volterra System
5.4.2.1 Equilibrium
5.4.3 Example 2: the Michaelis–Menten Reactions
5.5 Conservation Laws
5.6 Markov Processes
5.7 The Master Equation
5.7.1 The Chemical Master Equation
5.8 Molecular Approach to Chemical Kinetics
5.8.1 Reactions Are Collisions
5.8.2 Reaction Rate
5.8.3 Zeroth-, First-, and Second-Order Reactions
5.8.4 Higher-Order Reactions
5.9 Fundamental Hypothesis of Stochastic Chemical Kinetics
5.10 The Reaction Probability Density Function
5.11 The Stochastic Simulation Algorithms
5.11.1 Direct Method
5.11.2 First Reaction Method
5.11.3 Next Reaction Method
5.12 Spatio-Temporal Simulation Algorithms
5.13 Ordinary Differential Equation Stochastic Models: the Langevin Equation
5.14 Hybrid Algorithms
6 Reaction–Diffusion Systems
6.1 The Physics of Reaction–Diffusion Systems
6.2 Diffusion of Non-charged Molecules
6.2.1 Intrinsic Viscosity and Frictional Coefficient
6.2.2 Calculated Second Virial Coefficient
6.3 Algorithm and Data Structures
6.4 Drug Release
6.4.1 The Higuchi Model
6.4.2 Systems with Different Geometries
6.4.2.1 Case II Radial and Axial Release from a Cylinder
6.4.3 The Power-Law Model
6.5 What Drug Dissolution Is
6.6 The Diffusion Layer Model (Noyes and Whitney)
6.7 The Weibull Function in Dissolution
6.7.1 Inhomogeneous Conditions
6.7.2 Drug Dissolution Is a Stochastic Process
6.7.3 The Inter-facial Barrier Model
6.7.4 Compartmental Model
Part III Linear Algebra and Modelling
7 Linear Algebra Background
7.1 Matrices
7.1.1 Introduction
7.1.2 Special Matrices
7.1.3 Operation on Matrices
7.1.3.1 Sum of Matrices
7.1.3.2 Scalar Multiplication
7.1.3.3 Matrix Subtraction
7.1.3.4 Product of Matrices
7.1.3.5 Product of a Matrix Times a Vector
7.1.4 Transposition and Symmetries
7.2 Linear Systems
7.2.1 Introduction
7.2.2 Special Linear Systems
7.2.3 General Linear Systems
7.2.4 The Gaussian Elimination Method
7.2.4.1 A 3 × 3 Example
7.2.4.2 The General n × n Case
7.2.4.3 Gaussian Elimination with Pivoting
7.2.4.4 Partial Pivoting
7.2.4.5 Gauss–Jordan Method
7.2.4.6 Interim Conclusions
7.2.5 Gaussian Elimination for Rectangular Systems
7.2.5.1 Echelon Matrices
7.2.5.2 Reduced Echelon Form
7.2.6 Consistency of Linear Systems
7.2.7 Homogeneous Linear Systems
7.2.8 Nonhomogeneous Linear Systems
7.3 Least-Squares Problems
7.4 Permutations and Determinants
7.5 Eigenvalue Problems
7.5.1 Introduction
7.5.2 Computing the Eigenvalues and the Eigenvectors
8 Regression
8.1 Regression as a Geometric Problem
8.1.1 Standard Error on Regression Coefficients
8.2 Regression via Maximum-Likelihood Estimation
8.3 Regression Diagnostic
8.4 How to Assess the Goodness of the Model
8.5 Other Types of Regression
8.6 Case Study 1: Regression Analysis of Sweat Secretion Volumes in Cystic Fibrosis Patients
8.6.1 The Experiments
8.6.2 The Multilinear Model
8.6.3 Results
8.7 Nonlinear Regression
8.8 Case Study 2: Inference of Kinetic Rate Constants
8.8.1 Parameter Space Restriction
8.8.2 Variance of the Estimated Parameters
9 Cardiac Electrophysiology
9.1 The Bidomain Model
9.2 Adaptive Algorithms
9.3 Iterative Methods for Linear Systems
9.4 Krylov Subspace Methods
9.5 Parallel Implementation
References
Index


Techniques in Life Science and Biomedicine for the Non-Expert Series Editor: Alexander E. Kalyuzhny

Paola Lecca Bruno Carpentieri

Introduction to Mathematics for Computational Biology

Techniques in Life Science and Biomedicine for the Non-Expert Series Editor Alexander E. Kalyuzhny, University of Minnesota, Minneapolis, MN, USA

The goal of this series is to provide concise but thorough introductory guides to various scientific techniques, aimed at both the non-expert researcher and novice scientist. Each book will highlight the advantages and limitations of the technique being covered, identify the experiments to which the technique is best suited, and include numerous figures to help better illustrate and explain the technique to the reader. Currently, there is an abundance of books and journals offering various scientific techniques to experts, but these resources, written in technical scientific jargon, can be difficult for the non-expert, whether an experienced scientist from a different discipline or a new researcher, to understand and follow. These techniques, however, may in fact be quite useful to the non-expert due to the interdisciplinary nature of numerous disciplines, and the lack of sufficient comprehensible guides to such techniques can and does slow down research and lead to employing inadequate techniques, resulting in inaccurate data. This series sets out to fill the gap in this much needed scientific resource.

Paola Lecca • Bruno Carpentieri

Introduction to Mathematics for Computational Biology

Paola Lecca Faculty of Engineering Free University of Bozen-Bolzano Bolzano, Italy

Bruno Carpentieri Faculty of Engineering Free University of Bozen-Bolzano Bolzano, Italy

ISSN 2367-1114 ISSN 2367-1122 (electronic) Techniques in Life Science and Biomedicine for the Non-Expert ISBN 978-3-031-36565-2 ISBN 978-3-031-36566-9 (eBook) https://doi.org/10.1007/978-3-031-36566-9 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Paper in this product is recyclable. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book stems from the authors' intention to present to an audience of students interested in bioinformatics and computational biology, but also to an audience of practitioners and young researchers entering these disciplines, some basic tools for analysing biological data of medical and clinical relevance. The work posed two major challenges: the first consisting in the choice of topics to be presented, and the second in adopting a linguistic and expressive register that did not compromise scientific rigour, but was comprehensible to non-experts or those approaching the topics for the first time. To meet these challenges, the authors drew from their teaching experiences at university level in the field of mathematics and applied physics, as well as from their experience as researchers, the topics concerning the basic tools most frequently used in data analysis, and focused the book on systems biology, a paradigm that emerged around the year 2000.

Systems biology has constituted a real scientific paradigm shift in that it has shifted the focus from the study of the physical and chemical properties of the components of a biological system (such as genes, molecules, proteins, functional complexes, etc.) to the study of their interactions, which are responsible for the evolution of a system over time, its adaptation to the environment, and its responses to stimuli, external stresses, or pharmacological treatments. Given the primary focus on the concept of a system rather than a system component, systems biology finds its main cornerstone in network biology.

The book, which is divided into three parts, begins with a presentation of the concept of biological networks and graphs as the main mathematical tool in the description of systems of interacting components and the foundation of numerical simulation algorithms for system dynamics. In the second part, the book presents the mathematical tools for modelling dynamic networks, such as differential and integral calculus and systems of differential equations, tools that are then shown at work on systems of chemical reactions and reaction–diffusion systems, and on the main drivers of interactions between biological entities and between biological entities and drugs. Finally, in the third part, the book presents the concepts of linear algebra, preparatory to regression techniques, i.e. the creation of models from data, and shows case studies relevant to biology, medicine, and physiology.


Although the authors pay constant attention to clarity of exposition and ease of understanding for readers with a non-mathematical and/or computer science background, they have not shied away from explaining the various concepts in a rigorous manner and in mathematical language, since only through the appropriation of a mathematical-analytical language and metaphor can non-experts and young researchers approaching computational systems biology and bioinformatics make conscious and appropriate use of data analysis tools. Aware that computational biology has become a very vast discipline and that its data analysis techniques, as well as its lines of research and the numerical and computer tools that it constantly develops and refines, are innumerable, the authors hope that this book will constitute a starting point to stimulate interest and curiosity to deepen studies in the field.

Bolzano, Italy
May 2023

Paola Lecca Bruno Carpentieri


Part I

Biological Networks and Graph Theory

Chapter 1

Introduction to Graph Theory

1.1 Definitions and Examples

In this section, we give the definitions of graphs, graphs' properties, and the data structures that serve to contain information on the graph nodes and topology and that are used by almost all graph analysis algorithms.

Definition 1.1 (Graph) An undirected graph is an ordered pair G = (V, E) of sets, where

$$E \subseteq \{(v, w) \mid v, w \in V,\ v \neq w\}.$$

The elements of V are called vertices (or nodes) of the graph G, and the elements of E are called edges. So in a graph G with vertex set {v_1, v_2, …, v_n}, (v_i, v_j) ∈ E if and only if there is a line in G that connects the two vertices v_i and v_j. A directed graph is similar to an undirected graph, except for the fact that in a directed graph the elements of the edge set are ordered pairs of nodes, i.e., E ⊆ V × V. The order of the pair indicates the direction of the edge, e.g., (v_1, v_2) ∈ E means that the edge connecting v_1 to v_2 starts from v_1 and ends in v_2.

Definition 1.2 (Complete Graph) In a graph G = (V, E), two vertices v_i, v_j ∈ V are adjacent or neighbours if (v_i, v_j) ∈ E. If all the vertices of G are pairwise adjacent, then we say G is complete. A complete graph with n vertices is denoted as K_n. For example, the graph of a triangle is K_3, the complete graph with three vertices. An undirected complete graph is a connected graph, but the converse is not true; see in this regard Fig. 1.1: graph (b) is connected but not complete, whereas graph (a) is complete.

Definition 1.3 (Degree) The degree d(v) of a vertex v is the number of vertices in G that are adjacent to v.
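To make these definitions concrete, the following minimal Python sketch (an illustration of ours, with an arbitrarily chosen small graph) stores an undirected graph as a vertex set and an edge set and computes vertex degrees as in Definition 1.3.

```python
# Minimal illustration of Definitions 1.1-1.3: an undirected graph stored as a
# vertex set and an edge set, with the degree of each vertex computed by
# counting incident edges.

V = {"v1", "v2", "v3", "v4"}
E = {("v1", "v2"), ("v1", "v3"), ("v2", "v3"), ("v3", "v4")}

def degree(v, edges):
    """Number of edges incident with vertex v (Definition 1.3)."""
    return sum(1 for (a, b) in edges if v in (a, b))

for v in sorted(V):
    print(v, degree(v, E))
# v3 has degree 3; the graph is connected but not complete (K_4 would need 6 edges).
```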


Fig. 1.1 (a) A complete undirected graph of five vertices. (b) A connected undirected graph of five nodes

In particular, the out-degree of a node is the number of edges starting from it. The in-degree of a node is instead the number of edges ending at it. The total degree (simply called "degree") of a node is then the sum of its out- and in-degrees.

We can get two matrices from a graph G. One is called the adjacency matrix, which we denote as A_G. The other is called the Laplacian matrix, which we denote as L_G. Let us assume that a graph G has the vertex set V = {1, 2, …, n}. The adjacency matrix, the incidence matrix, and the Laplacian matrix of G are defined as follows:

Definition 1.4 (Adjacency Matrix for an Undirected Graph) The entries of the adjacency matrix A_G of the graph G are

$$a_{ij} = \begin{cases} 1 & \text{if } (i, j) \in E \\ 0 & \text{otherwise.} \end{cases}$$

Definition 1.5 (Adjacency Matrix for a Directed Graph) The entries of the adjacency matrix A_G of the graph G are

$$m_{ij} = \begin{cases} -1 & \text{if } j \to i \\ 1 & \text{if } i \to j \\ 0 & \text{otherwise.} \end{cases}$$

Definition 1.6 (Laplacian Matrix) The entries of the Laplacian matrix L_G of the graph G are

$$L_{ij} = \begin{cases} -1 & \text{if } (i, j) \in E \\ d(i) & \text{if } i = j \\ 0 & \text{otherwise.} \end{cases}$$

For an undirected graph, the adjacency matrix is equal to the incidence matrix.
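As a small numerical sketch of Definitions 1.4 and 1.6 (the example graph and variable names are chosen only for illustration), the adjacency and Laplacian matrices of a four-vertex undirected graph can be built with NumPy as follows.

```python
# Build the adjacency matrix (Definition 1.4) and the Laplacian matrix
# (Definition 1.6) of a small undirected graph.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # undirected edges on vertices 0..3
n = 4

A = np.zeros((n, n), dtype=int)            # adjacency matrix
for i, j in edges:
    A[i, j] = A[j, i] = 1

D = np.diag(A.sum(axis=1))                 # diagonal matrix of degrees d(i)
L = D - A                                  # Laplacian: d(i) on the diagonal, -1 on edges

print(A)
print(L)                                   # every row of L sums to zero
```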


Fig. 1.2 G_1 and G_2 are isomorphic under the correspondences e ↔ n, f ↔ m, g ↔ h, a ↔ o, b ↔ i, c ↔ l. Figure adapted from [207]

Definition 1.7 (Incident Edge) An edge (v_i, v_j) ∈ E is incident with a vertex v ∈ V if v = v_i or v = v_j.

Definition 1.8 (Isomorphism) Two graphs G_1 and G_2 are isomorphic if there is a one-to-one correspondence between the vertices of G_1 and those of G_2 such that the number of edges joining any two vertices of G_1 is equal to the number of edges joining the corresponding vertices of G_2. Thus the two graphs shown in Fig. 1.2 are isomorphic under the correspondence e ↔ n, f ↔ m, g ↔ h, a ↔ o, b ↔ i, c ↔ l [207].

Definition 1.9 (Walk and Path) A walk on a graph is a sequence of vertices that alternate with edges, beginning and ending with a vertex. In a walk, each edge is incident with the vertex immediately preceding it and the vertex immediately following it. A path is a walk in which the vertices are all distinct from each other.

Theorem 1.1 A walk between two vertices u and v is called a u−v walk. The length of a walk is the number of edges it has. For a graph G with vertex set V = {1, 2, …, m}, the entry a^n_{ij} of the matrix A^n_G obtained by taking the nth power of the adjacency matrix A_G equals the number of i−j walks of length n.

Proof The theorem can be proved by induction:

1. Base step. When n = 1, a_{ij} = 1 if (i, j) ∈ E. By definition, i(i, j)j is then an i−j walk of length 1 and it is the only one. So the statement is true for n = 1.
2. Inductive step. We assume the statement is true for n and prove that it is also true for n + 1. Since A^{n+1}_G = A^n_G A_G,

$$a^{n+1}_{ij} = \sum_{k=1}^{m} a^{n}_{ik}\, a_{kj}. \tag{1.1}$$

The product a^n_{ik} a_{kj} is the number of those i−j walks of length n + 1 whose last edge is (k, j), because a_{kj} = 0 if (k, j) ∉ E and a_{kj} = 1 if (k, j) ∈ E. Noting that all walks from i to j of length n + 1 are of this form for some vertex k, a^{n+1}_{ij} in Eq. (1.1) is the total number of i−j walks of length n + 1. This proves the statement for n + 1.
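Theorem 1.1 is easy to check numerically. The following short sketch (with a four-vertex example graph of ours) computes powers of the adjacency matrix and reads off walk counts.

```python
# Sketch of Theorem 1.1: the (i, j) entry of A^n counts the i-j walks of length n.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])   # edges: 0-1, 0-2, 1-2, 2-3

A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)
print(A2[0, 1])   # walks of length 2 from vertex 0 to vertex 1 (here: 1, namely 0-2-1)
print(A3[0, 0])   # closed walks of length 3 at vertex 0 (here: 2, one triangle traversed both ways)
```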

We dedicate the end of this section to a special case of graph: the tree.

Definition 1.10 (Tree) A tree is a connected graph with no cycle. If we remove any one edge from a tree, the graph becomes disconnected; if we add one edge to a tree, a cycle is created.

Definition 1.11 (Spanning Graph and Spanning Tree) A connected subgraph G' = (V', E') of G is a spanning subgraph of G if V' = V. It is a spanning tree if it is a tree and a spanning subgraph of G.

A disconnected graph does not have any spanning tree.

Definition 1.12 (Complexity) The complexity of a graph G, denoted with κ(G), is the number of spanning trees of G. If G is a complete graph with |V| vertices, its complexity is

$$\kappa(G) = |V|^{\,|V| - 2}. \tag{1.2}$$

The formula (1.2) is known as Cayley's formula. If a graph is connected but not complete, the number of spanning trees is equal to any cofactor of the Laplacian matrix (Kirchhoff's theorem); a short numerical sketch is given after the list below. Finally, we recall that the following properties hold:

• All spanning trees have |V| − 1 edges.
• To find a spanning tree from a selected node, we can use a minimum spanning tree algorithm on the unweighted graph.
• A disconnected graph does not have any spanning tree.
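The following sketch illustrates Kirchhoff's theorem on the same small example graph used above (the numbers are ours and serve only as an illustration): deleting one row and the corresponding column of the Laplacian and taking the determinant gives the number of spanning trees.

```python
# Kirchhoff's theorem: the number of spanning trees of a connected graph
# equals any cofactor of its Laplacian matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
L = np.diag(A.sum(axis=1)) - A

# Delete one row and the corresponding column, then take the determinant.
minor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
n_spanning_trees = int(round(np.linalg.det(minor)))
print(n_spanning_trees)   # 3 spanning trees for this small graph

# Cayley's formula (1.2) for the complete graph K_4 gives 4**(4 - 2) = 16 spanning trees.
```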

1.2 Spectral Graph Theory

Spectral graph theory is the study of properties of the Laplacian matrix or adjacency matrix associated with a graph. This theory is of particular interest because it highlights the connection between the eigenvalues of the Laplacian matrix and graph connectivity. Following what was suggested in Jiang [109], we should consider a definition of the Laplacian that differs from (but is equivalent to) Definition 1.6 given in the previous section. Jiang proposes the following definition.


Definition 1.13 For a graph G = (V, E),

$$L_G = \sum_{\{u, v\} \in E} l_G^{\{u, v\}},$$

where l_G^{\{u, v\}} denotes the Laplacian of the graph on V containing the single edge {u, v}.

Using this definition, it is easy to verify that L_G is a self-adjoint matrix, i.e., L_G is equal to its conjugate transpose, and consequently its eigenvalues are all real. Moreover, L_G is positive semi-definite, i.e., its eigenvalues are all non-negative. With the help of this definition, we will show how the Laplacian contains the information concerning the connectivity of a graph. We first recall that a path is in fact a walk with no repeating vertices, together with the following definition.

Definition 1.14 A non-empty graph G is called connected if any two of its vertices are contained in a path in G.

The eigenvalue 0 of the graph Laplacian is related to the connectedness of the graph, as stated by Lemma 1.1. This Lemma has been proved by J. Jiang in [109]. We report here the proof of J. Jiang using the same notation of the author.

Lemma 1.1 For any graph G, λ_1 = 0 for L_G. If G = (V, E) is a connected graph where V = {1, 2, …, n}, then λ_2 > 0.

Proof Let x = (1, 1, …, 1) ∈ R^n. Then the entry m_1 of the vector M = L_G x is

$$m_1 = \sum_{k=1}^{n} l_{1k}.$$

From Definition 1.13 of L_G, it follows that m_i = 0 since the row entries of L_G add up to zero. So L_G x = 0. Therefore, 0 is an eigenvalue of L_G. Since 0 ≤ λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n, it follows that λ_1 = 0.

To show that λ_2 > 0 for a connected graph, we proceed in the following way. Let z be a nonzero eigenvector of 0. Then z^T L_G z = z^T · 0 = 0. So

$$z^T L_G z = \sum_{\{u, v\} \in E} (z_u - z_v)^2 = 0.$$

This implies that for any {u, v} such that {u, v} ∈ E, z_u = z_v. Since G is connected, this means z_i = z_j for all i, j ∈ V. Therefore,

$$z = \alpha \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix},$$


where α is some real number. So,

$$U_{\lambda_1} = \mathrm{Span}\big((1, 1, \ldots, 1)\big),$$

where U_{λ_1} is the eigenspace of λ_1. Therefore, the multiplicity of the eigenvalue 0 is 1. It follows that λ_2 ≠ 0, so λ_2 > 0. ∎

.

From Lemma 1.1, Corollary 1.1 follows. Corollary 1.1 Let .G = (V , E) be a graph. Then the multiplicity of 0 as an eigenvalue of .LG equals the number of connected components of G. Proof Suppose .G1 = (V1 , E1 ) , G2 = (V2 , E2 ) , . . . , Gk = (Vk , Ek ) are the

i be defined by connected components of G. Let .w  .

(w1 )j =

1

if j ∈ Vi

0

otherwise .

Then, it follows from the previous lemma that if .x ∈ Rn is a nonzero eigenvector of 0, then .xi = xj for any .(i.j ) ∈ V such that .i, j are in the same connected component. So Uλ1 = Span({w1 , w2 , . . . , wk })

.

w1 , w2 . . . . , wk are linearly independent, and therefore, the multiplicity of the null eigenvalue of .LG is the number of connected components in .G.  

.

1.3 Centrality Measures Node centrality measures can be classified into three categories: geometric centrality measures, path-based centrality measures, spectral measures, and dynamic measures. Measures that belong to the first category quantify the importance of a node based on the geometry of its connections with its first neighbours; measures of centrality that belong to the second category describe and quantify the importance of a node based on the quantitative properties of the paths that pass through it;

1.3 Centrality Measures

9

measures that belong to the third category are derived from the spectral properties of graph matrices; finally, the central measures belonging to the fourth category quantify the importance of a node based on the intensity of the node’s response to external stresses exerted on the node.

1.3.1 Geometric Centralities Among the most commonly used measures of geometric centrality are the clustering coefficient, the closeness, the Lin’s index, and the harmonic centrality. Here is a brief description and reference to the author’s papers that introduced them.

1.3.1.1

Clustering Coefficient

The clustering coefficient [206] .Ci of a vertex i is the frequency of pairs of neighbours of i that are connected by an edge, that is, it is the ratio of the number .mi of pairs of neighbours of i that are connected and the number of possible pairs of neighbours of i, which is .ki (ki − 1)/2, where .ki is the degree of i: Ci =

.

2mi . ki (ki − 1)

(1.3)

Ci is said to be local clustering coefficient and can be interpreted as the probability that a pair of neighbours of i are connected. It quantifies how close its neighbours are to being a clique (complete graph). Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the embeddedness of single nodes.

.

1.3.2 Closeness The closeness centrality of node i was introduced by Bavelas [16] and is defined by Si = 

.

j

1 , d(i, j )

(1.4)

where .d(i, j ) is the distance between node i and node j , i.e., the number of edges in a shortest path connecting them. Nodes with a high closeness score have the shortest distances to all other nodes. It is assumed that nodes with an empty co-reachable set have closeness centrality 0 by definition.

10

1 Introduction to Graph Theory

The normalized form of the closeness centrality is the average length of the shortest paths between two nodes. It is given by N −1 , Si =  j d(i, j )

(1.5)

.

where N is the number of nodes in the graph.

1.3.2.1

Lin’s Index

Nan Lin [133] reformulated the definition of closeness provided by Bavelas [16] for graphs with infinite distances by weighting closeness using the square of the number of co-reachable nodes and provided this definition for the centrality of a node i with a non-empty co-reachable: .

1.3.2.2

|{j | d(i, j ) < ∞}|2  . d(i,j ) ct ) and resistance (.c < −ct ). The genes connected to general chemosensitivity and resistance were analysed using PathwayAssist software [157] to identify signalling pathways. Genes linked to cell cycle regulation and proliferation, such as cell division cycle 25A and signal transducer and activator of transcription 5A, were found to be associated with general drug sensitivity, in contrast to a significant number of proapoptotic and antiapoptotic genes, such as signal transducer and activator of transcription 1, mitogen-activated protein kinase 1, and focal adhesion kinase, which were found to be associated with drug resistance. In a resistance-based cell line panel, Rickardson et al. demonstrated how combining data from drug activity and gene expression can reveal new information about the genes implicated in anticancer drug resistance and serve as a useful tool for drug development. The work of E. C. Gunther et al. [86] in which a novel algorithm for drug efficacy profiling, called sampling over gene space (SOGS), is proposed and applied to drug-treated human cortical neuron 1A cell line, is mentioned as a recent effort directed to predict efficacy and toxicity with a classifier-based algorithm. This process is based on supervised classification techniques such as support vector machines (SVM) [30] and linear discriminant analysis (LDA) [106]. But using LDA or SVM to iteratively sample random sets of features, SOGS constructs multiple classifier methods. The final classification is based on the classification that occurred the most frequently across all iterations. According to the authors, combining stochastic feature evaluation with stable LDA and SVM modelling techniques reduces overfitting while boosting prediction accuracy. A model that fits the training set of data too well is said to be overfit. When a model learns the information and noise in the training data to the point where it adversely affects the model’s performance on fresh data, this is known as overfitting. We will mention and define more formally overfitting in Chap. 7—Part III. Finally, F. Cheng et al. [40] proposed three methods for drug–target interaction. Based on complex network theory, the authors developed two supervised similaritybased inference methods to predict drug–target interaction and use them for drug repositioning. These methods are called drug-based similarity inference (DBSI) and target-based similarity inference (TBSI). The basic idea of DBSI is that if a drug interacts with a target, then other drugs similar to the drug will be recommended to the target. For a drug–target pair .(di , tj ), a linkage between .DI and .tj is determined by the value of the following score. Finally, three approaches to drug–target interaction were suggested by F. Cheng et al. [40]. The authors created two supervised similarity-based inference algorithms

34

3 Network Inference for Drug Discovery

for medication repositioning based on complex network theory to anticipate drug– target interaction. DBSI and TBSI are two names for the identical approach. DBSI’s core tenet is that if a drug interacts with a target, other drugs that are similar to it will be suggested to the target. The value of the following score determines a linkage between a drug and target pair .(di , tj ). n D .νij

l=1,l=j

= n

Sc (di , dl )aij

l=1,l=j

Sc (di , dl )

,

(3.1)

where .Sc (di , dl ) is a measure 2D chemical similarity between drug .di and drug .dl . aij is the element .(i, j ) of a .n × m matrix representing the drug–target bipartite network. The core tenet of TBSI is that if a drug interacts with a target, it will be suggested to additional targets that have genomic sequences that are similar to the target. The target’s .tj and the drug’s .di similarity score is defined as

.

n T .νij

l=1,l=j

== n

Sg (tj , tl )ail

l=1,l=j

Sg (di , dl )

,

(3.2)

where .Sg (ti , tl ) is a measure of the genomic sequence similarity between targets .tj and .ti . In the three methods, all .tj ’s unconnected drugs that are sorted in a descending order constitute the recommendation list of the target .tj . The drugs with the high predictive score in the list are more likely to interact with target .tj .

3.2.2 Reverse Engineering Methods Reverse engineering, which aims to map gene, protein, and metabolite interactions in the cell and elucidate the regulatory circuits employed by the cell for its function and their breakdown during diseases, is assumed to be of special value in the field of drug discovery. Inference of large-scale causal gene regulatory interactions is currently the focus of most computational efforts because, in the opinion of modern medicine and pharmacology, it can help us better understand all facets of normal cell physiology, development, and pathogenesis. The influence approach and the physical approach are two distinct reverse engineering techniques. In the former, the objective is to identify the transcription factors (TFs) and the DNA binding sites to which the factor binds using data on RNA expression. The relationships between TFs and the promoters of the regulated genes are thus implied to be real physical relationships. In the latter, it is desired to identify regulatory interactions between RNA transcripts that are not always of the TF-DNA binding site variety. The concept is to infer the network of connections between genes, proteins, and metabolites from gene expression profiles after various cell perturbations. Thus, measurements of

3.2 Computational Methods

35

transcript concentrations in response to perturbations such as gene knockout, drug administration, gene overexpression, and changes in RNA regulatory transcripts are used in reverse engineering algorithms. Influence networks are useful for: • Identifying functional modules, that is, identify the subset of genes that regulate each other with multiple (indirect) interactions, but have few regulations to other genes outside the subset • Predicting the behaviour of the system following perturbations, that is, gene network models can be used to predict the response of a network to an external perturbation and to identify the genes directly targeted by the perturbation, a situation often encountered in the drug discovery process, where one needs to identify the genes that are directly interacting with a compound of interest • Identifying real physical interactions by integrating the gene network with additional information from sequence data and other experimental data (i.e., chromatin immunoprecipitation, yeast two-hybrid assay, etc.) [14] A gene network inference tool (based on the “influence” approach) is ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) of Basso et al. [29, 139]. It has been successfully used to identify pharmacological targets in mammalian systems. Microarray expression profiles from perturbation experiments are used as input for ARACNE. Although it is specifically made to scale up to the complexity of regulatory networks in mammalian cells, it is still sufficiently all-encompassing to handle a wider variety of network deconvolution issues. By estimating pairwise gene expression profile mutual information, or .I (gi , gj ), a measure of relatedness based on information theory that is zero if and only if .P (gi , gj ) = P (gi )P (gj ), the tool can find potential interactions between genes. .(gi , gj ) is a pair of genes. We recall that the mutual information for a pair of discrete random variables x and y is defined as I (x, y) = S(x) + S(y) − S(x, y),

.

(3.3)

where .S(·) indicates the entropy. For a give discrete stochastic variable C, the entropy is defined as S(t) =



.

Pr(t = ti ) log(Pr(t = ti )).

(3.4)

i

Monte Carlo simulations are used to estimate the probability .Pr(·). Remember that when the variable t is uniformly distributed, the entropy is at its highest level. A p-value that was once more derived using Monte Carlo simulations is associated with each value of the mutual information .I (x, y). The p-value’s null hypothesis is that two nodes are cut off from the network and from one another. The authors of ARACNE used the Data Inequality Principle to claim that if both .(x, y) and .(y, z) are directly interacting and .(x, y)is indirectly interacting through y, then .I (x, z) ≤ I (x, y) and .I (y, z) ≤ I (x, y). This was done to

36

3 Network Inference for Drug Discovery

reduce the number of false positives (i.e., indirect interactions between two genes that are not direct interactions). This requirement is sufficient but not required, i.e., some direct interaction can be eliminated by employing this approach and the trimming phase. However, ARACNE achieves incredibly low error rates and significantly outperforms traditional methods such as Relevance Networks and Bayesian Networks on artificial data. In [139], the researchers used ARACNE to deconvolute the genetic networks in human B cells, and they showed that it could determine verified transcriptional targets for the cMYC proto-oncogene. The work of R. de Matos Simoes et al. [46, 47] and that of R. Bonneau et al. (cite:bonneau) are the other two techniques to gene regulator network inference. In order to infer causal gene regulatory networks from massive amounts of gene expression data, R. de Matos Simoes et al. developed a brand-new technique called BC3NET [47]. As an ensemble method built on bagging the C3NET algorithm, BC3NET is comparable to a Bayesian approach with uninformative priors. The authors show that BC3NET is capable of sensibly collecting biochemical interactions from transcription regulation and protein–protein interactions for a variety of simulated and biological gene expression data of S. cerevisiae. The algorithm Inferelator was created by R. Bonneau et al. This algorithm selects parsimonious, predictive models for the expression of a gene or cluster of genes as a function of the amounts of TFs, environmental influences, and interactions between these factors using standard regression and model shrinkage (L1 shrinkage) approaches. The method can simultaneously simulate equilibrium and time course expression levels, allowing the resulting models to predict both kinetic and equilibrium expression levels. The technique can identify causal links by explicitly incorporating information about time and gene knockouts. Bansal et al. [14] demonstrated that, at least when perturbation experiments are carried out in accordance with the algorithm’s requirements, reverse engineering algorithms are capable of correctly inferring regulatory connections among genes. These algorithms have achieved a distinct performance for being practically helpful, despite the need for additional advancements, and are superior to conventional clustering algorithms for the aim of identifying regulatory relationships among genes. Methods and techniques for network inference have also been created to infer the network of interactions between proteins and metabolites from steady-state data as well as from time series data on metabolites and protein concentration. D. M. Hendricks et al. [93] released a survey of recent material and a critical evaluation of the viability of reverse engineering metabolic networks. In Table 3.1, we list the most significant recent contributions made to the various stages of the drug development process by computational inference approaches of gene, metabolic, and protein–protein interaction networks. In Table 3.2, we show instead a categorization with respect to the input data and the type of inferred network. New strategies of metabolic network inference are the works of P. Bandaru et al. [13], Y. Yamanishi et al. [210] M. D. Schmidt [173], J. P. Vert et al. [203], E. Lee et al. [131], and E. Panteris et al. [156]. Here, we briefly linger over the publicly available tool SEBINI (Software Environment for BIological Network Inference) [174, 195–

3.2 Computational Methods

37

Table 3.1 Division of inference techniques into groups based on how they are applied during the various stages of drug discovery (adapted and updated from [130]) Drug discovery phase Target validation and identification

Classifier F. Cheng e al. [40]

Mechanism of action

Ç. Tunahan et al. [199] Y. Yamanishi et al. [210] D. Bellomo et al. [18] J. P. Vert et al. [203] E. Lee et al. [131] E. Panteris et al. [156] P. Antczak et al. [7] L. Rickardson et al. [166]

Toxicity and efficacy

Reverse engineering ARACNE [29] F. Cheng e al. [40] Bandaru et al. [13] M. D. Schmidt [173] Inferelator [26] SEBINI [174, 195, 196] R. de Matos Simoes et al. [46, 47] Perkins et al. [160]

Table 3.2 Classification of inference tools and strategies with respect their inputs and outputs (adapted and updated from [130]) Reference Aracne [29] Ç. Tunahan et al. [199] Y. Yamanishi et al. [210] D. Bellomo et al. [18] J. P. Vert et al. [203]

E. Lee et al. [131] E. Panteris et al. [156] P. Bandaru et al. [13] M. D. Schmidt [173] Inferelator [26] SEBINI [174, 195, 196]

R. de Matos Simoes et al. [46, 47]

Input data Microarray expression profiles (static data) Metabolome data (steady-state data) Genomic data and chemical information (static data) Metabolites concentration (steady-state data) Heterogeneous data sets (protein sequence, gene expression, etc.) Metabolite concentration (steady-state data) Gene expression data (static data) Metabolic profiling data (steady state) Metabolites concentrations (time series) Gene expression data (static data) Heterogeneous data sets (protein sequence, gene expression, etc.) Gene expression data (static data)

Type of inferred network Gene regulatory network Metabolic network Enzyme network Metabolic network Gene, metabolic, and protein–protein networks Metabolic network Gene regulatory network Metabolic network Metabolic network Gene regulatory network Metabolic, protein–protein network, gene regulatory and signalling networks Gene regulatory networks

38

3 Network Inference for Drug Discovery

197]. In order to provide an interactive setting for the testing and deployment of algorithms for reconstructing the structure of biological regulatory and interaction networks, SEBINI was developed. On artificial networks and simulated data from gene expression perturbations, SEBINI can be used to compare and train network inference techniques. Additionally, it enables the analysis of experimental highthroughput expression data using a variety of (trained) inference techniques within the same framework.

3.2.3 Integrating Static and Dynamic Data: A Promising Venue Approaches to integrate static and dynamic data to infer gene regulatory network and metabolic network inspire to the network inference using chemical kinetics method proposed by C. Oatis et al. [150, 151]. An interaction that is quantitatively characterized by a kinetic model is summarized by a reaction graph G. Using the marginal likelihood .p(D|G), candidate graphs G are evaluated in a Bayesian framework against observed time course data D. Evidence in favour of graph edges is captured by the posterior probability of the edge, which is calculated by averaging over the space of graphs. Network inference is combined with nonlinear dynamical models using this technique. To do this, we take a dynamical system .fG that is dependent on a reaction graph G that represents the system’s entire set of biological reactions. The reaction graph G can be conceived of as a network N, where the latter is a function .N(G) of the former. Let .d(X(t))/dt = fG (X(t); k) signify a state vector defining the configuration of the system at time t, where .k gathers all unknown parameters, including the reaction rates. Inference within a Bayesian framework is carried out to generate a posterior distribution over reaction graphs G given time course data .D.  p(G|G ) ∝ p(G)

.

p(D|G, k)p(k|G)dk.

(3.5)

Gains in network structure estimation may be possible by basing inference on more complex dynamical models. According to C. Oates et al. [151], the integration of interventional data is facilitated, and the scientific interpretability of results is improved, since this model of inference respects the mechanistic functions of individual variables. In the class of data-driven approaches, we can also include recent contributions from artificial intelligence. Although machine learning and deep learning algorithms are currently rather limited for the prediction of causal links, research in this area is active, as shown by the works in [1, 110].
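As an illustration only (not the implementation of [150, 151]), the following Python sketch scores two hypothetical one-species reaction graphs against a noisy time course by a crude grid approximation of the marginal likelihood p(D|G); the graph names, rate grid, noise level, and Euler integrator are all arbitrary assumptions made for the example:

import numpy as np

def simulate(fG, x0, k, t):
    # crude forward-Euler integration of dX/dt = f_G(X; k)
    x = np.zeros((len(t), len(x0)))
    x[0] = x0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        x[i] = x[i - 1] + dt * fG(x[i - 1], k)
    return x

def log_marginal_likelihood(fG, data, t, x0, k_grid, sigma=0.1):
    # p(D|G) approximated by averaging p(D|G,k) over a uniform prior grid on k
    liks = []
    for k in k_grid:
        pred = simulate(fG, x0, k, t)
        resid = (data - pred) / sigma
        liks.append(np.exp(-0.5 * np.sum(resid**2)))
    return np.log(np.mean(liks) + 1e-300)

# two hypothetical one-species reaction graphs: first-order decay vs. constant decay
graphs = {
    "G1: dX/dt = -k*X": lambda x, k: -k * x,
    "G2: dX/dt = -k":   lambda x, k: -k * np.ones_like(x),
}

t = np.linspace(0.0, 5.0, 20)
data = simulate(graphs["G1: dX/dt = -k*X"], np.array([1.0]), 0.8, t)
data += 0.05 * np.random.default_rng(0).normal(size=data.shape)   # noisy time course D

k_grid = np.linspace(0.1, 2.0, 50)
scores = {name: log_marginal_likelihood(fG, data, t, np.array([1.0]), k_grid)
          for name, fG in graphs.items()}
print(scores)   # the graph that generated the data should receive the higher score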

Part II

Calculus and Chemical Reactions

Chapter 4

An Introduction to Differential and Integral Calculus

4.1 Derivative of a Real Function

Suppose a projectile is fired straight up from the ground with an initial velocity of v0 meters per second. Neglect friction, and assume the projectile is influenced only by gravity. Let f(t) denote the height in meters that the projectile attains t seconds after firing. If the force of gravity were not acting on it, the projectile would continue to move upward with a constant velocity, travelling a distance of v0 meters every second, and at time t, we would have f(t) = v0 t. In practice, gravity causes the projectile to slow down until its velocity decreases to zero and then it drops back to the Earth. Physical experiments suggest that as long as the projectile is aloft, its height f(t) is given by the formula

f(t) = v0 t − at²,    (4.1)

where a ∈ R. The term −at² is due to the influence of gravity. Note that f(t) = 0 when t = 0 and when t = v0/a. This means that the projectile returns to the Earth after v0/a seconds, and it is to be understood that formula (4.1) is valid only for 0 < t < v0/a. The problem we wish to consider is this: to determine the velocity of the projectile at each instant of its motion. Before we can understand this problem, we must decide on what is meant by the velocity at each instant. To do this, we introduce first the notion of average velocity during a time interval, say from time t to time t + h, where h may be positive or negative (but not zero). This is the difference quotient

v = (f(t + h) − f(t))/h,    (4.2)


where

f(t + h) = v0(t + h) − a(t + h)² = v0 t + v0 h − at² − ah² − 2ath.

Therefore,

(f(t + h) − f(t))/h = ((v0 − 2at)h − ah²)/h = v0 − 2at − ah.    (4.3)

When h approaches zero (i.e., when the interval [t, t + h] becomes smaller and smaller), the expression on the right in Eq. (4.3) approaches v0 − 2at as a limit, and this limit is defined to be the instantaneous velocity at time t. If we denote the instantaneous velocity by v(t), we may write

v(t) = v0 − 2at.    (4.4)

The formula in (4.4) for the velocity v(t) defines a new function v that tells us how fast the projectile is moving at each instant of its motion. The limit process by which v(t) is obtained from the difference quotient is written symbolically as follows:

v(t) = lim_{h→0} (f(t + h) − f(t))/h.    (4.5)

In general, consider a function f : (x1, x2) → R, then choose a fixed point x ∈ (x1, x2), and introduce the difference quotient

(f(x + h) − f(x))/h,    (4.6)

where the number h, which may be positive or negative (but not zero), is such that x + h also lies in (x1, x2). The numerator of this quotient measures the change in the function f when x changes from x to x + h. The quotient itself is referred to as the average rate of change of f in the interval from x to x + h. Now we let h approach zero and see what happens to this quotient. If the quotient approaches some definite value as a limit (which implies that the limit is the same whether h approaches zero through positive values or through negative values), then this limit is called the derivative of f at x and is denoted by the symbol f′(x), or by the notation df(x)/dx. Thus, the formal definition of f′(x) may be stated as follows:

f′(x) = lim_{h→0} (f(x + h) − f(x))/h,    (4.7)

provided the limit exists. The number f′(x) is called the derivative of f at x, or the rate of change of f at x.
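As a quick numerical illustration of this definition (the function f(x) = x³ + 2x, the point x = 1, and the step sizes below are arbitrary choices, not taken from the text), one can watch the difference quotient approach the derivative f′(1) = 5 in Python:

def difference_quotient(f, x, h):
    # average rate of change of f over the interval from x to x + h
    return (f(x + h) - f(x)) / h

f = lambda x: x**3 + 2*x          # sample function with derivative 3x^2 + 2
x = 1.0                           # so f'(1) = 5
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, difference_quotient(f, x, h))
# the printed values approach 5 as h approaches zero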


We see that the concept of instantaneous velocity is merely an example of the concept of derivative. The velocity v(t) is equal to the derivative f′(t), where f is the function that measures position. This is often described by saying that velocity is the rate of change of position with respect to time. In general, the limit process that produces f′(x) from f(x) gives us a way of obtaining a new function f′ from a given function f. The process is called differentiation, and f′ is called the first derivative of f. If f′, in turn, is defined on an open interval, we can try to compute its first derivative, denoted by f″ and called the second derivative of f. Similarly, the n-th derivative of f, denoted by f^(n), is defined to be the first derivative of f^(n−1). We make the convention that f^(0) = f, that is, the zeroth derivative is the function itself.

4.2 Examples of Derivatives

1. Derivative of a constant function. Suppose f is a constant function, say f(x) = c ∀x ∈ R. The difference quotient is

(f(x + h) − f(x))/h = (c − c)/h = 0;    (4.8)

therefore, f′(x) = 0 ∀x ∈ R. Indeed, note that since the quotient is 0 for every h ≠ 0, its limit, f′(x), is also 0 for every x.
2. Derivative of a linear function. Suppose f is a linear function, say f(x) = mx + q for all real x. If h ≠ 0, we have that the difference quotient is

(f(x + h) − f(x))/h = (m(x + h) + q − mx − q)/h = m;    (4.9)

therefore, f′(x) = m ∀x ∈ R.
3. Derivative of a positive integer power function. Consider the case f(x) = x^n, where n is a positive integer. The difference quotient becomes

(f(x + h) − f(x))/h = ((x + h)^n − x^n)/h.    (4.10)

From elementary algebra, we have the identity

a^n − b^n = (a − b) Σ_{k=0}^{n−1} a^k b^{n−k−1}.    (4.11)

If we take a = x + h and b = x and divide both sides by h, this identity becomes

((x + h)^n − x^n)/h = Σ_{k=0}^{n−1} (x + h)^k x^{n−k−1}.

There are n terms in the sum. As h approaches 0, (x + h)^k approaches x^k, so the k-th term approaches x^k x^{n−k−1} = x^{n−1}; from this, it follows that

f′(x) = n x^{n−1}, ∀x ∈ R.    (4.12)

4. Derivative of exponential functions. Consider the case f(x) = e^x. The difference quotient becomes

lim_{h→0} (f(x + h) − f(x))/h = e^x lim_{h→0} (e^h − 1)/h.    (4.13)

The limit on the right-hand side of Eq. (4.13) is an indeterminate form 0/0. We can proceed as follows. Denote y = e^h − 1. Therefore, y + 1 = e^h and h = ln(y + 1). Then

lim_{y→0} y/ln(y + 1) = lim_{y→0} 1/[(1/y) ln(y + 1)] = lim_{y→0} 1/ln (y + 1)^{1/y};

the last limit of this equation is just the definition of the Neper number e, i.e.,

lim_{y→0} (y + 1)^{1/y} = e,

and therefore, the limit in Eq. (4.13) is

lim_{h→0} (e^h − 1)/h = 1/ln[ lim_{y→0} (y + 1)^{1/y} ] = 1/ln e = 1,    (4.14)

and finally, we have that

f′(x) = e^x.    (4.15)

As another example of the derivative of an exponential function, consider the derivative of the function f(x) = a^x (a ∈ R), that, by analogy with the previous case, we can calculate as follows:

f′(x) = a^x lim_{h→0} (a^h − 1)/h = a^x ln a.    (4.16)

5. Derivative of the logarithm. Consider f(x) = ln x. Then

lim_{h→0} (ln(x + h) − ln x)/h = lim_{h→0} (1/h) ln(1 + h/x).    (4.17)

Without loss of generality, we can denote h/x = 1/n, where n ∈ N. Therefore, Eq. (4.17) becomes

lim_{h→0} (1/h) ln(1 + h/x) = lim_{n→∞} (n/x) ln(1 + 1/n) = (1/x) lim_{n→∞} ln(1 + 1/n)^n = (1/x) ln[ lim_{n→∞} (1 + 1/n)^n ].    (4.18)

The limit in brackets in Eq. (4.18) converges to the famous transcendental number e (Neper number). Consequently, since ln e = 1, we get

f′(x) = 1/x.    (4.19)

For a logarithm of a generic base a, we have that the derivative of f(x) = log_a x is f′(x) = (1/x) log_a e = 1/(x ln a).
6. Derivative of the sine function. Let s(x) = sin x. The difference quotient is

(s(x + h) − s(x))/h = (sin(x + h) − sin x)/h.

Using the trigonometric identity

sin y − sin x = 2 cos((y + x)/2) sin((y − x)/2),

where y ≡ x + h, we obtain

(sin(x + h) − sin x)/h = [sin(h/2)/(h/2)] cos(x + h/2).    (4.20)

Since

lim_{x→0} (sin x)/x = 1

and

lim_{h→0} cos(x + h/2) = cos x,

the limit for h → 0 of the difference quotient in formula (4.20) is cos x. Therefore,

s′(x) = cos x    (4.21)

for every x. The derivative of the sine function is the cosine function.
7. Derivative of the cosine function. Let c(x) = cos x. It is left to the student to demonstrate that

c′(x) = −sin x;    (4.22)

that is, the derivative of the cosine function is minus the sine function. The demonstration can be done following the same steps as in the previous point for the sine function, using the trigonometric identity

cos y − cos x = −2 sin((y + x)/2) sin((y − x)/2),

where y ≡ x + h.
8. Derivative of the n-th root function. Consider f(x) = x^{1/n}, where n ∈ N. The difference quotient is

(f(x + h) − f(x))/h = ((x + h)^{1/n} − x^{1/n})/h.    (4.23)

Denoting

u ≡ (x + h)^{1/n},   v ≡ x^{1/n},

so that u^n = x + h and v^n = x, we obtain that the difference quotient is

((x + h)^{1/n} − x^{1/n})/h = (u − v)/(u^n − v^n),    (4.24)

which, by using formula (4.11), becomes

((x + h)^{1/n} − x^{1/n})/h = (u − v)/(u^n − v^n) = 1/(u^{n−1} + u^{n−2}v + · · · + uv^{n−2} + v^{n−1}).    (4.25)

Observe that as h → 0, u → v. Therefore, each term in the denominator on the right side in (4.25) has the limit v^{n−1} as h → 0. Since there are n terms altogether, the difference quotient has the limit v^{1−n}/n. Since v = x^{1/n}, this proves that

f′(x) = (1/n) x^{1/n − 1}.

f (x + h) − f (x) h

represents the trigonometric tangent of the angle .α that P Q makes with the horizontal. The real number .tan α is called the slope of the line through P and Q, and it provides a way of measuring the “steepness” of this line. For example, if f is a linear function, say .f (x) = mx + q, the difference quotient has the value m, so m is the slope of the line. Some examples of lines of various slopes are shown in Fig. 4.1B. For a horizontal line, .α = 0, and the slope, .tan α, is also 0. The algebraic sign of the derivative of a function gives us useful information about the behaviour of its graph. For example, if x is a point in an open interval where the derivative is positive, then the graph is rising in the immediate vicinity of x as we move from left to right. This occurs at .x3 in Fig. 4.2. A negative derivative in an interval means the graph is falling, as shown at .x1 , while a zero derivative at a point means a horizontal tangent line. At a maximum or minimum, such as those shown at .x2 , .x5 , and .x8 , the slope must be zero. Fermat was the first to notice that points such as .x2 , .x5 , and .x6 , where f has a maximum or minimum, must occur among the roots of the equation .f  (x) = 0. It is important to realize that .f  (x) may also be zero at points where there is no maximum or minimum, such as above the point .x4 . Note that this particular tangent line crosses the graph (Fig. 4.2).


Fig. 4.1 A. Geometric interpretation of the difference quotient as the tangent of an angle (.tan α). B. Lines of various slopes [9]

Fig. 4.2 A. Geometric meaning of the sign of the derivative

4.4 The Algebra of Derivatives

Let f and g be two functions defined on a common interval. At each point where f and g have a derivative, the same is true of the sum f + g, the difference f − g, the product fg, and the quotient f/g. (For f/g, we need the extra proviso that g is not zero at the point in question.) The derivatives of these functions are given by the following formulas:

1. (f + g)′ = f′ + g′.
2. (f − g)′ = f′ − g′.
3. (fg)′ = f′g + fg′.
4. (f/g)′ = (g f′ − f g′)/g².


Note that rule 4 is redundant, as it can be derived from rule 3 as soon as we write f/g = f · (1/g). The demonstration that rule 3 implies rule 4 is left to the student. With the differentiation formulas developed thus far, we can find derivatives of functions f for which f(x) is a finite sum of products or quotients of constant multiples of sin x, cos x, and x^r (r rational). As yet, however, we have not learned to deal with something like f(x) = sin(x²) without going back to the definition of derivative. In this section, we shall present a theorem, called the chain rule, that enables us to differentiate composite functions.

Chain Rule Let f be the composition of two functions u and v, say f = u ∘ v = u[v(x)]. Suppose that both derivatives v′(x) and u′(y) exist, where y = v(x). Then the derivative f′ also exists and is given by

f′(x) = u′(y) v′(x).    (4.27)

Example Consider again f(x) = sin(x²). By the chain rule, we have f′(x) = cos(x²) · (2x) = 2x cos(x²). In this case, in fact, y = v(x) = x² and u(y) = sin y.
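A one-line symbolic check of this example (assuming the SymPy library is available) is:

import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(x**2), x))   # 2*x*cos(x**2), matching the chain rule result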

4.5 Definition of Integral

Integration can be used to find areas, volumes, central points, and many other useful quantities describing a function. It is easiest to start with the question of finding the area between a function and the x-axis (see Fig. 4.3). We could calculate the function at a few points and add up slices of width Δx like in Fig. 4.4. We can make Δx a lot smaller and add up many small slices. As the slices approach zero in width, the estimation of the area under the curve becomes

Fig. 4.3 The integral of a function f(x) expresses an area (with sign). In this figure, the integral of the function y = f(x) = x² between 0.01 and 1 is the blue shaded area


Δx

Fig. 4.4 Approximation of the area under the curve. The smaller the .δx interval dividing the integration interval, the more accurate the sum of the areas of the rectangles

more accurate. We now write dx to mean the .Δx slices are approaching zero in width. For a continuous function .f : D → R, the real number A (area under the curve f in the interval .[a, b] ⊂ D) is called the definite integral or just the integral of f over .[a, b] and is denoted by

∫_a^b f(x) dx.

A function f : D → R is Riemann-integrable on [a, b] ⊂ D if there exists a sequence of refinement partitions {Dn}_{n≥1} of [a, b] and a number A such that

lim_{n→∞} U(f, Dn) = lim_{n→∞} L(f, Dn) = A,    (4.28)

where U(f, Dn) and L(f, Dn) are the upper and the lower sums, respectively. They are calculated as

U(f, Dn) = Δx Σ_{i=1}^{n} Mi,    L(f, Dn) = Δx Σ_{i=1}^{n} mi,

Fig. 4.5 Lower sum is the sum of the areas of the red rectangles. Upper sum is the sum of the areas of the blue rectangles

i=1

where Δx is the width of each subinterval partitioning the interval [a, b], Mi is the maximum value of f on subinterval i, and mi is the minimum value of f on subinterval i (Fig. 4.5). Example In what follows, we see an example of the calculation of lower and upper sums and use them to determine whether a function is Riemann-integrable. The function f(x) = x is a Riemann-integrable function over [0, 1]. Let Dm be a dissection pattern of the interval [0, 1] in m subintervals all of the same width. To evaluate the lower sum L(f, Dm) and upper sum U(f, Dm) of f relative to the dissection pattern Dm, we proceed as follows:

Dm = {0, 1/m, 2/m, 3/m, · · · , (m − 1)/m, 1}.

L(f, Dm) = Σ_{k=1}^{m} [(k − 1)/m] · (1/m) = (1/m²) Σ_{k=1}^{m} (k − 1).

Let k − 1 ≡ h: k = 1 ⇒ h = 0 and k = m ⇒ h = m − 1, so that

Σ_{k=1}^{m} (k − 1) = Σ_{h=0}^{m−1} h = Σ_{h=1}^{m−1} h.

Knowing that Σ_{k=1}^{m} k = m(m + 1)/2, we have that

Σ_{h=1}^{m−1} h + m = m(m + 1)/2  ⇒  Σ_{h=1}^{m−1} h = m(m + 1)/2 − m,

so that

L(f, Dm) = (1/m²) [m(m + 1)/2 − m] = (m² + m − 2m)/(2m²) = (m² − m)/(2m²) → 1/2 as m → +∞,

U(f, Dm) = Σ_{k=1}^{m} (k/m) · (1/m) = (1/m²) · m(m + 1)/2 = (m² + m)/(2m²) → 1/2 as m → +∞.

We see that

L(f, Dm) < U(f, Dm).

Moreover, since lim_{m→∞} U(f, Dm) = lim_{m→∞} L(f, Dm), f is Riemann-integrable and

∫_0^1 x dx = 1/2.

1 . 2
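A small numerical sketch of the same idea (assuming NumPy; using the interval endpoints for the minimum and maximum is valid here because f(x) = x is monotone) shows both sums approaching 1/2 as the partition is refined:

import numpy as np

def lower_upper_sums(f, a, b, m):
    # lower and upper Riemann sums of f on [a, b] with m equal subintervals
    x = np.linspace(a, b, m + 1)
    dx = (b - a) / m
    lo = sum(min(f(x[i]), f(x[i + 1])) for i in range(m)) * dx
    up = sum(max(f(x[i]), f(x[i + 1])) for i in range(m)) * dx
    return lo, up

f = lambda t: t
for m in [10, 100, 1000]:
    print(m, lower_upper_sums(f, 0.0, 1.0, m))
# both sums approach 1/2, the value of the integral of x over [0, 1]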

4.6 Relation Between Integral and Derivative At first sight, there seems to be no connection whatever between the problem of finding the area of a region lying under a curve and the problem of finding the tangent line at a point of a curve. The first person to realize that these two seemingly remote ideas are, in fact, rather intimately related appears to have been Newton’s teacher, Isaac Barrow (1630–1677). However, Newton and Leibniz were the first to understand the real importance of this relation, and they exploited it to the fullest, thus inaugurating an unprecedented era in the development of mathematics. We introduce the relation between integral and derivative through an example. Consider the function f (x) = 2x,

.

and suppose we want to calculate the area under the curve of this function on the interval [0, x*]. Looking at Fig. 4.6, we see that this area is A = (x*)². So the integral is

A = ∫_0^{x*} 2x dx = (x*)².


Fig. 4.6 The integral of f(x) = 2x is x² + C, where C ∈ R. Note that (x² + C)′ = 2x, just the argument of the integral

In this example, we notice that 2x is just the first derivative of x². The integral of a function f is a function F such that F′ = f. This is a general result, valid not only for this example, and established by the fundamental theorem of integral calculus.

Fundamental Theorem of Integral Calculus Let f : [a, b] → R be an integrable function. The integral function of f is the function F such that

F(x) = ∫_a^x f(t) dt,    a ≤ x ≤ b.

If f is bounded, then F is a continuous function in [a, b]. Furthermore, if f is continuous in (a, b), then F is differentiable at all points at which f is continuous and

F′(x) = f(x),

i.e., F is a primitive of f. Let f : [a, b] → R be a Riemann-integrable function on its domain, and let its integral function F be differentiable with F′(x) = f(x) for any x ∈ [a, b]; then

∫_a^b f(x) dx = F(b) − F(a).    (4.29)


The fundamental theorem of integral calculus is a powerful tool that allows us to calculate integrals without having to calculate upper and lower sums. Finally, note that if we do not specify the interval of integration, the result of the integral is

∫ f(x) dx = F(x) + C,    (4.30)

as, if we apply the rules of differentiation, we get (F(x) + C)′ = f(x). For example,

∫ 2x dx = x² + C

since (x² + 4)′ = 2x, but also (x² + 10)′ = 2x, and also (x² + 3/4)′ = 2x, etc.

4.7 Methods of Integration

We now know that the result of the integral of f(x) is that function F(x) whose first derivative is the function f(x) itself. However, we need to know some methods that allow us to calculate the integral of f(x) even when f(x) is a complicated function. Here, we revise three methods:

1. Integration by parts
2. Integration by substitution
3. Integration by partial fraction decomposition

4.7.1 Integration by Parts

Integration by parts is a special method of integration that is often useful when the integrand function f(x) is (or can be expressed as) the product of two functions u(x) and v(x), but is also helpful in other ways. If f(x) = u(x)v(x), the rule is

∫ f(x) dx = ∫ u(x)v(x) dx = u(x) ∫ v(x) dx − ∫ [ u′(x) ( ∫ v(x) dx ) ] dx,    (4.31)

which in the case of a definite integral becomes

∫_a^b f(x) dx = ∫_a^b u(x)v(x) dx = [ u(x) ∫ v(x) dx ]_a^b − ∫_a^b [ u′(x) ( ∫ v(x) dx ) ] dx.    (4.32)

a

Example Compute the indefinite integral of f(x) = x cos x:

I = ∫ x cos x dx.    (4.33)

Let us denote u = x and v = cos x. Then, applying rule (4.31) and the differentiation rules for x and cos x learned in the previous lecture, we get

I = ∫ x cos x dx = x ∫ cos x dx − ∫ [ 1 · ( ∫ cos x dx ) ] dx = x ∫ cos x dx − ∫ (1 · sin x) dx
  = x sin x + cos x + C.
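The result of this example can be checked symbolically (assuming SymPy is available; SymPy omits the constant of integration):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x*sp.cos(x), x))   # x*sin(x) + cos(x), as obtained by parts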

.

can be done by recognizing that one of the two functions, say for example .v(x), can be expressed as the derivative of a function .w(x), i.e., f (x) = u(x)w  (x).

.

The rule of derivation by parts in (4.31) becomes

f (x) dx =

.





u(x)w (x) dx = u(x)w(x) −

u (x)w(x) dx.

(4.34)

So, we see that the integral in (4.33) can be computed also in this way: I=

.

x cos x dx =

x(sin x) dx = x sin x −

sin x dx = x sin x + cos x + C.

56

4 An Introduction to Differential and Integral Calculus

4.7.2 Integration by Substitution

Integration by substitution is a method to find the integral of a function f of the form f(g(x)). In general, the integral of f(g(x)) becomes easier if we pose u = g(x). For example, consider

I = ∫ cos(3x + 4) dx.

Let us pose u = 3x + 4, so that x = u/3 − 4/3 and dx = (1/3) du. Therefore,

I = (1/3) ∫ cos(u) du = (1/3) sin(3x + 4) + C.

f (g(x))g  (x)dx.

The substitution also in this case is .u = g(x). Example Compute the integral I=

.

2x 1 + x 2 dx.

Let us pose .u = 1 + x 2 , so that .du = 2x dx; therefore, we get I=

.



√ 2 3 2x 1 + x 2 dx = u du = u 2 + C. 3

4.7.3 Integration by Partial Fraction Decomposition The integration by partial fraction decomposition is used to find the integral of rational expression of polynomials. For example, consider I=

.

2x − 1 dx. −x−6

x2

To compute I , first note that x 2 − x − 6 = (x − 3)(x + 2)

.

4.7 Methods of Integration

57

since .x = 3 and .x = −2 are the roots of the denominator of the integrand function. Recall that the roots of a second-order polynomial .ax 2 + bx + c can be found with the formula √ −b ± b2 − 4ac . .x1/2 = 2a Therefore, the integrand function can be expressed as follows: .

B (A + B)x + 2A − 3B A 2x − 1 + = , = x−3 x+2 (x − 3)(x + 2) −x−6

x2

where the equality between the first and third expressions holds if .

A+B =2 2A − 3B = −1,

i.e., if .A + B = 1. Therefore, we get I=

.

2x − 1 dx = 2 x −x−6



1 1 + x−3 x−2

 dx.

Now, recalling, from Lecture 1, that the primitive of .1/x is .ln |x|, we have that I = ln |x + 3| + ln |x − 2| + C = ln |(x + 3)(x − 2)| + C = ln |x 2 − x − 6| + C.

.

Note that we obtained that I is equal to the logarithm of the absolute value of the denominator of the integrand function .f (x) = x 22x−1 . −x−6 Although this exercise is useful to understand how the method is applied, it must be said that we could have obtained the result in a much quicker way by noting that the nominator of the integrand function is equal to the first derivative of the denominator. For integrals of this type, it can be shown that the following rule P (x) applies. Let .f (x) = Q(X) and .P (x) = Q (x); then .

Q (x) dx = ln |Q(x)| + C. Q(x)

(4.35)

In Table 4.1, you will find some (easily demonstrable) rules suggesting partial fraction decomposition depending on the factor at the denominator of the integrand function.

58

4 An Introduction to Differential and Integral Calculus

Table 4.1 Rules of partial fraction decomposition depending on the factor in the denominator of the integrand function Term in partial fraction decomposition

Factor in denominator .ax

+b

.(ax

+ b)k

2 .ax

+ bx + c

.(ax

2

.

A ax+b .

A1 ax+b

A2 (ax+b)2

+

+ ··· +

Ak . (ax+b)k

k = 1, 2, 3, . . .

Ax+B . 2 ax +bx+c

+ bx + c)k

.

A1 x+B1 ax 2 +bx+c

+

A2 x+B2 (ax 2 +bx+c)2

+ ··· +

Ak x+Bk , (ax 2 +bx+c)k

k = 1, 2, 3, . . . .

4.7.4 The Reverse Chain Rule Let f be the composition of two functions u and v, say .f = u ◦ v = u(v(x)). Suppose that both derivatives .v  (x) and .u (y) exist, where .y = v(x). By the chain rule, the derivative .f  also exists and is given by f  (x) = u (y)v  (x).

.

(4.36)

Integration and differentiation are the reverse processes of each other. Therefore, integrating a derivative of a function leaves the function unchanged: .

[u(v(x))] dx = u(v(x)).

(4.37)

On the other hand, applying the chain rule of differentiation, we obtain that .

[u(v(x))] dx =



u (v(x))v  (x)dx.

(4.38)

So, finally, from (4.37) and (4.38), we obtain that .

[u(v(x))] dx = u(v(x)) + C.

(4.39)

Formula (4.39) is known as the reverse chain rule. There are many applications of the reverse chain rule, but here we focus on a specific case. Assume that u(x) = x n+1

.

so that u (x) = (n + 1)x n .

.

4.7 Methods of Integration

59

More in general [u(v(x))] = (n + 1)(v(x))n .

.

Substituting this expression into the formula of the reverse chain rule, we get .

v  (x)(n + 1)(v(x))n dx = (v(x))n+1 + C

that can be re-written as follows:1 . v  (x)(v(x))n dx =

1 (v(x))n+1 + C. n+1

Example Calculate the following integral: .

2x(x 2 − 1)8 dx.

Let us pose .v(x) = x 2 − 1, and then we have that .v  (x) = 2x. Therefore,

.

2x(x 2 − 1)8 dx =

v  (x)(v(x))8 dx =

1 1 (v(x))9 + C = (x 2 − 1)9 + C. 9 9

Note also that this integral could be calculated also using substitution method posing .z ≡ x 2 − 1.

4.7.5 Using Combinations of Methods It is often necessary to use two or three of these methods in sequence to calculate an integral. Example Consider the integral I=

.

cos2 x 1 − 2 sin2 x

dx.

(4.40)

Typically, integral involving trigonometric functions can be solved by substitution, posing .u = tan x. Here in the following the detailed passages.

the constant of integration by a constant, in this case, .1/(n + 1), yields a constant that we name C again.

1 Multiplying

60

4 An Introduction to Differential and Integral Calculus

I=

.

cos2 x dx = 1 − 2sin2 x =



1 1 cos2 x

sec2 x

2

sin x − 2 cos 2x

dx

1 dx. − 2 tan2 x

Since .

sin2 x + cos2 x 1 = tan2 x + 1, = cos2 x cos2 x

sec2 x =

(4.41)

we get that I=

.

1 dx. 1 − tan2 x

Denoting .u = tan x from which we get that du =

.

1 dx = sec2 x dx cos2 x

and multiplying and dividing the integrand function by .sec2 x, and using (4.41), we obtain that I=

.

=

sec2 x dx = sec2 x(1 − tan2 x) (u2



sec2 x dx () tan2 x + 1)(1 − tan2 x)

du . + 1)(1 − u2 )

By applying partial fraction decomposition, the integrand function becomes B A 1 + = 2 , u + 1 1 − u2 (u2 + 1)(1 − u2 )

.

where .A = B = 12 . Therefore, 1 .I = 2



1 du + u2 + 1 2



du . 1 − u2

Let us pose I1 ≡

.

1 2



du , 2 u +1

I2 ≡

1 2



du 1 − u2

4.8 Ordinary Differential Equations

61

so that .I = I1 + I2 . .I1 can be calculated by substitution, e.g., .u = tan y. Therefore, du = sec2 y dy and

.

I1 =

.

1 2



sec2 y dy = (1 + tan2 y)



1 1 1 · dy = 2 cos2 y 1 + sin2 y cos2 y

dy =

1 y + const, 2

where “const” denotes an arbitrary real constant. Since .u = tan y, .y = arctan u, and finally, we get that I1 =

.

1 arctan u + const. 2

(4.42)

.I2 can be estimated by using the method of partial fraction decomposition, as follows:

.

1 1 C D .= = + , 2 (1 − u)(1 + u) 1−u 1+u 1−u

where .C = D = 12 . Finally, 1 1 · .I2 = 2 2



1 du + 1−u



   1 1  1 + u  + const. du = ln  1+u 4 1 − u

(4.43)

Finally, summing .I1 and .I2 given by Eq. (4.42) and Eq. (4.43), respectively, we obtain that the integral I in (4.40) is I=

.

  1  1 + u  1 + const. arctan u + ln  4 1 − u 2

(4.44)

Since .u = tan x, we finally obtain that   1  1 + tan x  x + const. + ln .I = 2 4  1 − tan x 

4.8 Ordinary Differential Equations In computational science and engineering practice, including in biology, physics, economics, and other fields, often mathematical models connect the system’s state .y(t) at time t defined on a time interval .[a, b] to some of its rates of change as expressed by its derivatives .y  (t), y  (t), . . .. An ordinary differential equation (shortly abbreviated as ODE) is an equation that describes exactly how the unknown state .y(t) changes over time t in .[a, b]. Such an equation needs to be solved to find

62

4 An Introduction to Differential and Integral Calculus

how the system evolves in time. For example, in physics, the equation describing the motion of a body of mass m in free fall under the action of a total force F acting on it is given by the Newton’s second law .y  (t) = F /m. In biology, the equation predicting the rate of change of a population .p = p(t) with growth rate .α in an environment that has a carrying capacity k is the so-called logistic equation  .p (t) = αp(1−p(t)/k). The time dependence is often dropped in the ODE notation because the time variable is understood, and we write simply .y  = F /m for the equation of motion and .p = αp(1 − p(t)/k) for the logistic equation. The order of a differential equation is determined by the highest derivative appearing in the equation. For example, the logistic equation .p = αp(1 − p(t)/k) is a first-order equation, whereas the equation of motion .y  = F /m is a second-order equation.

4.8.1 First-Order Linear Equations A generic first-order ODE expressed in the unknown function .y = y(t) can be written in the form y  = f (t, y),

(4.45)

.

where f is a function describing the relationship between time t and the system’s state .y(t) at time t. This form is called the normal form of a first-order ODE, and .y = y(t) is a solution of (4.45) on the time interval .[a, b] if y is differentiable on .[a, b] and, when substituted into the equation, y satisfies (4.45) exactly for every t in .[a, b]; that is, .y  (t) = f (t, y(t)), for every t in .[a, b]. For example, one solution of .y  = 3t 2 + 2 is .y(t) = t 3 + 2t. A first-order ODE is called linear if the right-hand side .f (t, y) is a linear function in y, that is, if (4.45) has the form y  + p(t)y = q(t).

(4.46)

.

This form represents an important class of first-order ODEs and is encountered frequently in applications. The function q(t) is called the source term or forcing term because of its role in several applications. The equation is called homogeneous if q(t) = 0 and nonhomogeneous otherwise. If a first-order ODE cannot be put into the form (4.46), the equation is called nonlinear. If p and q are continuous functions, a closed-form solution of (4.46) can be obtained by first multiplying both sides of the equation by the integrating factor

μ(t) = e^{∫p(t)dt} = e^{P(t)},  where P(t) = ∫ p(t) dt,

yielding

(e^{P(t)} y)′ = e^{P(t)} q(t),

then by integrating both sides to give

e^{P(t)} y(t) = ∫ e^{P(t)} q(t) dt + C,

where C is a constant, and finally multiplying by e^{−P(t)} to obtain the general solution

y(t) = e^{−P(t)} ∫ e^{P(t)} q(t) dt + C e^{−P(t)}.    (4.47)

Notice that the general solution y(t) is the sum of two terms,

y(t) = yh(t) + yp(t),

where

yh(t) = C e^{−P(t)},   yp(t) = e^{−P(t)} ∫ e^{P(t)} q(t) dt,

where .yh (t) is called the homogeneous solution because it satisfies the homogeneous equation .y  + p(t)y = 0, while .yp (t) is called a particular solution because it is a specific solution to the nonhomogeneous equation .y  + p(t)y = q(t). Example The solution of the homogeneous first-order ODE y  − y sin t = 0

.

is y(t) = Ce− cos t .

.

On the other hand, the solution of the nonhomogeneous first-order ODE y  − y sin t = sin t

.

is y(t) = Ce− cos t − 1.

.

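The solutions in the example above can be verified with a computer algebra system; a minimal sketch assuming SymPy is available:

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t) - y(t)*sp.sin(t), sp.sin(t))
print(sp.dsolve(ode, y(t)))   # expected: Eq(y(t), C1*exp(-cos(t)) - 1)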

4.8.2 Initial Value Problems

In many real-world problems governed by ODEs, one state of the system at a given time t0 is known, and it is requested to study how the system evolves in time from t0 on. An initial value problem (abbreviated as IVP) is the problem of solving a differential equation with unknown y = y(t), subject to the condition y(t0) = y0, that is

.

y  = f (t, y), y(t0 ) = y0 .

In this case, .t0 is a fixed instant of time, .y0 is the system’s state at .t0 , and .y(t0 ) = y0 is referred to as an initial condition. Graphically, the solution of an IVP is a curve in the .(t, y) plane defined in the interval .a < t < b and passing through point .(t0 , y0 ). The slope .y  (t) of the solution curve .y = y(t) has value .f (t, y) because of the relationship .y  (t) = f (t, y(t)), as shown in Fig. 4.7 for the ODE .y  = y sin t + sin t with initial conditions .y(−2) = y0 and .y0 = −2, −1, 0, 1, 2. The collection of all these mini-tangents forms the direction field for the given equation. Initial value problems raise intriguing mathematical questions: 1. Existence. Is there always a solution for every initial value? It is important to note that there may be a solution even if we cannot find a formula for it. 2. Uniqueness. Is there only one solution, if there is one? 3. Interval of existence. For how many times t does the solution to the initial value problem exist, if a solution exists? The French mathematician Augustin-Louis Cauchy [1789–1857] proved the following theorem, which provides partial answers to the above-mentioned questions. If the right side .f (t, x) of the differential equation is sufficiently smooth in a domain in the tx plane containing the point .(t0 , x0 ), then there is a unique solution that passes through that point. More details can be found in advanced textbooks on differential calculus (see, e.g., [134]).

Fig. 4.7 Direction field for the equation  .y = y sin t + sin t with initial conditions .y(−2) = y0 and .y0 = −2, −1, 0, 1, 2

4.9 Partial Differential Equations

65

Theorem 4.1 Assume the function .f (t, x) and its partial derivative .fx (t, x) are continuous in a rectangle .a < t < b, .c < x < d. Then, for any value .t0 in .a < t < b and .x0 in .c < x < d, the initial value problem  .

x  = f (t, x), x(t0 ) = x0

has a unique solution valid on some open interval .a < α < t < β < b containing t0 .

.

4.9 Partial Differential Equations A partial differential equation (PDE) is characterized by the presence of more than one independent variable .x, y, z, . . .. The dependent variable .u(x, y, z, . . .) is an unknown function of these variables. The equation links the independent variables, the dependent variable u, and the partial derivatives of u. Subscripts are frequently used to denote the partial derivatives of u; for example, .∂u/∂x = ux , .∂u/∂y = uy , .∂u/∂z = uz . As a result, in the case of two variables x and y, a PDE can be written as an equation of the form F (x, y, u(x, y), ux (x, y), uy (x, y)) = F (x, y, u, ux , uy ) = 0.

.

This is the most general PDE of first order in two independent variables. The order of an equation is the highest derivative that appears. Analogously, the most general second-order PDE in two independent variables is F (x, y, u, ux , uy , uxx , uxy , uyy ) = 0.

.

A solution of a PDE is a function .u(x, y, z, . . .) that satisfies the equation identically, at least in some region where the .x, y, z, . . . variables are defined. As an example, a population reproducing continuously with rate .α and spreading over space in a random way is governed by the following PDE: .

∂P/∂t = D ∂²P/∂x² + αP,

where D is called the dispersion rate and .P (x, t) is the population density at a given time and location. PDEs that satisfy a property known as linearity are an important class of PDEs. Suppose to write the equation in the form .L u = 0, where .L is an operator. That is, given a function v, .L u is a new function. For instance, in our previous example, the operator .L is

L = ∂/∂t − D ∂²/∂x² − α.

The definition we want for linearity is L (u + v) = L (u) + L (v)

.

L (cu) = cL (u)

(4.48)

for any functions u, v and any constant c. Whenever (4.48) holds (for all choices of u, v, and c), .L is called linear operator, and in this case, the equation Lu = 0

(4.49)

.

is called linear. More precisely, Eq. (4.49) is called a homogeneous linear equation, whereas the equation L u = g, g = 0,

(4.50)

.

where g is a function of the given independent variables, is called an inhomogeneous linear equation. The advantage of linearity for the equation L u = 0 is that if u and v are both solutions, so is (u + v). If u1, . . . , un are all solutions, so is any linear combination

c1 u1(x) + · · · + cn un(x) = Σ_{j=1}^{n} cj uj(x)    (cj constants).

This property is known as the superposition principle. A function with multiple variables contains far more information than a function with only one variable. The graph of a two-variable function u = f(x, y), for example, is a surface and is clearly a much more complex object than the graph of a one-dimensional curve u = f(x). A computer could represent u = f(x) by choosing 1000 points and representing them with equally spaced values x1, x2, . . . , x1000. However, if we choose 1000 equally spaced x and y values, x1, x2, . . . , x1000 and y1, y2, . . . , y1000, a total of 1000² = 10⁶ values (xi, yj) are required to describe u = f(x, y). For further mathematical details about PDEs and their solution, we refer the reader to [144, 187].

4.10 Discretization of Differential Equations Most differential equations cannot be solved using simple analytic formulas. However, an approximation of the solution can be obtained by a computer algorithm. This is especially true in modern scientific computing and industrial simulations, where differential equations are almost always solved numerically due to the fact

4.10 Discretization of Differential Equations

67

that most real-world problems lead to models that are either too complex to solve analytically or their solution is expressed in the form of complicated formulas involving integrals or infinite series that need to be approximated by computer calculation. To describe the basic idea of discretization of an ODE, we develop numerical approximations using a method called finite difference methods. Assume we want to solve the initial value problem:  .

y  = f (t, y), y(t0 ) = y0 ,

defined on the interval t0 ≤ t ≤ T. Instead of attempting to find a continuous solution defined at each time t, we seek to determine a discrete sequence of approximations Y0, Y1, Y2, . . . , Yn, . . . at time values t0 < t1 < t2 < · · · < tn < · · · in the given interval t0 ≤ t ≤ T. If the solution is sought at a time ti < t < ti+1, it can be obtained by averaging the numerical values Yi and Yi+1. To accomplish this, we partition the time interval t0 ≤ t ≤ T into N small segments of constant length h, which we refer to as the step size. As a result,

h = (T − t0)/N.

This formula defines the set of discrete equispaced times with equal spacing t0 , t1 , t2 , . . . , tN = T defined as .tn = t0 + nh, .n = 0, 1, 2, . . . , N. The approximation .Yn+1 at time .tn+1 is calculated through a recursive procedure in terms of the previous approximation .Yn at time .tn to produce the whole sequence of numerical values .Y0 , Y1 , Y2 , Y3 , . . . , YN . Such recursive algorithm can be easily derived using Taylor’s theorem. Below, we review this important mathematical result, which is probably one of the most commonly used theorems in numerical analysis.

.

Theorem 4.2 Let .f, f  , . . . , f (n) be continuous functions defined on the interval (n+1) (x) exists for all .x ∈ (a, b). There is then a number .ξ ∈ .[a, b], and assume .f (a, b) such that .

f(b) = f(a) + (b − a) f′(a) + ((b − a)²/2!) f″(a) + · · · + ((b − a)^n/n!) f^(n)(a) + ((b − a)^{n+1}/(n + 1)!) f^(n+1)(ξ),  ξ ∈ (a, b).    (4.51)

When the two points a and b in (4.51) are close to each other, b is frequently written as .b = a + h for some small quantity h. In addition, the interval over which f is smooth in Theorem 4.2 may be rather larger than the interval .[a, b]. In this case, (4.51) can be written for two arbitrary points x and .x + h within .[a, b] as

68

4 An Introduction to Differential and Integral Calculus

f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + · · · + (h^n/n!) f^(n)(x) + (h^{n+1}/(n + 1)!) f^(n+1)(ξ),    (4.52)

where ξ ∈ (x, x + h). Although the notation is different, the formula remains the same. Formula (4.52) can also be written more compactly in summation notation:

f(x + h) = Σ_{j=0}^{n} (h^j/j!) f^(j)(x) + (h^{n+1}/(n + 1)!) f^(n+1)(ξ).

The remainder term in the Taylor series expansion of f(x + h) about the point x in the previous expression is

Rn(x) ≡ f(x + h) − Σ_{j=0}^{n} (h^j/j!) f^(j)(x) = (h^{n+1}/(n + 1)!) f^(n+1)(ξ),

which is sometimes written as O(h^{n+1}) using the Landau notation. The notation z = O(h^p) with p a positive integer is widely used in mathematics and means that there exists a positive constant C such that |z| ≤ Ch^p. As a result, z converges to zero as h → 0, and the order of convergence is p. A first-order Taylor's expansion of y(t) around t = tn evaluated at the time t = tn+1 gives the approximation of y′(tn):

.

y(tn+1 ) − y(tn ) , h

where .y = y(t) is the exact solution to the initial value problem  .

y  = f (t, y), y(t0 ) = y0 .

The approximation error is proportional to .h2 (the step size-squared). Rearranging the terms gives y(tn+1 ) − y(tn ) − hf (tn , y(tn )) + O(h2 ).

.

If we denote the approximation of .y(tn ) by .Yn and neglect the small .O(h2 ) error terms, then we can write Yn+1 = Yn + hf (tn , Yn ), n = 0, 1, 2, . . . , N − 1,

.

(4.53)

which is called the Euler method. To start the method at .n = 0, we take .Y0 = y0 , the initial condition. Then we use (4.53) to compute .Y1 , .Y2 , and so on. These

4.10 Discretization of Differential Equations

69

discrete values approximate the graph of the exact solution, and they are frequently connected by line segments to form a continuous curve. The error term .O(h2 ) in the Taylor’s approximation is called the local truncation error. For Euler’s method, the local truncation error is called order .h2 or second order. The cumulative error, or global discretization error, over all N steps, from .t0 to .tN , is therefore N times .O(h2 ), or .O(h), because .N = (T − t0 )/ h. As a result, applying the Euler method over a bounded interval should result in a global error at the right endpoint proportional to the step size h. Using the Landau notation, the global error can be written .O(h). The presented scheme is called explicit or forward Euler Method because it permits the calculation of .Yn+1 directly from .Yn .

4.10.1 The Implicit or Backward Euler Method The Euler algorithm is the simplest method for numerically approximating the solution to a differential equation. To obtain a more accurate method, we can consider a first-order Taylor’s expansion of .y(t) around .t = tn+1 evaluated at the time .t = tn , which gives the approximation y(tn ) − y(tn+1 ) + hf (tn+1 , y(tn+1 )) + O(h2 ).

.

If we denote as before the approximation of .y(tn ) by .Yn and neglect the small .O(h2 ) error terms, then we can write Yn+1 = Yn + hf (tn+1 , Yn+1 ), n = 0, 1, 2, . . . , N.

.

(4.54)

This solution is not as straightforward as it may initially seem, because it does not provide the .Yn+1 in terms of the .Yn explicitly. Such an equation is called an implicit equation, and the resulting method is called the implicit or backward Euler method. At each time step, we have to solve a nonlinear algebraic equation for .Yn+1 , which is very time consuming, but it pays off in terms of accuracy. Despite the fact that explicit methods are very simple to use, they have a drawback caused by the restrictions placed on the time step size to ensure numerical stability. To further understand this, let us look at a linear IVP, provided by  .

y  = −ay, y(0) = 1, a > 0,

and an exact solution .y(t) = e−at that converges to 0 when .t → ∞. The explicit Euler method yields the following numerical solution to this IVP: .yn+1

= yn − ahyn = (1 − ah)yn = (1 − ah)2 yn−1 = . . . = (1 − ah)n y1 = (1 − ah)n+1 y0 .

70

4 An Introduction to Differential and Integral Calculus

To ensure a correct approximation of the exact solution .y(t) = e−at , which converges to 0 as .t → ∞, we need .|1 − ah| < 1 or .h < 2/a. The explicit Euler method is no longer stable for values of .h ≥ 2/a due to amplifications of errors in the iteration process. In contrast, consider the stability of the implicit Euler method for the same linear IVP. The numerical answer is yn+1 =

.

1 1 1 1 yn = yn−1 = . . . = y1 = , n 2 1 + ah (1 + ah) (1 + ah) (1 + ah)n+1

which is clearly stable and converges to 0 in the same way that the exact solution does for long times (large n and large values of h).

4.10.2 The Runge–Kutta Method The explicit Euler and implicit Euler methods are just two of the many numerical methods that can be used for solving initial value problems. Because solving differential equations is so common in computational science and engineering applications, and real-world models are typically quite complex, significant research effort has gone into developing accurate efficient methods. One of the most popular algorithm is the explicit, fourth-order Runge–Kutta method. The formula for the Runge–Kutta update is Yn+1 = Yn +

.

h (k1 + 2k2 + 2k3 + k4 ), 6

where k1 = f (tn , Yn ),

.

h , Yn + 2 h k3 = f (tn + , Yn + 2 k2 = f (tn +

h k1 ), 2 h k2 ), 2

k4 = f (tn + h, Yn + hk3 ). The new value is a weighted average of the four slopes .k1 , . . . , k4 taken at different points along the interval, so that the cumulative error over a bounded interval is proportional to .h4 . This means that if the step h is set to .0.1, the cumulative errors for the Euler, modified Euler, and Runge–Kutta methods over the given interval of interest are proportional to .0.1, .0.01, and .0.0001, respectively, a significant improvement. The cost of high accuracy is that four function evaluations are required; however, on modern computers with fast processor speeds, this cost is affordable. Additionally, the method is rather simple to implement. As a result, it

4.11 Systems of Differential Equations

71

is built in most numerical software programs and modern scientific calculators. For more numerical methods for solving IVPs, we refer the reader to [10, 84, 134].

4.11 Systems of Differential Equations The numerical methods presented for a single equation are readily applicable to systems of equations. Consider for example the two-dimensional IVP  .

x  = f (t, x, y), y  = g(t, x, y),

with initial conditions x(t0 ) = x0 , y(t0 ) = y0 .

.

A numerical solution over the interval .t0 ≤ t ≤ T can be obtained by first selecting N + 1 equispaced time steps

.

tn = t0 + nh, n = 0, 1, . . . , N,

.

where .h = (T − t0)/N is the step size and N is the number of steps, as we did before. We use the notation .Xn and .Yn to denote the approximate values of .x(tn ) and .y(tn ), respectively. The previous methods can be easily extended as follows: • Explicit Euler method:  .

Xn+1 = Xn + hf (tn , Xn , Yn ), Yn+1 = Yn + hg(tn , Xn , Yn ).

• Implicit Euler method:  .

Xn+1 = Xn + hf (tn , Xn+1 , Yn+1 ), Yn+1 = Yn + hg(tn , Xn+1 , Yn+1 ).

• Runge–Kutta method: We compute the values of the eight slopes as follows: k11 = f (tn , Xn , Yn ),

.

k21 = g(tn , Xn , Yn ),

72

4 An Introduction to Differential and Integral Calculus

k12 k22 k13 k23

  h h h = f tn + , Xn + k11 , Yn + k21 , 2 2 2   h h h = g tn + , Xn + k11 , Yn + k21 , 2 2 2   h h h = f tn + , Xn + k12 , Yn + k22 , 2 2 2   h h h = g tn + , Xn + k12 , Yn + k22 , 2 2 2

k14 = f (tn , Xn + hk13 , Yn + hk23 ) , k24 = g (tn , Xn + hk13 , Yn + hk23 ) ,

and then, using weighted averages of these slopes, we compute the next approximation:  .

Xn+1 = Xn + h6 (k11 + 2k12 + 2k13 + k14 ), Xn+1 = Xn + h6 (k21 + 2k22 + 2k23 + k24 ).

The accuracy of these three methods remains in the orders h, .h2 , and .h4 , respectively. The same procedures can be extended to several equations in several unknowns [10, 84, 134].

Chapter 5

Modelling Chemical Reactions

5.1 Modelling in Systems Biology In the context of system biology, modelling and simulation of pathways as networks of biological reactions are of tremendous interest. The core tenet of this growing field holds that the functioning and function of cells are produced by the system dynamics and organizing principles of complex biological events. When cells are seen as dynamic systems, it is possible to understand their temporal processes such as growth, division, differentiation, and death. System biology is defined by two key questions because it focuses on understanding functional activity from a system-wide perspective: (i) How do the elements that make up a cell interact to create its structure and function? (ii) How do cells interact so that higher levels can form and be maintained? Wet-lab biologists have recently embraced mathematical modelling and simulation as two crucial methods for addressing the aforementioned queries. The fundamental tenet of dynamical system theory is that a biological system’s behaviour is determined by the temporal evolution of its state. The degree to which a simulation closely resembles the actual behaviour of a biological system can be used to gauge how well we understand how that system behaves over time. Deviations from a simulation point to knowledge gaps or limitations. Understanding biological processes through biochemistry fundamentally involves simulating them. A cell-free extract that can mediate a clearly characterized physiological process is normally prepared by a biochemist. Similar to this, the conceptual framework validity of a model is evaluated by how closely its simulation reflects physiological observations. Unfortunately, it is frequently impossible to conduct controlled tests on the system to verify our model (for instance, how can the model be verified if just one historical data set is available?). Additionally, when the model is stochastic or contains random elements, the validation process becomes challenging. Regardless of the model’s nature, validation generally makes sure that the techniques used and the outcomes attained satisfy the model’s objectives. Model validation’s ultimate © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Lecca, B. Carpentieri, Introduction to Mathematics for Computational Biology, Techniques in Life Science and Biomedicine for the Non-Expert, https://doi.org/10.1007/978-3-031-36566-9_5

73

74

5 Modelling Chemical Reactions

purpose is to make the model helpful in the sense that it tackles the proper issue, offers correct details about the system being modelled, and, most importantly, answers the problem.

5.2 The Different Types of Mathematical Models There are four distinct areas in which mathematical models are applied to biology: population dynamics, cell and molecular biology, physiological systems, and spatial modelling. Various formalizations are typically used to characterize the dynamics of these various areas (see Table 5.1). In general, the kind of the determination, the time, and the spatial state all affect the mathematical structure of a model of a physical phenomenon. A model’s determination may be deterministic, stochastic, or even a combination of the two. The state space can also be continuous or discrete, just as the time course might be either. The combination of these characteristics gives rise to different mathematical approaches to the modelling the dynamics of the phenomenon. Here following we list some of the most common mathematical formalisms and approaches to specify the dynamics of a system with respect to the four categories listed above: 1. Deterministic processes (Newtonian dynamical systems). A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process will always generate the same trajectory and no two trajectories cross in state space: • Ordinary differential equations (ODEs) (Continuous time. Continuous state space. No spatial derivatives) • Partial differential equations (Continuous time. Continuous state space. Spatial derivatives) • Maps (Discrete time. Continuous state space) 2. Stochastic processes (random dynamical systems) A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution:

Table 5.1 Classes of biological phenomena and most used formalisms to describe them

Population dynamics: deterministic processes, ordinary differential equations
Cell and molecular biology: stochastic processes, jump Markov processes and continuous Markov processes
Physiological systems: deterministic processes
Spatial modelling (epidemiology): deterministic processes, partial differential equations

5.3 Chemical Kinetics: From Diagrams to Mathematical Equations

75

• Jump Markov process—Master equation (Continuous time with no memory of past events. Discrete state space. Waiting times between events discretely occur and are exponentially distributed.) • Continuous Markov process—stochastic differential equations or a Fokker– Planck equation (Continuous time. Continuous state space. Events occur continuously according to a random Wiener process.) • Non-Markovian processes—Generalized master equation (Continuous time with memory of past events. Discrete state space. Waiting times of events (or transitions between states) discretely occur and have a generalized probability distribution.) • Stochastic simulation algorithms: Gillespie exact simulation and StochSim 3. Hybrid stochastic/deterministic systems (metabolic and signalling pathways): • Gillespie .τ -leap algorithm—Differential equations for the simulation of fats reactions and Gillespie algorithm for the exact simulation of slow reactions The two most prominent frameworks for modelling the chemistry of intracellular dynamics are deterministic modelling and stochastic modelling. The creation of a set of rate equations to explain the reactions in the metabolic pathways of interest is the foundation of deterministic modelling. These rate equations are ordinary differential equations with chemical species concentrations as variables. Given the intricacy of biological pathways, we must generally deal with nonlinear differential equations. By solving differential equations, deterministic simulations generate the temporal path of concentrations. In its most well-known form, stochastic modelling entails the development of a collection of chemical master equations with probabilities as variables citekampen. As realizations of random variables selected from the probability distribution specified by the master equations, stochastic simulation generates counts of molecules of some chemical species. Which framework is appropriate for a given biological system is determined not just by the actual phenomena being researched, but also by assumptions made to ease the investigation. For example, the scale, and hence the amount of granularity at which a phenomenon is researched, may be parameters used to determine whether a deterministic or stochastic technique should be used [149].

5.3 Chemical Kinetics: From Diagrams to Mathematical Equations A model of a biological system can be represented in a variety of ways. Historically, biologists have preferred diagrammatic schemes combined with vocal explanations that communicate qualitative information about the displayed mechanisms. Applied mathematicians, on the other hand, have typically preferred to work with systems

76

5 Modelling Chemical Reactions

of ordinary or partial differential equations. These are more exact and fully quantitative, but they have certain drawbacks. Differential equation models are a rather low-level description: they encode not only the essential features of the model, but also a wealth of baggage associated with specific interpretations of chemical kinetics that are not always well suited to the molecular biology context. Between these two extremes, the biochemist sees systems as networks of coupled chemical reactions. These networks are sufficiently generic that they can be simulated in many ways using different methods, depending on the underlying kinetic assumptions. Furthermore, they are precise enough that, once the kinetics are established, they may be used directly to build full dynamic simulations of the system behaviour on a computer. A general chemical reaction takes the form

s_1 X_1 + s_2 X_2 + \cdots + s_n X_n \longrightarrow r_1 Y_1 + r_2 Y_2 + \cdots + r_m Y_m,    (5.1)

where n is the number of reactants and m is the number of products. X_i represents the ith reactant molecule and Y_j the jth product molecule. s_i is the number of molecules of X_i consumed in a single reaction step, and r_j is the number of molecules of Y_j produced in a single reaction step. The coefficients s_i and r_j are known as stoichiometries and are typically (but not necessarily) integers. There is no assumption that the X_i and Y_j are distinct: a single molecule can be consumed and produced in the same reaction. A chemical species that occurs on both the left- and right-hand sides is referred to as a modifier. In this situation, the reaction does not change the amount of this species, but the species must be present in the system because the velocity with which the reaction progresses depends on its level. Let n_j be the number of molecules of the species X_j. The reaction equation specifies which chemical species react, in what amounts, and what is created. Consider the dimerization of a protein P, which is written as follows:

2P \longrightarrow P_2.    (5.2)

Two molecules of P react together to produce a single molecule of P_2. Here P has a stoichiometry of 2, and P_2 has a stoichiometry of 1, which usually is not written. Similarly, the reaction for the dissociation of the dimer is written as

P_2 \longrightarrow 2P.    (5.3)

The term reversible refers to a reaction that can occur in both directions. Reversible reactions are quite common in biology. They are typically written with a double arrow, the reverse arrow denoting the backward reaction. This notation is only a shortcut for the two distinct chemical processes that are taking place. In the context of the stochastic models explored in this chapter, it will not be appropriate to replace the two distinct reactions with a single reaction proceeding with a velocity determined by some combination of the velocities of the two separate reactions.

2P \rightleftharpoons P_2.    (5.4)

5.4 Kinetics of Chemical Reactions

Chemical kinetics studies the time evolution of a reaction system defined by a set of coupled chemical reactions. It is particularly concerned with system behaviour away from equilibrium. Although the reaction equations capture the key interactions between the competing species, they are insufficient to determine the overall system dynamics. Solving the dynamics of a chemical system entails solving the following general problem: if a fixed volume V contains a spatially uniform mixture of N chemical species that can interact through M chemical reaction channels, what will the molecular population levels be at any later time, given the numbers of molecules of each species present at some initial time? To answer this question we need to know the rates at which each of the reactions occurs, as well as the initial concentrations of the reacting species. The rate of a reaction is a measure of how the concentrations of the substances involved change over time. Consider a closed volume V holding a mixture of chemical compounds X_j (j = 1, 2, ..., J) and a typical reaction, such as the one shown below:

s_1 X_1 + s_2 X_2 + \cdots \longrightarrow r_1 X_1 + r_2 X_2 + \cdots.    (5.5)

Let x_j be the number of molecules of X_j. It is convenient to represent the set {x_j} geometrically by a vector x in a J-dimensional state space. The integer values of x_j constitute a lattice. Each lattice point in the octant of non-negative values corresponds to a state of the mixture, and vice versa (Fig. 5.1).

Fig. 5.1 The state space of a binary mixture

The state of the mixture changes when a chemical reaction occurs. Both sides of (5.5) can be written as a sum over all j when zero values of s_j and r_j are admitted:

\sum_j s_j X_j \longrightarrow \sum_j r_j X_j.

If s_k = r_k \neq 0 for some k, the corresponding X_k is a catalyst; X_k is an autocatalyst if r_k > s_k > 0. The s_j are the actual numbers of molecules required for a reactive collision. A reaction with intermediate steps (a chain reaction) must be expressed as a series of single-collision reactions, with the intermediate products listed as independent items among the X_j. Because three-body collisions are uncommon, only reactions with \sum_j s_j equal to 1 or 2, or possibly 3 if a catalyst is present, are encountered in practice. Each reactive collision of type (5.5) transforms the mixture's state x_j into x_j - s_j + r_j. In the geometrical representation, it alters the state vector x by adding a vector v with components v_j = r_j - s_j. As the reaction progresses, the state vector traverses a series of lattice points on a straight line. This line cannot reach infinity and must therefore terminate on one of the borders of the physical octant. The reverse reaction

\sum_j r_j X_j \longrightarrow \sum_j s_j X_j    (5.6)

will instead have the effect of subtracting v from the state vector. Starting from an initial state x_0, the direct and inverse reactions lead the state vector to move across a discrete chain of lattice points located on a straight line between two borders of the physical octant. The accessible points are

x = x_0 + \xi v,    (5.7)

where \xi takes all integer values between an upper and a lower bound. Suppose now that another reaction, with coefficients s'_j and r'_j, and its reverse are also possible. Starting from x_0, a second chain of lattice points becomes accessible. Together with the previous reaction, a network of points can now be reached,

x = x_0 + \xi v + \xi' v'    (\xi, \xi' = \dots, -1, 0, 1, \dots).    (5.8)

When all possible reactions are considered in this manner, a sublattice of points accessible from x_0 is created. Because \sum_j x_j is bounded, this sublattice cannot cover the entire octant. Since the reactions occur in a closed volume, there is no other way for the x_j to change. This bounded sublattice therefore represents the set of all accessible states of the system. The physical octant decomposes into such sublattices, and the system is restricted to the sublattice that contains its starting state x_0. Using Eq. (5.8), the accessible sublattice can be parametrized in the following fashion. Each possible reaction \rho has a vector v^{(\rho)}, and all lattice points accessible from x_0 are

x = x_0 + \sum_{\rho} \xi_{\rho} v^{(\rho)}.    (5.9)

Each parameter \xi_{\rho} takes the integer values \dots, -2, -1, 0, 1, 2, \dots and is called the degree of advancement, because it indicates how far the reaction \rho has advanced. Assume that the representation (5.9) is unique, i.e., that each accessible point x is represented by a single set of values {\xi_{\rho}}. If so, (5.9) maps the accessible sublattice onto the integer lattice in the space with coordinates \xi_{\rho}. Each lattice point in this space corresponds to one and only one state of the mixture, and each reactive collision represents a unit step along one of the coordinate axes \xi_{\rho}. However, there is no reason why (5.9) should be unique in general: there could be two sets of \xi_{\rho} that lead from x_0 to the same x. This implies that there is a set of integers \zeta_{\rho}, not all zero, such that

\sum_{\rho} \zeta_{\rho} v^{(\rho)} = 0.    (5.10)

In this case, it is still possible to find a smaller set of lattice vectors w^{(\rho)}, such that each point of the accessible sublattice is uniquely represented by

x = x_0 + \sum_{\rho} \eta_{\rho} w^{(\rho)},    (5.11)

with integer \eta_{\rho}. Each lattice point in the space with coordinates \eta_{\rho} corresponds to one and only one state of the mixture, but while in \xi-space reactions correspond to unit steps, in \eta-space they do not. Hence, not much has been gained with respect to the original representation in the space of state vectors x. The reactions that are possible in a closed volume are restricted by conservation laws for the atoms involved (we will return to this point more formally later in this chapter). Let \alpha label the various kinds of atoms and suppose X_j contains m_{\alpha j} atoms of kind \alpha, where m_{\alpha j} = 0, 1, 2, \dots. Then the stoichiometric coefficients of (5.5) obey, for each \alpha,

\sum_j s_j m_{\alpha j} = \sum_j r_j m_{\alpha j}.    (5.12)

Since this holds for all reactions, the accessible sublattice lies entirely on the intersection of the hyperplanes given by

x \cdot m_{\alpha} = C_{\alpha},    (5.13)

where C_{\alpha} is the total number of available atoms of kind \alpha.

Fig. 5.2 Accessible states for the reactions 2A \rightleftharpoons 2B with C = 7

There is no requirement for independence among the conservation laws (5.13). A group of atoms that remains bound together in a molecule through all reactions gives rise to a single conservation law. There may also be conservation laws in addition to those expressing atom conservation: for instance, if X_k merely acts as a catalyst, the number x_k is conserved by itself. All conservation laws taken collectively define a linear subspace of lattice points. This subspace contains the accessible sublattice, which is typically, but not always, identical to it. A counterexample would be

2A \rightleftharpoons 2B,    (5.14)

in which two molecules of A may, by colliding, change into two molecules of a different modification B. The conservation law for this reaction is

x_A + x_B = C,    (5.15)

and it defines a straight line in the two-dimensional state space (Fig. 5.2), but only every other lattice point on this line is accessible from a given x_0.

5.4.1 The Law of Mass Action

The rate of a reaction is a measure of how the concentrations of the substances involved change with time. For the rate at which a reaction such as (5.5) occurs, one takes the Van 't Hoff expression

k \prod_{j=1}^{J} c_j^{s_j},    (5.16)

where c_j = x_j/V are the reactant concentrations, each raised to the power of its stoichiometry. Here k is a constant that involves the cross-section for a collision of the required molecules, times the probability that the collision results in a reaction. The Van 't Hoff expression in (5.16) gives the number of collisions per unit time per unit volume in which {x_j} \longrightarrow {x_j - s_j + r_j}. The rate equations are therefore

\frac{dx_i}{dt} = V k (r_i - s_i) \prod_{j=1}^{J} \left(\frac{x_j}{V}\right)^{s_j}.    (5.17)

This equation is not a universal truth but holds when the following physical requirements are satisfied:

1. The mixture must be homogeneous, so that its density at each point of V equals x_j/V.
2. The elastic, non-reactive collisions must be frequent enough to maintain the Maxwell velocity distribution. Otherwise, the collision frequency would not be proportional to the product of the densities, and more information about the velocity distribution would have to be included. This condition is met if a solvent or an inert gas is present.
3. The internal degrees of freedom of the molecules are also assumed to be thermally equilibrated, with the same temperature T as the velocities. Otherwise, the fraction of collisions that cause a reaction would be determined by the characteristics of the distribution over internal states rather than merely by the concentrations. Long-lived excited states can, however, be accounted for by classifying them as a different species among the X_j, provided there is a clear separation of time scales.
4. The temperature must be constant in space and time, so that the reaction rate coefficients can be treated as constants.

Although these assumptions may be unrealistic in many actual chemical processes, they do not violate any physical laws and can thus be approximated to any desired accuracy in appropriate experiments. They ensure that the state vector x completely describes the state of the mixture. In the following two sections we show two examples of the chemical kinetics specified by Eq. (5.17): the Lotka–Volterra system and the enzymatic Michaelis–Menten catalysis. These two examples allow us to present the key principles associated with the analysis of differential equations.
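To make Eq. (5.17) concrete, the following minimal Python sketch (ours, not from the original text; the function and variable names are our own and the rate constants in the example are arbitrary) evaluates the mass-action right-hand side for a set of reactions described by their stoichiometry matrices.

import numpy as np

def mass_action_rhs(x, t, S, R, k, V):
    """Right-hand side of the rate equations (5.17).

    x : molecule numbers of the J species (length-J array)
    S : M x J matrix of reactant stoichiometries s_j, one row per reaction
    R : M x J matrix of product stoichiometries r_j
    k : rate constants of the M reactions
    V : volume
    Returns dx/dt, an array of length J.
    """
    c = x / V                              # concentrations x_j / V
    rates = k * np.prod(c ** S, axis=1)    # Van 't Hoff rates, one per reaction
    return V * (R - S).T @ rates           # V * k * (r_i - s_i) * prod (x_j/V)^{s_j}

# Example: the dimerization 2P -> P2 and its reverse P2 -> 2P
S = np.array([[2, 0], [0, 1]])             # reactant stoichiometries of the two reactions
R = np.array([[0, 1], [2, 0]])             # product stoichiometries
k = np.array([0.5, 0.1])                   # illustrative (assumed) rate constants
print(mass_action_rhs(np.array([100.0, 0.0]), 0.0, S, R, k, V=1.0))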

5.4.2 Example 1: the Lotka–Volterra System

Chemical kinetics is concerned with the time behaviour of a system of coupled chemical reactions away from equilibrium. As an example, let us consider the Lotka–Volterra (LV) predator–prey system for two interacting species:

Y_1 \longrightarrow 2Y_1
Y_1 + Y_2 \longrightarrow 2Y_2
Y_2 \longrightarrow \emptyset.

This is the most basic model with nonlinear auto-regulatory feedback behaviour. Y_1 is a prey species (such as rabbits), while Y_2 is a predator species (such as foxes). The term "species", which in chemical kinetics denotes a particular type of molecule, is used here in its ecological sense because the reactions are being used to illustrate the interaction of species in a population dynamics scenario. The first reaction represents prey reproduction. The second reaction captures a predator–prey encounter (consumption of prey by a predator, which in turn influences the predator reproduction rate). The third reaction represents the natural death of predators. The LV model allows the numbers of prey and predators to be thought of as discrete integers that can only change when a reaction event occurs. In typical continuous deterministic chemical kinetics, however, the quantities of reactants and products are stated as concentrations, measured in moles per litre (M), which can change continuously as the reaction progresses. The concentration of a chemical species X is commonly denoted by [X]. According to Eq. (5.17), the instantaneous rate of a reaction is directly proportional to the concentration (and thus directly proportional to the mass) of each reactant raised to the power of its stoichiometry. This kinetic law is, in fact, the law of mass action. As a result, the second reaction in the LV system will proceed at a rate proportional to [Y_1][Y_2]. Through this reaction, [Y_1] will drop at the instantaneous rate k_2[Y_1][Y_2], where k_2 is the proportionality constant for this reaction. Because the overall effect of the reaction is to decrease [Y_1] at the same rate that [Y_2] rises, [Y_2] will increase at the same pace. The expression k_2[Y_1][Y_2] is the rate law of the reaction, and k_2 is the rate constant. Considering all three reactions, we can write down a set of ODEs for the system:

\frac{d[Y_1]}{dt} = k_1 [Y_1] - k_2 [Y_1][Y_2],    (5.18)
\frac{d[Y_2]}{dt} = k_2 [Y_1][Y_2] - k_3 [Y_2].    (5.19)

The three rate constants k_1, k_2, and k_3, as well as the initial concentrations of each species, must be provided. Once this is done, the entire dynamics of the system can be revealed by solving the set of ODEs, either analytically (in the rare instances where this is possible) or numerically on a computer. The temporal behaviours of the solutions are shown in Fig. 5.3, where the initial values of [Y_1] and [Y_2] are 4 and 10, respectively, and the rate constants are k_1 = 1 and k_2 = k_3 = 0.1. The solutions were obtained with the ODE solution and analysis program XPPAUT [57]. An alternative way to display the dynamics of the system is as an orbit in phase space, where the values of one variable are plotted against the values of the other variables. Figure 5.4 shows the dynamics in this way.
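The same trajectories can be reproduced with any standard ODE integrator. The sketch below is our own illustration (using scipy rather than XPPAUT) and integrates Eqs. (5.18)–(5.19) with the parameter values quoted above.

import numpy as np
from scipy.integrate import odeint

def lotka_volterra(y, t, k1, k2, k3):
    """Right-hand side of Eqs. (5.18)-(5.19)."""
    y1, y2 = y
    return [k1 * y1 - k2 * y1 * y2,      # d[Y1]/dt
            k2 * y1 * y2 - k3 * y2]      # d[Y2]/dt

k1, k2, k3 = 1.0, 0.1, 0.1               # rate constants used in Fig. 5.3
y0 = [4.0, 10.0]                         # initial concentrations [Y1], [Y2]
t = np.linspace(0, 100, 2000)            # time grid, 0 to 100 as in Fig. 5.3

sol = odeint(lotka_volterra, y0, t, args=(k1, k2, k3))
# sol[:, 0] and sol[:, 1] give the oscillating time courses of [Y1] and [Y2];
# plotting sol[:, 1] against sol[:, 0] gives the closed orbit of Fig. 5.4.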

Fig. 5.3 Lotka–Volterra dynamics for [Y_1]_{t=0} = 4, [Y_2]_{t=0} = 10, k_1 = 1, and k_2 = k_3 = 0.1

Fig. 5.4 Lotka–Volterra dynamics in the phase plane for [Y_1]_{t=0} = 4, [Y_2]_{t=0} = 10, k_1 = 1, and k_2 = k_3 = 0.1. The equilibrium solutions for this combination of parameters are [Y_1] = 1 and [Y_2] = 10. These values correspond to the coordinates of the intersection point of the nullclines

Phase plane analysis is a tool for predicting how a system's behaviour will change as different parameters are altered. In what is generally referred to as phase plane analysis, various types of plots are used:

• A phase portrait is a phase space trajectory created by plotting the variables that describe the system against each other rather than as functions of time. A phase diagram describes the relationships between the variables for a given set of parameters.
• A vector field displays the direction of evolution of the system from any point in phase space.
• Nullclines represent, in phase space, the values of a pair of variables at which one of the variables does not change. For a system of coupled equations in the variables X and Y, the nullclines are the solutions of

\frac{dX}{dt} = 0,    (5.20)
\frac{dY}{dt} = 0.    (5.21)

A nullcline exists for each variable. The points at which two nullclines intersect are referred to as fixed points, which represent steady states, also known as equilibrium points.

5.4.2.1 Equilibrium

An equilibrium solution is a set of concentrations that does not change over time; it can be found by solving the simultaneous equations obtained by setting the right-hand sides of the ODEs to zero. For the Lotka–Volterra example, the conditions for equilibrium are

k_1 [Y_1] - k_2 [Y_1][Y_2] = 0,
k_2 [Y_1][Y_2] - k_3 [Y_2] = 0.

Solving these for [Y_1] and [Y_2] in terms of k_1, k_2, and k_3 gives two solutions. The first is

[Y_1] = 0,    [Y_2] = 0,    (5.22)

and the second is

[Y_1] = \frac{k_3}{k_2},    [Y_2] = \frac{k_1}{k_2}.    (5.23)

Further investigation demonstrates that the second solution is not unstable and hence corresponds to a realizable steady state of the system. It is not, however, an "attracting" stable state; therefore, there is no reason to believe that the system will tend to this state regardless of the initial conditions. Even though we know that this system has an equilibrium solution, there is no reason to believe that any given set of initial conditions will lead to it, and even if it does, this tells us nothing about how the system reaches the equilibrium. To answer this question, we must commit to a specific set of initial conditions and integrate the ODEs to reveal the whole dynamics.
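A quick numerical check of these statements (our own sketch, not part of the original analysis) is to evaluate the Jacobian of Eqs. (5.18)–(5.19) at the equilibrium (5.23) and inspect its eigenvalues; for the parameter values used above they come out purely imaginary, consistent with the closed, non-attracting orbits of Fig. 5.4.

import numpy as np

k1, k2, k3 = 1.0, 0.1, 0.1
y1_eq, y2_eq = k3 / k2, k1 / k2          # equilibrium (5.23): (1, 10)

# Jacobian of (5.18)-(5.19) evaluated at the equilibrium point
J = np.array([[k1 - k2 * y2_eq, -k2 * y1_eq],
              [k2 * y2_eq,       k2 * y1_eq - k3]])

print(np.linalg.eigvals(J))              # ~ [0+0.316j, 0-0.316j]: purely imaginary,
                                         # i.e. a centre, not an attracting fixed point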

Fig. 5.5 (a) Experimental rate of loss of optical activity of sucrose for three initial concentrations of sucrose and a fixed concentration of the enzyme. Data of Michaelis and Menten replotted from Wong [208]. (b) The initial rate V(0) of the invertase-catalysed reaction in (a), plotted as a function of sucrose concentration. Figure adapted from [61]

5.4.3 Example 2: the Michaelis–Menten Reactions

Working in the early part of the twentieth century, Michaelis and Menten were able to explain several key experimental facts regarding the conversion of substrate to product catalysed by simple enzymes. Figure 5.5a illustrates their results for the enzyme invertase, which converts sucrose to glucose and fructose. These graphs give the time course of accumulation of product (measured as the change in optical rotation of the solution) for a fixed total enzyme concentration [E]_tot. In the three curves, the initial concentration of sucrose, [S]_tot, is increased from 5.2 mM to 10.4 mM to 20.8 mM. The initial rate of increase of product, given by the slope of these three curves (V(0)), is plotted for these and similar experiments in Fig. 5.5b. As the concentration of sucrose increases, the initial rates saturate at the value V_max. This function is hyperbolic and can be fit with the expression

V(0) = \frac{V_{max} [S]_{tot}}{[S]_{tot} + K_m}.    (5.24)

The concentration of substrate at which V(0) = V_max/2 is called the Michaelis constant, K_m. In further experiments, Michaelis and Menten established that for a given concentration of sucrose, V(0) increases linearly with [E]_tot. The hyperbolic dependence of the rate on [S]_tot and the linear dependence on [E]_tot have been found for a number of enzymes, and Michaelis and Menten proposed a kinetic model to explain these facts. Let us consider again the following chemical reactions:

E + S \xrightarrow{k_1^+} ES
ES \xrightarrow{k_1^-} E + S
ES \xrightarrow{k_2} E + P.

This model involves four variables: E, S, ES, and P. However, the total concentration of enzyme,

[E]_{tot} = [E] + [ES],    (5.25)

and of substrate,

[S]_{tot} = [S] + [ES] + [P],    (5.26)

are conserved, so that only two of the concentrations change independently. In this analysis, we choose the concentration of substrate [S] and of the enzyme–substrate complex [ES] as variables and eliminate the concentration of enzyme using the conservation law

[E] = [E]_{tot} - [ES].    (5.27)

Because the catalytic step is irreversible, the concentration of the product [P] does not appear in the equations for [S] and [ES] that are obtained by applying the law of mass action to the chemical equations of the system:

\frac{d[S]}{dt} = -k_1^+ [E]_{tot} [S] + (k_1^- + k_1^+ [S])[ES],    (5.28)
\frac{d[ES]}{dt} = k_1^+ [E]_{tot} [S] - (k_1^- + k_2 + k_1^+ [S])[ES].    (5.29)

The Michaelis–Menten model has two critical time scales: the time it takes for the substrate to be transformed into product and the time it takes for the enzyme–substrate complex to form. The corresponding characteristic rates are k_1^+ [E]_{tot} and k_1^+ [S]. Michaelis and Menten assumed that the amount of enzyme is very small in comparison to the amount of substrate. Under these conditions, we expect the amount of complex to be very low in comparison to the substrate. This means that, at least in the beginning, before a large amount of product has been produced, the rate k_1^+ [E]_{tot} is significantly smaller than the rate k_1^+ [S]. In other words, the ratio of total enzyme concentration to total substrate concentration is small, i.e.,

\epsilon = \frac{[E]_{tot}}{[S]_{tot}} \ll 1,    (5.30)

which makes the two time scales widely different.
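As an illustration (our own sketch, with assumed parameter values rather than the original experimental ones), the reduced system (5.28)–(5.29) can be integrated numerically, and the rate of product formation k_2[ES] compared with the hyperbolic law (5.24) under the standard quasi-steady-state identifications V_max = k_2[E]_tot and K_m = (k_1^- + k_2)/k_1^+ (not derived at this point in the text).

import numpy as np
from scipy.integrate import odeint

# Assumed (illustrative) parameter values
k1p, k1m, k2 = 1.0, 1.0, 0.5     # k1+, k1-, k2
E_tot, S_tot = 0.1, 10.0         # [E]tot << [S]tot, as assumed in the text

def mm_rhs(y, t):
    """Reduced Michaelis-Menten system, Eqs. (5.28)-(5.29)."""
    S, ES = y
    dS = -k1p * E_tot * S + (k1m + k1p * S) * ES
    dES = k1p * E_tot * S - (k1m + k2 + k1p * S) * ES
    return [dS, dES]

t = np.linspace(0, 200, 4000)
S, ES = odeint(mm_rhs, [S_tot, 0.0], t).T

# After the fast initial transient, the rate of product formation k2*[ES]
# is close to the hyperbolic law (5.24) with Vmax = k2*E_tot, Km = (k1m + k2)/k1p.
Vmax, Km = k2 * E_tot, (k1m + k2) / k1p
print(k2 * ES[200], Vmax * S[200] / (S[200] + Km))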

5.5 Conservation Laws

In this section, we return to the concept of conservation laws, which we have already mentioned in the previous sections. As already noted, conservation rules can be used to limit the size of the system under examination. Consider the reversible dimerization reaction

2P \rightleftharpoons P_2.    (5.31)

If we make the very strong assumption that neither of these species is involved in any other reactions, then we get the ODEs

\frac{d[P]}{dt} = 2k_2 [P_2] - 2k_1 [P]^2,    (5.32)
\frac{d[P_2]}{dt} = k_1 [P]^2 - k_2 [P_2],    (5.33)

where k_1 and k_2 are the forward and backward rate constants, respectively. The system is at equilibrium whenever

k_2 [P_2] = k_1 [P]^2,    (5.34)

which can be rewritten as

\frac{[P_2]}{[P]^2} = \frac{k_1}{k_2} \equiv K_{eq},    (5.35)

where K_{eq} is the equilibrium constant of the system. This equilibrium is stable and attracting. Note now that [P] and [P_2] are deterministically related in this system. One way to see this is to add twice the second ODE, (5.33), to the first to get

\frac{d[P]}{dt} + 2\frac{d[P_2]}{dt} = 0 \;\Rightarrow\; \frac{d}{dt}\bigl([P] + 2[P_2]\bigr) = 0 \;\Rightarrow\; [P] + 2[P_2] = c,    (5.36)

where c is the total amount of protein expressed as a concentration of P (the concentration [P] there would be if the dimers were completely dissociated). Eq. (5.36) is known as a conservation equation, because the reaction system conserves the value of its left-hand side. Solving the conservation equation for [P_2] and substituting back into the equilibrium relation (5.35), we determine the equilibrium concentration of [P] as the solution of the quadratic equation

2K_{eq} [P]^2 + [P] - c = 0,    (5.37)

which has a single real positive root,

[P]_{eq} = \frac{\sqrt{8cK_{eq} + 1} - 1}{4K_{eq}}.    (5.38)

The conservation equation can alternatively be used to reduce the pair of ODEs to a single first-order ODE,

\frac{d[P]}{dt} = k_2 (c - [P]) - 2k_1 [P]^2.    (5.39)

It turns out that (5.39) has an analytical solution, obtainable (for particular values of k_1, k_2, and c) by solving for [P] the equation

0.49 \times \ln \left| \frac{-4[P] - 1.56}{-4[P] + 2.56} \right| = t + \text{const}.    (5.40)
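A numerical sanity check (our own sketch, with assumed values of k_1, k_2, and c) is to integrate the reduced ODE (5.39) and verify that it converges to the positive root (5.38).

import numpy as np
from scipy.integrate import odeint

k1, k2, c = 1.0, 0.5, 10.0                 # assumed illustrative constants
Keq = k1 / k2

# Equilibrium concentration from the quadratic root (5.38)
P_eq = (np.sqrt(8 * c * Keq + 1) - 1) / (4 * Keq)

# Integrate the reduced ODE (5.39) starting from fully dissociated protein, [P](0) = c
t = np.linspace(0, 20, 1000)
P = odeint(lambda P, t: k2 * (c - P) - 2 * k1 * P**2, c, t).ravel()

print(P_eq, P[-1])                          # the trajectory settles at the root of (5.38)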

5.6 Markov Processes

In this and the subsequent sections, we describe stochastic models of chemical kinetics. A Markov process is a particular type of stochastic process. Stochastic processes are commonly used to model randomness in physics, biology, and economics. Markov processes in particular are commonly used to model uncertainty because they are significantly more tractable than a general stochastic process. A general stochastic process is a random function f(X; t), where X is a stochastic variable and t denotes time. The definition of a stochastic variable consists in specifying:

• A set of possible values (called the "set of states" or "sample space")
• A probability distribution over this set

The set of states can be discrete, such as the number of molecules of a particular component in a reactive mixture. Alternatively, the set may be continuous in a specified interval, for example one velocity component of a Brownian particle or its kinetic energy. Finally, the set can be partly discrete and partly continuous, such as the energy of an electron in the presence of binding centres. Furthermore, the set of states may be multidimensional: in this case, X is expressed as a vector X, which may represent, for instance, the three velocity components of a Brownian particle or the numbers of molecules of the various components in a reacting mixture. The probability distribution, in the case of a continuous one-dimensional range, is given by a function P(x) that is non-negative,

P(x) \geq 0,    (5.41)

and normalized in the sense

\int P(x)\,dx = 1,    (5.42)

where the integral extends over the whole range. The probability that X has a value between x and x + dx is

P(x)\,dx.    (5.43)

In the physical and biological sciences, a probability distribution is frequently represented by an "ensemble". From this standpoint, a fictional set of an arbitrarily large number N of quantities, each with a different value in the given range, is introduced. Thus, the number of these quantities with values between x and x + dx is N P(x)dx. A density distribution composed of numerous "samples" therefore assumes the role of the probability distribution. This has no influence on the results; it is simply a more convenient language for discussing probability, and we will use it in this chapter. It is worth noting that a biological system may sometimes have multiple identical clones, and in such cases this picture can be taken literally. Finally, we note that in a continuous range, P(x) can include delta functions:

P(x) = \sum_n p_n \delta(x - x_n) + \tilde{P}(x),    (5.44)

where \tilde{P} is finite, or at least integrable, and non-negative, p_n > 0, and

\sum_n p_n + \int \tilde{P}(x)\,dx = 1.    (5.45)

This can be visualized physically as a set of discrete states x_n with probabilities p_n embedded in a continuous range. If P(x) is made up entirely of \delta functions (i.e., \tilde{P}(x) = 0), it can alternatively be thought of as a probability distribution p_n on the discrete set of states x_n. A general way to specify a stochastic process is through the joint probability densities for values x_1, x_2, x_3, \dots at times t_1, t_2, \dots:

p(x_1, t_1; x_2, t_2; x_3, t_3; \dots).    (5.46)

If all such probabilities are known, the stochastic process is fully specified. Using (5.46), the conditional probabilities can be defined as usual,

p(x_1, t_1; x_2, t_2; \dots \,|\, y_1, \tau_1; y_2, \tau_2; \dots) = \frac{p(x_1, t_1; x_2, t_2; \dots; y_1, \tau_1; y_2, \tau_2; \dots)}{p(y_1, \tau_1; y_2, \tau_2; \dots)},    (5.47)

where x_1, x_2, \dots and y_1, y_2, \dots are values at times t_1 \geq t_2 \geq \cdots \geq \tau_1 \geq \tau_2 \geq \cdots. In this setting, a Markov process has a very appealing property: it is memoryless. For a Markov process,

p(x_1, t_1; x_2, t_2; \dots \,|\, y_1, \tau_1; y_2, \tau_2; \dots) = p(x_1, t_1; x_2, t_2; \dots \,|\, y_1, \tau_1),    (5.48)

i.e., the probability of reaching state x_1 at time t_1 and state x_2 at time t_2, given that the state is y_1 at time \tau_1, is independent of any earlier state, with the times ordered as before. This property makes it possible to construct any of the probabilities (5.46) from a transition probability p_\to(x, t|y, \tau), (t \geq \tau), and an initial probability distribution p(x_n, t_n):

p(x_1, t_1; x_2, t_2; \dots; x_n, t_n) = p_\to(x_1, t_1|x_2, t_2)\, p_\to(x_2, t_2|x_3, t_3) \cdots p_\to(x_{n-1}, t_{n-1}|x_n, t_n)\, p(x_n, t_n).    (5.49)

A consequence of the Markov property is the Chapman–Kolmogorov equation

p_\to(x_1, t_1|x_3, t_3) = \int p_\to(x_1, t_1|x_2, t_2)\, p_\to(x_2, t_2|x_3, t_3)\, dx_2.    (5.50)
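For a discrete-state, time-homogeneous Markov jump process with generator matrix Q (a standard setting not spelled out in the text), the transition probabilities are p(t) = exp(Qt), and (5.50) reduces to the semigroup identity exp(Q(t_1 - t_3)) = exp(Q(t_1 - t_2)) exp(Q(t_2 - t_3)). The following small sketch (ours, with an arbitrary two-state generator) verifies this numerically.

import numpy as np
from scipy.linalg import expm

# Generator (transition-rate matrix) of a two-state jump process; rows sum to zero.
Q = np.array([[-0.7, 0.7],
              [ 0.2, -0.2]])

t3, t2, t1 = 0.0, 1.3, 3.0          # ordered times t1 >= t2 >= t3

P_13 = expm(Q * (t1 - t3))          # transition probabilities over (t3, t1)
P_12 = expm(Q * (t1 - t2))
P_23 = expm(Q * (t2 - t3))

# Chapman-Kolmogorov (5.50): composing the two shorter intervals gives the long one.
print(np.allclose(P_13, P_12 @ P_23))   # True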

5.7 The Master Equation

The master equation is a differential form of the Chapman–Kolmogorov equation (5.50). Various authors use different terminologies, and the phrase "master equation" is occasionally restricted to jump processes. Jump processes are characterized by discontinuous motion, which means that the transition probability per unit time,

w(x|y, t) = \lim_{\Delta t \to 0} \frac{p_\to(x, t + \Delta t|y, t)}{\Delta t},    (5.51)

is finite and non-vanishing

for y \neq x. Here we also take the transition rates to be independent of time, w(x|y, t) = w(x|y). The master equation for jump processes can then be written

\frac{\partial p(x, t)}{\partial t} = \int \bigl[ w(x|x')\,p(x', t) - w(x'|x)\,p(x, t) \bigr]\, dx'.    (5.52)

The master equation has a straightforward interpretation. The first part of the integral is the probability gain from the states x', and the second part is the probability loss to the states x'. The solution is a probability distribution over the state space. Analytical solutions of the master equation can be calculated only in simple special cases.

5.7.1 The Chemical Master Equation

A reaction R is defined as a jump to the state X from a state X_R, where X, X_R \in \mathbb{Z}_+^N. The propensity w(X_R) is the probability of a transition from X_R to X per unit time. A reaction can be written as

X_R \xrightarrow{w(X_R)} X.

The difference in molecule numbers, n_R = X_R - X, is used to write the master equation (5.52) for a system with M reactions:

\frac{dp(X, t)}{dt} = \sum_{R=1}^{M} w(X + n_R)\, p(X + n_R, t) - \sum_{R=1}^{M} w(X)\, p(X, t).    (5.53)

This special case of the master equation is called the chemical master equation (CME) [142, 201]. It is fairly easy to write down; solving it, however, is quite another matter. The number of problems that can be solved analytically with the CME is even smaller than the number of problems that can be solved analytically with the deterministic reaction rate equations. Attempts to use the master equation to generate tractable time evolution equations are frequently futile unless all of the reactions in the system are simple monomolecular processes [77]. Let us consider for instance a deterministic model of two metabolites coupled by a bimolecular reaction, as shown in Fig. 5.6. The set of differential equations describing the dynamics of this model is given in Table 5.2, where [A] and [B] are the concentrations of metabolite A and metabolite B, while k, K, and \mu determine the maximal rate of synthesis, the strength of the feedback, and the rate of degradation, respectively. In the formalism of the Markov process, the reactions in Table 5.2 are written as in Table 5.3.

Fig. 5.6 Metabolites A and B are coupled by bimolecular reactions. Adapted from [181]

Table 5.2 Reactions of the chemical model displayed in Fig. 5.6. No. corresponds to the number in the figure

No. | Reaction   | Rate equation                  | Type
1   | ∅ → A      | v_1([A]) = k_1/(1 + [A]/K_1)   | Synthesis
2   | A → ∅      | v_2([A]) = \mu [A]             | Degradation
3   | ∅ → B      | v_3([B]) = k_2/(1 + [B]/K_2)   | Synthesis
4   | B → ∅      | v_4([B]) = \mu [B]             | Degradation
5   | A + B → ∅  | v_5([A],[B]) = k_3 [A][B]      | Bimolecular reaction

Table 5.3 Reactions of the chemical model depicted in Fig. 5.6, their propensities, and the corresponding "jumps" of the state vector n_R^T. V is the volume in which the reactions occur

No. | Reaction   | w(x)                              | n_R^T
1   | ∅ → A      | w_1(a) = V k_1/(1 + a/(V K_1))    | (−1, 0)
2   | A → ∅      | w_2(a) = \mu a                    | (1, 0)
3   | ∅ → B      | w_3(b) = V k_2/(1 + b/(V K_2))    | (0, −1)
4   | B → ∅      | w_4(b) = \mu b                    | (0, 1)
5   | A + B → ∅  | w_5(a, b) = k_3 a b/V             | (1, 1)

The set of chemical master equations describing the interaction of the metabolites shown in Fig. 5.6 is

\frac{\partial p(0, 0, t)}{\partial t} = \mu p(1, 0, t) + \mu p(0, 1, t) + \frac{k_3}{V} p(1, 1, t) - V (k_1 + k_2)\, p(0, 0, t)

\frac{\partial p(0, b, t)}{\partial t} = V \frac{k_2}{1 + \frac{b-1}{V K_2}}\, p(0, b - 1, t) + \mu p(1, b, t) + \mu (b + 1)\, p(0, b + 1, t) + \frac{k_3}{V}(b + 1)\, p(1, b + 1, t)
    - \left[ V \left( k_1 + \frac{k_2}{1 + \frac{b}{V K_2}} \right) + \mu b \right] p(0, b, t)

\frac{\partial p(a, 0, t)}{\partial t} = V \frac{k_1}{1 + \frac{a-1}{V K_1}}\, p(a - 1, 0, t) + \mu (a + 1)\, p(a + 1, 0, t) + \mu p(a, 1, t) + \frac{k_3}{V}(a + 1)\, p(a + 1, 1, t)
    - \left[ V \left( \frac{k_1}{1 + \frac{a}{V K_1}} + k_2 \right) + \mu a \right] p(a, 0, t)

\frac{\partial p(a, b, t)}{\partial t} = V \frac{k_1}{1 + \frac{a-1}{V K_1}}\, p(a - 1, b, t) + V \frac{k_2}{1 + \frac{b-1}{V K_2}}\, p(a, b - 1, t) + \mu (a + 1)\, p(a + 1, b, t) + \mu (b + 1)\, p(a, b + 1, t)
    + \frac{k_3}{V}(a + 1)(b + 1)\, p(a + 1, b + 1, t) - \left[ V \left( \frac{k_1}{1 + \frac{a}{V K_1}} + \frac{k_2}{1 + \frac{b}{V K_2}} \right) + \mu(a + b) + \frac{k_3}{V} a b \right] p(a, b, t).
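For numerical work it is often easier to encode the reactions of Table 5.3 as propensity functions and state-change ("jump") vectors than to manipulate the master equation directly. The sketch below is our own and uses placeholder parameter values.

import numpy as np

# Placeholder parameter values (assumed, for illustration only)
k1, k2, k3, K1, K2, mu, V = 1.0, 1.0, 0.01, 10.0, 10.0, 0.1, 100.0

def propensities(a, b):
    """Propensities w1..w5 of Table 5.3, evaluated at the state (a, b)."""
    return np.array([
        V * k1 / (1 + a / (V * K1)),   # 1: synthesis of A
        mu * a,                        # 2: degradation of A
        V * k2 / (1 + b / (V * K2)),   # 3: synthesis of B
        mu * b,                        # 4: degradation of B
        k3 * a * b / V,                # 5: bimolecular removal of A and B
    ])

# State changes when each reaction fires; these are -n_R in the convention of
# Table 5.3 and Eq. (5.53), where n_R = X_R - X.
state_changes = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [-1, -1]])

# Example: the total outflow rate from the empty state (0, 0) is V*(k1 + k2),
# matching the loss term of the first chemical master equation above.
print(propensities(0, 0).sum(), V * (k1 + k2))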

5.8 Molecular Approach to Chemical Kinetics

The solution of the set of differential equations of the form (5.17), written for each species X_j included in the system, describes the time evolution of the system, i.e., the changes in time of the state vector x. The rate expression (5.16) entering (5.17), however, gives not the precise number of reactive collisions, but only its average. The actual number fluctuates around it, and in order to find the resulting fluctuations in x_j around the macroscopic values determined by (5.17), we need to switch to a molecular approach to chemical kinetics. In order to understand how chemical kinetics might be modelled stochastically, we must first discuss how the amount of each molecular species is represented in the deterministic and stochastic approaches. In the deterministic model it is a concentration measured in M (moles per litre), whereas in the stochastic model it is an integer: the number of molecules of the species. For a concentration of X of [X] M in a volume of V litres, there are [X]V moles of X, and hence n_A [X] V molecules, where n_A \approx 6.023 \times 10^{23} is Avogadro's constant (the number of molecules in a mole). The second problem that needs to be resolved is the conversion of the rate constants. A continuous deterministic view of kinetics predominates in a large portion of the literature on biological reactions; as a result, rate constants, when reported, are typically deterministic constants k. The reaction propensity expressions and the formulas that convert deterministic rate constants into stochastic rate constants are covered in the sections that follow.

5.8.1 Reactions Are Collisions

Molecules must collide with enough energy to form a transition state for a reaction to occur. In order to understand how energy is distributed among systems with numerous particles, Ludwig Boltzmann devised a fairly general theory. According to it, the number of particles with energy E is proportional to exp[-E/(k_B T)]. The Boltzmann distribution gives the distribution function for the fractional number of particles N_i/N occupying a set of states i, each of which has energy E_i:

\frac{N_i}{N} = \frac{g_i e^{-E_i/k_B T}}{Z(T)},    (5.54)

where k_B is the Boltzmann constant, T is the temperature (assumed to be a sharply well-defined quantity), g_i is the degeneracy, i.e. the number of states having energy E_i, and N is the total number of particles,

N = \sum_i N_i,    (5.55)

and Z(T) is called the partition function,

Z(T) = \sum_i g_i e^{-E_i/k_B T}.    (5.56)

Equivalently, the Boltzmann distribution gives the probability that a single system at a known temperature is found in a given state. It applies only when temperatures are high enough and densities low enough that quantum effects can be neglected. The Maxwell–Boltzmann distribution, which bears the names of both scientists, was obtained by Maxwell by applying Boltzmann's ideas to the particles of an ideal gas, using the expression E = (1/2)mv^2 for the kinetic energy, where v is the particle's speed. The distribution is best represented as a graph that displays the proportion of gaseous particles (as a probability) with various speeds (Fig. 5.7). Consider a bimolecular reaction of the form

S_1 + S_2 \longrightarrow \dots,    (5.57)

where the right-hand side is not relevant to this analysis. According to this reaction, molecules of S_1 and S_2 are capable of reacting with one another if they happen to collide with enough energy while travelling randomly, as a result of Brownian motion. Consider one pair of such molecules in a closed volume V. The physical significance of the propensity (i.e., the "risk") of the molecules colliding can be understood using statistical mechanics considerations. It can be demonstrated rigorously that the collision propensity (also known as the collision hazard, hazard function, or reaction hazard) is constant, provided that the volume is fixed, not too large, well stirred, at constant temperature, and in thermal equilibrium. Because the molecules are then uniformly distributed across the volume and this distribution does not depend on time, the probability that the molecules are within reaction distance of each other is also independent of time. A comprehensive treatment of this issue is given by Gillespie [77, 78]. Here we briefly review it by highlighting the physical basis of the stochastic formulation of chemical kinetics.


Fig. 5.7 Maxwell–Boltzmann speed distributions (probability versus velocity) at T = 273, 323, 473, and 573 K. As the temperature rises, the curve shifts to the right and its peak decreases, so it becomes more probable to find molecules with higher energies. Because the curve is asymmetric, the average kinetic energy is always larger than the most probable one. The activation energy is the minimum amount of energy needed by the reacting particles

Consider now a system composed of a mixture of the two molecular species, S_1 and S_2, in the gas phase, in thermal (though not necessarily chemical) equilibrium inside the volume V. Let us assume that the S_1 and S_2 molecules are hard spheres of radii r_1 and r_2, respectively (Fig. 5.8). A collision will occur whenever the centre-to-centre distance between an S_1 molecule and an S_2 molecule is less than r_{12} = r_1 + r_2. To calculate the molecular collision rate, let us pick an arbitrary 1–2 molecular pair, and denote by v_{12} the speed of molecule 1 relative to molecule 2. Then, in the next small time interval \delta t, molecule 1 will sweep out, relative to molecule 2, a collision volume

\delta V_{coll} = \pi r_{12}^2 v_{12} \delta t,    (5.58)

i.e., if the centre of molecule 2 happens to lie inside \delta V_{coll} at time t, then the two molecules will collide in the time interval (t, t + \delta t). The classical procedure would now be to estimate the number of S_2 molecules whose centres lie inside \delta V_{coll}, divide that number by \delta t, and then take the limit \delta t \to 0 to obtain the rate at which the S_1 molecule collides with S_2 molecules.

Fig. 5.8 A bimolecular reaction between two molecules. The collision volume \delta V_{coll} which molecule 1 (M_1) will sweep out relative to molecule 2 (M_2) in the next small time interval \delta t [77]

However, this procedure suffers from the following difficulty: as \delta V_{coll} \to 0, the number of S_2 molecules whose centres lie inside \delta V_{coll} will be either 1 or 0, with the latter possibility becoming more and more likely as the limiting process proceeds. In the limit of vanishingly small \delta t, it is therefore physically meaningless to talk about "the number of molecules whose centres lie inside \delta V_{coll}". To overcome this difficulty, we can exploit the assumption of thermal equilibrium. Since the system is in thermal equilibrium, the molecules will at all times be distributed randomly and uniformly throughout the containing volume V. Therefore, the probability that the centre of an arbitrary S_2 molecule will be found inside \delta V_{coll} at time t is given by the ratio \delta V_{coll}/V; note that this is true even in the limit of vanishingly small \delta V_{coll}. If we now average this ratio over the velocity distributions of the S_1 and S_2 molecules, we may conclude that the average probability that a particular 1–2 molecular pair will collide in the next vanishingly small time interval \delta t is

\frac{\delta V_{coll}}{V} = \frac{\pi r_{12}^2 \overline{v}_{12}\, \delta t}{V}.    (5.59)

For Maxwellian velocity distributions, the average relative speed \overline{v}_{12} is

\overline{v}_{12} = \left( \frac{8kT}{\pi m_{12}} \right)^{1/2},    (5.60)

where k is Boltzmann's constant, T the absolute temperature, and m_{12} the reduced mass m_1 m_2/(m_1 + m_2). If we are given that at time t there are X_1 molecules of the species S_1 and X_2 molecules of the species S_2, making a total of X_1 X_2 distinct 1–2 molecular pairs, then it follows from (5.59) that the probability that

97

a 1–2 collision will occur somewhere inside V in the next infinitesimal time interval (t, t + dt) is

.

.

2 v dt X1 X2 π r12 12 . V

(5.61)

Although we cannot rigorously compute the number of 1–2 collisions occurring in V in an infinitesimal time interval, we can rigorously compute the probability of a 1–2 collision occurring in V in such an interval. Consequently, rather than characterizing a system of thermally equilibrated molecules by a collision rate, we should use a collision probability per unit time, namely the coefficient of dt in (5.61). For this reason, these collisions constitute a stochastic Markov process rather than a deterministic rate process. We can then conclude that for a bimolecular reaction of the form (5.57), the probability that an R reaction will occur somewhere inside V in the next infinitesimal time interval dt is

P_{react} = \frac{\overline{v}_{12}\, \pi r_{12}^2\, \exp(-E/(k_B T))}{V}\, X_1 X_2\, dt.    (5.62)

5.8.2 Reaction Rate

The reaction rate for a reactant or product is the amount of the chemical that is added to or removed from the reaction (in moles or mass units) per unit time per unit volume. The key variables that affect the reaction rate are the physical state of the reactants, the volume of the container in which the reaction takes place, the temperature at which the reaction takes place, and whether or not catalysts are present.

Physical State
The physical state (solid, liquid, or gas) of a reactant plays an important role in determining the rate of change. When the reactants are in the same phase, such as in an aqueous solution, thermal motion brings them into contact. When they are in different phases, the reaction can only occur at the interface where the reactants meet; in the case of a liquid and a gas, this is the surface of the liquid. To drive the reaction to completion, the mixture may need to be shaken or stirred vigorously. As a result, the more finely divided a solid or liquid reactant is, the larger its surface area per unit volume, the more contact it has with the other reactant, and the faster the reaction proceeds.

Volume
The reaction propensity is inversely proportional to the volume. We can explain this fact in the following way. Consider two molecules, Molecule 1 and Molecule 2, and let their positions in space be denoted by p_1 and p_2, respectively. If p_1 and p_2 are uniformly and independently distributed over the volume V, then for a sub-region of space D with volume V', the probability that a molecule is inside D is

\Pr(p_i \in D) = \frac{V'}{V}, \quad i = 1, 2.    (5.63)

If we want to know how likely it is that Molecule 1 and Molecule 2 are within a reacting distance r of each other at any given instant of time (assuming that r is much smaller than the dimensions of the container, so that boundary effects can be ignored), we can compute it as

\Pr(|p_1 - p_2| < r) = E\bigl(\Pr(|p_1 - p_2| < r \,|\, p_2)\bigr),    (5.64)

but the conditional probability is the same for any p_2 away from the boundary, so that the expectation is redundant, and we can state that

E\bigl(\Pr(|p_1 - p_2| < r \,|\, p_2)\bigr) = \Pr(|p_1 - p_2| < r) = \Pr(p_1 \in D) = \frac{4\pi r^3}{3V},    (5.65)

where D is now the sphere of radius r centred on p_2. This probability is inversely proportional to V; a quick Monte Carlo check of this geometric fact is sketched below.
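The following minimal sketch (ours, with an arbitrary box size and reaction radius) estimates Pr(|p_1 − p_2| < r) by sampling uniform positions and compares it with 4πr³/(3V).

import numpy as np

rng = np.random.default_rng(0)

L = 10.0                  # side of a cubic container (arbitrary units)
V = L**3
r = 0.5                   # reacting distance, much smaller than L

n = 1_000_000
p1 = rng.uniform(0, L, size=(n, 3))
p2 = rng.uniform(0, L, size=(n, 3))

# Minimum-image (periodic) separation, so that boundary effects are absent,
# mimicking the assumption r << L made in the text.
d = np.abs(p1 - p2)
d = np.minimum(d, L - d)
estimate = np.mean(np.linalg.norm(d, axis=1) < r)

theory = 4 * np.pi * r**3 / (3 * V)      # Eq. (5.65)
print(estimate, theory)                  # both close to 5.2e-4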

Arrhenius Equation
Temperature often has a large influence on the rate of a reaction. Molecules gain energy when heated, and the more energy they have, the more likely they are to collide with other reactants; at higher temperatures there are therefore more collisions. Moreover, because heating a molecule increases its kinetic energy, the energy of each collision also becomes more relevant. The empirical Arrhenius law is typically used to describe the temperature dependence of the reaction rate coefficient k:

k = A \exp\left(-\frac{E_a}{RT}\right).    (5.66)

The activation energy is denoted by E_a, and the gas constant by R. Because the energies of the molecules at temperature T follow a Boltzmann distribution, the number of collisions with energy greater than E_a is proportional to exp[-E_a/RT]. A is the frequency factor; it counts the collisions between reactants that result in products. The values of A and E_a depend on the reaction. As can be seen, raising the temperature or lowering the activation energy (for example, by employing catalysts) accelerates the reaction. Although not exact, the Arrhenius equation is remarkably accurate over a wide range of conditions, and alternative formulas are occasionally found to be more useful in specific cases. One example comes from the "collision theory" of chemical reactions developed by Max Trautz and William Lewis between 1916 and 1918. According to this theory, molecules react when they collide with relative kinetic energy larger than E_a along their line of centres. This yields a formula that is strikingly similar to the Arrhenius equation, except that the pre-exponential factor A is proportional to the square root of temperature rather than being constant. This reflects the fact that the average molecular speed, which is proportional to \sqrt{T}, determines the total frequency of all collisions, reactive or not. In contrast to the exponential dependence associated with E_a, the square-root temperature dependence of the pre-exponential factor is usually rather weak in practice. In the 1930s, Wigner, Eyring, Polanyi, and Evans established the transition state theory of chemical reactions, which includes another formulation similar to the Arrhenius one. This can take various forms, but one of the most common is

k = \frac{k_B T}{h} \exp\left(-\frac{\Delta G}{RT}\right),    (5.67)

where \Delta G is the Gibbs free energy of activation, k_B is Boltzmann's constant, and h is Planck's constant. At first glance, this appears to be an exponential multiplied by a factor that is linear in temperature. However, it is important to remember that the free energy is itself a temperature-dependent quantity. When all of the details are worked out, one is left with a formula that looks like an Arrhenius exponential multiplied by a slowly varying function of T. The free energy of activation includes an entropy and an enthalpy term, both of which are temperature dependent. The precise form of the temperature dependence depends on the reaction and can be approximated using statistical mechanics methods (the partition functions of the reactants and of the activated complex are involved).

Catalysts
A catalyst is a material that accelerates a chemical reaction but has no effect on the end result. The catalyst accelerates the reaction by providing alternative reaction pathways with lower activation energies. In autocatalysis, a reaction product acts as a catalyst for its own production; such a mechanism can start a chain reaction. In Gillespie's formulation of stochastic chemical kinetics, the volume of the container and the temperature remain constant in time. In a biological context, however, these assumptions are rather strong and can produce incorrect simulation results.
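A tiny numerical illustration of Eq. (5.66) (our own, with made-up values of A and E_a) shows how strongly the rate coefficient responds to temperature.

import numpy as np

R = 8.314          # gas constant, J mol^-1 K^-1
A = 1.0e13         # frequency factor (assumed, s^-1)
Ea = 50_000.0      # activation energy (assumed, J mol^-1)

def arrhenius(T):
    """Rate coefficient k(T) from the Arrhenius law (5.66)."""
    return A * np.exp(-Ea / (R * T))

for T in (298.0, 310.0, 350.0):
    print(T, arrhenius(T))
# Raising T from 298 K to 310 K roughly doubles k for this activation energy.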

5.8.3 Zeroth-, First-, and Second-Order Reactions

A zeroth-order reaction has the following form:

R_\mu : \emptyset \xrightarrow{c_\mu} X.    (5.68)

Although in practice nothing is formed from nothing, it is occasionally convenient to model the formation of a chemical species at a controlled rate (or its influx from another compartment) using a zeroth-order reaction. In this case, c_\mu is the propensity of a reaction of this type occurring, and so

a_\mu(Y, c_\mu) = c_\mu.    (5.69)

For a reaction of this nature, the deterministic rate law is k_\mu M s^{-1}, and thus for a volume V, X is produced at a rate of n_A V k_\mu molecules per second, where k_\mu is the deterministic rate constant for the reaction R_\mu. As the stochastic rate law is just c_\mu molecules per second, we have

c_\mu = n_A V k_\mu.    (5.70)

Consider the first-order reaction

R_\mu : X_i \xrightarrow{c_\mu} \dots.    (5.71)

Here, c_\mu represents the propensity that a particular molecule of X_i will undergo the reaction. If there are x_i molecules of X_i, each of which has a propensity c_\mu of reacting, the combined propensity for a reaction of this type is

a_\mu(Y, c_\mu) = c_\mu x_i.    (5.72)

The spontaneous conversion of one molecule into one or more others, and the spontaneous dissociation of a complex molecule into simpler molecules, are both examples of this type of first-order reaction. First-order reactions are not intended to model the transformation of one molecule into another in the presence of a catalyst, since that is a second-order process; however, a first-order reaction provides a good approximation in the presence of a large pool of catalyst whose concentration is not expected to change during the evolution of the reaction network. For a first-order reaction, the deterministic rate law is k_\mu [X] M s^{-1}, and for a volume V, a concentration [X] corresponds to x = n_A [X] V molecules, so [X] decreases at a rate of n_A k_\mu [X] V = k_\mu x molecules per second. Since the stochastic rate law is c_\mu x molecules per second, we have

c_\mu = k_\mu,    (5.73)

i.e., for first-order reactions, the stochastic and deterministic rate constants are equal. The second-order reaction has the following form:

R_\mu : X_i + X_k \xrightarrow{c_\mu} \dots.    (5.74)

Here, c_\mu represents the propensity that a particular pair of molecules of types X_i and X_k will react. If there are x_i molecules of X_i and x_k molecules of X_k, there are x_i x_k distinct pairs of molecules of this type, and so the combined propensity is

a_\mu(Y, c_\mu) = c_\mu x_i x_k.    (5.75)

There is another type of second-order reaction, the homodimerization reaction, which needs to be considered:

R_\mu : 2X_i \xrightarrow{c_\mu} \dots.    (5.76)

Again, c_\mu is the propensity of a particular pair of molecules reacting, but here there are only x_i(x_i - 1)/2 pairs of molecules of species X_i, and so

a_\mu(Y, c_\mu) = c_\mu \frac{x_i(x_i - 1)}{2}.    (5.77)

For second-order reactions, the deterministic rate law is k_\mu [X_i][X_k] M s^{-1}. Here, for a volume V, the reaction proceeds at a rate of n_A k_\mu [X_i][X_k] V = k_\mu x_i x_k/(n_A V) molecules per second. Since the stochastic rate law is c_\mu x_i x_k molecules per second, we have

c_\mu = \frac{k_\mu}{n_A V}.    (5.78)

For the homodimerization reaction, the deterministic rate law is k_\mu [X_i]^2, so the concentration of X_i decreases at a rate of 2 n_A k_\mu [X_i]^2 V = 2 k_\mu x_i^2/(n_A V) molecules per second. The stochastic rate law is c_\mu x_i(x_i - 1)/2 reactions per second, so molecules of X_i are consumed at a rate of c_\mu x_i(x_i - 1) molecules per second. These two laws do not match exactly, but for large x_i, x_i(x_i - 1) can be approximated by x_i^2, and so, to the extent that the kinetics match, we have

c_\mu = \frac{2k_\mu}{n_A V}.    (5.79)

Note the additional factor of two in this case. By equating Eq. (5.78) with Eq. (5.62), we obtain the following expression for the deterministic rate constant of a second-order reaction of type (5.74):

k_\mu = n_A \overline{v}_{12}\, \pi r_{12}^2 \exp\left(-\frac{E_\mu}{k_B T}\right),    (5.80)

while for a second-order reaction of type (5.76), the deterministic rate constant is

k_\mu = \frac{1}{2}\, n_A \overline{v}_{12}\, \pi r_{12}^2 \exp\left(-\frac{E_\mu}{k_B T}\right).    (5.81)
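The conversions (5.70), (5.73), (5.78), and (5.79) are easy to automate. The helper below is our own sketch; the function name and the numbers in the example are illustrative, not taken from the text.

N_A = 6.023e23   # Avogadro's constant, molecules per mole (value as used in the text)

def stochastic_rate_constant(k_det, order, volume, dimerisation=False):
    """Convert a deterministic rate constant into a stochastic one.

    order 0: c = N_A * V * k      (Eq. 5.70)
    order 1: c = k                (Eq. 5.73)
    order 2: c = k / (N_A * V)    (Eq. 5.78), or 2k/(N_A*V) for 2X -> ... (Eq. 5.79)
    """
    if order == 0:
        return N_A * volume * k_det
    if order == 1:
        return k_det
    if order == 2:
        c = k_det / (N_A * volume)
        return 2 * c if dimerisation else c
    raise ValueError("only zeroth-, first-, and second-order reactions are handled")

# Example: a bimolecular rate constant of 1e6 M^-1 s^-1 in a volume of 1e-15 litres
print(stochastic_rate_constant(1e6, 2, 1e-15))        # ~1.66e-3 per molecule pair per second
print(stochastic_rate_constant(1e6, 2, 1e-15, True))  # ~3.32e-3 for a homodimerisation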

5.8.4 Higher-Order Reactions

Most reactions that are commonly represented as a single reaction of order greater than two are actually the combined result of two or more reactions of order one or two, although this is not always the case. Where such a decomposition exists, detailed modelling of the elementary reactions is usually preferable to the use of high-order stochastic kinetics. Consider, for example, the following trimerization reaction:

R_\mu : 3X \xrightarrow{c_\mu} X_3.    (5.82)

The rate constant c_\mu represents the propensity of a particular triple of molecules of X coming together simultaneously and reacting, leading to a combined propensity of the form

a_\mu(Y, c_\mu) = c_\mu \binom{x}{3} = c_\mu \frac{x(x - 1)(x - 2)}{6}.    (5.83)

However, modelling the process as a pair of second-order reactions is probably more accurate in most circumstances:

2X \longrightarrow X_2
X_2 + X \longrightarrow X_3,

and this system will have quite different dynamics from the corresponding third-order system.

5.9 Fundamental Hypothesis of Stochastic Chemical Kinetics

Let us now generalize, in a more formal way, the principles stated in the previous section. If the reasoning above is applied only to reactive collisions, that is, to collisions that affect the state vector, the chemical processes are better characterized by a reaction probability per unit time than by a reaction rate. Assume that the following reaction can take place between the molecules S_1 and S_2:

R_1 : S_1 + S_2 \rightarrow 2S_1.    (5.84)

Then, in analogy with Eq. (5.59), we may assert the existence of a constant c_1, which depends only on the physical properties of the two molecules and the temperature of the system, such that

c_1 dt = average probability that a particular 1–2 molecular pair will react according to R_1 in the next infinitesimal time interval dt.    (5.85)

More generally, if, under the assumption of spatial homogeneity (or thermal equilibrium), the volume V contains a mixture of X_i molecules of chemical species S_i (i = 1, 2, \dots, N), and these N species can interact through M specified chemical reaction channels R_\mu (\mu = 1, 2, \dots, M), we may assert the existence of M constants c_\mu, depending only on the physical properties of the molecules and on the temperature of the system. Formally, we assert that

c_\mu dt = average probability that a particular combination of R_\mu reactant molecules will react according to R_\mu in the next infinitesimal time interval dt.    (5.86)

This equation includes both the definition of the stochastic reaction constant c_\mu and the basic hypothesis of the stochastic formulation of chemical kinetics. The hypothesis applies to any chemical solution that is kept well mixed, either through direct stirring or simply because non-reactive molecular collisions occur far more frequently than reactive ones. Let us finally note that the master equation that describes the time evolution of the probability function P({X_1, X_2, \dots, X_N}, t) may be derived from (5.86). To derive the master equation from the fundamental hypothesis of stochastic chemical kinetics, P({X_1, X_2, \dots, X_N}, t) is expressed using the sum and multiplication laws of probability theory. Thus P({X_1, X_2, \dots, X_N}, t + dt) is the sum of the probabilities of the M + 1 different ways in which the system can reach the state (X_1, X_2, \dots, X_N) at time t + dt:

P(\{X_1, X_2, \dots, X_N\}, t + dt) = P(\{X_1, X_2, \dots, X_N\}, t)\left(1 - \sum_{\mu=1}^{M} a_\mu dt\right) + \sum_{\mu=1}^{M} B_\mu dt,    (5.87)

where

a_\mu dt \equiv c_\mu \times (\text{number of distinct } R_\mu \text{ reactant combinations in the state } (X_1, \dots, X_N)) \times dt
        = probability that an R_\mu reaction will occur in V in (t, t + dt), given that the system is in the state (X_1, X_2, \dots, X_N) at time t.    (5.88)

The first term in Eq. (5.87) is the probability that the system is in the state $(X_1, X_2, \ldots, X_N)$ at time t and then remains in that state (i.e., it undergoes no reactions) in $(t, t + dt)$. The quantity $B_\mu\, dt$ gives the probability that the system is one channel-$\mu$ reaction removed from the state $(X_1, X_2, \ldots, X_N)$ at time t and then undergoes such a reaction in $(t, t + dt)$. Namely, $B_\mu$ is the product of P evaluated at the appropriate once-removed state at time t, times $c_\mu$, times the number of channel-$\mu$ molecular reactant combinations available in that once-removed state. Thus, Eq. (5.87) leads directly to the master equation

$$\frac{\partial}{\partial t} P(X_1, \ldots, X_N; t) = \sum_{\mu=1}^{M} \bigl[B_\mu - a_\mu P(X_1, \ldots, X_N; t)\bigr]. \qquad (5.89)$$
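As a concrete illustration of the structure of Eq. (5.89), the sketch below (our own illustrative code; the production–degradation reaction system, rate values, and state-space truncation are assumptions, not taken from the text) integrates the chemical master equation for a simple birth–death process.

```python
import numpy as np
from scipy.integrate import solve_ivp

c1, c2, NMAX = 5.0, 0.5, 60   # birth rate, per-molecule death rate, truncation of the state space

def cme_rhs(t, P):
    """dP(n,t)/dt = c1*P(n-1) + c2*(n+1)*P(n+1) - (c1 + c2*n)*P(n), an instance of Eq. (5.89)."""
    dP = np.zeros_like(P)
    for n in range(NMAX + 1):
        gain = (c1 * P[n - 1] if n > 0 else 0.0) + (c2 * (n + 1) * P[n + 1] if n < NMAX else 0.0)
        loss = ((c1 if n < NMAX else 0.0) + c2 * n) * P[n]   # no birth out of the truncated space
        dP[n] = gain - loss
    return dP

P0 = np.zeros(NMAX + 1)
P0[0] = 1.0                                   # start with zero molecules
sol = solve_ivp(cme_rhs, (0.0, 10.0), P0, t_eval=[10.0])
print("mean copy number at t = 10:", np.dot(np.arange(NMAX + 1), sol.y[:, -1]))
```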

5.10 The Reaction Probability Density Function

In this section, we introduce the foundation of the stochastic simulation algorithm of Gillespie. If we are given that the system is in the state $\vec{X} = \{X_1, \ldots, X_N\}$ at time t, computing its stochastic evolution means "moving the system forward in time". In order to do that, we need to answer two questions:
1. When will the next reaction occur?
2. What kind of reaction will it be?
Because of the essentially random nature of chemical interactions, these two questions can be answered only in a probabilistic way. Let us introduce the function $P(\tau, \mu)$, defined as the probability that, given the state $\vec{X}$ at time t, the next reaction in the volume V will occur in the infinitesimal time interval $(t + \tau, t + \tau + d\tau)$ and will be an $R_\mu$ reaction. $P(\tau, \mu)$ is called the reaction probability density function because it is a joint probability density function on the space of the continuous variable $\tau$ $(0 \le \tau < \infty)$ and the discrete variable $\mu$ $(\mu = 1, 2, \ldots, M)$. The values of the variables $\tau$ and $\mu$ give the answers to the two questions mentioned above. Gillespie showed that from the fundamental hypothesis of stochastic chemical kinetics (see Sect. 5.9) it is possible to derive an analytical expression for $P(\tau, \mu)$ and then use it to extract the values of $\tau$ and $\mu$. First of all, $P(\tau, \mu)\, d\tau$ can be written as the product of $P_0(\tau)$, the probability that, given the state $\vec{X}$ at time t, no reaction will occur in the time interval $(t, t + \tau)$, times $a_\mu\, d\tau$, the probability that an $R_\mu$ reaction will occur in the time interval $(t + \tau, t + \tau + d\tau)$:
$$P(\mu, \tau)\, d\tau = P_0(\tau)\, a_\mu\, d\tau. \qquad (5.90)$$

In turn, $P_0(\tau)$ is obtained from
$$P_0(\tau' + d\tau') = P_0(\tau')\Bigl(1 - \sum_{i=1}^{M} a_i\, d\tau'\Bigr), \qquad (5.91)$$


where $\bigl[1 - \sum_{i=1}^{M} a_i\, d\tau'\bigr]$ is the probability that no reaction will occur in time $d\tau'$ from the state $\vec{X}$. Therefore,
$$P_0(\tau) = \exp\Bigl(-\sum_{i=1}^{M} a_i \tau\Bigr). \qquad (5.92)$$

Inserting (5.92) into (5.90), we find the following expression for the reaction probability density function:
$$P(\mu, \tau) = \begin{cases} a_\mu \exp(-a_0 \tau) & \text{if } 0 \le \tau < \infty \\ 0 & \text{otherwise}, \end{cases} \qquad (5.93)$$
where
$$a_0 \equiv \sum_{i=1}^{M} a_i \equiv \sum_{i=1}^{M} h_i c_i. \qquad (5.94)$$

The expression for .P (μ, τ ) in (5.93) is, like the master equation in (5.53), a rigorous mathematical consequence of the fundamental hypothesis (5.86). Notice finally that .P (τ, μ) depends on all the reaction constants (not just on .cμ ) and on the current numbers of all reactant species (not just on the .Rμ reactants).

5.11 The Stochastic Simulation Algorithms

In this section, we review three variants of the Gillespie stochastic simulation algorithm: the direct method, the first reaction method, and the next reaction method.

5.11.1 Direct Method

On each step, the direct method generates two random numbers $r_1$ and $r_2$ from a uniform distribution on the interval $(0, 1)$. The time of the next reaction is $t + \tau$, where $\tau$ is given by
$$\tau = \frac{1}{a_0} \ln\Bigl(\frac{1}{r_1}\Bigr). \qquad (5.95)$$

The index $\mu$ of the occurring reaction is given by the smallest integer satisfying
$$\sum_{j=1}^{\mu} a_j > r_2 a_0. \qquad (5.96)$$

The system state is updated by $X(t + \tau) = X(t) + \nu_\mu$, and then the simulation proceeds to the next occurrence time.

Algorithm
1. Initialization: Set the initial numbers of molecules for each chemical species; input the desired values for the M reaction constants $c_1, c_2, \ldots, c_M$. Set the simulation time variable t to zero and the duration T of the simulation.
2. Calculate and store the propensity functions $a_i$ for all the reaction channels $(i = 1, \ldots, M)$, and $a_0$.
3. Generate two random numbers $r_1$ and $r_2$ from $Unif(0, 1)$.
4. Calculate $\tau$ according to (5.95).
5. Search for $\mu$ as the smallest integer satisfying (5.96).
6. Update the states of the species to reflect the execution of reaction $\mu$ (e.g., if $R_\mu : S_1 + S_2 \to 2S_1$, and there are $X_1$ molecules of species $S_1$ and $X_2$ molecules of species $S_2$, then increase $X_1$ by 1 and decrease $X_2$ by 1). Set $t \leftarrow t + \tau$.
7. If $t < T$, go to step 2; otherwise terminate.

Note that the random pair $(\tau, \mu)$, where $\tau$ is given by (5.95) and $\mu$ by (5.96), is generated according to the probability density function in (5.93). A rigorous proof of this fact may be found in [76]. Suffice it here to say that (5.95) generates a random number $\tau$ according to the probability density function
$$P_1(\tau) = a_0 \exp(-a_0 \tau), \qquad (5.97)$$

while (5.96) generates an integer $\mu$ according to the probability density function
$$P_2(\mu) = \frac{a_\mu}{a_0}, \qquad (5.98)$$
and the stated result follows because $P(\tau, \mu) = P_1(\tau) \cdot P_2(\mu)$.

Note finally that an exponentially distributed random number can be generated from a uniform random number between 0 and 1 as follows. Let $F_X(x)$ be the distribution function of an exponentially distributed variable X, and let $U \sim Unif[0, 1)$ denote a uniformly distributed random variable on the interval from 0 to 1:

$$F_X(x) = \begin{cases} 1 - e^{-ax} & \text{if } x \ge 0 \\ 0 & \text{if } x < 0. \end{cases} \qquad (5.99)$$


$F_X(x)$ is a continuous non-decreasing function, and this implies that it has an inverse $F_X^{-1}$. Now, let $X(U) = F_X^{-1}(U)$; we get the following:
$$P(X(U) \le x) = P(F_X^{-1}(U) \le x) = P(U \le F_X(x)) \qquad (5.100)$$
$$= F_X(x). \qquad (5.101)$$
It follows that
$$X(U) = F_X^{-1}(U) = -\frac{\ln(1 - U)}{a} \sim Exp(a). \qquad (5.102)$$

In returning to step 2 from step 7, it is necessary to re-calculate only those quantities $a_i$ corresponding to the reactions $R_i$ whose reactant population levels were altered in step 6; $a_0$ can also be re-calculated simply by adding to it the difference between each newly changed $a_i$ value and its corresponding old value. This algorithm uses two random numbers per iteration, takes time proportional to M to update the $a_i$'s, and takes time proportional to M to identify the index $\mu$ of the next reaction.
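The following Python sketch is a minimal implementation of the direct method as listed above; the example reaction network, rate constants, and function names are our own illustrative choices, not part of the text.

```python
import math
import random

def gillespie_direct(x0, stoich, propensity, T, seed=0):
    """Minimal direct-method SSA.  x0: initial copy numbers, stoich: list of
    state-change vectors nu_mu, propensity(x) -> list of a_mu."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < T:
        a = propensity(x)
        a0 = sum(a)
        if a0 == 0.0:                                  # no reaction can fire any more
            break
        r1 = 1.0 - rng.random()                        # in (0, 1], avoids log(1/0)
        r2 = rng.random()
        tau = (1.0 / a0) * math.log(1.0 / r1)          # Eq. (5.95)
        # smallest mu with a_1 + ... + a_mu > r2 * a0, Eq. (5.96)
        threshold, cumulative, mu = r2 * a0, 0.0, 0
        while cumulative <= threshold:
            cumulative += a[mu]
            mu += 1
        mu -= 1
        x = [xi + di for xi, di in zip(x, stoich[mu])]  # X(t + tau) = X(t) + nu_mu
        t += tau
        trajectory.append((t, tuple(x)))
    return trajectory

# Example: R1: S1 + S2 -> 2*S1 and R2: S1 -> (nothing), with mass-action propensities.
c1, c2 = 0.005, 0.3
stoich = [(+1, -1), (-1, 0)]
propensity = lambda x: [c1 * x[0] * x[1], c2 * x[0]]
print(gillespie_direct([50, 1000], stoich, propensity, T=1.0)[-1])
```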

5.11.2 First Reaction Method

The first reaction method generates a putative time $\tau_i$ for each reaction channel $R_i$ according to
$$\tau_i = \frac{1}{a_i} \ln\Bigl(\frac{1}{r_i}\Bigr), \qquad (5.103)$$

where $r_1, r_2, \ldots, r_M$ are M statistically independent samplings of $Unif(0, 1)$. Then $\tau$ and $\mu$ are chosen as
$$\tau = \min\{\tau_1, \tau_2, \ldots, \tau_M\} \qquad (5.104)$$
and
$$\mu = \text{the index of } \min\{\tau_1, \tau_2, \ldots, \tau_M\}. \qquad (5.105)$$

Algorithm
1. Initialization: Set the initial numbers of molecules for each chemical species; input the desired values for the M reaction constants $c_1, c_2, \ldots, c_M$. Set the simulation time variable t to zero and the duration T of the simulation.
2. Calculate and store the propensity functions $a_i$ for all the reaction channels $(i = 1, \ldots, M)$, and $a_0$.
3. Generate M independent random numbers from $Unif(0, 1)$.
4. Generate the times $\tau_i$ $(i = 1, 2, \ldots, M)$ according to (5.103).
5. Find $\tau$ and $\mu$ according to (5.104) and (5.105), respectively.
6. Update the states of the species to reflect the execution of reaction $\mu$. Set $t \leftarrow t + \tau$.
7. If $t < T$, go to step 2; otherwise terminate.

The direct and the first reaction methods are fully equivalent to each other [76, 77]. The random pairs $(\tau, \mu)$ generated by both methods follow the same distribution.
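A minimal sketch of the first reaction method (ours; it reuses the conventions of the direct-method sketch above, and channels with zero propensity are simply assigned an infinite putative time).

```python
import math
import random

def gillespie_first_reaction(x0, stoich, propensity, T, seed=0):
    """First reaction method: one putative time per channel, the smallest one wins."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < T:
        a = propensity(x)
        if sum(a) == 0.0:
            break
        # Eq. (5.103): tau_i = (1/a_i) ln(1/r_i); 1 - random() lies in (0, 1]
        taus = [(1.0 / ai) * math.log(1.0 / (1.0 - rng.random())) if ai > 0 else math.inf
                for ai in a]
        mu = min(range(len(taus)), key=taus.__getitem__)   # Eqs. (5.104)-(5.105)
        x = [xi + di for xi, di in zip(x, stoich[mu])]
        t += taus[mu]
    return t, x
```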

5.11.3 Next Reaction Method

Gibson and Bruck [74] transformed the first reaction method into an equivalent but more efficient scheme. The next reaction method is more efficient than the direct method when the system involves many species and loosely coupled reaction channels. This method can be viewed as an extension of the first reaction method in which the unused $M - 1$ reaction times (5.104) are suitably modified for reuse. Clever data storage structures are employed to efficiently find $\tau$ and $\mu$.

Algorithm
1. Initialize:
• Set the initial numbers of molecules, set the simulation time variable t to zero, and generate a dependency graph G.
• Calculate the propensity functions $a_i$, for all i.
• For each i $(i = 1, 2, \ldots, M)$, generate a putative time $\tau_i$ according to an exponential distribution with parameter $a_i$.
• Store the $\tau_i$ values in an indexed priority queue P.
2. Let $\mu$ be the reaction whose putative time $\tau_\mu$ stored in P is least. Set $\tau \leftarrow \tau_\mu$.
3. Update the states of the species to reflect the execution of reaction $\mu$. Set $t \leftarrow \tau_\mu$.
4. For each edge $(\mu, \alpha)$ in the dependency graph G:
• Update $a_\alpha$.
• If $\alpha \neq \mu$, set
$$\tau_\alpha \leftarrow \frac{a_{\alpha,\mathrm{old}}}{a_{\alpha,\mathrm{new}}}(\tau_\alpha - t) + t. \qquad (5.106)$$
• If $\alpha = \mu$, generate a random number r and compute $\tau_\alpha$ according to the following equation:
$$\tau_\alpha = \frac{1}{a_\alpha(t)} \ln\Bigl(\frac{1}{r}\Bigr) + t. \qquad (5.107)$$

• Replace the old $\tau_\alpha$ value in P with the new value.
5. Go to step 2.

Two data structures are used in this method:
• The dependency graph G is a data structure that tells precisely which $a_i$ should change when a given reaction is executed. Each reaction channel is denoted as a node in the graph. A directed edge connects $R_i$ to $R_j$ if and only if the execution of $R_i$ affects the reactants in $R_j$. The dependency graph can be used to re-calculate only the minimal number of propensity functions in step 4.
• The indexed priority queue consists of a tree structure of ordered pairs of the form $(i, \tau_i)$, where i is a reaction channel index and $\tau_i$ is the corresponding time when the next $R_i$ reaction is expected to occur, and an index structure whose ith element points to the position in the tree that contains $(i, \tau_i)$. In the tree, each parent has a smaller $\tau$ than either of its children; the minimum $\tau$ therefore always sits at the root, and the ordering is only vertical. In each step, the update changes the value of a node and then bubbles it up or down according to its value to restore the priority queue. Theoretically, this procedure takes at most $\ln(M)$ operations. In practice, there are usually a few reactions that occur much more frequently than the others, so the actual update takes fewer than $\ln(M)$ operations.

The next reaction method takes some CPU time to maintain the two data structures. For a small system, this cost dominates the simulation. For a large system, the cost of maintaining the data structures may be relatively small compared with the savings. The argument for the advantage of the next reaction method over the direct method is based on two observations: first, in each step, the next reaction method generates only one uniform random number, while the direct method requires two. Second, the search for the index $\mu$ of the next reaction channel takes $O(M)$ time for the direct method, while the corresponding cost for the next reaction method is the update of the indexed priority queue, which is $O(\ln(M))$.
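As an illustration of the first of these data structures, the sketch below (ours; reactions are assumed to be given as sets of reactant and product species indices) builds a dependency graph of the kind used by the next reaction method.

```python
def build_dependency_graph(reactants, products):
    """reactants[i], products[i]: sets of species indices consumed/produced by reaction R_i.
    There is an edge (i, j) iff executing R_i changes the population of some reactant of R_j."""
    M = len(reactants)
    changed = [reactants[i] | products[i] for i in range(M)]   # species touched by R_i
    return {i: [j for j in range(M) if changed[i] & reactants[j]] for i in range(M)}

# Example: R0: S0 + S1 -> 2*S0,   R1: S0 -> (nothing),   R2: S2 -> S1
reactants = [{0, 1}, {0}, {2}]
products  = [{0},    set(), {1}]
print(build_dependency_graph(reactants, products))
# {0: [0, 1], 1: [0, 1], 2: [0, 2]} -- only these propensities need re-evaluation
```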

5.12 Spatio-Temporal Simulation Algorithms

The preceding sections discussed stochastic techniques for modelling biological pathways without spatial information. However, the genuine biological world is made up of components that interact in three dimensions. The intracellular material is not distributed uniformly in space within a cell compartment, and molecular localization is significant, for example, in the diffusion of ions and molecules across membranes and the transmission of an action potential through the axon of a nerve fibre. Thus, in true biological systems, the basic premise of spatial homogeneity and


huge concentration diffusion is no longer valid [54]. A stochastic spatio-temporal simulation of a biological system is necessary in this scenario. The improved performance of Gillespie Algorithms has made spatio-temporal simulation feasible. The Gillespie Algorithms were expanded to represent intracellular diffusion by Stundzia and Lumsden [188] and Elf et al. [54]. The reaction– diffusion master equation and diffusion probability density functions were formalized. The total volume of a model was partitioned into several sub-volumes, and the Gillespie Algorithm was applied without much change by considering diffusion processes as chemical reactions. Stundzia demonstrated the algorithm’s application on calcium wave propagation within living cells, observing regional variations and spatial correlations in the small particles limit. However, in order to estimate the probability density function for diffusion, thorough understanding of the diffusion processes is required. Furthermore, the methods have only been applied to tiny systems with a limited number of molecular species, despite the fact that they need a substantial amount of processing power. Shimizu extended the StochSim algorithm to add spatial effects of the system in [177]. Spatial information was added to the properties of each molecular species in his technique, and a simple two-dimensional lattice was built to allow interaction between surrounding nodes. The algorithm was applied to study the action of a complex of signalling proteins associated with the chemotactic receptors of coliform bacteria. He showed that the interactions among receptors could contribute to high sensitivity and wide dynamic range in the bacterial chemotaxis pathway. Directly approximating the individual molecules’ Brownian motions is another method of simulating stochastic diffusion (MCell [15]). In this instance, random numbers are used during the simulation to decide the molecules’ direction and travel. Similar to this, collisions with possible binding sites and surfaces are identified and dealt with only by random numbers with a calculated binding probability. Both stochastic and a three-dimensional biological model including a discrete number of molecules can be handled by MCell. MCell uses 3D spatial partitioning and parallel computing to improve algorithmic efficiency, but due to the high computational demand, the simulation is only capable of simulating microphysiological processes such as synaptic transmission. The simulation of a spatio-stochastic biological system is still a difficult task, despite improvements to many approaches. A new mathematical theory of diffusion that may be implemented into a stochastic algorithm simulating the dynamics of a reaction–diffusion system is recently given by Lecca et al. [121, 122] as a solution k

to this problem. A first-order reaction $A_i \xrightarrow{k} A_j$ is used to model the transport of a molecule A from a location i to a region j of space, where the rate constant k depends on the diffusion coefficient. The local concentration of the solutes, their intrinsic viscosities, their frictional coefficients, and the temperature of the system are all taken into account when modelling the diffusion coefficients. The occurrence of diffusion events and chemical reaction events determines the system's stochastic time evolution. The intrinsic reaction kinetics and diffusion dynamics determine the probability distribution of the waiting times from which an event (reaction or diffusion) is chosen at each time step.


5.13 Ordinary Differential Equation Stochastic Models: the Langevin Equation

While both closed and open systems can have internal fluctuations, which are self-generated within the system, external fluctuations are influenced by the surroundings of the system. We have seen that one defining trait of internal fluctuations is that they scale with system size and tend to disappear in the thermodynamic limit. An important factor in the development of ordered biological structures is environmental noise. In order to simulate the ontogenetic development and plasticity of some brain structures, external noise-induced ordering was incorporated [56]. It was also shown that noise can help a system change from one stable state to another stable state. External noise can facilitate transitions to states that are unavailable (or even do not exist) in a deterministic framework, because stochastic models may display qualitatively different behaviours from their deterministic counterpart [100]. When it comes to extrinsic stochasticity, the governing reaction equations are modified to include multiplicative or additive stochastic terms. These equations are known as stochastic differential equations, and they are typically thought of as random perturbations to the deterministic system. The general equation is
$$\frac{dx}{dt} = f(x) + \xi_x(t). \qquad (5.108)$$

The definition of the additional term $\xi_x$ differs according to the formalism adopted. In Langevin equations [79], $\xi_x$ is represented by Eq. (5.109); other studies [91] adopt a different definition in which $\xi_i(t)$ is a rapidly fluctuating term with zero mean ($\langle \xi_i(t) \rangle = 0$):
$$\xi_x(t) = \sum_{j=1}^{M} V_{ij} \sqrt{\alpha_j X(t)}\, N_j(t), \qquad (5.109)$$

where .Vij is the change in the number of molecules of species i brought by one reaction j and .Nj are statistically independent normal random variables with mean 0 and variance 1. The Langevin method used to include fluctuations in the molecular population level evolution equation does not apply to nonlinear systems. The challenges that such a generalization leads to are briefly described in this section. The term “external noise” refers to fluctuations introduced by the application of a random force, whose stochastic features are assumed to be understood, into an otherwise deterministic system. The fact that the system is made up of distinct particles causes internal noise. It cannot be separated from the system’s evolution equation since it is a necessary component of the mechanism through which the state of the system evolves. A closed physical system with internal noise is a Brownian particle and the fluid it is surrounded by. However, Langevin considered the particle as a mechanical system that was susceptible to the fluid’s force. He separated this force into a


random force and a deterministic damped force, both of which he viewed as external forces because their properties as functions of time were assumed to be known. If a further force is applied to the particle, the physical picture of these attributes remains unchanged. Although the noise source in a chemically reacting network is internal, and there is no physical basis for a separation into a mechanical part and a random term with known properties, Eq. (5.108) has recently been used to model the evolution of biochemical systems. The following method is employed to apply the Langevin equation to describe the evolution of a system of chemically reacting particles. Consider a situation where a system's evolution is phenomenologically characterized by a deterministic differential equation
$$\frac{dx}{dt} = f(x), \qquad (5.110)$$

where x stands for a finite set of macroscopic variables; for the sake of simplicity, we consider the case in which x is a single variable. Suppose we know that, for some reason, there must also be fluctuations about these macroscopic values. Therefore, we supplement (5.110) with a Langevin term:
$$\frac{dx}{dt} = f(x) + L(t). \qquad (5.111)$$

Note now that on averaging (5.111) one does not find that $\langle x \rangle$ obeys the phenomenological equation (5.110); rather,
$$\partial_t \langle x \rangle = \langle f(x) \rangle = f(\langle x \rangle) + \tfrac{1}{2}\langle (x - \langle x \rangle)^2 \rangle\, \partial_x^2 f(\langle x \rangle) + \ldots, \qquad (5.112)$$
where $\partial_t \equiv \partial/\partial t$. It follows that $\langle x \rangle$ does not obey any differential equation at all. This demonstrates the fundamental problem with the applicability of the Langevin approach to the internal noise of systems with nonlinear phenomenological laws. The phenomenological equation (5.110) holds true only when fluctuations are ignored. This means that there is an inherent margin of uncertainty, of the order of the fluctuations, when $f(x)$ is determined phenomenologically. If a specific form of $f(x)$ is derived from a theory or experiment in which fluctuations are ignored, there is no compelling reason to postulate that the same $f(x)$ is to be employed in (5.111). There could be a mismatch between the two, of the same size as the fluctuations, that would not be visible in macroscopic observations; this cannot, of course, be ignored in the equation for the fluctuations themselves.
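Despite these caveats, equations of the form (5.108)–(5.109) are routinely integrated numerically. The sketch below (ours; an Euler–Maruyama scheme applied to an illustrative production–degradation system whose rate values are not from the text) shows one way to do so.

```python
import numpy as np

def chemical_langevin(x0, V, propensity, t_end, dt, seed=0):
    """Euler-Maruyama integration of
    dX = sum_j V[:, j] a_j(X) dt + sum_j V[:, j] sqrt(a_j(X)) dW_j."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        a = np.maximum(propensity(x), 0.0)              # propensities must stay non-negative
        dW = rng.normal(0.0, np.sqrt(dt), size=a.shape)
        x = x + V @ (a * dt) + V @ (np.sqrt(a) * dW)
    return x

# Example: production 0 -> X at rate k1, degradation X -> 0 at rate k2*X (illustrative values)
k1, k2 = 10.0, 0.1
V = np.array([[1.0, -1.0]])                             # one species, two channels
propensity = lambda x: np.array([k1, k2 * x[0]])
print(chemical_langevin([0.0], V, propensity, t_end=50.0, dt=0.01))
```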


5.14 Hybrid Algorithms

As a result of the coupling of activities with vastly diverse time scales, biological systems are stiff by nature. Some compounds turn over slowly, while others are quickly synthesized and degraded (usually metabolites). While some biochemical processes include a series of stages, others only require a single attachment or dissociation event. This disparity in time scales can be taken advantage of by assuming quasi-equilibrium and using the equilibrium constant to remove some components from the model and so simplify it, as we saw in Sect. 5.4.2.1. The stiffness issues that deterministic algorithms face also affect stochastic algorithms: the entire simulation is drastically slowed down in order to capture the system's fast dynamics. Therefore, the fundamental goal of hybrid algorithms is to utilize the benefits of other algorithms to balance out the drawbacks of stochastic algorithms. There have been numerous attempts to demonstrate the usefulness and viability of hybrid algorithms. Similar methods have been employed by Bundschuh et al. [163], Haseltine and Rawlings [89], and Puchalka and Kierzek [163] to combine the Langevin equation with Gillespie algorithms. In all of these approaches, the modeller must come up with techniques and criteria for dividing the system into slow-dynamics and fast-dynamics subsystems. While Gillespie methods can manage the slow-dynamics subsystem, ODEs or Langevin equations can handle the fast-dynamics subsystem. The correctness of the solutions must also be maintained through numerical treatment, such as the "slow variables" in Bundschuh and the "probability of no reaction" in Haseltine and Rawlings. The algorithms produce encouraging findings, which are also in line with those of Gillespie algorithms. By utilizing a hybrid algorithm to simulate the effect of stochasticity on the bi-modality of an intracellular viral infection model, Haseltine and Rawlings in [89] demonstrated the application of hybrid algorithms. The algorithms were also tested using the $\lambda$-phage model, according to Kiehl et al. [117]. The relevance of hybrid algorithms has been pointed out in several papers (Alur et al. [6]; Matsuno et al. [141]; Bockmayr and Courtois [23]). Modelling an alternative splicing regulatory model using hybrid constraint programming techniques was done by Bockmayr and Courtois; in situations where full model knowledge is lacking, this method is highly helpful. Alur et al. modelled the quorum sensing phenomenon in Vibrio fischeri, a marine bacterium, using CHARON, a formal description language for hybrid systems that combines ODEs with a "mode switching" mechanism. A Hybrid Petri Net [141] approach has been employed to model a hybrid system using ODEs and discrete events; this method has been used to model the growth pathway control of $\lambda$-phage. The goal of hybrid algorithms is to bridge the gap between the system's macroscopic and mesoscopic scales. It has been demonstrated that hybrid modelling is particularly relevant for accurately capturing biological system behaviour. Hybrid methods have moreover significantly reduced the computational expense of large-


scale modelling and simulation. The fact that more parameters must be defined in order to give the algorithms further numerical treatment has the significant drawback that the accuracy of the answers depends on the correctness of those parameters; with well-calibrated parameters, the simulations typically produce good solutions. Even if the computing cost of these hybrid approaches has significantly decreased, there are still many computational problems that need to be solved before they can be used to tackle a real-world problem. Some of the issues are the accuracy of results, the consistency of system parameters between different levels of abstraction, high nonlinearity, the possibility of separating the systems into different subsystems, and the implementation of dynamic switching between different mathematical formalisms (see the work of Lecca et al. [126] on this last issue).

Chapter 6

Reaction–Diffusion Systems

6.1 The Physics of Reaction–Diffusion Systems

As their names indicate, reaction–diffusion models consist of two components. The first is a set of biochemical reactions that produce, transform, or remove chemical species. The second component is a mathematical description of the diffusion process. At the molecular level, diffusion is due to the random motion of molecules in a medium. The great majority of mesoscopic reaction–diffusion modelling in intracellular kinetics is performed on the premise that diffusion is so fast that all concentrations are homogeneous in space. However, recent experimental data on intracellular diffusion constants indicate that this supposition is not necessarily valid even for small prokaryotic cells. If two mutually accessible regions of space contain substantially different numbers of molecules, in the absence of other effects and forces, this random motion will result in a net flow of molecules from the region of high concentration to the region of lower concentration. Diffusion, as a result of Brownian motion, is a simple statistical effect that does not depend on the detailed mechanism by which molecules move from one region to the other. If the system is composed of a sufficiently large number of molecules, the concentration, i.e., the number of molecules per unit volume, becomes a continuous and differentiable variable of space and time. In this limit, a reaction–diffusion system can be modelled by using differential equations. In an unstructured solvent, ideally behaving solutes (i.e., ones for which solute–solute interactions are negligible) obey Fick's law of diffusion. However, in biological systems, even for purely diffusive transport phenomena, classical Fickian diffusion is at best a first approximation [2, 3]. Spatial effects are present in many biological systems, so that the spatially homogeneous assumption will not always hold. Examples of spatial effects include mRNA movement within the cytoplasm [72], Ash 1 mRNA localization in budding yeast [4], morphogen gradients across egg-polarity genes in the Drosophila oocyte [4], and the synapse specificity of long-term facilitation in


Aplysia [114]. Since the intracellular medium can hardly be described as unstructured, the chief effect of this fact is to make the diffusion coefficient of a species dependent on the concentration of that species and on the other solute species possibly present in the medium. Before proceeding further, it is useful to review the concepts of diffusive flux and Fick's laws. The key concepts in the mathematical description of diffusion are summarized in the definition of the flux of solute moving from one region of space to the other. Consider a small surface S of area dA oriented perpendicular to one of the coordinate axes, say the y-axis. The flux of solute in the y direction, J, is defined as the number of molecules that pass through the surface per unit area per unit time. Therefore, the number of solute molecules crossing the surface in time dt is J dA dt. The net flux depends on the number of molecules in small regions to either side of the surface: if there are more molecules on the left, then we expect a left-to-right flux that grows in size as the difference of concentration to either side of the surface increases. Moving the surface S from one point in space to another, we may find that this local difference changes. Therefore, the flux is a vectorial quantity depending on the position in space, i.e., $J = J(x, y, z)$. The simplest description of the concentration dependence of the flux is Fick's first law, namely that the flux is proportional to the local derivative of the concentration c of solute with respect to the spatial variables: $J = -D\,\partial c/\partial x$ in one dimension, or $J = -D\nabla c$ in three dimensions. The quantity D in Fick's law is known as the diffusion coefficient. If the medium is isotropic, D is a constant scalar independent of the concentration of the solute. With regard to reaction–diffusion systems, in this chapter we focus on the work of the author [128]. Lecca et al. [128] presented a model of the concentration dependence of the diffusion coefficients for a reaction–diffusion system and calculated the rates of diffusion of the biochemical species in terms of these concentration-dependent diffusion coefficients. For simplicity, we treat here purely diffusive transport phenomena of non-charged particles and, in particular, the case in which the diffusion is driven by a chemical potential gradient in the x direction only (the generalization to the three-dimensional case poses no problems). The method in [128] consists of the following five main steps:
1. Calculation of the local virtual force F per molecule as the spatial derivative of the chemical potential
2. Calculation of the particles' mean drift velocity in terms of F and the local frictional coefficient f
3. Estimation of the flux J as the product of the mean drift velocity and the local concentration
4. Definition of the diffusion coefficients as functions of the local activity and frictional coefficients and of the concentration
5. Calculation of the diffusion rates as the negative first spatial derivative of the flux J
The determination of the activity coefficients requires the estimation of the second virial coefficient, which in our model is calculated from its statistical mechanical definition, using a Lennard–Jones potential to describe the molecular


interactions. The frictional coefficient is modelled as linearly dependent on the local concentration. In the model of Lecca et al. [128], the spatial domain is divided into a number of reaction chambers, which we call cells or meshes; the reaction chambers can exchange molecules in a way that simulates diffusion, and they can also host chemical reactions between internal molecules. The reaction–diffusion system is then solved by the Gillespie algorithm.

6.2 Diffusion of Non-charged Molecules

When solutions with differing concentrations come into contact with one another, the solute molecules tend to move from areas with higher concentrations to areas with lower concentrations, eventually equalizing the concentrations. The gradient of the chemical potential $\mu$, i.e., the difference in Gibbs energy between regions of varying concentration, is what drives diffusion. Consider a solution containing N different solutes. The chemical potential $\mu_i$ of any particular chemical species i is defined as the partial derivative of the Gibbs energy G with respect to the concentration of the species i, with temperature and pressure held constant. Species are in equilibrium if their chemical potentials are equal:
$$\mu_i \equiv \frac{\partial G}{\partial c_i} = \mu_i^0 + RT \ln a_i, \qquad (6.1)$$

where $c_i$ is the concentration of the species i, $\mu_i^0$ is the standard chemical potential of the species i (i.e., the Gibbs energy of 1 mol of i at a pressure of 1 bar), $R = 8.314$ J K$^{-1}$ mol$^{-1}$ is the ideal gas constant, and T is the absolute temperature. The quantity $a_i$ is called the chemical activity of component i. The activity is decomposed into
$$a_i = \frac{\gamma_i c_i}{c_0}, \qquad (6.2)$$

where $\gamma_i$ is the activity coefficient and $c_0$ is a reference concentration. The activity coefficients express the deviation of a solution from ideal thermodynamic behaviour, and in general they may depend on the concentrations of all the solutes in the system. For an ideal solution, a limit that is recovered experimentally at high dilutions, $\gamma_i = 1$. If the concentration of species i varies from point to point in space, then so does the chemical potential. For simplicity, we treat here the case in which there is a chemical potential gradient in the x direction only. The chemical potential is the free energy per mole of substance i, the free energy is the negative of the work W which a system can perform, and work is connected to force F by $dW = F\,dx$. Therefore, an inhomogeneous chemical potential is related to a virtual force per molecule of


$$F_i = -\frac{1}{n_A}\frac{d\mu_i}{dx} = -\frac{k_B T c_0}{\gamma_i c_i} \sum_j \frac{\partial a_i}{\partial c_j}\frac{\partial c_j}{\partial x}, \qquad (6.3)$$

where $n_A = 6.022 \times 10^{23}$ mol$^{-1}$ is Avogadro's number, $k_B = 1.381 \times 10^{-23}$ J K$^{-1}$ is Boltzmann's constant, and the sum is taken over all species in the system other than the solvent. This force is balanced by the drag force $F_{drag,i}$ experienced by the solute as it moves through the solvent. Drag forces are proportional to the speed. If the velocity of the solute is low enough that the solvent does not exhibit turbulence, we can assume that the drag force is
$$F_{drag,i} = f_i v_i, \qquad (6.4)$$

where $f_i \propto c_i$ is the frictional coefficient and $v_i$ is the mean drift speed. Again, if the solvent is not turbulent, we can assume that the flux, defined as the number of moles of solute that pass through a small surface per unit time per unit area, is
$$J_i = c_i v_i, \qquad (6.5)$$

i.e., the number of molecules per unit volume multiplied by the linear distance travelled per unit time. Since the virtual force on the solute is balanced by the drag force (i.e., $F_{drag,i} = -F_i$), we obtain the following expression for the mean drift velocity:
$$v_i = \frac{F_i}{f_i},$$

so that Eq. (6.5) becomes
$$J_i = -\frac{k_B T c_0}{\gamma_i f_i} \sum_j \frac{\partial a_i}{\partial c_j}\frac{\partial c_j}{\partial x} \equiv -\sum_j D_{ij}\frac{\partial c_j}{\partial x}, \qquad (6.6)$$
where
$$D_{ij} = \frac{k_B T c_0}{\gamma_i f_i}\frac{\partial a_i}{\partial c_j} \qquad (6.7)$$

are the diffusion coefficients. Equation (6.6) states that, in general, the flux of one species depends on the gradients of all the others and not only on its own gradient. However, here we will suppose that the chemical activity $a_i$ depends only weakly on the concentrations of the other solutes, i.e., we assume that $D_{ij} \approx 0$ for $i \neq j$, so that Fick's laws still hold. Let $D_i$ denote $D_{ii}$. It is still generally the case that $D_i$ depends on $c_i$ in sufficiently concentrated solutions, since $\gamma_i$ (and thus $a_i$) has a non-trivial dependence on $c_i$. In order to find an analytic expression of the diffusion


coefficients $D_i$ in terms of the concentration $c_i$, let us consider that the rate of change of the concentration of substance i due to diffusion (written here as $\mathcal{D}_i$ to distinguish it from the diffusion coefficient $D_i$) is given by
$$\mathcal{D}_i = -\frac{\partial J_i}{\partial x}. \qquad (6.8)$$

Substituting Eq. (6.7) into Eq. (6.6), and then substituting the resulting expression for $J_i$ into Eq. (6.8), gives
$$\mathcal{D}_i = -\frac{\partial}{\partial x}\Bigl(-D_i(c_i)\frac{\partial c_i}{\partial x}\Bigr), \qquad (6.9)$$
so that
$$\mathcal{D}_i = \frac{\partial D_i(c_i)}{\partial x}\frac{\partial c_i}{\partial x} + D_i(c_i)\frac{\partial^2 c_i}{\partial x^2} = \frac{\partial D_i(c_i)}{\partial c_j}\frac{\partial c_j}{\partial x}\frac{\partial c_i}{\partial x} + D_i(c_i)\frac{\partial^2 c_i}{\partial x^2}. \qquad (6.10)$$

Let $c_{i,k}$ denote the concentration of substance i at the coordinate $x_k$, and $l = x_k - x_{k-1}$ the distance between adjacent mesh points. The derivative of $c_i$ with respect to x evaluated at $x_{k-\frac{1}{2}}$ is
$$\left.\frac{\partial c_i}{\partial x}\right|_{x_{k-\frac{1}{2}}} \approx \frac{c_{i,k} - c_{i,k-1}}{l}. \qquad (6.11)$$

Using Eq. (6.11) in Eq. (6.6), the diffusive flux of species i midway between the mesh points, $J_{i,k-\frac{1}{2}}$, is obtained:
$$J_{i,k-\frac{1}{2}} = -D_{i,k-\frac{1}{2}}\,\frac{c_{i,k} - c_{i,k-1}}{l}, \qquad (6.12)$$

where $D_{i,k-\frac{1}{2}}$ is the estimate of the diffusion coefficient midway between mesh points. The rate of diffusion of substance i at the mesh point k is
$$\mathcal{D}_i^k = -\frac{J_{i,k+\frac{1}{2}} - J_{i,k-\frac{1}{2}}}{l} \qquad (6.13)$$

and thence, substituting (6.12) into (6.13),
$$\mathcal{D}_i^k = \frac{D_{i,k-\frac{1}{2}}}{l^2}(c_{i,k-1} - c_{i,k}) + \frac{D_{i,k+\frac{1}{2}}}{l^2}(c_{i,k+1} - c_{i,k}). \qquad (6.14)$$
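A sketch (ours) of how the discretized diffusion rates of Eq. (6.14) can be evaluated on a one-dimensional mesh; constant mid-point diffusion coefficients and reflecting boundaries are simplifying assumptions of the example, not requirements of the method.

```python
import numpy as np

def diffusion_rates(c, D_half, l):
    """Discretized diffusion rates of Eq. (6.14) on a 1D mesh with reflecting ends.
    c[k]      : concentration of the species at mesh point k (k = 0..K-1)
    D_half[k] : diffusion coefficient midway between mesh points k and k+1
    l         : distance between adjacent mesh points."""
    K = len(c)
    rates = np.zeros(K)
    for k in range(K):
        if k > 0:                                     # exchange with the left neighbour
            rates[k] += D_half[k - 1] * (c[k - 1] - c[k]) / l**2
        if k < K - 1:                                 # exchange with the right neighbour
            rates[k] += D_half[k] * (c[k + 1] - c[k]) / l**2
    return rates

c = np.array([10.0, 6.0, 3.0, 1.0, 0.0])              # an initial concentration gradient
print(diffusion_rates(c, D_half=np.full(4, 0.5), l=1.0))
```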


To determine completely the right-hand side of Eq. (6.14), it is now necessary to find an expression for the activity coefficient $\gamma_i$ and the frictional coefficient $f_i$ contained in the formula (6.7) for the diffusion coefficient. In fact, by substituting Eq. (6.2) into Eq. (6.7), we obtain an expression of the diffusion coefficient in terms of the activity coefficient $\gamma_i$:
$$D_{ii} = \frac{k_B T}{f_i}\Bigl(1 + \frac{c_i}{\gamma_i}\frac{\partial \gamma_i}{\partial c_i}\Bigr). \qquad (6.15)$$

Let us focus now on the calculation of the activity coefficients; a way to estimate the frictional coefficients will be presented in Sect. 6.2.1. Using the subscript "1" to denote the solvent and "2" to denote the solute, we have
$$\mu_2 = \mu_2^0 + RT \ln\Bigl(\frac{\gamma_2 c_2}{c_0}\Bigr), \qquad (6.16)$$

where $\gamma_2$ is the activity coefficient of the solute and $c_2$ is the concentration of the solute. By differentiating with respect to $c_2$, we obtain
$$\frac{\partial \mu_2}{\partial c_2} = RT\Bigl(\frac{1}{c_2} + \frac{1}{\gamma_2}\frac{\partial \gamma_2}{\partial c_2}\Bigr). \qquad (6.17)$$

The chemical potential of the solvent is related to the osmotic pressure $\Pi$ by
$$\mu_1 = \mu_1^0 - \Pi V_1, \qquad (6.18)$$

where $V_1$ is the partial molar volume of the solvent and $\mu_1^0$ its standard chemical potential. Assuming $V_1$ to be constant and differentiating $\mu_1$ with respect to $c_2$, we obtain
$$\frac{\partial \mu_1}{\partial c_2} = -V_1 \frac{\partial \Pi}{\partial c_2}. \qquad (6.19)$$

Now, from the Gibbs–Duhem relation, the derivative of the chemical potential of the solute with respect to the solute concentration is
$$\frac{\partial \mu_2}{\partial c_2} = -\frac{M(1 - c_2\bar{v})}{V_1 c_2}\frac{\partial \mu_1}{\partial c_2} = \frac{M(1 - c_2\bar{v})}{c_2}\frac{\partial \Pi}{\partial c_2}, \qquad (6.20)$$

where M is the molecular weight of the solute and $\bar{v}$ is the partial molar volume of the solute divided by its molecular weight. The concentration dependence of the osmotic pressure is usually written as
$$\frac{\Pi}{c_2} = \frac{RT}{M}\bigl(1 + BMc_2 + O(c_2^2)\bigr), \qquad (6.21)$$


where B is the second virial coefficient (see Sect. 6.2.2), and thence the derivative with respect to the solute concentration is
$$\frac{\partial \Pi}{\partial c_2} = \frac{RT}{M} + 2RT B c_2 + O(c_2^2). \qquad (6.22)$$

Introducing Eq. (6.22) into Eq. (6.20) gives
$$\frac{\partial \mu_2}{\partial c_2} = RT(1 - c_2\bar{v})\Bigl(\frac{1}{c_2} + 2BM\Bigr). \qquad (6.23)$$

From Eq. (6.17) and Eq. (6.23), we have
$$\frac{1}{\gamma_2}\frac{\partial \gamma_2}{\partial c_2} = \frac{1}{c_2}\bigl[(1 - c_2\bar{v})(1 + 2BMc_2) - 1\bigr], \qquad (6.24)$$

so that
$$\int \frac{d\gamma_2}{\gamma_2} = \int_{c_0}^{c_2} \frac{1}{c_2}\bigl[(1 - c_2\bar{v})(1 + 2BMc_2) - 1\bigr]\, dc_2. \qquad (6.25)$$

On the grounds that $c_2\bar{v} \ll 1$ [198], by solving the integrals we obtain
$$\gamma_2 = \exp[2BM(c_2 - c_0)]. \qquad (6.26)$$

The molecular weight $M_{i,k}$ of the species i in the mesh k can be expressed as the ratio between the mass $m_{i,k}$ of the species i in that mesh and Avogadro's number: $M_{i,k} = m_{i,k}/n_A$. If $p_i$ is the mass of a molecule of species i and $c_{i,k}\, l$ is the number of molecules of species i in the mesh k, then the molecular weight of the solute of species i in the mesh k is given by
$$M_{i,k} = \frac{p_i l}{n_A} c_{i,k}. \qquad (6.27)$$

Substituting this expression in Eq. (6.26), we obtain for the activity coefficient of the solute of species i in the mesh k ($\gamma_{i,k}$) the following equation:
$$\gamma_{i,k} = \exp\Bigl(2B\,\frac{p_i l}{n_A}\, c_{i,k}^2\Bigr). \qquad (6.28)$$


6.2.1 Intrinsic Viscosity and Frictional Coefficient

The diffusion coefficient depends on the ease with which the solute molecules can move: the diffusion coefficient of a solute is a measure of how readily a solute molecule can push aside its neighbouring molecules of solvent. An important aspect of the theory of diffusion is how the magnitudes of the frictional coefficient $f_i$ of a solute of species i and, hence, of the diffusion coefficient $D_i$ depend on the properties of the solute and solvent molecules. Examination of the existing experimental data reveals that diffusion coefficients tend to decrease as the size of the solute molecule rises. The cause is that a larger solute molecule travels more slowly than a smaller one because it must push aside more solvent molecules as it moves. The basic premises and models of the kinetic theory of gases and liquids cannot be used to derive a precise description of the frictional coefficients for diffusion phenomena in a biological environment. Stokes's theory considers a simple situation in which the solute molecules are so much larger than the solvent molecules that the latter can be regarded as a continuum (i.e., not having molecular character). For such a system, Stokes deduced that the frictional coefficient of the solute molecules is $f_i = 6\pi r_i^H \eta$, where $r_i^H$ is the hydrodynamic radius of the molecule and $\eta$ is the viscosity of the solvent. The estimation of the frictional coefficient using Stokes's rule for proteins diffusing in the cytosol is challenging for a number of reasons. A protein travelling through the cytosol cannot be accurately described by the assumption of very big, spherical molecules in a continuous solvent, since both the protein and the surrounding solvent may fail to behave as a continuum. Additionally, the explicit involvement of water molecules in protein–protein interactions in the cytosol complicates the estimation of the hydrodynamic radius. Finally, the viscosity of the solvent $\eta$ within the cellular environment can be approximated neither as the viscosity of a liquid nor as the viscosity of a gas: in both cases, the theory predicts a strong dependence on the temperature of the system, which has not been found in the cell, where the most significant factor determining the behaviour of the frictional coefficient is the concentration of solute molecules. To model the effects of non-ideality on the friction coefficient, we assume that its dependence on the concentration of the solute is governed by an expression similar to the one used to model the friction coefficient in sedimentation processes [184],
$$f_{i,k} = k_f c_{i,k}, \qquad (6.29)$$

where $k_f$ is an empirical constant, whose value can be derived from the knowledge of the ratio $R = k_f/[\eta]$. According to the Mark–Houwink equation, $[\eta] = kM^\alpha$ is the intrinsic viscosity coefficient, where $\alpha$ is related to the shape of the solute molecules and M is still the molecular weight of the solute. If the molecules are spherical, the intrinsic viscosity is independent of the size of the molecules, so that $\alpha = 0$. All globular proteins, regardless of their size, have essentially the same $[\eta]$. If a protein is elongated, its molecules are more effective in increasing the viscosity and $[\eta]$ is larger. Values of 1.3 or higher are frequently obtained for molecules that exist in solution as extended chains. Long-chain molecules that are coiled in solution


give intermediate values of $\alpha$, frequently in the range from 0.6 to 0.75 [119]. For globular macromolecules, R has a value in the range of 1.4–1.7, with lower values for more asymmetric particles [88].

6.2.2 Calculated Second Virial Coefficient

The statistical mechanical definition of the second virial coefficient is the following:
$$B = -2\pi N_A \int_0^{\infty} r^2 \exp\Bigl(-\frac{u(r)}{k_B T}\Bigr)\, dr, \qquad (6.30)$$

where $u(r)$ is the interaction free energy between two molecules and r is the intermolecular centre–centre distance. In this chapter, we assume for $u(r)$ the Lennard–Jones pair (12,6) potential (Eq. 6.31), which captures the attractive nature of the van der Waals interactions and the very short-range Born repulsion due to the overlap of the electron clouds:
$$u(r) = 4\epsilon\Bigl[\Bigl(\frac{1}{r}\Bigr)^{12} - \Bigl(\frac{1}{r}\Bigr)^{6}\Bigr]. \qquad (6.31)$$
Expanding the term $\exp\bigl(\frac{4\epsilon}{k_B T}\frac{1}{r^6}\bigr)$ into an infinite series, Eq. (6.30) becomes
$$B = -2\pi N_A \sum_{j=0}^{\infty} \frac{1}{j!}\,(T^*)^j \int_0^{\infty} r^{2-6j} \exp\Bigl(-T^*\frac{1}{r^{12}}\Bigr)\, dr, \qquad (6.32)$$
where $T^* \equiv 4\epsilon/(k_B T)$, and thus
$$B = -\frac{\pi N_A}{6} \sum_{j=0}^{\infty} \frac{1}{j!}\,(4\epsilon)^{\frac{1}{4}+\frac{j}{2}}\,(k_B T)^{-\left(\frac{1}{4}+\frac{j}{2}\right)}\,\Gamma\Bigl(-\frac{1}{4}+\frac{j}{2}\Bigr). \qquad (6.33)$$
In our model, the estimate of B is obtained by truncating the infinite series of $\Gamma$ functions at $j = 4$, since the additional terms obtained for $j > 4$ do not significantly influence the simulation results.
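As a numerical cross-check of this truncation, the second virial coefficient can also be evaluated by direct quadrature. The sketch below (ours) works in reduced Lennard–Jones units and integrates the Mayer function exp(−u/k_BT) − 1, which we assume to be the intended integrand of Eq. (6.30), since it keeps the integral convergent; the parameter values in the activity-coefficient call are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

def lennard_jones(r):
    """Reduced Lennard-Jones (12,6) potential of Eq. (6.31): energy in units of epsilon,
    distance in units of the molecular diameter."""
    return 4.0 * (r**-12 - r**-6)

def second_virial(T_red, r_max=20.0):
    """Second virial coefficient per molecule pair (reduced units), via the Mayer function."""
    integrand = lambda r: r**2 * (np.exp(-lennard_jones(r) / T_red) - 1.0)
    value, _ = quad(integrand, 1e-6, r_max, limit=400)
    return -2.0 * np.pi * value            # multiply by N_A for molar units

def activity_coefficient(B, p_i, l, c_ik, n_A=6.022e23):
    """Eq. (6.28): gamma_{i,k} = exp(2 B (p_i l / n_A) c_{i,k}^2)."""
    return np.exp(2.0 * B * (p_i * l / n_A) * c_ik**2)

for T_red in (0.8, 1.0, 1.3, 2.0):
    print(T_red, second_virial(T_red))
```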

6.3 Algorithm and Data Structures

The algorithm of Lecca et al. [128] implementing the model described in the previous sections first subdivides the volume into cells of fixed dimension. The


dimension of each cell in the mesh is chosen to be not too fine-grained, in order to reduce simulation time, but within the constraints described in [22] to preserve accuracy. Both the method and the algorithm and data structures trivially generalize to meshes in any dimension; however, to make the analysis as simple as possible, we focused our attention to meshes in one dimension. The algorithm is a refinement of those proposed by Bernstein [22] and by Elf et al. [55]. The next sub-volume method proposed by Elf et al. is a two-level system in which every cell computes individually the next event, a chemical reaction or a diffusion, using the Gillespie direct method [80]. The cell where the next event will occur is determined using a global priority queue that holds the times of the quickest event for each cell. This event is consumed, and only the one or two cells affected by the event are updated, adjusting their position in the priority queue accordingly. This algorithm is therefore efficient but centralized and sequential in nature and can have problems in scaling to very large systems. Moreover, it cannot be easy to adapt to take advantage of parallel or distributed systems. Since the number of reactions in the master system can easily be in the millions for even a modest mesh and a small set of chemical reactions, scalability is required to make large simulations feasible. Our algorithm overcomes these limitations by eliminating the use of a global priority queue. Assume that each cell is aware of its own concentrations, diffusion, and reaction rates, as well as the upcoming reactive or diffuse event, its timing, and the cells nearby. Additionally, we demand that the concentration in each reaction chamber be thought of as uniform in order for the original Gillespie algorithm to be applicable to the chemical reaction taking place there. Each cell has dependency relations on a set of neighbour cells; the cell can perform its next event only if it is quicker than the diffusion events of the neighbour cells because diffusion events can change reactant concentrations, and therefore the time and order of the events (Fig. 6.1). The algorithm takes advantage of this property: at each step, all cells that can develop are permitted to consume one event and advance one simulation step as long as they do not violate the constraints imposed by their dependencies. The algorithm still has the same average computational complexity; however, removing the global priority queue allows it to scale gracefully with the number of reactions and processors. A tool that implements the algorithm described here was developed in C# for the .NET and Mono Frameworks; the tool also has an interactive OpenGL viewer that shows the progress of the simulation in real time (Fig. 6.2). The viewer window is Fig. 6.1 The dependency relations of a cell


Fig. 6.2 The OpenGL viewer

divided into two zones: in the upper zone, a plot shows the variation of concentration in space; in the lower zone, the cells of the mesh are drawn as rectangles. The rectangles are filled with an amount of colour proportional to the concentration of each species, so that the spatial variation of the gradient can be seen at a glance. A small red dot indicates the cell in which the reaction–diffusion event is taking place. We refer the reader to [121, 122] for a detailed description of the Redi software and the application case studies.

6.4 Drug Release

Drug release refers to the many processes that contribute to the transfer of drug from the dosage form to the bathing solution (for instance, gastrointestinal fluids or a dissolution medium). The devices used for drug release are designed to deliver the drug at a rate that is governed more by the dosage form and less by drug properties and conditions of the external environment. The release systems are classified into:
1. Diffusion-controlled
2. Chemically controlled
3. Swelling-controlled
Several mathematical models have been developed to describe drug release from controlled-release dosage forms.


The mathematical models of diffusion-controlled systems are based on Fick's laws with either a concentration-independent or a concentration-dependent diffusion coefficient. Various types of diffusion can be implemented, i.e.:
• Diffusion through an inert matrix
• Diffusion through a hydrogel
• Diffusion through a membrane
In chemically controlled systems, the rate of drug release is controlled by:
• The degradation and the dissolution of the polymer in erodible systems.
• The rate of the hydrolytic or enzymatic cleavage of the drug–polymer chemical bond in pendant-chain systems.
In pendant-chain systems, the drug molecules are covalently attached to the main polymer chain via degradable linkages. So, as the polymer is exposed to water or chemicals, the linkages break down, releasing the drug (Fig. 6.3). In swelling-controlled systems, the swelling of the polymer matrix after the inward flux of the liquid bathing the system induces the diffusion of the drug molecules towards the bathing solution. In other terms, swelling-controlled release consists of a drug dispersed within a glassy polymer matrix; when the system comes in contact with bio-fluids, it starts swelling (Fig. 6.4).

6.4.1 The Higuchi Model

In situations where the drug loading exceeds the solubility in the matrix medium, the Higuchi model [95, 158, 178] for the rate of drug release from matrix devices has been shown to be a reliable framework and an indispensable instrument in the growth of a sizeable portion of the modern controlled drug delivery industry. It describes the kinetics of drug release from an ointment with the following assumptions:

Fig. 6.3 A water-soluble polymer exposes degradable bonds on its surface. Once in water, the bonds are degraded, and the drug molecules bound to these linkages are released into the solution


Fig. 6.4 Swelling of the polymer matrix in contact with a solvent. The drug molecules are released as the polymer is eroded by the solvent
Fig. 6.5 Schematic presentation of the drug concentration–distance profile within the ointment base after exposure to perfect sink conditions at time t (solid line) and at time $t + \Delta t$ (dashed line)

1. The drug is homogeneously dispersed in the planar matrix.
2. The medium into which the drug is released is a perfect sink. The dissolution medium volume should be 5–10 times the saturation solubility of the drug. Sink conditions mean that the dissolved drug is removed (absorbed) much faster than the drug dissolves.
Figure 6.5 shows a schematic presentation of the drug concentration–distance profile within the ointment base following exposure to ideal sink conditions at time t (solid line) and time $t + \Delta t$ (dashed line). The variables have the following meanings: $c_0$ and $c_s$ denote the initial drug concentration and the drug solubility, respectively; h represents the distance of the front, which separates ointment free of non-dissolved drug excess from ointment still containing non-dissolved drug excess, from the "ointment–skin" interface at time t; $\Delta h$ is the distance this front moves inwards during the time interval $\Delta t$ [179]. The solid line indicates the spatial concentration profile of the drug existing in the ointment containing the suspended drug in contact with a perfect sink. The broken line indicates the temporal evolution of the profile (a snapshot after a time interval


$\Delta t$). For the distance h above the exposed area, the concentration gradient $(c_0 - c_1)$ is considered to be constant, assuming that $c_0$ is much bigger than $c_1$. The cumulative amount $q(t)$ of drug released at time t is
$$q(t) = A\sqrt{D(2c_0 - c_s)c_s t}, \qquad (6.34)$$

where A is the area of the ointment exposed to the absorbing surface, D is the diffusion coefficient of the drug in the matrix medium, $c_0$ is the initial concentration of the drug, and $c_s$ is the solubility of the drug in the matrix. Usually, Eq. (6.34) is written in the following simplified form:
$$\frac{q(t)}{q_\infty} = k\sqrt{t}, \qquad (6.35)$$

where $q_\infty$ is the cumulative amount of drug released at infinite time, and k is a composite kinetic constant related to (i) the drug's diffusional properties in the matrix and (ii) the design of the system. From an exact solution of Fick's second law of diffusion for thin films of thickness $\delta$, under perfect sink conditions and uniform initial drug concentration with $c_0 > c_s$, Crank et al. [44] demonstrated that
$$k = 4\sqrt{\frac{D}{\pi\delta^2}},$$
and therefore
$$\frac{q(t)}{q_\infty} = 4\sqrt{\frac{Dt}{\pi\delta^2}}. \qquad (6.36)$$

Equation (6.35) states that the fraction of drug released is linearly proportional to the square root of time. This equation cannot be applied throughout the release process because the assumptions made in its derivation are not valid for the entire process of drug release. The use of Eq. (6.35) for the analysis of drug release data is recommended only for the first 60% of the release curve (i.e., $q(t)/q_\infty \le 0.6$). This is a rule of thumb: it does not rely on strict theoretical or experimental findings, and it is based only on the fact that unrealistically different physical conditions have been postulated for the derivation of Eqs. (6.35) and (6.36). In the literature, a linear plot of $q(t)$ or $q(t)/q_\infty$ (utilizing data up to 60% of the release curve) versus the square root of time is often used to model diffusion-controlled drug release from a large variety of drug delivery systems.
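A small sketch (ours; the release data below are synthetic and purely illustrative) of how Eq. (6.36) can be evaluated and how k in Eq. (6.35) can be estimated from the first 60% of a release curve.

```python
import numpy as np

def higuchi_k(D, delta):
    """Composite kinetic constant of Eq. (6.36): k = 4 sqrt(D / (pi delta^2))."""
    return 4.0 * np.sqrt(D / (np.pi * delta**2))

def fit_higuchi(t, released_fraction):
    """Least-squares estimate of k in q(t)/q_inf = k sqrt(t) (Eq. (6.35)),
    using only the first 60% of the release curve, as recommended in the text."""
    t = np.asarray(t, dtype=float)
    f = np.asarray(released_fraction, dtype=float)
    mask = f <= 0.6
    x = np.sqrt(t[mask])
    return float(np.sum(x * f[mask]) / np.sum(x * x))   # slope of f vs sqrt(t) through the origin

# Illustrative (synthetic) release data
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
f = np.array([0.10, 0.14, 0.20, 0.28, 0.41, 0.57])
print("fitted k =", fit_higuchi(t, f))
print("k for D = 1e-6 cm^2/s, delta = 0.1 cm:", higuchi_k(1e-6, 0.1))
```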


6.4.2 Systems with Different Geometries

1. Fickian diffusional release from a thin polymer film. This process is described by Eq. (6.36).
2. Case II transport [71] release from a thin polymer film. The fractional drug release is given by
$$\frac{q(t)}{q_\infty} = \frac{2k_0}{\delta c_0}t, \qquad (6.37)$$

where $k_0$ is the Case II relaxation constant and $c_0$ is the drug concentration (considered constant).
3. Case II radial release from a cylinder of radius $\rho$. The model for fractional drug release is as follows:
$$\frac{q(t)}{q_\infty} = \frac{2k_0}{\rho c_0}t - \Bigl(\frac{k_0}{\rho c_0}t\Bigr)^2. \qquad (6.38)$$

4. Case II one-dimensional radial release from a sphere of radius $\rho$. The model is
$$\frac{q(t)}{q_\infty} = \frac{3k_0}{\rho c_0}t - 3\Bigl(\frac{k_0}{\rho c_0}t\Bigr)^2 + \Bigl(\frac{k_0}{\rho c_0}t\Bigr)^3. \qquad (6.39)$$

5. Case II radial and axial release from a cylinder. Next, we will deal with Case II radial and axial release from a cylinder, as the models described by Eqs. (6.37) and (6.38) are special cases of the equations derived for the Case II radial and axial release from a cylinder.

6.4.2.1 Case II Radial and Axial Release from a Cylinder

A cylinder of height 2L that is allowed to release from all sides can be treated as a cylinder of height L that can release from the round side and the top only. If the big cylinder of Fig. 6.6 is cut in half across the horizontal line, two equal cylinders, each of height L, are formed [137]. If the drug release from the two newly formed areas (top and bottom) of the two small cylinders is not considered, the two cylinders of height L show the same release behaviour as the big cylinder, i.e., $q(t)_{2L} = 2q(t)_L$ and $q_{\infty,2L} = 2q_{\infty,L}$. As a consequence,
$$\frac{q(t)_L}{q_{\infty,L}} = \frac{q(t)_{2L}}{q_{\infty,2L}}. \qquad (6.40)$$


Fig. 6.6 Case II drug transport with axial and radial release from a cylinder of height 2L and radius $\rho$ at time t. All four sides of the large cylinder release drug. The drug mass is located in the region between the large and the small cylinder. After time t, the height of the cylinder becomes $2L'$ and its radius becomes $\rho'$

The proportionality (6.40) states that it is equivalent to consider either a cylinder of height L that releases from the round side and the top surface, or a cylinder of height 2L that releases from all sides. At time $t = 0$, the height and the radius of the cylinder are L and $\rho$, respectively. After a time $\Delta t$, the height decreases to $L'$ and the radius to $\rho'$. The rate of decrease of the radius $\rho'$ and of the height $L'$ can be written as
$$\frac{d\rho'}{dt} = \frac{dL'}{dt} = -\frac{k_0}{c_0}. \qquad (6.41)$$

The initial conditions for Eq. (6.41) are $\rho'(0) = \rho$ and $L'(0) = L$.

By integrating Eq. (6.41), we obtain
$$\rho'(t) = \rho - \frac{k_0}{c_0}t, \qquad t \le \frac{c_0}{k_0}\rho, \qquad (6.42)$$
$$L'(t) = L - \frac{k_0}{c_0}t, \qquad t \le \frac{c_0}{k_0}L. \qquad (6.43)$$

These equations state that the smaller dimension of the cylinder ($\rho$ or L) determines the duration of the drug release. The amount of drug released at any time t is given by the following equation (which is indeed a balance equation):
$$q(t) = c_0\pi\bigl[\rho^2 L - (\rho')^2 L'\bigr]. \qquad (6.44)$$

Substituting (6.42) and (6.43) into (6.44), we obtain
$$q(t) = c_0\pi\Bigl[\rho^2 L - \Bigl(\rho - \frac{k_0}{c_0}t\Bigr)^2\Bigl(L - \frac{k_0}{c_0}t\Bigr)\Bigr] \qquad (6.45)$$
and


$$q_\infty = c_0\pi\rho^2 L. \qquad (6.46)$$

Therefore, .

q(t) = q∞



  2  2k02 k03 3 k0 2k0 k0 2 t− t + + − t . ρc0 Lc0 ρ 2 c02 ρLc02 ρ 2 Lc03

(6.47)

Equation (6.47) describes the entire fractional release curve for Case II drug transport with axial and radial release from a cylinder. When .ρ  L, Eq. (6.47) can be approximated as follows: .

q(t) k0 = t, q∞ Lc0

which is identical to Eq. (6.37) with a difference of a factor 2 due to the fact that the height of the cylinder is 2L. When .ρ L, Eq. (6.47) can be approximated as follows: q(t) 2k0 t− . = ρc0 q∞



k0 t ρc0

2 ,

which is identical to Eq. (6.38). In summary, we have demonstrated that Eqs. (6.37) and (6.38) are special cases of the result in Eq. (6.47).

6.4.3 The Power-Law Model

The drug release from polymeric devices is modelled as follows:
$$\frac{q(t)}{q_\infty} = k\, t^{\lambda}, \qquad (6.48)$$
where k is a constant depending on the structural (and geometrical) properties of the delivery system, and $\lambda$ depends on the mechanism of drug release. This model is extensively used in many applications, mainly for the following reasons (a fitting sketch is given after this list):
1. Equations (6.34) and (6.36) of the Higuchi model are special cases of Eq. (6.48) when $\lambda = 0.5$. Note also that Eq. (6.37) is a special case of Eq. (6.48) when $\lambda = 1$.
2. The value of $\lambda$ obtained by fitting the model (6.48) to the first 60% of the experimental release data is indicative of the release mechanism.
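As an illustration of point 2, the sketch below fits the power-law model (6.48) to the first 60% of a fractional release curve by linear regression on log-transformed data. The data values and variable names are illustrative assumptions, not data from this book.

```python
import numpy as np

# Hypothetical fractional release data q(t)/q_inf sampled at times t (hours).
t = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
frac = np.array([0.10, 0.15, 0.22, 0.27, 0.31, 0.38, 0.44, 0.54, 0.62])

# Use only the first 60% of the release curve, as recommended for Eq. (6.48).
mask = frac <= 0.6
log_t, log_f = np.log(t[mask]), np.log(frac[mask])

# Fit log(q/q_inf) = log(k) + lambda * log(t) by least squares.
lam, log_k = np.polyfit(log_t, log_f, 1)
print(f"lambda = {lam:.3f}, k = {np.exp(log_k):.3f}")
# lambda close to 0.5 suggests Fickian (Higuchi-type) diffusion,
# lambda close to 1 suggests Case II transport.
```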


Fig. 6.7 Dissolution is the process in which a substance forms a solution. A dosage form, such as a tablet, capsule, ointment, etc., is tested for dissolution to determine the degree and rate of solution formation. A drug’s bioavailability and therapeutic efficacy depend on how well it dissolves

6.5 What Drug Dissolution Is

Drug dissolution is the reaction of the solid drug with the fluid and/or the components of the dissolution medium (Fig. 6.7). This reaction takes place at the solid–liquid interface, so that dissolution kinetics depends on:
1. The flow rate of the dissolution medium towards the solid–liquid interface
2. The reaction rate at the interface
3. The diffusion of the dissolved drug molecules from the interface towards the bulk solution
The bioavailability of a pharmacological ingredient is significantly influenced by how it dissolves in the body. Transport control, interface control, and mixed-kinetic control are the three recognized mechanisms of solid dissolution.

6.6 The Diffusion Layer Model (Noyes and Whitney)

Experiments showed that the rate of change $dc(t)/dt$ of the concentration of dissolved species is proportional to the difference between the saturation solubility $c_s$ of the species (i.e., the maximum mass of solute dissolved per unit volume of solvent at a given temperature and pressure) and the concentration existing at time t, i.e.,
$$\frac{dc(t)}{dt} = k\,\big[c_s - c(t)\big], \qquad c(0) = 0, \qquad (6.49)$$
where k is the proportionality constant. This equation can be expressed in terms of the cumulative amount of drug dissolved at time t, $q(t)$:
$$\frac{dq(t)}{dt} = \frac{DA}{\delta}\left(c_s - \frac{q(t)}{V}\right), \qquad q(0) = 0, \qquad (6.50)$$


where D is the diffusion coefficient of the drug, A is the effective contact surface, V is the volume of the dissolution medium, and $\delta$ is the thickness of the effective diffusion boundary layer adjacent to the dissolving surface of the drug. The proportionality constant k is then
$$k = \frac{DA}{V\delta}.$$
The integrated form of (6.50) is
$$q(t) = c_s V\left(1 - e^{-kt}\right). \qquad (6.51)$$
The limit $t \to \infty$ defines the total amount of drug, $q_s = c_s V$, that can be dissolved in the volume V [137]. The fraction of drug accumulated in solution at time t is the ratio $q(t)/q_s$.
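The sketch below numerically integrates Eq. (6.50) with a simple explicit Euler scheme and compares the result with the closed form (6.51); all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative (assumed) parameter values in consistent units.
D, A, V, delta, cs = 1e-6, 2.0, 250.0, 5e-3, 0.1
k = D * A / (V * delta)          # proportionality constant of Eq. (6.49)

dt, T = 1.0, 20000.0             # time step and horizon
t = np.arange(0.0, T + dt, dt)
q = np.zeros_like(t)             # cumulative amount dissolved, q(0) = 0

# Explicit Euler integration of dq/dt = (D A / delta) * (cs - q / V).
for i in range(len(t) - 1):
    q[i + 1] = q[i] + dt * (D * A / delta) * (cs - q[i] / V)

q_exact = cs * V * (1.0 - np.exp(-k * t))   # Eq. (6.51)
print(f"max abs error vs Eq. (6.51): {np.max(np.abs(q - q_exact)):.3e}")
```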

6.7 The Weibull Function in Dissolution

The Weibull function has the following form:
$$\frac{q(t)}{q_\infty} = 1 - e^{-(\lambda t)^{\mu}}, \qquad (6.52)$$
where $q_\infty$ is the total mass that can be dissolved, and $\lambda$, $\mu$ are constants:
• $\lambda$ is the scale parameter that defines the time scale of the process.
• $\mu$ is the shape parameter that characterizes the shape of the curve:
– It is exponential if $\mu = 1$.
– It is S-shaped if $\mu > 1$.
Next, we prove the validity of the Weibull model.

6.7.1 Inhomogeneous Conditions

The drug's diffusional properties can change with time, so that the validity of a classical (constant) rate coefficient k in Eq. (6.50) is questionable. It is then reasonable to introduce an instantaneous, time-dependent rate coefficient $k(t)$. A widely used model is the following:
$$k(t) = k_\circ \left(\frac{t}{t_\circ}\right)^{-\nu}, \qquad (6.53)$$
where $k_\circ$ is a rate constant (independent of time), $t_\circ$ is a time scale parameter, and $\nu$ is a pure number. With $t_\circ = 1$, this equation is used in chemical kinetics to describe reactions occurring under dimensional constraints or under-stirred conditions. We will use it to describe the time dependency of the drug dissolution rate coefficient due to the change in time of the effective surface A, of the thickness of the diffusion layer $\delta$, and of the diffusion coefficient D. Using Eq. (6.53) in Eq. (6.50), we obtain
$$\frac{dq(t)}{dt} = k_\circ \left(\frac{t}{t_\circ}\right)^{-\nu}\big[q_\infty - q(t)\big], \qquad q(t_0) = 0, \qquad (6.54)$$
whose integrated form is
$$\frac{q(t)}{q_\infty} = 1 - \exp\left[-\frac{k_\circ t_\circ}{1-\nu}\left(\left(\frac{t}{t_\circ}\right)^{1-\nu} - \left(\frac{t_0}{t_\circ}\right)^{1-\nu}\right)\right]. \qquad (6.55)$$
If $t_0 = 0$ and $\nu < 1$, we get
$$\frac{q(t)}{q_\infty} = 1 - \exp\left[-\frac{k_\circ t_\circ}{1-\nu}\left(\frac{t}{t_\circ}\right)^{1-\nu}\right]. \qquad (6.56)$$
Equation (6.56) is identical to the Weibull equation (6.52) for
$$\lambda = \frac{1}{t_\circ}\left(\frac{k_\circ t_\circ}{1-\nu}\right)^{\frac{1}{1-\nu}} \quad \text{and} \quad \mu = 1 - \nu.$$
Note also that Eq. (6.56) is identical to Eq. (6.51) if $\nu = 0$. Here we have shown that the Weibull model is able to capture the time-dependent character of the rate coefficient governing the dissolution process.
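A quick numerical check of this identification: the sketch below integrates Eq. (6.54) with an explicit Euler scheme (starting slightly above $t = 0$ to avoid the singularity of $k(t)$ at the origin) and compares the result with the Weibull form (6.52) using the $\lambda$ and $\mu$ given above. Parameter values are assumptions chosen for illustration only.

```python
import numpy as np

# Assumed illustrative parameters.
k0, t0_scale, nu, q_inf = 0.5, 1.0, 0.3, 1.0
mu = 1.0 - nu
lam = (1.0 / t0_scale) * (k0 * t0_scale / (1.0 - nu)) ** (1.0 / (1.0 - nu))

dt = 1e-4
t = np.arange(dt, 10.0, dt)          # start just above 0: k(t) diverges at t = 0
q = np.zeros_like(t)

# Explicit Euler for dq/dt = k0 * (t/t0)^(-nu) * (q_inf - q).
for i in range(len(t) - 1):
    k_t = k0 * (t[i] / t0_scale) ** (-nu)
    q[i + 1] = q[i] + dt * k_t * (q_inf - q[i])

weibull = q_inf * (1.0 - np.exp(-(lam * t) ** mu))   # Eq. (6.52)
print(f"max abs deviation between ODE and Weibull curve: {np.max(np.abs(q - weibull)):.3e}")
```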

6.7.2 Drug Dissolution Is a Stochastic Process

The ratio $q(t)/q_\infty$ is the accumulated fraction of the amount of drug dissolved from a solid dosage form. It also characterizes the residence time of drug molecules in the dissolution medium, so the dissolution process can be interpreted stochastically. The ratio $q(t)/q_\infty$ has a statistical meaning because it represents the cumulative distribution function of the random variable dissolution time T, defined as the time up to dissolution for drug fractions from the dosage form. Therefore, $q(t)/q_\infty$ can be defined as the probability that a molecule leaves the formulation before time t, i.e., in formulae
$$\frac{q(t)}{q_\infty} = \Pr[T \le t]. \qquad (6.57)$$

Fig. 6.8 Cumulative dissolution curve

The first moment of the distribution function $q(t)/q_\infty$ is called the mean dissolution time (MDT):
$$\mathrm{MDT} = E[T] = \frac{1}{q_\infty}\int_0^{\infty} t\, dq(t) = \frac{ABC}{q_\infty}, \qquad (6.58)$$
where ABC is the area between the cumulative dissolution curve and the horizontal line that corresponds to $q_\infty$ (Fig. 6.8). It can be proved that
$$\mathrm{MDT} = \frac{q_s}{q_\infty\, k} = \frac{c_s V}{q_\infty\, k}. \qquad (6.59)$$
We also note that this equation does not hold when the entire amount of available drug ($q_0$) is not dissolved. Indeed, multiplying both sides of Eq. (6.49) by $V/q_0$, we obtain the same equation in terms of the fraction of the actual dose of drug dissolved, $\phi(t) = q(t)/q_0$:
$$\frac{d\phi(t)}{dt} = k\left[\frac{1}{\theta} - \phi(t)\right], \qquad \phi(0) = 0, \qquad (6.60)$$
where $\theta \equiv q_0/q_s$. Eq. (6.60) has two solutions.

Solution 1 $\theta \le 1$ ($q_0 \le q_s$). The entire dose is eventually dissolved.


$$\phi(t) = \begin{cases} \dfrac{1}{\theta}\left(1 - e^{-kt}\right), & \text{for } t < t_\circ,\\[4pt] 1, & \text{for } t \ge t_\circ, \end{cases}$$
where $t_\circ = -\ln(1-\theta)/k$ is the time at which dissolution terminates. The MDT is
$$\mathrm{MDT} = \int_0^{t_\circ} t\, d\phi(t) = \frac{\theta + (1-\theta)\ln(1-\theta)}{k\theta}. \qquad (6.61)$$
It can be easily shown that Eq. (6.59) is a special case of this general equation when $\theta = 1$.

Solution 2 $\theta > 1$ ($q_0 > q_s$). Only a portion of the dose is dissolved, and the dissolved fraction reaches the saturation level $1/\theta$:
$$\phi(t) = \frac{1}{\theta}\left(1 - e^{-kt}\right). \qquad (6.62)$$
The MDT is infinite because the entire dose is never dissolved (see Eq. (6.61) for $\theta > 1$).
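The closed form (6.61) is easy to verify numerically for an assumed $\theta \le 1$ by integrating $t\, d\phi(t)$ with a trapezoidal sum; k and $\theta$ below are arbitrary illustrative values.

```python
import numpy as np

k, theta = 0.8, 0.7                      # assumed rate constant and dose/solubility ratio
t_stop = -np.log(1.0 - theta) / k        # time at which dissolution terminates

t = np.linspace(0.0, t_stop, 200001)
dphi_dt = (k / theta) * np.exp(-k * t)   # derivative of phi(t) for t < t_stop

# Trapezoidal approximation of the integral of t * dphi/dt over [0, t_stop].
mdt_numeric = np.sum(0.5 * (t[1:] - t[:-1]) * (t[1:] * dphi_dt[1:] + t[:-1] * dphi_dt[:-1]))
mdt_closed = (theta + (1.0 - theta) * np.log(1.0 - theta)) / (k * theta)  # Eq. (6.61)
print(f"numeric MDT = {mdt_numeric:.6f}, Eq. (6.61) MDT = {mdt_closed:.6f}")
```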

6.7.3 The Inter-facial Barrier Model

This model assumes that the reaction at the solid–liquid interface is not rapid, owing to the high free energy of activation it requires. Therefore, this reaction is the rate-limiting step of the dissolution process. We treat here a continuous reaction-limited dissolution model of the form

Undissolved drug + free solvent → dissolved drug complexed with solvent.

Experiments showed that the equation describing the rate of drug dissolution in terms of the fraction of drug dissolved is
$$\frac{d\phi(t)}{dt} = k^{*}\,\big[1 - \phi(t)\big]\,\big[1 - \theta\,\phi(t)\big], \qquad \phi(0) = 0, \qquad (6.63)$$
where $\phi(t)$ is the fraction of drug dissolved up to time t, $\theta$ is the dose–solubility ratio $q_0/q_s$, and $k^{*}$ is the dissolution rate constant. The fractional dissolution rate is a decreasing function of the dissolved fraction $\phi(t)$, as we also observed for the diffusion layer model. However, Eq. (6.63) reveals a second-order dependency of the reaction rate on the dissolved amount $\phi(t)$. This is a unique feature that is generally absent in models dealing with diffusion-limited dissolution [137].


The solution of Eq. (6.63) is
$$\phi(t) = \frac{\exp[k^{*}(1-\theta)t] - 1}{\exp[k^{*}(1-\theta)t] - \theta}. \qquad (6.64)$$
For $\theta = 1$,
$$\phi(t) = \frac{k^{*}t}{k^{*}t + 1}.$$
The asymptotes of Eq. (6.63) are
$$\phi(t) \xrightarrow{\,t\to\infty\,} 1 \quad \text{for } \theta \le 1, \qquad \phi(t) \xrightarrow{\,t\to\infty\,} \frac{1}{\theta} \quad \text{for } \theta > 1.$$

6.7.4 Compartmental Model

We now deal with dynamical models of drug absorption and in particular with the compartmental models as in Machera et al. [137]. In such models, mixing tanks in series, with linear transfer kinetics from one tank to the next and the same transit rate constant $k_t$, simulate the flow in the human small intestine. The differential equations of mass transfer in a series of m compartments are
$$\frac{dq_i(t)}{dt} = k_t\, q_{i-1}(t) - k_t\, q_i(t), \qquad i = 1, \ldots, m, \qquad (6.65)$$
where $q_i(t)$ is the amount of drug in the i-th compartment and $k_t = m/\langle T_{si}\rangle$, with $\langle T_{si}\rangle$ the mean small intestine transit time ($\approx 199$ min in humans). The rate of exit of the compound from the small intestine is
$$\frac{dq_m(t)}{dt} = -k_t\, q_m(t). \qquad (6.66)$$
A fit of this equation to experimental human small intestine transit time data estimates $m = 7$, so that the rate of drug absorption, in terms of the mass $q_a(t)$ absorbed from the small intestine, is
$$\frac{dq_a(t)}{dt} = k_a \sum_{i=1}^{7} q_i(t),$$
where $k_a$ is the first-order absorption rate constant.


This model is referred to as the compartmental transit model (CAT). The fraction of drug dose absorbed, $F_a$, is
$$F_a = \frac{q_a(\infty)}{q_0} = \frac{k_a}{q_0}\sum_{i=1}^{7}\int_0^{\infty} q_i(t)\, dt. \qquad (6.67)$$
From Eqs. (6.65) and (6.67), we obtain
$$F_a = 1 - \left(1 + \frac{k_a}{k_t}\right)^{-7}, \qquad (6.68)$$
where
$$k_t = \frac{7}{\langle T_{si}\rangle}.$$
The first-order absorption rate constant is expressed in terms of the permeability P and the radius R of the small intestine as follows [137]:
$$k_a = \frac{2P}{R}.$$
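As a small usage sketch, the fraction absorbed in Eq. (6.68) can be evaluated directly once P, R, and the mean transit time are known. The numerical values below are illustrative assumptions, not data from the book.

```python
T_si = 199.0                      # mean small intestine transit time (min)
k_t = 7.0 / T_si                  # transit rate constant

# Assumed effective permeability (converted to cm/min) and small-intestine radius (cm).
P, R = 0.002 * 60.0, 1.75
k_a = 2.0 * P / R                 # first-order absorption rate constant, k_a = 2P/R

F_a = 1.0 - (1.0 + k_a / k_t) ** (-7)   # Eq. (6.68)
print(f"k_a = {k_a:.4f} 1/min, k_t = {k_t:.4f} 1/min, F_a = {F_a:.3f}")
```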

Part III

Linear Algebra and Modelling

Chapter 7

Linear Algebra Background

7.1 Matrices

7.1.1 Introduction

Matrices are mathematical objects and data structures routinely used to represent large data sets by conveniently arranging them in a table with m rows and n columns. Hereafter, we denote a matrix by an uppercase letter, e.g., we write $A_{m\times n} = [a_{ij}]$, where the scalar $a_{ij}$ is the matrix element placed at the i-th row and j-th column of the table. The subscript $m \times n$ is the matrix size or matrix dimension of A. If $m = n$, A is called a square matrix and the notation $A_{n\times n}$ can be simplified to $A_n$; otherwise A is a rectangular matrix. Some examples of matrices are given below:
$$A_{2\times 3} = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{bmatrix} \ (2\times 3 \text{ rectangular matrix}), \quad A_{4\times 2} = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6\\ 7 & 8 \end{bmatrix} \ (4\times 2 \text{ rectangular matrix}), \quad A_{3\times 3} = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix} \ (3\times 3 \text{ square matrix}).$$
A column of A is also referred to as a column vector, while a row of A is a row vector. If not specified otherwise, the term vector means a column vector. Vectors are denoted using lowercase letters with an arrow over them to distinguish them from


scalars (standard real or complex numbers). For example, we can write the i-th column of matrix A as $\vec{a}_i$ and, consequently, we can partition A by its columns as $A = [\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n]$. The number of elements in a column or in a row vector is referred to as the vector size or vector dimension. Sometimes, the i-th row of A is denoted as $A_{i*}$, while the j-th column of A is denoted as $A_{*j}$. For example, if
$$A = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{bmatrix}, \quad \text{then} \quad A_{2*} = \begin{bmatrix} 4 & 5 & 6 \end{bmatrix} \quad \text{and} \quad A_{*2} = \begin{bmatrix} 2\\ 5 \end{bmatrix}.$$
If we delete a combination of rows and columns from A, we obtain a submatrix of A. In the previous example, by deleting the second row and the third column of A, we obtain the submatrix $B = \begin{bmatrix} 1 & 2 \end{bmatrix}$. The order of the elements in a matrix or in a vector is of crucial importance. Indeed, two matrices $A = [a_{ij}]$ and $B = [b_{ij}]$ are deemed to be equal if and only if the two conditions below are both satisfied:
1. A and B have the same size (the same number of rows and columns).
2. $a_{ij} = b_{ij}$ for every row index i and column index j.

Example Consider the cases below:
1. $A = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{bmatrix}$ are not equal because they have different sizes, although they have the same elements.
2. $A = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 0 \end{bmatrix}$ are not equal although they have the same size, because $a_{32} \neq b_{32}$.
3. $A = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & 1\\ 4 & 3\\ 6 & 5 \end{bmatrix}$ are not equal although they have the same size and contain the same numbers, because the order of the elements in the two matrices is different.

Matrices are vastly used in computational science and mathematical modelling since they allow for a compact and very efficient data representation. In a typical microarray experiment in molecular biology, a large $n \times n$ gene co-expression matrix $A = [a_{ij}]$ is used to represent the co-expression relationship between two genes i and j, which describes the tendency of a pair of genes to co-activate across an entire group of samples. Fold-change measurements of gene expression data sets between a reference and a treated sample, e.g., due to drug exposure or to genetic variation, are crucial information in the diagnosis and prognosis of various diseases. In a human whole-ventricle electrophysiology simulation by the bidomain equation, the quantities $a_{ij}$ represent the difference of potential between two cardiac cells at a given instant of time. Owing to the large number of cells in the cardiac tissue, A often contains several tens of millions of rows and columns; however, many entries are


zeros since the action potential of an electric stimulus propagates only locally, to a few neighbouring cells.

7.1.2 Special Matrices

We describe below some special matrices arising frequently in computational science and in engineering practice.
1. Matrices with one row ($m = 1$) are called row matrices. Analogously, matrices with one column ($n = 1$) are called column matrices. If $m = n = 1$, the matrix reduces to a scalar (a real or a complex number). Examples:
$$A_{1\times 4} = \begin{bmatrix} 1 & 2 & 3 & 4 \end{bmatrix} \ \text{(row matrix)}, \quad A_{4\times 1} = \begin{bmatrix} 1\\ 2\\ 3\\ 4 \end{bmatrix} \ \text{(column matrix)}, \quad A_{1\times 1} = [\,1\,] \ \text{(scalar)}.$$
2. An $m \times n$ matrix $A = [a_{ij}]$ with all elements equal to 0 ($a_{ij} = 0$ for all indices $i \in [1, m]$ and $j \in [1, n]$) is called the zero matrix and is denoted by the letter O. It has the form
$$O = \begin{bmatrix} 0 & \cdots & 0\\ \vdots & \ddots & \vdots\\ 0 & \cdots & 0 \end{bmatrix}.$$
3. An upper triangular matrix is a square matrix $U = [u_{ij}]$ with all the elements below the main diagonal equal to 0, that is, $u_{ij} = 0$ for all $i > j$. Schematically, it has the form
$$U = \begin{bmatrix} u_{11} & \cdots & \cdots & u_{1n}\\ 0 & \ddots & & \vdots\\ \vdots & \ddots & \ddots & \vdots\\ 0 & \cdots & 0 & u_{nn} \end{bmatrix}.$$
4. A lower triangular matrix is a square matrix $L = [\ell_{ij}]$ with all the elements above the main diagonal equal to 0, that is, $\ell_{ij} = 0$ for all $j > i$. Schematically,


it has the form
$$L = \begin{bmatrix} \ell_{11} & 0 & \cdots & 0\\ \vdots & \ddots & \ddots & \vdots\\ \vdots & & \ddots & 0\\ \ell_{n1} & \cdots & \cdots & \ell_{nn} \end{bmatrix}.$$
5. A triangular matrix is either upper triangular or lower triangular.
6. A diagonal matrix $D = [d_{ij}]$ has 0's everywhere except, possibly, on the main diagonal, that is, $d_{ij} = 0$ for all $i \neq j$. Schematically, it has the form
$$D = \begin{bmatrix} d_{11} & 0 & \cdots & 0\\ 0 & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & d_{nn} \end{bmatrix}.$$
7. The identity matrix is a diagonal matrix with all 1's on the main diagonal:
$$I = \begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & 1 \end{bmatrix}.$$
8. A banded matrix is a square $n \times n$ matrix $A = [a_{ij}]$ with all 0's outside a diagonally bordered band with lower bandwidth $k_1$ and upper bandwidth $k_2$, that is,
$$a_{ij} = 0 \quad \text{if } j < i - k_1 \text{ or } j > i + k_2, \qquad \text{with } k_1, k_2 \ge 0.$$
The bandwidth of matrix A is the integer $k = \max\{k_1, k_2\}$. Thus, $a_{ij} = 0$ if $|i - j| > k$.
Examples A banded matrix with $k_1 = k_2 = 0$ is a diagonal matrix, while a banded matrix with $k_1 = k_2 = 1$ is a tridiagonal matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & 0 & \cdots & 0\\ a_{21} & a_{22} & a_{23} & \ddots & \vdots\\ 0 & a_{32} & \ddots & \ddots & 0\\ \vdots & \ddots & \ddots & a_{n-1,n-1} & a_{n-1,n}\\ 0 & \cdots & 0 & a_{n,n-1} & a_{nn} \end{bmatrix}.$$


For $k_1 = k_2 = 2$, one obtains a pentadiagonal matrix. On the other hand, for $k_1 = 0$ and $k_2 = n-1$, one gets an upper triangular matrix, and for $k_1 = n-1$ and $k_2 = 0$ a lower triangular matrix.

7.1.3 Operations on Matrices

Some operations can be performed on matrices, and these are defined in the next sections.

7.1.3.1 Sum of Matrices

Two matrices can be summed provided that they have the same size. When $A_{m\times n} = [a_{ij}]$ and $B_{m\times n} = [b_{ij}]$, their sum $A + B$ is defined as the matrix $C_{m\times n} = [c_{ij}]$ of elements $c_{ij} = a_{ij} + b_{ij}$ for all indices $i \in [1, m]$ and $j \in [1, n]$.
Examples
$$\begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{bmatrix} + \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6\\ 8 & 10 & 12 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} + \begin{bmatrix} 2 & 1\\ 4 & 3\\ 6 & 5 \end{bmatrix} = \begin{bmatrix} 3 & 3\\ 7 & 7\\ 11 & 11 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 2 & 3 & 4 \end{bmatrix} + \begin{bmatrix} 5\\ 6\\ 7\\ 8 \end{bmatrix} \rightarrow \text{not defined}.$$
Matrix elements may contain information of different types, such as gene differential expressions of different gene pairs in a microarray experiment, or differences of potential between pairs of cardiac cells in a heart simulation. The definition of the matrix sum ensures that we do not mix up things: in a microarray experiment, we sum gene differential expressions of the same pairs of genes; in the cardiac problem, we add differences of potential of the same pair of cardiac cells, perhaps at different instants of time.
Properties The following properties of the matrix sum are easy to demonstrate. They generalize similar properties holding for real and complex numbers.


Closure property: if A and B are two $m \times n$ matrices, then $A + B$ is again an $m \times n$ matrix.
Associative property: if A, B, and C are three $m \times n$ matrices, then $(A + B) + C = A + (B + C)$.
Commutative property: if A and B are two $m \times n$ matrices, then $A + B = B + A$.
Additive identity: the $m \times n$ zero matrix O, consisting of all 0's, is such that $A + O = A$ for any given $m \times n$ matrix A.

7.1.3.2 Scalar Multiplication

Given a matrix A, the operation $A + A$ doubles the value of all the elements of A, as the following example shows:
$$\begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} + \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 4\\ 6 & 8\\ 10 & 12 \end{bmatrix}.$$
The same operation can be written in matrix computation as
$$2 \cdot \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 4\\ 6 & 8\\ 10 & 12 \end{bmatrix}.$$
The multiplication operation on the left-hand side of the previous expression is called scalar multiplication on matrices and is defined as follows. Given an $m \times n$ matrix $A = [a_{ij}]$ and a scalar $\alpha$, then
$$\alpha \cdot \begin{bmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{m1} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} (\alpha a_{11}) & \cdots & (\alpha a_{1n})\\ \vdots & \ddots & \vdots\\ (\alpha a_{m1}) & \cdots & (\alpha a_{mn}) \end{bmatrix}.$$
When the meaning is clear from the context, we can omit the symbol "$\cdot$" and write simply $\alpha A$. In the special case $\alpha = -1$, we obtain $(-1)A$, which is called the additive inverse of A and is denoted for simplicity as $-A$ instead of $(-1)A$. The elements of $-A$ are the opposites of the elements of A.




$$-\begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} = \begin{bmatrix} -1 & -2\\ -3 & -4\\ -5 & -6 \end{bmatrix}.$$
If A is a row or a column vector, as expected the result is
$$3 \begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix} = \begin{bmatrix} 3\\ 6\\ 9 \end{bmatrix}, \qquad 4\begin{bmatrix} 2 & 3 & 4 \end{bmatrix} = \begin{bmatrix} 8 & 12 & 16 \end{bmatrix}.$$
Properties It is straightforward to prove the following properties of the scalar multiplication of matrices. We denote by A and B two $m \times n$ matrices and by $\alpha$ and $\beta$ two scalars. Then:
1. Closure property: $\alpha A$ is again an $m \times n$ matrix.
2. Associative property: $(\alpha\beta)A = \alpha(\beta A)$.
3. Left distributive property: $\alpha(A + B) = \alpha A + \alpha B$.
4. Right distributive property: $(\alpha + \beta)A = \alpha A + \beta A$.
5. Identity property: $1A = A$. The number 1 is an identity element under scalar multiplication.
6. Additive inverse: the $m \times n$ matrix $-A$ has the property that $A + (-A) = O$.

7.1.3.3 Matrix Subtraction

The difference between two matrices A and B, denoted in symbols as $A - B$, is defined as
$$A - B = A + (-B),$$
where $-B$ is the additive inverse of B introduced in the previous section. The value of the $(i,j)$-th element of $A - B$ is $a_{ij} - b_{ij}$. We are subtracting elements occupying the same $(i,j)$-th position in A and B, and hence we are not mixing up quantities of different types. Similarly to the sum, the subtraction operation is defined only between matrices of the same size.
Example
$$\begin{bmatrix} 2 & 3\\ 4 & 5\\ 6 & 7 \end{bmatrix} - \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix} = \begin{bmatrix} 1 & 1\\ 1 & 1\\ 1 & 1 \end{bmatrix}.$$


7.1.3.4 Product of Matrices

In general, two matrices A and B cannot be multiplied, except in one case: when A has exactly as many columns as B has rows. Such matrices are called conformable. If A and B are not conformable, the product AB is undefined. For example, if A is $2 \times 3$ and B is $3 \times 4$, then AB exists. A simple mnemonic can be used to determine if AB exists, by comparing the respective sizes of A and B. Suppose that A is $m \times p$ and B is $q \times n$; then look at the product
$$(m \times p) \times (q \times n).$$
1. If the inner integers are different, $p \neq q$, then AB is undefined.
2. On the other hand, if $p = q$, then the matrix AB exists and, additionally, the size of the product matrix is $m \times n$ (the outer integers).
For conformable matrices $A_{m\times p} = [a_{ij}]$ and $B_{p\times n} = [b_{ij}]$, the $(i,j)$-th entry of the product AB is equal to the sum of the products of the elements in the i-th row of A times those in the j-th column of B:
$$[AB]_{ij} = A_{i*} B_{*j} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj} = \sum_{k=1}^{p} a_{ik} b_{kj}.$$
In linear algebra, given two vectors $\vec{u} = [u_i]$ and $\vec{v} = [v_i]$ of the same dimension n, the inner or dot product $\langle \vec{u}, \vec{v}\rangle$ between $\vec{u}$ and $\vec{v}$ is defined as
$$\langle \vec{u}, \vec{v}\rangle = \sum_{i=1}^{n} u_i v_i.$$
Then, the $(i,j)$-th entry of the product AB of two conformable matrices A and B is equal to the inner product of the i-th row of A with the j-th column of B. For example, if $A_{3\times 3}$ is $3 \times 3$ and $B_{3\times 4}$ is $3 \times 4$, $[AB]_{23}$ is obtained by forming the inner product of the second row of A with the third column of B,
$$\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} & b_{13} & b_{14}\\ b_{21} & b_{22} & b_{23} & b_{24}\\ b_{31} & b_{32} & b_{33} & b_{34} \end{bmatrix},$$
which gives
$$[AB]_{23} = A_{2*} B_{*3} = a_{21}b_{13} + a_{22}b_{23} + a_{23}b_{33} = \sum_{k=1}^{3} a_{2k} b_{k3}.$$


If only a few elements of AB are needed, it is a wasted effort to compute the entire product. Indeed, from the definition we have that:
1. $[AB]_{i*} = A_{i*} B$, i.e., the i-th row of AB equals the i-th row of A times B.
2. $[AB]_{*j} = A B_{*j}$, i.e., the j-th column of AB equals A times the j-th column of B.
If $A = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} -1 & -2\\ -3 & -4 \end{bmatrix}$, then the first row of AB is
$$[AB]_{1*} = A_{1*} B = \begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix} -1 & -2\\ -3 & -4 \end{bmatrix} = \begin{bmatrix} -7 & -10 \end{bmatrix},$$
and the second column of AB is
$$[AB]_{*2} = A B_{*2} = \begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}\begin{bmatrix} -2\\ -4 \end{bmatrix} = \begin{bmatrix} -10\\ -22\\ -34 \end{bmatrix}.$$
The distributive and associative properties do hold for matrix multiplication.
Properties For conformable matrices, the following properties are true:
Left-hand distributive law: $A(B + C) = AB + AC$.
Right-hand distributive law: $(D + E)F = DF + EF$.
Associative law: $A(BC) = (AB)C$.
On the other hand, there are important differences between scalar and matrix multiplication, and these are summarized below:
1. Matrix multiplication is not commutative: when the product AB exists, the product BA may not be defined, e.g., if B is $3 \times 4$ and A is $2 \times 3$; even when both AB and BA exist and have the same size, they may not be equal:
$$A = \begin{bmatrix} 1 & -1\\ 1 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 1\\ 1 & 1 \end{bmatrix} \;\Rightarrow\; AB = \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}, \quad BA = \begin{bmatrix} 2 & -2\\ 2 & -2 \end{bmatrix}.$$
2. The cancellation law for scalars,
$$\alpha\beta = \alpha\gamma \ \text{and} \ \alpha \neq 0 \;\Rightarrow\; \beta = \gamma,$$
does not apply to matrices, as the following example with matrices $A = \begin{bmatrix} 1 & 1\\ 1 & 1 \end{bmatrix}$, $B = \begin{bmatrix} 2 & 2\\ 2 & 2 \end{bmatrix}$, $C = \begin{bmatrix} 3 & 1\\ 1 & 3 \end{bmatrix}$ shows:
$$AB = \begin{bmatrix} 4 & 4\\ 4 & 4 \end{bmatrix} = AC, \quad \text{but } B \neq C.$$


3. It is possible to have $AB = O$ with both $A \neq O$ and $B \neq O$, for example, for
$$A = \begin{bmatrix} 1 & -1\\ 1 & -1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & 1\\ 1 & 1 \end{bmatrix}.$$
Some other useful properties of matrix multiplication are summarized below:
1. (Identity matrix) The $n \times n$ matrix with 1's on the main diagonal and 0's elsewhere,
$$I = \begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & 0\\ 0 & \cdots & 0 & 1 \end{bmatrix},$$
is called the identity matrix of order n. For every $m \times n$ matrix A,
$$A I_n = A \quad \text{and} \quad I_m A = A.$$
The subscript on $I_n$ can be neglected whenever the size is obvious from the context.
2. (Matrix powers) The p-th power $A^p$ of a square $n \times n$ matrix A, with p a non-negative integer, is defined as:
(a) $p = 0$: $A^0 = I_n$, the identity matrix of size $n \times n$.
(b) $p > 0$: $A^p = \underbrace{A A \cdots A}_{p \text{ times}}$.
Because of the associative law, it makes no difference how matrices are grouped for powering. For example, $A^3 A^2$ is the same as $A^2 A^3$ since
$$A^5 = AAAAA = (AAA)(AA) = (AA)(AAA).$$
Additionally, the usual laws of exponents hold for matrices: for non-negative integers p and q,
$$A^p A^q = A^{p+q} \quad \text{and} \quad (A^p)^q = A^{pq}.$$
Powers of non-square matrices are never defined due to the lack of conformability. Some of the facts above are checked numerically in the short sketch below.
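The sketch below uses NumPy (assumed to be available) to reproduce the non-commutativity, zero-product, and failed-cancellation examples of this section.

```python
import numpy as np

A = np.array([[1, -1],
              [1, -1]])
B = np.array([[1, 1],
              [1, 1]])

print(A @ B)          # [[0 0] [0 0]]  -> AB is the zero matrix although A, B are not
print(B @ A)          # [[2 -2] [2 -2]] -> BA differs from AB: no commutativity

# Cancellation law fails: AB = AC does not imply B = C.
A2 = np.ones((2, 2))
B2 = 2 * np.ones((2, 2))
C2 = np.array([[3.0, 1.0],
               [1.0, 3.0]])
print(np.allclose(A2 @ B2, A2 @ C2), np.allclose(B2, C2))   # True False
```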

7.1.3.5 Product of a Matrix Times a Vector

When, in the product AB, matrix B is a conformable column matrix of dimension equal to the number of columns of A, the operation AB can be denoted as $A \cdot \vec{x}$, where $\vec{x}$ is the vector forming B. Similarly to the case of scalar multiplication, the "$\cdot$" symbol is often omitted when the operation is clear from the context. The result of $A\vec{x}$ can be obtained using the same rule defined in the previous section. It is easy to observe that $A\vec{x}$ is the linear combination of the columns of A times the elements of vector $\vec{x}$, taken in their order. Therefore, if $A = [\vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n]$ and $\vec{x} = [x_1, x_2, \ldots, x_n]^T$, we have
$$A\vec{x} = x_1 \vec{a}_1 + x_2 \vec{a}_2 + \cdots + x_n \vec{a}_n.$$
For $n = 3$:
$$A\vec{x} = \begin{bmatrix} \langle \vec{a}_1, \vec{x}\rangle\\ \langle \vec{a}_2, \vec{x}\rangle\\ \langle \vec{a}_3, \vec{x}\rangle \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + a_{13}x_3\\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3\\ a_{31}x_1 + a_{32}x_2 + a_{33}x_3 \end{bmatrix} = \begin{bmatrix} a_{11}x_1\\ a_{21}x_1\\ a_{31}x_1 \end{bmatrix} + \begin{bmatrix} a_{12}x_2\\ a_{22}x_2\\ a_{32}x_2 \end{bmatrix} + \begin{bmatrix} a_{13}x_3\\ a_{23}x_3\\ a_{33}x_3 \end{bmatrix} = x_1\vec{a}_1 + x_2\vec{a}_2 + x_3\vec{a}_3.$$
This observation is useful because it introduces a very compact notation for the linear combination of a large number of vectors with as many scalars.
Example
$$\begin{bmatrix} 1 & 4 & 7\\ 2 & 5 & 8\\ 3 & 6 & 9 \end{bmatrix}\begin{bmatrix} 2\\ 3\\ -1 \end{bmatrix} = 2\begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix} + 3\begin{bmatrix} 4\\ 5\\ 6 \end{bmatrix} - \begin{bmatrix} 7\\ 8\\ 9 \end{bmatrix} = \begin{bmatrix} 7\\ 11\\ 15 \end{bmatrix}.$$
The equation above shows that the result of $A\vec{x}$ can be computed in two different ways:
1. Either as a linear combination of the columns of A with the elements of $\vec{x}$ as scalars,
2. Or by performing inner products of the rows of A with vector $\vec{x}$.
In the first case, we look at matrix A column-wise (column picture of A), while in the second case we look at A row-wise (row picture of A). The result is the same in both representations. The choice of computational procedure for the operation $A\vec{x}$ is only a matter of personal preference.


Properties From the properties of the product AB, it follows that the operation $A\vec{x}$ satisfies the left-hand distributive, right-hand distributive, and associative laws.
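As a quick numerical illustration, the sketch below computes $A\vec{x}$ both row-wise (inner products) and column-wise (linear combination of the columns) with NumPy and checks that the two pictures agree.

```python
import numpy as np

A = np.array([[1, 4, 7],
              [2, 5, 8],
              [3, 6, 9]])
x = np.array([2, 3, -1])

row_picture = np.array([A[i, :] @ x for i in range(A.shape[0])])   # inner products of rows with x
col_picture = sum(x[j] * A[:, j] for j in range(A.shape[1]))       # linear combination of columns

print(row_picture, col_picture, A @ x)    # all equal: [ 7 11 15 ]
```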

7.1.4 Transposition and Symmetries

The transpose of an $m \times n$ matrix $A = [a_{ij}]$ is obtained by interchanging its rows and columns. More precisely, the transpose of A, denoted by the symbol $A^T$, is defined as the $n \times m$ matrix of elements $[A^T]_{ij} = a_{ji}$. For example,
$$\begin{bmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 3 & 5\\ 2 & 4 & 6 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 2 & 3 & 4 \end{bmatrix}^T = \begin{bmatrix} 1\\ 2\\ 3\\ 4 \end{bmatrix}.$$
When A is a complex matrix, the conjugate transpose of A, also called the adjoint or Hermitian of A, is defined as the $n \times m$ matrix of elements $[A^H]_{ij} = \overline{a_{ji}}$, where $\overline{a_{ji}}$ denotes the complex conjugate of the complex number $a_{ji}$. For example,
$$\begin{bmatrix} 2+4i & 4i & 5\\ 3i & 1+2i & i \end{bmatrix}^H = \begin{bmatrix} 2-4i & -3i\\ -4i & 1-2i\\ 5 & -i \end{bmatrix}.$$
By applying the definitions, it is easy to show the following properties of the transpose and of the Hermitian of a matrix:
1. For all matrices, $(A^T)^T = A$ and $(A^H)^H = A$, and $A^H = A^T$ whenever A contains only real values.
2. If A and B are both $m \times n$, then
$$(A + B)^T = A^T + B^T \quad \text{and} \quad (A + B)^H = A^H + B^H.$$
3. If A is $m \times n$ and c is a real or complex number, then
$$(cA)^T = c\, A^T \quad \text{and} \quad (cA)^H = \bar{c}\, A^H.$$
4. For conformable matrices A and B,
$$(AB)^T = B^T A^T \quad \text{and} \quad (AB)^H = B^H A^H.$$



For some matrices, interchanging rows and columns produces no effect. These matrices are called symmetric. More precisely, the following definitions are given.
Definition 7.1.1 Let $A = [a_{ij}]$ be a square matrix. A is called
– Symmetric whenever $A = A^T$ (or $a_{ij} = a_{ji}$)
– Skew-symmetric whenever $A = -A^T$ (or $a_{ij} = -a_{ji}$)
– Hermitian whenever $A = A^H$ (or $a_{ij} = \overline{a_{ji}}$)
– Skew-Hermitian whenever $A = -A^H$ (or $a_{ij} = -\overline{a_{ji}}$)
Example $A = \begin{bmatrix} 1 & 2 & 3\\ 2 & 4 & 5\\ 3 & 5 & 6 \end{bmatrix}$ is symmetric, $A = \begin{bmatrix} 1 & 1+2i & 2-3i\\ 1-2i & 2 & 4-5i\\ 2+3i & 4+5i & 3 \end{bmatrix}$ is Hermitian but not symmetric, and $B = \begin{bmatrix} 1 & 1+2i & 2-3i\\ 1+2i & 2 & 4-5i\\ 2-3i & 4-5i & 3 \end{bmatrix}$ is symmetric but not Hermitian.
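These symmetry definitions translate directly into array checks; a small sketch (NumPy assumed), using the Hermitian example above:

```python
import numpy as np

A = np.array([[1, 1 + 2j, 2 - 3j],
              [1 - 2j, 2, 4 - 5j],
              [2 + 3j, 4 + 5j, 3]])

print(np.allclose(A, A.T))          # False: A is not symmetric
print(np.allclose(A, A.conj().T))   # True: A equals its conjugate transpose, i.e., it is Hermitian
```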

7.2 Linear Systems

7.2.1 Introduction

A fundamental mathematical equation involving matrices and vectors, called a linear system, is written in the form
$$A\vec{x} = \vec{b}, \qquad (7.1)$$
where $A = [a_{ij}]$ and $\vec{b} = [b_i]$ are a given $m \times n$ matrix and m-dimensional column vector, respectively, and $\vec{x} = [x_i]$ is an n-dimensional unknown column vector. In (7.1), matrix A is called the coefficient matrix of the linear system, $\vec{b}$ is the right-hand side, and $\vec{x}$ is the solution. While A and $\vec{b}$ are assigned, vector $\vec{x}$ is a quantity to be determined. By recalling the definition of the matrix–vector product, we can expand (7.1) as the following system of m separate equations in the n unknowns $x_1, x_2, \ldots, x_n$:
$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,\\ \quad\vdots\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m. \end{cases} \qquad (7.2)$$
System (7.2) is linear in the sense that the variables appear as monomials of degree 1. Solving (7.2), or its equivalent matrix form (7.1), means finding all possible values of the $x_i$'s satisfying the m equations simultaneously. Upon



replacing the scalars $x_i$ into the system above and simplifying the result, the same numerical value is obtained at the left and the right sides of the "=" symbol in each equation. For example, $x = 1$, $y = 2$, $z = 3$ is a solution of the linear system
$$\begin{cases} x + 2y + 3z = 14,\\ 2x - y + z = 3,\\ 3x + 2y - 2z = 1, \end{cases}$$
since
$$\begin{cases} (1) + 2(2) + 3(3) = 14 \;\Rightarrow\; 14 = 14,\\ 2(1) - (2) + (3) = 3 \;\Rightarrow\; 3 = 3,\\ 3(1) + 2(2) - 2(3) = 1 \;\Rightarrow\; 1 = 1. \end{cases}$$
In matrix form, $\vec{x} = \begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix}$ solves the system since
$$\begin{bmatrix} 1 & 2 & 3\\ 2 & -1 & 1\\ 3 & 2 & -2 \end{bmatrix}\begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix} = \begin{bmatrix} 14\\ 3\\ 1 \end{bmatrix}.$$
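This verification is a one-line check in NumPy (assumed available):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, -1, 1],
              [3, 2, -2]])
x = np.array([1, 2, 3])

print(A @ x)   # [14  3  1] -> matches the right-hand side, so x solves the system
```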

β 1 0 1 0 2

γ 1 1 0 1 3

δ 1 0 . 1 0 2

From the above table, we can determine the probability that two molecules can have a physical contact by dividing each value by the number of elements in the respective column: ⎡ .

⎢ ⎢ ⎣

0 1 3 1 3 1 3

1 2

1 3 1 3

1 2



0 ⎥ ⎥ . 0 21 ⎦ 0 31 0

0 1 2

7.2 Linear Systems

155

We want to determine a non-negative number, called ranking .ri , that reveals the importance of the i-th protein in our problem. The ranking depends on both the number of molecules interacting with the i-th protein and their importance or ranking. It is reasonable to assume that .rα = 12 rβ + 13 rγ + 21 rδ since .α represents .1/2 of the interactions of .β and of .δ and .1/3 of those of .γ . Analogously, we can assume that .rβ = 13 rα + 13 rγ , .rγ = 13 rα + 12 rβ + 12 rδ , and .rδ = 31 rα + 13 rγ . Altogether, we have obtained a system of 4 equations in the 4 unknown rankings .rα , rβ , rγ , and .rδ , ⎧ 1 1 ⎪ ⎪ rα = rβ + rγ + ⎪ ⎪ 3 2 ⎪ ⎪ ⎪ 1 1 ⎪ ⎪ ⎨ rβ = r α + r γ , 3 3 . 1 1 ⎪ ⎪ rγ = rα + rβ + ⎪ ⎪ ⎪ 2 3 ⎪ ⎪ ⎪ ⎪ ⎩ rδ = 1 r α + 1 r γ , 3 3

1 rδ , 2 1 rδ , 2

which can be further simplified as ⎧ 1 ⎪ ⎪ ⎪ rβ ⎪ 2 ⎪ ⎪ ⎪ ⎪ 1 ⎪ ⎪ ⎨ rα 3 . 1 ⎪ ⎪ ⎪ rα ⎪ ⎪ ⎪ ⎪3 ⎪ ⎪ ⎪ ⎩ 1 rα 3

1 + rγ 3 1 + rγ 3 1 + rβ 2 1 + rγ 3

1 + rδ − rα = 0, 2 − rβ = 0, (7.3)

1 + rδ − rγ = 0, 2 − rδ = 0.

 where System (7.3) can be written in matrix form as .(R − I )r = 0, ⎡

⎤ 1 1 1 2 3 2⎥ ⎡ ⎥ 10 1 ⎥ ⎥ 0 0 ⎢0 1 3 ⎥ , I =⎢ ⎣0 0 1 1⎥ ⎥ 0 ⎥ 2 2⎥ 00 1 ⎦ 0 0 3 3

⎢0 ⎢ ⎢1 ⎢ ⎢3 .R = ⎢ ⎢1 ⎢ ⎢3 ⎣1

0 0 1 0

⎤ 0 0⎥ ⎥, 0⎦ 1

⎤ rα ⎢ rβ ⎥ ⎥ and .r = ⎢ ⎣ rγ ⎦ denotes the global ranking vector for the four proteins .(α, β, γ , δ). rδ A similar idea is used in web search engine models, to sort relevant webpages matching a user-defined query. When we digit a list of keywords, the search engine ⎡

156

7 Linear Algebra Background

finds a large number of web pages containing those keywords and returns them according to their ranking. The linear system is very large since a modern web search engine indexes hundreds of billions of webpages and the coefficient matrix will contain many zeros because each page is typically hyperlinked only to a few tens of other pages. Those zeros can be ignored in the computation and do not need to be stored in the computer memory. However, in other applications, it can be fully dense.

7.2.2 Special Linear Systems Some linear systems are straightforward to solve because they have a simple structure. Below are some examples that can be frequently encountered in applications.  where the coefficient matrix 1. Upper triangular systems. They have form .U x = b, U is square and upper triangular. The nonzero values of matrix U can be possibly located only in its upper triangle: ⎡

u11 ⎢ 0 ⎢ .U = ⎢ ⎣ 0 0

⎤ u12 · · · u1n u22 · · · u2n ⎥ ⎥ . . . . . .. ⎥ . . . ⎦ 0 0 unn

The solution is easy to determine by the backward substitution method that computes in sequence the values of .xn , .xn−1 , ..., .x2 , .x1 as follows: • First compute .xn = bn /unn . • Then, recursively, compute xi =

.

 1  bi − ui,i+1 xi+1 − ui,i+2 xi+2 − · · · − ui,n xn , uii

for .i = n−1, n−2, . . . , 2, 1. The solution is unique and can be found without breakdowns or failures of the algorithm if the diagonal elements of U are different from 0, at the cost of about .n2 arithmetic operations.  where the coefficient matrix 2. Lower triangular systems. They have form .L x = b, L is square and upper triangular. The nonzero values of matrix L can be possibly located only in its lower triangle: ⎡

11 0 · · · ⎢ ⎢ 21 22 . . . .L = ⎢ ⎢ . . . ⎣ .. .. . . n1 n2 · · ·

0



⎥ 0 ⎥ ⎥. ⎥ 0 ⎦ nn

7.2 Linear Systems

157

The solution can be determined by the forward substitution method that computes in sequence the values of .x1 , .x2 , ... , .xn−1 , .xn as follows: • First compute .x1 = b1 /11 . • Then, recursively, compute xi =

.

 1  bi − i,1 x1 − i,2 x2 − · · · − i,i−1 xi−1 , ii

for .i = 2, 3, . . . , n − 1, n. The solution is unique and can be found without breakdowns or failures of the algorithm if the diagonal elements of L are different from 0, at the cost of about .n2 arithmetic operations. 3. Triangular systems are either upper or lower triangular. Hence, they can be solved by either the backward or the forward substitution method, respectively. Triangular systems must be square by definition. 4. Diagonal systems .D x = b are a special case of triangular systems, where the coefficient matrix D is diagonal, that is, the nonzero elements of D are possibly located only on the main diagonal: ⎡

⎤ d11 0 · · · 0 ⎢ . ⎥ ⎢ 0 d22 . . . .. ⎥ ⎥. .D = ⎢ ⎢ . . ⎥ ⎣ .. . . . . . 0 ⎦ 0 · · · 0 dnn It follows from the definition that diagonal systems are square. The solution is unique provided that all the diagonal elements .dii are different from 0 and can be computed as .xi = bi /dii for .i = 1, 2, . . . , n at the cost of performing n divisions.  As a special subcase, if the coefficient matrix is the identity matrix I , then .x = b.

7.2.3 General Linear Systems  possibly rectangular with .m = n, is not x = b, A general .m × n linear system .A so straightforward to solve as in the special cases described in the previous section. When all the unknowns are coupled by an equation, the simple solution formulas derived for triangular systems cannot be applied. For a general linear system, three important questions need to be answered. 1. Does the system have at least one solution? 2. If there is at least one solution, how many solutions do exist? 3. Is it possible to represent all solutions of .A x = b by a general formula?

158

7 Linear Algebra Background

The mathematical tool to answer precisely these questions is the Gaussian elimination method, shortly abbreviated as GEM, that will be introduced in the next section.

7.2.4 The Gaussian Elimination Method x = b into a new system that has exactly the The basic idea of GEM is to transform .A same set of solutions, but it is much simpler to solve. The final system is precisely upper triangular, so that the solution can be determined by the backward substitution method presented before. It is easier to describe the method starting with the case of a square .n × n coefficient matrix and then generalizing it to a rectangular .m × n, .m = n, matrix. We denote the transformed system as .U x  = c, where U is upper triangular and .c is a suitably modified right-hand side vector: ⎡

u11 u12 ⎢ u22 ⎢ .U x  = c ⇐⇒ ⎢ ⎣

... ... .. .

⎤⎡ ⎤ ⎡ ⎤ c1 x1 u1n ⎢ x2 ⎥ ⎢ c2 ⎥ u2n ⎥ ⎥⎢ ⎥ ⎢ ⎥ .. ⎥ ⎢ .. ⎥ = ⎢ .. ⎥ . . ⎦⎣ . ⎦ ⎣ . ⎦ unn

xn

(7.4)

cn

System (7.4) can be obtained from .A x = b by applying to it three basic types of row operations that preserve the solution .x, namely:  Type 1. Exchanging any pair of equations of .A x = b. Clearly, the order of the equations does not matter since they are solved simultaneously by the .x1 , x2 , . . . , xn . Type 2. Multiplying any equation by a nonzero scalar. If the i-th equation ai1 x1 + ai2 x2 + · · · + ain xn = bi

.

is multiplied by a nonzero scalar .α, we get α (ai1 x1 + ai2 x2 + · · · + ain xn ) = αbi .

.

Therefore, .x1 , x2 , . . . , xn solve also the new i-th equation of the system. Type 3. Replacing one equation of .A x = b by its sum with a multiple of another  x = b. equation of .A Since variables .x1 , x2 , . . . , xn solve all the m equations of the linear system, in particular they verify the i-th and j -th equations. Hence, we can write

7.2 Linear Systems

159

.

ai1 x1 + ai2 x2 + · · · + ain xn = bi , aj 1 x1 + aj 2 x2 + · · · + aj n xn = bj .

By summing up .α times the j -th equation to the i-th equation, we get   ai1 x1 +ai2 x2 + · · · + ain xn +α aj 1 x1 + aj 2 x2 + · · · + aj n xn = bi +αbj ,

.

showing that the solution does not change. In the next section, we describe the complete GEM procedure step by step starting with a .3 × 3 example.

7.2.4.1

A 3 × 3 Example

Consider, for example, the linear system ⎧ ⎨ x + y + z = 2, . x + 2y + 3z = 3, ⎩ x + 3y + 4z = 1,

(7.5)

 writes as which, in standard matrix form .A x = b, ⎡

⎤⎡ ⎤ ⎡ ⎤ 111 x 2 .⎣1 2 3⎦⎣y ⎦ = ⎣3⎦, 134 z 1 ⎡

⎤ ⎡ ⎤ ⎡ ⎤ 111 x 2 where clearly .A = ⎣ 1 2 3 ⎦, .x = ⎣ y ⎦, and .b = ⎣ 3 ⎦. The Gaussian elimination 134 z 1 method proceeds by annihilating the subdiagonal values of the coefficient matrix using simple row operations of Types 1, 2, and 3, one column at a time starting from column 1. After .n − 1 = 2 steps, the final upper triangular system .U x = c is obtained and can be easily solved to compute .x. The sequence of steps is described below: Step 1 In column 1, entries .a21 = 1 and .a31 = 1 below the diagonal value .a11 = 1 are zeroed out by subtracting the first equation from the second and from the third equation of the system, respectively. As we know, this operation does not alter the solution .x. We get .

(x + 2y + 3z) − (x + y + z) = y + 2z,

so that variable x is eliminated from the second equation. The same operation must be applied to the right-hand sides; otherwise the solution will change:

160

7 Linear Algebra Background .

(3) − (2) = 1.

By repeating the same procedure also to the third equation, we get .

(x + 3y + 4z) − (x + y + z) = 2y + 3z (left-hand side), (1) − (2) = −1 (right-hand side).

We refer to the first equation as the pivotal equation at Step 1, since it is the reference equation that we use to eliminate variable x from the subsequent equations. The coefficient of x in the pivotal equation is the first pivot. We have achieved our goal. At the end of Step 1, the initial system (7.5) is transformed into ⎧ ⎨ x + y + z = 2, . y + 2z = 1, ⎩ 2y + 3z = −1,

(7.6)

or equivalently, in matrix form, ⎡

⎤⎡ ⎤ ⎡ ⎤ 111 x 2 .⎣0 1 2⎦⎣y ⎦ = ⎣ 1 ⎦. 023 z −1

(7.7)

Step 2 System (7.6) is not upper triangular yet. We need to annihilate the value 2 below the diagonal entry equal to 1 in the second column of (7.7). This can be obtained by subtracting in (7.6) two times the second equation (the pivotal equation at Step 2) from the third equation: .

(2y + 3z) − (2) (y + 2z) = −z (left-hand side), (−1) − (2) (1) = −3 (right-hand side).

The coefficient of the y variable in the pivotal equation is called the second pivot. The multiplier (2) is obtained by dividing the coefficient of the variable that we want to eliminate (y) in the current equation by the pivot value. At the end of Step 2, the initial system (7.5) has been converted into full upper triangular form: ⎡

⎤⎡ ⎤ ⎡ ⎤ 1 1 1 x 2 .⎣0 1 2 ⎦⎣y ⎦ = ⎣ 1 ⎦. 0 0 −1 z −3 Solution At this stage, the reduced system can be easily solved by the backward substitution method:

7.2 Linear Systems

161

z = (−3)/(−1) = 3, . y = 1 − 2z = 1 − 2 · 3 = −5, x = 2 − y − z = 2 − (−5) − 3 = 4.
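The same solution can be cross-checked with a library solver (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0],
              [1.0, 3.0, 4.0]])
b = np.array([2.0, 3.0, 1.0])

print(np.linalg.solve(A, b))   # [ 4. -5.  3.] -> x = 4, y = -5, z = 3
```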

7.2.4.2

The General n × n Case

It is straightforward to generalize the elimination procedure to systems of n linear equations in n unknowns (.x1 , x2 , . . . , xn ), of the form ⎧ a11 x1 + a12 x2 + · · · + a1n xn = b1 , ⎪ ⎪ ⎪ ⎨ a21 x1 + a22 x2 + · · · + a2n xn = b2 , . .. .. .. .. .. ⎪ ⎪ . . . . . ⎪ ⎩ an1 x1 + an2 x2 + · · · + ann xn = bn , or written equivalently in the standard matrix form .A x = b as ⎡

an1

a12 · · · a22 · · · .. . . . . an2 · · ·

a12 a22 .. .

··· ··· .. .

a11 ⎢ a21 ⎢ .⎢ . ⎣ ..

⎤⎡ ⎤ ⎡ ⎤ x1 b1 a1n ⎢ x2 ⎥ ⎢ b2 ⎥ a2n ⎥ ⎥⎢ ⎥ ⎢ ⎥ .. ⎥ ⎢ .. ⎥ = ⎢ .. ⎥ , . ⎦⎣ . ⎦ ⎣ . ⎦ ann

xn

bn

where ⎡

a11 ⎢ a21 ⎢ .A = ⎢ . ⎣ ..

an1 an2

⎤ ⎡ ⎤ ⎡ ⎤ x1 b1 a1n ⎢ x2 ⎥ ⎢ b2 ⎥ a2n ⎥ ⎥ ⎢ ⎥ ⎢ ⎥ .. ⎥ , x = ⎢ .. ⎥ and b = ⎢ .. ⎥ . ⎦ ⎣ ⎦ ⎣ . ⎦ . . · · · ann xn bn

The Gaussian elimination procedure produces a sequence of .n − 1 systems A(k) x = b(k) , for .k = 1, 2, . . . , n−1, that have exactly the same solution of .A x = b but are much simpler to solve. The last system .A(n−1) x = b(n−1) , in particular, is upper triangular and its solution .x can be easily found by backward substitution in about .n2 arithmetic operations. The .n − 1 elimination steps are described below:

.

Step 1 First, the multipliers mi1 =

.

(1) ai1 (1) a11

, i = 2, 3, . . . , n,

are computed, where .aij(1) ’s are the elements of matrix .A(1) = A. Variable .x1 is eliminated from the subsequent equations by subtracting from the i-th equation, for

162

7 Linear Algebra Background

i = 2, . . . , n, the first equation (our pivotal equation at Step 1) multiplied by the multipliers .mi1 . At the end of Step 1, a new equivalent system

.



A(2) x = b(2)

.

(1)

(1)

a a ⎢ 11 12 (2) ⎢ 0 a22 ⎢ ⇐⇒ ⎢ . .. ⎣ .. . (2) 0 an2

... ··· .. .

⎤ ⎤ ⎡ (1) ⎤ (1) ⎡ b a1n x1 ⎢ 1(2) ⎥ (2) ⎥ ⎢ ⎥ x a2n ⎥ ⎢ 2 ⎥ ⎢ b2 ⎥ ⎢ ⎥ .. ⎥ ⎥⎢ . ⎥ = ⎢ . ⎥ . ⎦ ⎣ .. ⎦ ⎣ .. ⎦ (2)

xn

. . . ann

(2)

bn

is obtained, where the elements .aij ’s of .A(2) and .bi ’s of .b(2) are computed by the following formulae: (2)

.

(2)

(1) aij(2) = aij(1) − mi1 a1j , i, j = 2, . . . , n,

bi(2) = bi(1) − mi1 b1(1) , i = 2, . . . , n.

Recall that every row operation performed on the coefficient matrix must be applied also to the right-hand side vector; otherwise the solution .x changes. Note also that only rows .2, . . . , n are modified, while the pivotal equations remain unaltered till the end. We have reached our goal to eliminate .x1 from equations .2, 3, . . . , n. Step 2 By applying exactly the same idea, we can use the second equation of A(2) x = b(2) (which will be our new pivotal equation) to eliminate .x2 from equations .3, . . . , n. Observe that this operation does not modify the 0’s introduced at Step 1 in .A(2) x = b(2) .

.

Step .k ∈ {3, . . . , n − 2} At this stage of the algorithm, the following system ⎡

A(k) x = b(k)

.

(1) a11 ⎢ 0 ⎢ ⎢ . ⎢ . ⎢ . ⇐⇒ ⎢ ⎢ 0 ⎢ ⎢ .. ⎣ . 0

⎤ ⎡ (1) ⎤ (1) ⎤ ⎡ b1 . . . . . . a1n x1 (2) ⎢ ⎥ ⎢ (2) ⎥ a2n ⎥ ⎥ ⎢ x2 ⎥ ⎢ b2 ⎥ ⎢ ⎥ ⎢ ⎥ .. ⎥ ⎥⎢ . ⎥ ⎢ . ⎥ . ⎥ ⎢ .. ⎥ ⎢ .. ⎥ ⎥=⎢ ⎥ (k) (k) ⎥ ⎢ ⎢ x ⎥ ⎢ (k) ⎥ akk . . . akn ⎥ ⎥ ⎢ k ⎥ ⎢ bk ⎥ .. .. ⎥ ⎢ .. ⎥ ⎢ .. ⎥ . . ⎦⎣ . ⎦ ⎣ . ⎦ (k) (k) xn . . . 0 ank . . . ann bn(k)

(1) a12 ... (2) a22 .. . ... 0 .. .

(k)

has been computed. The matrix values below .akk in the k-th column of .A(k) will be zeroed out by subtracting from the j -th (.j > k) equation of this system a multiple of the k-th equation (the pivotal equation at Step k). As usual, the same operation will be applied to the right-hand side vector. Step k modifies only rows .k+1, k+2, . . . , n of .A(k) . Step .n − 1 Finally, the last step eliminates variable .xn−1 from the last equation and yields the reduced upper triangular system .A(n−1) x = b(n−1) :

7.2 Linear Systems

163

⎤ ⎤ ⎡ (1) ⎤ (1) (1) (1) ⎡ b1 a11 a12 . . . . . . a1n x1 ⎢ (2) (2) ⎥ (2) ⎥ ⎥ ⎢ x a2n ⎥ ⎢ ⎢ ⎢ 0 a22 2 ⎢ ⎥ ⎢ b2 ⎥ ⎢ ⎥ ⎢ ⎥ .. ⎥ ⎢ .. ⎥ ⎢ .. ⎥ . ⎢ ⎥ .⎢ 0 0 .. . ⎥⎢ . ⎥ = ⎢ . ⎥. ⎢ ⎢ . ⎥ . ⎥ ⎢ . ⎥ .. . . .. ⎥ ⎢ ⎢ . ⎥ . . ⎦ ⎣ .. ⎦ ⎣ .. ⎦ ⎣ . . (n) (n) xn 0 0 . . . . . . ann bn ⎡

(k) The diagonal entries .akk (.k = 1, 2, . . . , n) of .A(n−1) are the pivots used during the elimination. Pivots must be non-null as they appear at the denominator of the multipliers formulae; otherwise the algorithm breaks down. Wrapping up the procedure described, the formulae to generate the new .(k + 1)th system .A(k+1) x = b(k+1) from k-th system .A(k) x = b(k) , for .k = 1, . . . , n − 1, are (k+1)

.

aij

(k)

(k)

= aij − mik akj , i, j = k + 1, . . . , n,

bi(k+1) = bi(k) − mik bk(k) , i = k + 1, . . . , n,

where the multipliers .mik are defined as (k)

mik =

.

aik

(k)

akk

, i = k + 1, . . . , n

(k)

and .akk = 0 (non-null pivots). The complete elimination applied to an .n × n matrix n3 n n3 n2 5n requires . + − multiplications/divisions and . − additions/subtractions. 2 6 3 3 3 Once the matrix is made upper triangular, the system can be solved by the n(n + 1) n2 n backward substitution algorithm in . = + multiplications/divisions 2 2 2 n n2 (n − 1)n − = and . additions/subtractions. All in all, solving an .n × n 2 2 2 n3 n + n2 − linear system by the Gaussian elimination procedure requires . 3 3 5n n2 n3 − additions/subtractions. multiplications/divisions and . + 2 6 3 7.2.4.3

Gaussian Elimination with Pivoting

So far we have assumed in our discussion that the pivots are always non-null, otherwise the elimination must stop since these quantities appear at the denominator of the multipliers formulae. In practice, however, it may happen that a zero pivot is encountered during the elimination, as illustrated in the following example after one step:

164

7 Linear Algebra Background



⎤ ⎡ ⎤ 123 1 2 3 = ⎣ 2 4 5 ⎦ ⇒ A(2) = ⎣ 0 0 −1 ⎦ . 678 0 −5 −10

A = A(1)

.

One obvious workaround to continue the procedure is to swap the second row (the pivotal row at Step 2) with the third row, so that we can bring a nonzero value to the pivotal position .(2, 2): ⎡

A(2)

.

⎤ 1 2 3 = ⎣ 0 −5 −10 ⎦ . 0 0 −1

The technique consisting of swapping at some stage during the elimination the pivotal row with one subsequent row, in order to move a nonzero scalar in the pivotal position, is called pivoting. It is a necessary modification of the Gaussian algorithm when a zero pivot is encountered. Precisely, the rule can be stated as follows: (k) At step k, whenever a zero pivot .akk is encountered, exchange the k-th row with (k) the j -th row, .j > k, such that .aj k = 0, and perform the usual elimination step (k)

using the nonzero entry .aj k as the next pivot element. Occasionally, it may happen that it is impossible to find a nonzero entry on or below the main diagonal in the k-th column, like in the case of a matrix with the structure shown below: ⎡

A(k)

.

(1)

a11 ⎢ 0 ⎢ ⎢ . ⎢ .. ⎢ =⎢ ⎢ 0 ⎢ ⎢ .. ⎣ . 0

(1)

a12 . . . . . . . . . (2) a22 .. . (k) . . . 0 ak,k+1 . . . .. .. . . (k) . . . 0 an,k+1 . . .

(1) ⎤ a1n (2) ⎥ a2n ⎥ .. ⎥ . ⎥ ⎥ (k) ⎥ . akn ⎥ ⎥ .. ⎥ . ⎦ (k) ann

Since the k-th column has only 0’s in rows .k, k + 1, . . . , n, we can assume that it is already reduced and continue the reduction of the next .(k + 1)-th column. However, recall that the pivotal equations of .A x = b are not modified until the end of the elimination algorithm. Therefore, the final upper triangular system .A(n−1) x = b(n−1) will contain one equation of the form (k)

(k)

(k)

0xk + ak,k+1 xk+1 + · · · + ak,n xn = bk ,

.

and the backward substitution algorithm will encounter a division by zero when attempting to compute the value of .xk : (k) (k) 0xk = c, with c = bk(k) − ak,k+1 xk+1 − · · · − ak,n xn .

.

7.2 Linear Systems

165

At this stage, two outcomes are possible: either .c = 0, leading to .0xk = 0 which is always true for any scalar .xk , or .c = 0, leading to .0xk = c = 0 that is never satisfied by any value of .xk . In both cases, the backward substitution algorithm will break down permanently, indicating that .A x = b may have infinitely many solutions (for .c = 0) or no solutions (for .c = 0). The reason for the breakdown is that one (nonzero) pivot is missing at some step of the elimination procedure. In algebra, a square .n × n matrix that does not have a complete set of n (nonzero) pivots is called singular; otherwise, it is called nonsingular. By extension, a linear system .A x = b with a singular coefficient matrix A is referred to as a singular system and it has either zero or infinitely many solutions. On the other hand, .A x = b with nonsingular A is referred to as a nonsingular system and it has a unique solution.

7.2.4.4

Partial Pivoting

When A is nonsingular, Gaussian elimination with pivoting completes without breakdowns and the backward substitution method computes the unique solution accurately in exact arithmetic. However, an important amendment needs to be made when the algorithm is implemented in finite precision arithmetic on computers in order to ensure the accuracy of the approximate solution. To understand better the potential problem, consider the following linear system:  .

10−18 x1 − x2 = −1 or, in matrix form, − x1 + x2 = −1



10−18 −1 −1 1



x1 x2



 =

 −1 . −1

1 + 10−18 2 ≈ 1. On the ≈ 2, x2 = −18 1 − 10−18 1 − 10 other hand, rounding GEM operations to double-precision arithmetic (16 significant digits) gives after one step The exact solution is .x1 =

 .

10−18 x1 − x2 = −1 and in matrix form − 1018 x2 = −1018   −1 . = −1018



10−18 −1 0 −1018



x1 x2



The backward substitution algorithm using double-precision arithmetic computes the approximate solution .x2 = 1 and .x1 = 0, which is clearly wrong. The problem in the previous computation is that, due to the large size of the multiplier (.1018 ), the quantities .1 − 1018 and .−1 − 1018 are rounded to .−1018 . Recall the (k) transformation formulae .aij(k+1) = aij(k) − mik akj , i, j = k + 1, . . . , n. Because of (k)

the choice of a small pivot .akk , a large value of .|mik | may amplify errors in the

166

7 Linear Algebra Background (k)

approximation of .akj . To avoid troubles, a strategy called partial pivoting is highly recommended: at k-th step, search for the largest entry (in module) in the column .A(k) (k : n, k) and use that value as the pivot element. When partial pivoting is used, the multipliers cannot have a size larger than 1 in absolute value. Hence, the impact of round-off errors in the computation is minimized. In the last example,  .

10−18 x1 − x2 = −1, − x1 + x2 = −1,

partial pivoting yields  .

− x1 + x2 = −1, 10−18 x1 − x2 = −1,

and GEM in double-precision arithmetic (16 significant digits) results in the reduced system  .

−x1 + x2 = −1, − x2 = −1,

whose solution .x2 = 1 and .x1 = 2 is accurate. Example As an example, we consider solving the .3 × 3 system ⎡

⎤⎡ ⎤ ⎡ ⎤ 1 2 3 x1 2 . ⎣ 2 4 5 ⎦ ⎣ x2 ⎦ = ⎣ 3 ⎦ 6 7 8 7 x3 by Gaussian elimination with partial pivoting. The sequence of steps is

.

⎧ ⎨ x1 + 2x2 + 3x3 = 2 2x + 4x2 + 5x3 = 3 ⎩ 1 6x1 + 7x2 + 8x3 = 7

⎧ ⎨ 6x1 + 7x2 + 8x3 = 7 ⇒ 2x + 4x2 + 5x3 = 3 ⎩ 1 x1 + 2x2 + 3x3 = 2



⎧ 6x + 7x + 8x = 7 ⎪ ⎪ ⎪ 1 5 2 7 3 2 ⎨ x2 + x3 = 3 3 3 ⎪ ⎪ 5 5 5 ⎪ ⎩ x2 + x3 = 6 3 6

⎧ 6x + 7x + 8x = 7 ⎪ ⎪ ⎪ 1 5 2 7 3 2 ⎨ x2 + x3 = ⇒ 3 3 3 ⎪ ⎪ 1 1 ⎪ ⎩ x3 = 2 2

⎧ ⎨ x1 = 1 ⇒ x = −1 . ⎩ 2 x3 = 1

 2 Partial pivoting adds a cost of about . n−1 k=1 n + 1 − k ≈ n searches. It is possible (k) to extend the search to the submatrix .A (k : n, k : n), performing about .n3

7.2 Linear Systems

167

searches. However, this technique, called complete pivoting, generally offers little improvement in accuracy with respect to partial pivoting, which is often preferred and implemented in most numerical software.
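A compact sketch of Gaussian elimination with partial pivoting followed by backward substitution is given below (plain NumPy, no handling of singular matrices); it illustrates the strategy described above and is not the book's reference implementation. It is applied to the 3 × 3 example of this subsection, whose second pivot would be zero without row exchanges.

```python
import numpy as np

def gem_partial_pivoting(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: move the largest entry (in modulus) of column k to the pivot row.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate x_k from rows k+1, ..., n-1.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1, 2, 3], [2, 4, 5], [6, 7, 8]])
b = np.array([2, 3, 7])
print(gem_partial_pivoting(A, b))   # [ 1. -1.  1.]
```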

7.2.4.5

Gauss–Jordan Method

A popular variant of the Gaussian elimination procedure, which is known as the Gauss–Jordan method, can further simplify the solution of a linear system. The procedure is characterized by the following two properties: 1. At each step, the selected pivot value must be equal to 1. 2. At each step, all matrix elements above the pivot as well as those below the pivot are zeroed out. At the end of the Gauss–Jordan elimination process, the initial system .A x = b is transformed into a new one of the form .In x = c, where .In is the .n×n identity matrix, instead of an upper triangular system .U x = c like in the conventional Gaussian elimination algorithm. ⎡

a11 ⎢a ⎢ 21 .⎢ . ⎢ . ⎣ . an1

a12 a22 .. . an2

··· ··· .. . ···

⎡ ⎤⎡ ⎤ ⎡ ⎤ x1 b1 a1n ⎢ ⎥ ⎢ ⎥ ⎢ a2n ⎥ ⎥ ⎢ x2 ⎥ ⎢ b2 ⎥ Gauss-Jordan ⎢ ⎢ . ⎥ = ⎢ . ⎥ −−−−−−−−→ ⎢ .. ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎣ . ⎦ ⎣ .. ⎦ ⎣ .. ⎦ ann xn bn

1 0 .. . 0

0 1 .. . 0

··· ··· .. . ···

0 0 .. . 1

⎤ ⎡ ⎤ c1 x1 ⎥⎢x ⎥ ⎢c ⎥ ⎥⎢ 2 ⎥ ⎢ 2 ⎥ ⎥⎢ . ⎥ = ⎢ . ⎥ ⎥⎢ . ⎥ ⎢ . ⎥ ⎦⎣ . ⎦ ⎣ . ⎦ xn cn ⎤⎡

The solution is available immediately as .x = c, without the need of using n3 + n2 multiplications/divisions backward substitution. Gauss–Jordan requires . 2 3 2 n −n and . additions/subtractions for solving an .n × n linear system, while 2 Gaussian elimination with back substitution requires only about .n3 /3 multiplications/divisions and about the same number of additions/subtractions. By simple calculations, we conclude that the Gauss–Jordan method requires about 50% more operations. Although this difference is insignificant for small n, it becomes noticeable for large n. For example, the difference for .n = 10000 is bigger than 11 multiplications/divisions and additions/subtractions. .10 Example Solve the following linear system by applying the Gauss–Jordan method: ⎧ ⎨ x + y + z = 2, . x + 2y + 3z = 3, ⎩ x + 3y + 4z = 1. Solution At Step 1, the pivotal equation is the first one. The coefficient of x in the pivotal equation is 1, so it is accepted as pivot. Upon eliminating x from the second and third equations, we obtain the new system

168

7 Linear Algebra Background

⎧ ⎨ x + y + z = 2, . y + 2z = 1, ⎩ 2y + 3z = −1. At Step 2, the pivotal equation is the second one. The coefficient of y in the pivotal equation is 1, so it is accepted as pivot. Upon eliminating y from the third equation as well as from the first equation, we obtain the system ⎧ ⎨x .



− z = 1, y + 2z = 1, − z = −3.

At Step 3, the pivotal equation is the third one. The coefficient of z in the pivotal equation is .−1(= 1), so it cannot be accepted as pivot. We divide the third equation by .−1 so that the pivot is equal to 1: ⎧ ⎨x .



− z = 1, y + 2z = 1, z = 3,

and then we can proceed by eliminating z from the first and second equations. This yields immediately the final solution, without the need to apply backward substitution: ⎧ ⎨x = 4 . y = −5 . ⎩ z= 3

7.2.4.6

Interim Conclusions

The main conclusions relating to our discussion on square linear systems .A x = b are: 1. The solution is unique if A has n (nonzero) pivots, that is, if A is nonsingular. 2. If A does not have a full set of n (nonzero) pivots, then the system is singular and it may have either no solution or infinitely many solutions, depending on the  right-hand side vector .b. 3. If a zero pivot is encountered during Gaussian elimination, a row exchange can be applied to move a nonzero scalar in the diagonal position and continue with the elimination. 4. For reasons of numerical accuracy, when Gaussian elimination is implemented in finite precision arithmetic, it is highly recommended to use partial pivoting in order to control round-off error propagation.

7.2 Linear Systems

169

7.2.5 Gaussian Elimination for Rectangular Systems It is rather straightforward to extend the Gaussian elimination method to the case of rectangular systems of m linear equations in n unknowns, of the form ⎧ ⎪ a11 x1 + a12 x2 + · · · + a1n xn = b1 , ⎪ ⎪ ⎨ a21 x1 + a22 x2 + · · · + a2n xn = b2 , . .. .. .. .. .. ⎪ ⎪ . . . . . ⎪ ⎩ am1 x1 + am2 x2 + · · · + amn xn = bm , where m is possibly different from n. However, square linear systems (.m = n) are  still included in our analysis. Consider the system .A x = b: ⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 . ⎪ 3x + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ 1 x1 + 2x2 + 5x3 + 5x4 + 7x5



= 1, 12 ⎢2 4 = 2, or ⎢ ⎣3 6 = 3, = 4, 12

3 5 8 5

3 5 8 5

⎤ ⎡ ⎤ x1 1 2 ⎢ ⎥ ⎢ x2 ⎥ ⎢ ⎥ 3⎥ ⎥ ⎢ ⎥ ⎢ x3 ⎥ = ⎢ 2 ⎥ . 5⎦⎢ ⎥ ⎣3⎦ ⎣ x4 ⎦ 4 7 x5 ⎤



After Step 1 of GEM, we get the modified system .A(2) x = b(2) : ⎧ ⎡ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, 1 ⎪ ⎪ ⎨ ⎢ 0 − x3 − x4 − x5 = 0, or ⎢ . ⎣ 0 ⎪ − x3 − x4 − x5 = 0, ⎪ ⎩ 2x3 + 2x4 + 5x5 = 3. 0

2 3 3 2 0 −1 −1 −1 0 −1 −1 −1 0 2 2 5

⎤ ⎡ ⎤ x1 1 ⎢x ⎥ 2⎥ ⎢0⎥ ⎥⎢ ⎥ ⎢ ⎥ ⎢ x3 ⎥ = ⎢ ⎥ . ⎦⎢ ⎥ ⎣0⎦ ⎣ x4 ⎦ 3 x5 (7.8) ⎤



Clearly, it is impossible to bring a nonzero number into the .(2, 2)-position of matrix .A(2) by interchanging the second equation of (7.8) with any subsequent equation. We modify the elimination procedure as follows, in order to continue: 1. At Step k, we search in the coefficient matrix .A(k) of the current system .A(k) x = b(k) for the first column from left to right that has a nonzero value on or below the k-th position. Suppose that such value is found on the j -th column. Then, the pivotal position is .(k, j ). In our example (7.8), the second pivotal element is .−1 in the second row and third column of .A(2) . 2. If necessary, we need to exchange the k-th equation of .A(k) x = b(k) with one below it in order to bring a nonzero element into the .(k, j )-th position. 3. At this stage, we zero-out all the elements below the pivotal position so that we can eliminate variable .xj from the subsequent equations and compute the new system .A(k+1) x = b(k+1) .

170

7 Linear Algebra Background

4. If the .(k + 1)-th row of .A(k+1) and all the rows below contain only 0’s, the elimination process is completed, otherwise we repeat the same procedure at the next step. By applying the above algorithm to the previous example, we get ⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 . ⎪ 3x + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ 1 x1 + 2x2 + 5x3 + 5x4 + 7x5

⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, = 1, ⎪ ⎪ ⎨ = 2, − x3 − x4 − x5 = 0, ⇒ ⎪ = 3, − x3 − x4 − x5 = 0, ⎪ ⎩ = 4, 2x3 + 2x4 + 5x5 = 3,

⎧ ⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, ⎪ ⎪ ⎪ ⎪ ⎨ ⎨ − x3 − x4 − x5 = 0, − x3 − x4 − x5 = 0, ⇒ . ⎪ ⎪ 0 = 0, 3x5 = 3, ⎪ ⎪ ⎩ ⎩ 0 = 0. 3x5 = 3, Note that the final result is not anymore an upper triangular linear system like in the square case. The coefficient matrix is in jagged or stair-step form: ⎛

∗ ⎜0 .⎜ ⎝0 0

∗ 0 0 0

∗ ∗ 0 0

∗ ∗ 0 0

⎞ ∗ ∗⎟ ⎟. ∗⎠ 0

(7.9)

Such matrix structure is characterized by two properties: 1. If there are zero rows in the final matrix, they are located at the bottom of the matrix. 2. For each i-th nonzero row, suppose that the first nonzero element occupies the j -th column, then the matrix contains only 0’s below the i-th position in columns .1, 2, . . . , j .

7.2.5.1

Echelon Matrices

A matrix of the form (7.9) is called an Echelon matrix. In an Echelon matrix, the nonzero entries are allowed only on or above a stair-step line that starts from the upper left-hand corner and descends down and to the right. The pivots are the first nonzero values in each nonzero row. Example ⎡ ⎤ 1 2 3 1. .⎣ 0 0 4 ⎦ is not an Echelon matrix because it does not verify property 2 since 0 1 0 its nonzero entries do not lie on or above a stair-step line.

7.2 Linear Systems

171



⎤ 0 0 0 0 2. .⎣ 0 1 0 0 ⎦ is not an Echelon matrix because it contains one zero row, but this 0 0 0 1 is not located at the bottom of the matrix. ⎡ ⎤ ⎡ ⎤ 1 2 0 0 1 0 2 2 3 −4 ⎢0 0 0 1 0 0⎥ ⎥ 3. On the other hand, .⎣ 0 0 7 −8 ⎦ and .⎢ ⎣ 0 0 0 0 0 1 ⎦ are Echelon matrices 0 0 0 −1 0 0 0 0 0 0 because they satisfy both properties. The number of pivots of a matrix A is called the rank of A and is denoted by the symbol .rank(A). The columns of A that contain a pivot are called the basic columns of A. Since the rank is equal to the number of nonzero rows of an Echelon matrix derived from A, it represents the number of independent equations of the linear  An equation .0 = 0 should not count because it is redundant. It does system .A x = b. not contain a pivot and it does not increase the rank. Similarly, if one equation is a combination of other equations of the same system, it is reduced by elimination to the null equation .0 = 0, and thus it does not contribute to the solution and it does not increase the rank. We conclude that if the rank of A is r, then only r equations of the linear system .A x = b are independent and should be solved assuming that a solution exists.

7.2.5.2

Reduced Echelon Form

For square matrices, we have discussed an alternative to the Gaussian elimination procedure called the Gauss–Jordan method, which can further simplify the solution of a linear system although it costs about 50% more arithmetic operations. The differences with the standard elimination are essentially two: 1. At each step, the pivot is forced to be a 1. 2. At each step, all entries above and below the pivot are annihilated. What happens if the Gauss–Jordan method is applied to a rectangular .m × n matrix? Let us consider an example. Example Reduce the same linear system considered before ⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 . ⎪ 3x + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ 1 x1 + 2x2 + 5x3 + 5x4 + 7x5



= 1, 123 ⎢2 4 5 = 2, or ⎢ ⎣3 6 8 = 3, = 4, 125

by applying the Gauss–Jordan elimination procedure.

3 5 8 5

⎤ ⎡ ⎤ x1 1 2 ⎢ ⎥ x ⎢ 2⎥ ⎢ ⎥ 3⎥ ⎥ ⎢ ⎥ ⎢ x3 ⎥ = ⎢ 2 ⎥ , 5⎦⎢ ⎥ ⎣3⎦ ⎣ x4 ⎦ 4 7 x5 ⎤



172

7 Linear Algebra Background

Solution We obtain the following sequence of steps: ⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 ⎪ 3x + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ 1 x1 + 2x2 + 5x3 + 5x4 + 7x5

= 1, = 2, = 3, = 4,

⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, ⎪ ⎪ ⎨ . x3 + x4 + x5 = 0, ⎪ − x3 − x4 − x5 = 0, ⎪ ⎩ 2x3 + 2x4 + 5x5 = 3, ⎧ ⎨ x1 + 2x2 ⎩

− x5 = 1, x3 + x4 + x5 = 0, 3x5 = 3,



⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, ⎪ ⎪ ⎨ − x3 − x4 − x5 = 0, ⇒ ⎪ − x3 − x4 − x5 = 0, ⎪ ⎩ 2x3 + 2x4 + 5x5 = 3,



⎧ x1 + 2x2 ⎪ ⎪ ⎨ ⎪ ⎪ ⎩

⎧ ⎨ x1 + 2x2 ⎩

− x5 = 1, x3 + x4 + x5 = 0, 0 = 0, 3x5 = 3,

− x5 = 1, x3 + x4 + x5 = 0, x5 = 1.

Compare the final system of this example with the result obtained by the Gaussian elimination obtained: ⎧ ⎨ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 1, . − x3 − x4 − x5 = 0, ⎩ 3x5 = 3. The stair-step form is the same for both systems because of the definition of the Gauss–Jordan procedure. The only difference is in the numerical values. In Gauss– Jordan, by construction, each pivot is 1 and all entries above and below each pivot are 0. Therefore, its associated row Echelon form contains less nonzeros, and for this reason it seems natural to name it as the reduced row Echelon form of A. The symbol .EA is often used to denote the unique reduced row Echelon matrix obtained from A by means of row operations and satisfying the conditions: 1. .EA is in row Echelon form. 2. The first nonzero entry in each row (i.e., each pivot) is 1. 3. All entries above each pivot are 0.

7.2.6 Consistency of Linear Systems With the Gaussian elimination tool in mind, we are in the position to answer the questions that we have established at the beginning of the chapter about the conditions for the existence and uniqueness, and the general formula, of the solution  with .A ∈ Rm×n . Suppose to reduce .A of a linear system .A x = b, x = b by Gaussian

7.2 Linear Systems

173

elimination into its row Echelon form .E x = c. If one equation of the reduced system resulting from the elimination has the form 0x1 + 0x2 + · · · + 0xn = f, f = 0,

.

(7.10)

 has no solution because no nclearly .E x = c (and consequently .A x = b) tuple .(x1 , x2 , . . . , xn ) exists satisfying (7.10). In this case, .A x = b is called an inconsistent system. If such an Eq. (7.10) cannot be found in .E x = c, then a solution x = b is called a consistent system. vector .x exists, and .A On the other hand, if an equation of the form (7.10) appears in the reduced system .E x  = c, the right-hand side .b contains the pivot .f = 0. Therefore, .rank(A) = rank(Ab ), where we denote by .Ab the matrix obtained by appending .b beside A. The vice versa is also true. We have derived the following condition to verify the  existence of the solution of a general linear system .A x = b.  with .A ∈ Rm×n , is inconsistent if and only if Properties A linear system .A x = b,  .rank(A) = rank(Ab ), where we denote by .Ab the matrix obtained by appending .b beside A, and it is consistent if and only if .rank(A) = rank(Ab ). In the sections below, we will answer the other two questions about the uniqueness and the general formula of .x, distinguishing two cases, namely homogeneous and nonhomogeneous systems.

7.2.7 Homogeneous Linear Systems A system of m linear equations in n unknowns written in the form ⎧ a11 x1 + a12 x2 + · · · + a1n xn = b1 , ⎪ ⎪ ⎪ ⎨ a21 x1 + a22 x2 + · · · + a2n xn = b2 , . .. .. .. .. .. ⎪ ⎪ . . . . . ⎪ ⎩ am1 x1 + am2 x2 + · · · + amn xn = bm

(7.11)

is called homogeneous if .bi = 0 for all .i = 1, 2, . . . , m. In matrix form, the system  Clearly, a homogeneous system always has at least one can be written as .A x = 0. solution, namely the zero solution .x1 = x2 = · · · = xn = 0 or .x = 0 in vector form (also called the trivial solution). The obvious question is whether and under which conditions it possesses also some nonzero solutions, and how many. Once again, the Gaussian elimination comes to aid to carry out the analysis. Consider, for example, the system ⎧ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 . ⎪ 3x + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ 1 x1 + 2x2 + 5x3 + 5x4 + 7x5

= 0, = 0, = 0, = 0.

(7.12)

174

7 Linear Algebra Background

Upon Gaussian elimination, we obtain the reduced homogeneous system: ⎧ ⎨ x1 + 2x2 + 3x3 + 3x4 + 2x5 = 0, . − x3 − x4 − x5 = 0, ⎩ 3x5 = 0. Since there are five unknown quantities to compute (the variables .x1 , .x2 , .x3 , .x4 , x5 ), but only three equations can be used to determine their values, it is impossible to find a unique value for each of them. We will select one unknown per equation and compute it in terms of the other unknowns, which will therefore remain unassigned. Different choices can be adopted to decide which variable should be solved from each independent equation. Here, we follow the strategy to select the unknowns corresponding to the pivotal positions. These are called basic variables; in our example, they are .x1 , .x3 , and .x5 . We express them in terms of the remaining variables .x2 and .x4 , which are referred to as free variables because their value is arbitrary. From the third equation, we compute

.

x5 = 0,

.

(7.13)

while from the second equation we get x3 + x4 = 0,

.

(7.14)

and substituting back into the first equation gives x1 + 2x2 = 0.

.

(7.15)

Combining (7.13)–(7.15), we find that all solutions of the original homogeneous system can be written as ⎧ ⎨ x1 = −2x2 , . x = −x4 , ⎩ 3 x5 = 0.

(7.16)

As the free variables .x2 and .x4 on the right-hand side of (7.16) can assume any arbitrary value, the above expression describes all possible solutions of the linear system. A particular solution can be obtained by assigning some special values to the free variables. For example, when .x2 = 1 and .x4 = −1, we obtain the particular solution ⎧ ⎪ x1 = −2, ⎪ ⎪ ⎪ ⎪ ⎨ x2 = 1, . x3 = 1, ⎪ ⎪ ⎪ x4 = −1, ⎪ ⎪ ⎩ x = 0. 5

7.2 Linear Systems

175

Any other choice of .x2 and .x4 will lead to a different particular solution of the linear system. We can write (7.16) conveniently in vector form as ⎡

⎤ ⎡ ⎤ ⎡ ⎤ x1 −2 0 ⎢x ⎥ ⎢ 1⎥ ⎢ 0⎥ ⎢ 2⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ . ⎢ x3 ⎥ = x2 ⎢ 0 ⎥ + x4 ⎢ −1 ⎥ , ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎣ x4 ⎦ ⎣ 0⎦ ⎣ 1⎦ 0 0 x5

(7.17)

where the free variables .x2 and .x4 again can range over all possible real numbers. Therefore, (7.17) is the general solution of the homogeneous system. An important observation is that the two vectors ⎡ ⎤ ⎡ ⎤ −2 0 ⎢ 1⎥ ⎢ 0⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎥ 1 = ⎢ 0 ⎥ and h2 = ⎢ .h (7.18) ⎢ −1 ⎥ , ⎢ ⎥ ⎢ ⎥ ⎣ 0⎦ ⎣ 1⎦ 0 0 in (7.17) are particular solutions of the linear system (7.12), precisely .h1 is computed by assigning the values .x2 = 1 and .x4 = 0 to the free variables, whereas the solution .h2 is obtained when .x2 = 0 and .x4 = 1. Therefore, we can conclude that the complete solution of (7.12) is a linear combination of two of its particular solutions. The last conclusion is general, in the sense that it remains valid for any homogeneous linear system .A x = b of m linear equations in n unknowns. Assuming that the rank of the coefficient matrix A is r, there will be exactly r basic variables .xb1 , xb2 , . . . , xbr corresponding to the indices of the basic columns in A and .n − r free variables .xf1 , xf2 , . . . , xfn−r corresponding to the indices of the nonbasic columns in A. Upon reducing A to row Echelon form (either ordinary or reduced) and then applying the back substitution method to compute the basic variables in terms of the free variables, we obtain the general solution x = xf1 h1 + xf2 h2 + · · · + xfn−r hn−r ,

.

(7.19)

where .h1 , h2 , . . . , hn−r are the n particular solutions of the system obtained by choosing, for .i = 1, . . . , n, .xfi = 1 and .xfj = 0 for .j = i in (7.19). As the free variables .xfi range over all arbitrary values, the general solution generates all possible solutions. Note that if the Gauss–Jordan procedure is used instead of Gaussian elimination, we can avoid the back substitution process. Looking at the general solution (7.19), we immediately see that, as long as there is at least one free variable, the linear system will have infinitely many solutions. Otherwise, if there are no free variables, the system has only the trivial solution. Since the number of free variables is .n − r, where .r = rank(A), we conclude that

176

7 Linear Algebra Background

a homogeneous system possesses a unique solution (i.e., the trivial solution) if and only if the .rank(A) = n. We can wrap up the previous discussion about homogeneous systems .A x = 0 as follows:  1. .A x = 0 always admits a solution, namely the zero or trivial solution .x = 0. 2. The general solution can be written as x = xf1 h1 + xf2 h2 + · · · + xfn−r hn−r ,

.

where the scalars .xf1 , xf2 , . . . , xfn−r are the free variables and .h1 , h2 , . . . , hn−r are n-th dimensional column vectors representing particular solutions of the homogeneous system. Here, r is the rank of matrix A. As the free variables .xfi range over all possible values, the general solution describes all possible solutions.  if and only if there 3. A homogeneous system possesses a unique solution (.x = 0) are no free variables, that is, the rank of A is equal to n. Otherwise, if the rank of A is strictly smaller than n, it will possess infinitely many solutions. Example 1. The homogeneous system ⎧ ⎨ x1 + 2x2 + 3x3 = 0, . 4x + 5x2 + 6x3 = 0, ⎩ 1 7x1 + 8x2 + 5x3 = 0 has only the trivial solution .x = 0 because the rank of the coefficient matrix ⎡ ⎤ 1 2 3 .⎣ 4 5 6 ⎦ is 3. 7 8 5 2. Suppose .A x = 0 has more unknowns than equations. The size of A is .m × n with .n > m, and A has more columns than rows. The number of pivots cannot exceed m because each row contains at most one pivot. Then, necessarily .rank(A) < n and the system has .n − r free variables, where .r = rank(A). Consequently, .A x = 0 has infinitely many nonzero solutions.

7.2.8 Nonhomogeneous Linear Systems A system of m linear equations in n unknowns written in the form

7.2 Linear Systems

177

⎧ ⎪ a11 x1 + a12 x2 + · · · + a1n xn = b1 , ⎪ ⎪ ⎪ ⎪ ⎪ ⎨ a21 x1 + a22 x2 + · · · + a2n xn = b2 , .

⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎩

.. .

.. .

.. .

.. .

.. .

am1 x1 + am2 x2 + · · · + amn xn = bm

is called nonhomogeneous whenever .bi = 0 for at least one i. In matrix form, it can be written as .A x = b with .b = 0. A nonhomogeneous system does not have the trivial solution .x = 0 and, indeed, it may not have any solution at all. Like in the homogeneous case, we use Gaussian elimination to simplify .A x = b into a row Echelon form .E x = c and to find the basic and free variables, as described previously, and then we apply back substitution to .E x = c solving for the basic variables in terms of the free variables. The final result is x = p + xf1 h1 + xf2 h2 + · · · + xfn−r hn−r ,

.

(7.20)

where .xf1 , xf2 , . . . , xfn−r are the free variables and .p,  h1 , h2 , . . . , hn−r are n-th dimensional column vectors. This is called the general solution of the nonhomoge It describes all possible solutions of .A neous system .A x = b. x = b by assigning arbitrary values to the free variables .xfi . The columns .hi and .p are independent of the particular row Echelon form used, either ordinary or reduced. However, if the reduced row Echelon form is used, it is not necessary to use backward substitution to compute the solution vector .x. Comparing (7.19) and (7.20), we observe that the difference between the general solution of a nonhomogeneous system and the general solution of a homogeneous system is the column .p that appears in (7.20). To better understand where .p comes from, we work out an example. Consider the nonhomogeneous system ⎧ ⎪ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 . ⎪ 3x1 + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ x1 + 2x2 + 5x3 + 5x4 + 7x5

= 1, = 2, = 3, = 4.

(7.21)

Note that we have just appended the nonzero right-hand side vector .b to the  for example, by the Gauss–Jordan method, system (7.12). Upon reducing .A x = b, we obtain ⎧ ⎨ x1 + 2x2 = 2 . (7.22) x + x4 = −1, ⎩ 3 x5 = 1.

178

7 Linear Algebra Background

Solving for the basic variables .x1 , .x3 , and .x5 in terms of the free variables .x2 and x4 gives the solution

.

⎧ ⎨ x1 = 2 − 2x2 , . x = −1 − x4 , with x2 and x4 free. ⎩ 3 x5 = 1, Thus, the general solution is ⎤ ⎤ ⎡ ⎡ ⎤ ⎡ ⎤ 2 x1 −2 0 ⎢x ⎥ ⎢ 0 ⎥ ⎢ 1⎥ ⎢ 0⎥ ⎥ ⎢ 2⎥ ⎢ ⎢ ⎥ ⎢ ⎥ ⎥ ⎢ ⎥ ⎢ ⎢ ⎥ ⎢ ⎥ . ⎢ x3 ⎥ = ⎢ −1 ⎥ + x2 ⎢ 0 ⎥ + x4 ⎢ −1 ⎥ . ⎥ ⎢ ⎥ ⎢ ⎢ ⎥ ⎢ ⎥ ⎣ x4 ⎦ ⎣ 0 ⎦ ⎣ 0⎦ ⎣ 1⎦ 1 0 0 x5 ⎡

(7.23)



⎤ 2 ⎢ 0⎥ ⎢ ⎥ ⎢ ⎥ The column vector .p = ⎢ −1 ⎥ is one particular solution of the nonhomogeneous ⎢ ⎥ ⎣ 0⎦ 1 system obtained when the values .x2 = 0 and .x4 = 0 are assigned to the free variables. We found that the general solution of the associated homogeneous system ⎧ ⎪ x1 + 2x2 + 3x3 + 3x4 + 2x5 ⎪ ⎨ 2x1 + 4x2 + 5x3 + 5x4 + 3x5 . ⎪ 3x1 + 6x2 + 8x3 + 8x4 + 5x5 ⎪ ⎩ x1 + 2x2 + 5x3 + 5x4 + 7x5

= 0, = 0, = 0, =0

is given by ⎡

⎤ ⎡ ⎤ ⎡ ⎤ x1 −2 0 ⎢x ⎥ ⎢ 1⎥ ⎢ 0⎥ ⎢ 2⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ . ⎢ x3 ⎥ = x2 ⎢ 0 ⎥ + x4 ⎢ −1 ⎥ . ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎣ x4 ⎦ ⎣ 0⎦ ⎣ 1⎦ 0 0 x5 Hence, we conclude that the general solution of the nonhomogeneous system is the sum of one particular solution plus the general solution of the associated homogeneous system. The last conclusion is general in the sense that it is valid for any nonhomogeneous linear system .A x = b of m linear equations in n unknowns. Assuming that the rank

7.2 Linear Systems

179

of the coefficient matrix A is r, the basic and free variables of .A x = b are the same of those of .A x = 0 since the two systems have the same coefficient matrix. If we denote the reduced system as .E x = c, where E is the Echelon form of A and necessarily ⎡ ⎤ η1 ⎢ ⎥ ⎢ .. ⎥ ⎢ . ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ηr ⎥ ⎢ ⎥ .c =⎢ ⎥, ⎢ 0⎥ ⎢ ⎥ ⎢ ⎥ ⎢ . ⎥ ⎢ .. ⎥ ⎣ ⎦ 0 solving the i-th equation of the associated reduced homogeneous system .E x = 0 for the i-th basic variable .xbi in terms of the free variables .xfi , xfi+1 , . . . , xfn−r gives xbi = αi xfi + αi+1 xfi+1 + · · · + αn−r xfn−r ,

.

while computing the i-th basic variable of the reduced nonhomogeneous system E x = c gives

.

xbi = ηi + αi xfi + αi+1 xfi+1 + · · · + αn−r xfn−r .

.

The two solutions differ only by one constant .ηi . Repeating this operation for all the basic variables .xb1 , xb2 , . . . , xbr , we obtain that the general solution of the homogeneous system is x = xf1 h1 + xf2 h2 + · · · + xfn−r hn−r ,

.

while the general solution of the nonhomogeneous system is x = p + xf1 h1 + xf2 h2 + · · · + xfn−r hn−r ,

.

in which the column .p contains the constants .ηi for the indices corresponding to the basic variables, and 0’s otherwise. Additionally, .p is one particular solution of the nonhomogeneous system produced when the free variables assume values .xf1 = xf2 = . . . = xfn−r = 0.

180

7 Linear Algebra Background

Example Compute the general solution of the nonhomogeneous system ⎧ = −1, ⎨ x1 − x2 + x3 . x1 + x2 + 3x3 − 2x4 = 3, ⎩ 2x1 + x2 + 5x3 − 3x4 = 4, and compare it with the general solution of the associated homogeneous system: ⎧ = 0, ⎨ x1 − x2 + x3 . x + x2 + 3x3 − 2x4 = 0, ⎩ 1 2x1 + x2 + 5x3 − 3x4 = 0, Solution We transform the system in reduced Echelon form by the Gauss–Jordan method, and we obtain  .

x1 + 2x3 − x4 = 1, x2 + x3 − x4 = 2.

(7.24)

First, we note that the system is consistent because the right-hand side vector does not contain a pivot. The system has two basic variables, .x1 and .x2 , and two free variables, .x3 and .x4 . From (7.24), the general solution writes as  .

x1 + 2x3 − x4 = 1, , x3 and x4 free, x2 + x3 − x4 = 2,

and in vector form as ⎤ ⎡ ⎤ ⎡ ⎡ ⎤ ⎤ −2 1 1 x1 ⎢ −1 ⎥ ⎢1⎥ ⎢ x2 ⎥ ⎢ 2 ⎥ ⎥ ⎢ ⎥ ⎢ ⎢ ⎥ ⎥ .x =⎢ ⎣ x3 ⎦ = ⎣ 0 ⎦ + x3 ⎣ 1 ⎦ + x4 ⎣ 0 ⎦ . x4 0 0 1 ⎡

The general solution of the associated homogeneous system is ⎤ ⎡ ⎡ ⎤ ⎤ −2 1 x1 ⎢ −1 ⎥ ⎢1⎥ ⎢ x2 ⎥ ⎥ ⎢ ⎢ ⎥ ⎥ .x =⎢ ⎣ x3 ⎦ = x3 ⎣ 1 ⎦ + x4 ⎣ 0 ⎦ . x4 0 1 ⎡

7.3 Least-Squares Problems

181

⎡ ⎤ 1 ⎢2⎥ ⎥ You can easily check that .p = ⎢ ⎣ 0 ⎦ is indeed a particular solution of the 0 nonhomogeneous system and that ⎡

⎡ ⎤ ⎤ −2 1 ⎢ −1 ⎥ ⎢1⎥ ⎢ ⎥ ⎥ 3 = ⎢  .h ⎣ 1 ⎦ and h4 = ⎣ 0 ⎦ 0 1 are particular solutions of the associated homogeneous system. Now, we are in the position to answer the question “How many solution has  Let .Am×n be the coefficient matrix for a a general linear system .A x = b?” consistent system of m linear equations in n unknowns. Then, the system has exactly .r = rank(A) basic variables and .n − r free variables. We have already answered  and we found that the completely the question for homogeneous systems .A x = b, solution is unique if and only if .rank(A) = n. Let us consider a nonhomogeneous system with .rank(A) = r. We know that the general solution is given by x = p + xf1 h1 + xf2 h2 + · · · + xfn−r hn−r ,

.

where x = xf1 h1 + xf2 h2 + · · · + xfn−r hn−r

.

is the general solution of the associated homogeneous system. Clearly, the nonhomogeneous system will have a unique solution (namely, .p)  if and only if there are no free variables—i.e., if and only if .r = rank(A) = n (recall that n is the number of unknowns). Consequently, the nonhomogeneous system will have a unique solution if and only if the associated homogeneous system has only the trivial solution.

7.3 Least-Squares Problems Rectangular linear systems may not, and often do not, have a solution. Consider  where .A ∈ Rm×n , .x ∈ Rn , .b ∈ Rm , and denote by .Ab the augmented A x = b,  obtained by appending the right-hand side column vector .b at the end matrix .[A, b] of matrix A. Different scenarios are possible:

.

1. .A x = b has no solution (it is inconsistent) when .rank(A) = rank(Ab ). 2. .A x = b has at least one solution (it is consistent) when .rank(A) = rank(Ab ).

182

7 Linear Algebra Background

3. .A x = b has a unique solution when .rank(A) = rank(Ab ) = n. 4. .A x = b has infinitely many solutions when .rank(A) = rank(Ab ) = r < n as .n − r variables are free and can take any arbitrary value. In computational science, there are circumstances where the linear system is not solvable but still some kind of solution associated with it can be computed and help in the analysis of a problem. Suppose, for example, to perform several experiments relating two sets of data, x and y, with the goal of understanding the precise relationship between these two quantities. Such relationship represents the mathematical model that can be used to make some predictions of the future behavior of the phenomenon under consideration or some analysis in the past where no data were available. Example of measured data could be temperature and pressures of a gas, time and velocity of a watering flow, time period and stock’s prices, etc. This type of problem arises in fitting equations to data. Suppose that a linear relationship exists between x and y. We plot the experimental data, and we have a situation similar to the figure below where the points fall close to a straight line representing the model that we want to derive. However, due to measurement errors, the points will not lie exactly on the line (Fig. 7.1). In order to compute the model, we consider a generic straight line of equation .y = mx + q, with two parameters m and q representing the angular coefficient and the intercept of the line with the y axis, respectively. Since the points are lying approximately on the straight line, we can write ⎧ q + mx1 ≈ y1 , ⎪ ⎪ ⎪ ⎨ q + mx2 ≈ y2 , . .. .. .. ⎪ ⎪ . . . ⎪ ⎩ q + mxm ≈ ym .

Fig. 7.1 Example of linear least squares fit for a set of data points

(7.25)

220

200

y

180

160

140

120 100 0

1

2

3 x

4

5

6

7.3 Least-Squares Problems

183

We want to compute the values of m and q for the straight line that minimizes the sum of the squares of the errors (the differences between measured values and expected values from the model), hence matching most closely our data:

.

min

m

(yi − (q + mxi ))2 .

i=1

Writing the system of Eq. (7.25) in matrix form, we get ⎡

1 ⎢1 ⎢ .⎢ . ⎣ ..

x1 x2 .. .



⎡ y1   ⎥ ⎢ y2 ⎥ q ⎢ ≈⎢ . ⎥ m ⎦ ⎣ ..

1 xm

⎤ ⎥ ⎥  x ≈ b. ⎥ or A ⎦

ym

By analogy with the case of linear systems, we continue to use letters A, .x, and .b and the problem is to find the vector .x that minimizes the squared residual norm ⎛ ⎞2 m n  2   ⎝bi − . b aij xj ⎠ . − A x = 2

i=1

j =1

This problem is called linear least squares problem. Note that in general .A x = b for every vector .x if no straight line can pass through the data. The best that we can ˆ for a slightly modified do is to compute an approximate solution .xˆ such that .Axˆ = b, ˆ ≤ ˆ chosen as the closest vector to .b in the sense that . b − b right-hand side .b,

b − A y 2 for all .y. Such .xˆ is the solution to the so-called normal equations AT Axˆ = AT b

.

2

(7.26)

The following result states the condition for the solvability of system (7.26). x = AT b is uniquely solvable Theorem 7.3.1 The system of normal equation .AT A if and only if .rank(A) = n. Proof The coefficient matrix .AT A is square of size .n × n. If .AT A x = AT b T is uniquely solvable, then .rank(A A) = n. Now, consider the linear system  Premultiplying it by .AT , we have .AT A  and thus .x = 0 since .A x = 0. x = 0, T  .rank(A A) = n. Therefore, .A x = 0 has only the zero solution, and this means that .rank(A) = n. On the other hand, if .rank(A) = n, from .AT A x = 0 follows 2 T T   .x  A A x = 0 or, equivalently, . A x 2 = 0. Hence, .A x = 0, and .x = 0 since T x = 0  has only the trivial solution .x = 0,  proving .rank(A) = n. Therefore, .A A that .rank(AT A) = n. We conclude that .AT A x = AT b is uniquely solvable.  

184

7 Linear Algebra Background

⎤ ⎤ ⎡    12  2 14 28 1 2 3 T ⎦ ⎦ ⎣ is singular 24 = 4 , then .A A = 28 56 246 36 6 ⎤ ⎡ 12 because .rank(A) = 1 < 2. On the other hand, if .A = ⎣ 2 4 ⎦, then .AT A = 35 ⎤ ⎡    12  14 25 123 ⎣ is nonsingular since .rank(A) = 2. 2 4⎦ = 25 45 245 35 ⎡

1 ⎣ Example If .A = 2 3

Example Solve the following least squares problem: ⎤ ⎡ ⎤ 2 1 −1   x 1 ⎦ ⎣ ⎣ . ≈ 0⎦. 0 1 x2 1 −1 2 ⎡

Solution Writing the normal equations  .

⎤ ⎡ ⎡ ⎤   2 1 −1    x 1 0 −1 ⎣ 1 0 −1 1 ⎣0⎦, = 0 1⎦ −1 1 2 −1 1 2 x2 −1 2 1

we obtain the following linear system that has to be solved for .(x1 , x2 ):  .

2 −3 −3 6



x1 x2



  1 . = 0

Solving this square linear system by Gaussian elimination, we find  .

⎧ ⎨ 2x1 − 3x2 = 1 2x1 − 3x2 = 1 ⇒ 3 3 ⎩ −3x1 + 6x2 = 0 x2 = 2 2

leading to the solution x1 = 2, x2 = 1.

.

The residual 2-norm for the solution is  ⎡ ⎡ ⎤ ⎡ ⎤ ⎤   2 1 −1   1  √     2  = ⎣ −1 ⎦ = 3. . ⎣ 0 ⎦ − ⎣ 0 1⎦     1    1 −1 2 1 2 2

7.3 Least-Squares Problems

185

Example A doctor wants to understand the effect of a certain drug on the blood pressure. He administers to his patients various dosages of the oral drug every day, and he records the average blood pressure variation as follows: Dosage (.×100 mg) Blood pressure value (in mm Hg)

1 120

0 100

2 140

3 160

When the data are plotted as in the figure below, it is clear that a linear trend exists between dosage and blood pressure. 100+20 x

190

Blood pressure value (in mm~Hg)

180 170 160 150 140 130 120 110 100

0

0.5

1

1.5 2 2.5 3 Dosage (x 100 mg)

3.5

4

4.5

Using a linear model .y = mx + q where we denote by q the initial pressure before the dosage, by y the blood pressure value after the dosage, by x the drug dosage, and by m the pressure variation, we can write the least squares problem  where .A x ≈ b, ⎤ ⎤ ⎡ 100 10   ⎢ 120 ⎥ ⎢1 1⎥ q ⎥ ⎥ ⎢ ⎢  .A = ⎣ 1 2 ⎦ , b = ⎣ 140 ⎦ , and x = m , 160 13 ⎡

then the previous discussion guarantees that .x is the solution of the normal equations  That is, solve AT A x = AT b.

.

AT A x = AT b ⇐⇒

.



4 6 6 14



   520 q . = 880 m

186

7 Linear Algebra Background

The solution is .q = 100 and .m = 20, and the value of blood pressure can be predicted from the relationship y = 100 + 20x,

.

which can be used for personalized healthcare. Example Although there is no clear guidance to predict tumor growth, one popular model hypothesizes that the number, say y, of cancer cells may grow exponentially with time (t in days or weeks), according to a law of the form y = αeβt ,

(7.27)

.

where .α and .β are parameters of the model which depend on the specific type of cancer. Using the experimental medical data reported in the table below, we want to determine the best fit for the parameters .α and .β in the least squares sense: .

t (weeks) y (cells)

1 140

2 238

3 434

4 756

5 1232

6 . 2254

Since Eq. (7.27) is nonlinear, it is not straightforward to apply the least squares method previously described. However, upon computing the logarithm of both members of (7.27), we get the equivalent model .

ln y = ln α + βt,

(7.28)

which expresses a linear relation between .ln y and t. Therefore, we can approximate the solution of Eq. (7.27) by finding the least squares solution of the linearized model (7.28). We build a new table of experimental data of .ln y versus time: .

t (weeks) ln y (cells)

1 ln(140)

2 ln(238)

3 ln(434)

4 ln(756)

and finally we obtain the least squares problem ⎡

1 ⎢1 ⎢ ⎢ ⎢1 .⎢ ⎢1 ⎢ ⎣1 1

⎤ ⎤ ⎡ ln(140) 1 ⎢ ln(238) ⎥ 2⎥ ⎥ ⎥  ⎢ ⎥ ⎥ ⎢ 3 ⎥ ln α ⎢ ln(434) ⎥ =⎢ ⎥. ⎥ ⎢ ln(756) ⎥ 4⎥ β ⎥ ⎥ ⎢ ⎣ ln(1232) ⎦ 5⎦ ln(2254) 6

5 ln(1232)

6 , ln(2254)

7.4 Permutations and Determinants

187

Gaussian elimination returns the approximate solution .ln α ≈ 4.3872 and .β ≈ 0.5538. Hence, .α ≈ e2.2753 ≈ 80.4119, .β ≈ 0.5538. The plot below confirms that the model .y = 80.4119e0.5538t is accurate for our data. Tumor growth model

3000

least-square model experimental data

Number of malignant cells

2500

2000

1500

1000

500

0

0

1

2

3 Weeks

4

5

6

7.4 Permutations and Determinants We define a permutation .p = (p1 , p2 , . . . , pn ) of the first n natural numbers {1, 2, . . . , n} any rearrangement of the natural order .(1, 2, . . . , n). Similar to n-dimensional vectors, we use an arrow to represent permutations in order to distinguish them from scalar quantities. For example, the set

.

.

{(1, 2) , (2, 1)}

contains two distinct permutations of .(1, 2), while .

{(1, 2, 3) , (1, 3, 2) , (2, 1, 3) , (2, 3, 1) , (3, 1, 2) , (3, 2, 1)}

contains all the six distinct permutations of .(1, 2, 3). There are .n! = n(n − 1)(n − 2) · · · 1 different permutations of the first n positive natural numbers .{1, 2, . . . , n}. Given a permutation .p = (p1 , p2 , . . . , pn ), we can restore it to the natural order .{1, 2, . . . , n} by an even or an odd sequence of pairwise interchanges depending on .p  and n. For example, the permutation .(4, 3, 2, 1) can be restored to .(1, 2, 3, 4) with only two interchanges (.4 ↔ 1 followed by .3 ↔ 2), .

    (4, 3, 2, 1) → 1, 3, 2, 4 → 1, 2, 3, 4 ,

188

7 Linear Algebra Background

or by six adjacent interchanges that bring first 4 in the fourth position in three steps, then 3 in the third position in two steps, 2 in the second, and finally 1 in the first position in one step: .

        (4, 3, 2, 1) → 3, 4, 2, 1 → 3, 2, 4, 1 → 3, 2, 1, 4 → 2, 3, 1, 4     → 2, 1, 3, 4 → 1, 2, 3, 4 .

Clearly, other possibilities exist. However, try to restore .(4, 3, 2, 1) to natural order by using an odd number of pair-wise interchanges, and you will fail. Similarly, .(1, 3, 2, 4) can be restored to natural order only using an even number of pair-wise exchanges. This observation motivates the following definition. Definition 7.4.1 The sign of a permutation .p is defined as the integer .σ (p)  = ±1 such that ⎧ +1 if p can be restored to natural order by an even ⎪ ⎪ ⎨ number of pair-wise interchanges, .σ (p)  = ⎪ −1 if p  can be restored to natural order by an odd ⎪ ⎩ number of pair-wise interchanges. Example If .p = (1, 3, 2, 4), then .σ (p)  = −1, and if .p = (4, 3, 2, 1), then .σ (p)  = +1. The sign of the natural order is always .σ (p)  = +1. At this stage, the general definition of determinant of a square matrix can be given. Definition 7.4.2 For an .n × n matrix .A = [aij ], the determinant of A is defined as the scalar . det(A) = σ (p)a  1p1 a2p2 · · · anpn , p

with the sum taken over all the .n! possible permutations .p = (p1 , p2 , . . . , pn ) of {1, 2, . . . , n}. The term .a1p1 a2p2 . . . anpn in the definition contains one entry from each row and from each column of A. The determinant of A can be denoted as .det (A) or sometimes as .|A|. Note that the concept of determinant is defined only for square matrices. .

Example For .n = 2, A is a .2 × 2 matrix and there are .2! = 2 permutations of {1, 2}, namely, .{(1, 2), (2, 1)}. Consequently, .det (A) contains only the two terms, .σ (1, 2)a11 a22 and .σ (2, 1)a12 a21 . Since .σ (1, 2) = +1 and .σ (2, 1) = −1, we obtain the formula .

.

a11 a12 = a11 a22 − a12 a21 . a21 a22

7.4 Permutations and Determinants

189

Example For .n = 3, A is a .3 × 3 matrix. There are .3! = 6 permutations of .{1, 2, 3}, namely .{(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)}, and .det (A) contains the six terms shown in the table below: .p 

= (p1 , p2 , p3 ) (1,2,3) (1,3,2) (2,1,3) (2,3,1) (3,1,2) (3,2,1)

.σ (p) 

.a1p1 a2p2 a3p3

.+

.+a11 a22 a33

– – .+ .+ –

.−a11 a23 a32 .−a12 a21 a33 .+a12 a23 a31 .+a13 a21 a32 .−a13 a22 a31

Therefore, the definition of determinant for a .3 × 3 matrix is . det(A) = σ (p)a  1p1 a2p2 a3p3 = +a11 a22 a33 − a11 a23 a32 − a12 a21 a33 p

+a12 a23 a31 + a13 a21 a32 − a13 a22 a31 . The terms of this sum can be rearranged pictorially as in the table below, where the first two columns of matrix A are copied beside it. The determinant can then be computed as the sum of the products of the elements on the three diagonals from north-west to south-east, minus the sum of the products of the elements on the three diagonals from south-west to north-east. This simple mnemonic for .3 × 3 matrices is known as the rule of Sarrus, and it can be helpful when one does not remember the mathematical definition of determinant. + a11

+ a12

+ a13

a11

a12

a21

a22

a23

a21

a22

a31 −

a32 −

a33 −

a31

a32

Example The determinant of a triangular matrix, either upper or lower triangular, is the product of its diagonal entries. For example, for an upper triangular matrix,

.

t11 0 .. .

t12 t22 .. .

··· ··· .. .

t1n t2n .. = t11 t22 · · · tnn . .

0 0 · · · tnn

190

7 Linear Algebra Background

The formula is a direct consequence of the rule of Sarrus, but it can also be derived from the definition of determinant observing that each term .t1p1 t2p2 . . . tnpn contains exactly one entry from each row and the expression simplifies to .t11 t22 . . . tnn due to the presence of 0’s in the structure of T . Although the rule of Sarrus does not generalize to matrices of dimension higher than 3, the theorem below simplifies the computation of determinants for matrices of size .n > 3 by reducing it to the computation of determinants of lower order (cofactor expansion). Theorem 7.4.1 Let A be an .n × n matrix with entries .aij . For any .i = 1, 2, ..., n, we have the cofactor expansion along the i-th row:

.

det(A) =

n

aij Cij = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin ,

j =1

where the cofactor .Cij of an .n × n matrix is the quantity .(−1)i+j multiplied the determinant of the .(n − 1) × (n − 1) submatrix of A obtained after suppressing the i-th row and j -th column of A. Analogously, for any .j = 1, 2, ..., n, we have the cofactor expansion along the j -th column:

.

det(A) =

n

aij Cij = a1j C1j + a2j C2j + · · · + anj Cnj .

i=1



Example The determinant of .A ⎛⎡

⎤ 1 −1 ⎥ ⎥ is equal to .1 · det = −2 ⎦ 3 ⎛⎡ ⎤⎞ 4 −1 5 1 −1 0 −2 ⎦⎠ − 2 · det ⎝⎣ 2 1 −2 ⎦⎠ − 1 · 0 3 2 −1 3 1 3 ⎢5 1 ⎢ ⎣2 1 2 −1 ⎤⎞

−2 4 0 0

⎤⎞ ⎛⎡ 1 4 −1 5 ⎝⎣ 1 0 −2 ⎦⎠ − 3 · det ⎝⎣ 2 −1 0 3 2 ⎛⎡ ⎤⎞ 5 1 4 det ⎝⎣ 2 1 0 ⎦⎠ = −4 + 120 + 2 + 16 = 134. 2 −1 0 The development can be done on any row or column of A. It would be convenient to use the third column in this case as it contains two 0’s, so that two terms vanish in the expansion. The following properties easily follow from the definition of determinant.

Theorem 7.4.2 Let A be an .n × n matrix and B the matrix obtained from A by elementary row operations of Type 1, 2, or 3. 1. For Type 1 operations (interchanging of two rows, say i-th and j -th, of A), it is .det (B) = −det (A).

7.4 Permutations and Determinants

191

Proof Since .B = A except that .Bi∗ = Aj ∗ and .Bj ∗ = Ai∗ , for each permutation p = (p1 , p2 , . . . , pn ) of .(1, 2, . . . , n),

.

.

b1p1 · · · bipi · · · bjpj · · · bnpn = a1p1 · · · ajpi · · · aipj · · · anpn = a1p1 · · · aipj · · · ajpi · · · anpn .

This means that the addends of the sum in the definition of .det (A) and .det (B) are the same except for their opposite sign, since they are associated with two permutations .(p1 , . . . , pi . . . , pj , . . . , pn ) and .(p1 , . . . , pj , . . . , pi , . . . , pn ) which differ only by one pair-wise interchange. Consequently, .det (B) = −det (A).   2. .det (B) = αdet (A) for Type 2 operations (multiply the i-th row of A by a constant .α = 0). Proof Since .B = A except that .Bi∗ = αAi∗ , for each permutation .p = (p1 , p2 , . . . , pn ), .

b1p1 · · · bipi · · · bnpn = a1p1 · · · αaipi · · · anpn   = α a1p1 · · · aipi · · · anpn .

By applying the definition of determinant, we get .det (B) = αdet (A).

 

3. .det (B) = det (A) for Type 3 operations (add .α times the i-th row to the j -th row of A). Proof Since .B = A except that .Bj ∗ = Aj ∗ + αAi∗ , for each permutation .p = (p1 , p2 , . . . , pn ), we have   b1p1 · · · bipi · · · bjpj · · · bnpn = a1p1 · · · aipi · · · ajpj + αaipj · · · anpn   = a1p1 · · · aipi · · · ajpj · · · anpn + α a1p1 · · · aipi · · · aipj · · · anpn , . and consequently, det(B) =



σ (p)a  1p1 · · · aipi · · · ajpj · · · anpn +

p

+α .



σ (p)a  1p1 · · · aipi · · · aipj · · · anpn .

p

The first sum on the right side of the last equation is exactly .det (A), while the ! that has two (the i-th and j second sum is the determinant expansion of a matrix .A ! = 0 since the sign of the determinant is reversed th) identical rows. Clearly, .det (A) whenever two rows are interchanged. We conclude that the second sum on the right side of the last expression is zero, and thus .det (B) = det (A).  

192

7 Linear Algebra Background

Theorem 7.4.3 An .n × n matrix A is nonsingular if and only if .det (A) = 0. Equivalently, A is singular if and only if .det (A) = 0. Proof The property follows from the fact that A is nonsingular if and only if it has n pivots. Therefore, by the Gaussian elimination, it can be reduced to un upper triangular matrix U with nonzero diagonal entries A. This means that .det (U ) = u11 u22 · · · unn = 0. Since A can be computed back from U by elementary operations of Types 1, 2, and 3, then .det(A) = 0. The vice versa is   obviously true.

7.5 Eigenvalue Problems 7.5.1 Introduction Another fundamental equation in computational science writes as A x = λ x,

(7.29)

.

where A is an .n × n square matrix, .x is an n-dimensional vector, and .λ is a real or complex number. Observe the difference with respect to the linear system  the right-hand side in (7.29) is not known, and the quantities equation .A x = b: to be determined are two, .λ and .x. The vector .A x has in general a different direction from vector .x, but for some .x, called eigenvectors of A, it happens that .x  and .A x are parallel. The number .λ represents the scaling factor by which .x gets shrunken, stretched, or maybe reversed when it is multiplied by A, and it is called an eigenvalue of A. Eigenvalues play a fundamental role in the study of dynamical processes that have solution which grow, decay, or oscillate in time. These problems cannot be solved by Gaussian elimination.

vector Ax=lx vector x

7.5 Eigenvalue Problems

193

Definition 7.5.1 For an .n×n matrix A, scalars .λ and .n×1 vectors .x = 0 satisfying A x = λ x are called eigenvalues and eigenvectors of A, respectively. Any such pair .(λ, x ) is called an eigenpair for A. The set of distinct eigenvalues, denoted by .σ (A), is called the spectrum of A. For square matrices A, the following number is called the spectral radius of A: .ρ(A) = max |λ| . Nonzero row vectors .yH such that .

λ∈σ (A)

.

yH (A − λI ) = 0 are called left-hand eigenvectors for A.

Example If A is the identity matrix, for every vector, .A x = x. Therefore, all vectors are eigenvectors of .A = I and all eigenvalues of .A = I are .λ = 1. This is unusual. Most of the .2 × 2 matrices have two eigenvector directions and two eigenvalues. Other matrices may have .λ = 2 or 0 or .−1 or 1 or a complex number.

7.5.2 Computing the Eigenvalues and the Eigenvectors Let us consider the problem of computing the eigenvalues and eigenvectors of the 23 . From the definition of eigenvalue, we have the following matrix .A = 14 characterization: λ is an eigenvalue ⇐⇒ A − λI is singular ⇐⇒ det (A − λI ) = 0,

.

and this equation can be used to determine all .λ’s and corresponding .x’s. By expanding .det (A − λI ), we obtain the second-degree polynomial p(λ) = det (A − λI ) =

.

2−λ 3 = λ2 − 6λ + 5 = (λ − 1) (λ − 5) , 1 4−λ

which is called the characteristic polynomial of A. The eigenvalues of A are the solutions of the characteristic equation .p(λ) = 0, that is, the roots .λ = 1 and .λ = 5 of the characteristic polynomial .λ2 − 6λ + 5 = 0. The eigenvectors associated with .λ = 1 and .λ = 5 are vectors .x = 0 that satisfy x = λ x . They can be determined by solving the two homogeneous the equation .A  For .λ = 1, the system .(A − linear systems .(A − 1I ) x = 0 and .(A − 5I ) x = 0.  1I ) x = 0 reduces to the single equation .x + 3y = 0, whose general solution writes  −3 .x =y , with .y = 0 since eigenvectors are by definition non-null. For .λ = 5, 1  the system .(A − 5I ) x=  0reduces to the single equation .x − y = 0, whose general 1 with .y = 0. Hence, we conclude that the eigenvectors solution writes .x = y 1

194

7 Linear Algebra Background



   −3 1 of A are all vectors .x = y and .x = y , with .y = 0. There are infinitely 1 1     −3 1 many eigenvectors of A; however, they are all nonzero multiples of . and . . 1 1 Theorem 7.5.1 Given an .n × n matrix A, the characteristic equation .p(λ) = det (A − λI ) = 0 is a polynomial equation of degree n with leading term .(−1)n λn , whose roots are the eigenvalues of A. Altogether, A has n eigenvalues. Some of them may be complex numbers, and some may be repeated. When the entries of A are real, if complex eigenvalues exist they must appear in complex conjugate pairs: if an eigenvalue .λ is complex, its complex conjugate .λ¯ is also an eigenvalue of A. Proof Recalling the definition of matrix determinant, we can write .

det(A − λI ) =



     σ (p)  a1p1 − δ1p1 λ a2p2 − δ2p2 λ · · · anpn − δnpn λ ,

p

where .δij is the so-called Kronecker symbol defined as  δij =

.

1, if i = j, 0, if i = j,

and .p = (p1 , p2 , . . . , pn ) varies over all the permutations of .(1, 2, . . . , n). It follows that .p(λ) = det (A − λI ) is a polynomial in .λ. The highest power of .λ is given by the term .

(a11 − λ) (a22 − λ) · · · (ann − λ) ,

so the degree is n and the leading term is .(−1)n λn . The eigenvalues are the roots of .p(λ) = 0. The Fundamental Theorem of Algebra states that every polynomial of degree n with real or complex coefficients has n roots, not necessarily distinct, and when the polynomial coefficients are real, some roots may be complex numbers, but they must occur in conjugate pairs. This proves completely the assertion.     1 1 Example Determine the eigenvalues of A = . . −1 1 Solution The characteristic polynomial is .

det(A − λI ) =

1−λ 1 = (1 − λ)2 + 1 = λ2 − 2λ + 2. −1 1 − λ

The characteristic equation is .λ2 − 2λ + 2 = 0, and the eigenvalues are

7.5 Eigenvalue Problems

195

λ=

.





2

−4

√ 2 ± 2 −1 = = 1 ± i. 2

Note that the complex eigenvalues of A are complex conjugates of each other as it should be since A is real, according to the previous theorem. Remark The coefficients of the characteristic equation can be expressed in terms of the so-called principal minors. A .k × k principal submatrix of a matrix A of dimension .n×n is computed by suppressing the same set of .n−k rows and columns from A. Its determinant is called a .k × k principal minor of A. The characteristic equation .det (A − λI ) = 0 can be written λn + c1 λn−1 + c2 λn−2 + · · · + cn−1 λ + cn = 0,

.

where ck = (−1)k



.

(all k × k principal minors of A).

Consider, for example, matrix ⎡

⎤ 3 1 −3 .A = ⎣ 0 1 0 ⎦. 0 2 0 The .1 × 1 principal minors of A are the diagonal entries 3, 1, and 0. The .2 × 2 principal minors of A are .

31 = 3, 01

3 −3 = 0, 0 0

10 = 0. 20

The only .3 × 3 principal minor of A is .det (A) = 0. Using the principal minors, we obtain the characteristic equation λ3 − 4λ2 + 3λ = 0,

.

which can be factored as λ (λ − 1) (λ − 3) = 0.

.

We conclude that the eigenvalues of A are .λ1 = 0, .λ2 = 1, and .λ3 = 3. For more advanced topics in linear algebra, we refer the reader to specialized textbooks, such as [99, 143, 186].

Chapter 8

Regression

8.1 Regression as a Geometric Problem Linear regression is a method for modelling the relationship between two scalar values: the input variables .x = (x1 , x2 , . . . , xN ) (usually called “predictors” or “independent variables”) and the output variable y (usually called “response” variable). The model assumes that y is a linear function of x, i.e., a weighted sum of the input variables y = β0 + β1 x1 + β2 x2 + · · · + βn xn + error .

(8.1)

.

The objective of the regression is to find the values for the coefficients .β, hereafter ˆ that minimize the error .ε (also called “residual”) in the prediction denoted with .β, of the output variable y. Linear regression can be stated using matrix notation as follows: y = Xβ + ε

(8.2)

.



x11 ⎜ x21 .X = ⎜ ⎝ ... xd1

x12 x22 ... xd2

x13 x23 ... xd3

... ... ... ...

⎞ x1n x2n ⎟ ⎟ ... ⎠ xdn



⎞ β1 ⎜ β2 ⎟ ⎟ β=⎜ ⎝...⎠ βn



⎞ y1 ⎜ y2 ⎟ ⎟ y=⎜ ⎝...⎠ yd



⎞ e1 ⎜ e2 ⎟ ⎟ e=⎜ ⎝...⎠. ed

Finding the best estimator .βˆ consists of finding the point in the plane that is closest to the observed values of the variable y. Indeed, .βˆ corresponds to the vector such that the difference between .βˆ and y is minimized, i.e., the vector of the errors .ε has to be perpendicular to the plane .π on which the vectors .Xβ belong (see Fig. 8.1). Recalling that two vectors u and v are orthogonal if .uT v = 0, this condition translates to © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 P. Lecca, B. Carpentieri, Introduction to Mathematics for Computational Biology, Techniques in Life Science and Biomedicine for the Non-Expert, https://doi.org/10.1007/978-3-031-36566-9_8

197

198

8 Regression

Fig. 8.1 .βˆ is the point in the plane such that the regression error .ε is perpendicular to the plane .π . Minimizing the distance between .Xβ and y means minimizing the norm of the vector .ε

ˆ TX = 0 (y − Xβ) ˆ =0 XT (y − Xβ) .

XT y − XT Xβˆ = 0 XT y = XT Xβˆ

that result in the normal equation  −1 βˆ = XT X XT y.

.

(8.3)

This equation can be solved directly, by calculating the inverse of .XT X (this is called “direct method”). However, the presence of the matrix inverse can be numerically challenging or unstable, in many practical applications. A frequently efficient solution to the inverse is achieved by using matrix decomposition. If X is a square matrix, it admits a .QR factorization [where Q is an orthogonal matrix (i.e., .QT = Q−1 ) and R an upper triangular matrix]: X = QR ⇒ XT = R T QT .

.

Then .

βˆ =

 −1  −1 R T QT (QR) R T QT y = R T Q−1 QR R T QT y

 −1 −1  R T QT y = R −1 R T R T QT y = R −1 QT y. = RT R

(8.4)

Unlike the QR decomposition, all matrices have a singular value decomposition (SVD). As a basis for solving the system of linear equations for linear regression, SVD is more stable and the preferred approach. The SVD decomposition of X has the following form: X = U ΣV T ,

.

(8.5)

where matrices U and .V T are unitary matrices. One of the main benefits of having unitary matrices such as U and .V T is that if we multiply one of these matrices by its

8.1 Regression as a Geometric Problem

199

transpose (or the other way around), the result equals the identity matrix. The matrix Σ is diagonal, and it stores non-negative singular values usually listed in decreasing order. The regression coefficients can be found by calculating the pseudo-inverse of the input matrix X and multiplying that by the output vector y. Since

.

T   T XT = (U Σ)V T = VT (U Σ)T = V ΣU T XT X = V ΣU T U ΣV T = V Σ 2 V T −1  −1  −1  . Σ2 (XT X)−1 = (V Σ 2 )V T = VT V −1  −1  −1  −1  −1 XT X Σ2 XT = V T V −1 V ΣU T = V Σ 2 ΣU T = V Σ −1 U T , we obtain that βˆ = V Σ −1 U T ,

.

(8.6)

where .V Σ −1 U T is the pseudo-inverse of X. The value of the coefficient is affected by an error due to the fact that the model itself is affected by an error .ε endowed with a variance .σ 2 . In the following ˆ subsection, we will present the derivation of the standard error on .β.

8.1.1 Standard Error on Regression Coefficients The standard error is an estimate of the standard deviation of the coefficient. It can be thought of as a measure of the precision with which the regression coefficient is measured. If a coefficient is large compared to its standard error, then it is probably different from 0. In order to calculate the standard error of the regression coefficient, we consider y = Xβ + ε .

ε ∼ N(0, σ 2 I )

,

(8.7)

where I is the identity matrix. Using the normal equation (8.3), the variance of .βˆ is ˆ = Var . Var(β) ˆ = Var(β)



 T

  −1 −1 −1 T T T T T T X X X y = X X X Var(y) X X X



 T

 −1 −1 T T 2 T T X X X X X Iσ X

200

8 Regression

=σ 2

  T −1 −1 XT X XT X XT XT



   −1 −1 T T XT XT X =σ 2 XT X XT

.



  −1 −1 T T T T X X =σ X X X X 2



 −1   T −1 T T T X X X [X] X X =σ 2

 −1   −1 T T X X X X =σ XT X 2

so that .

 −1 ˆ = σ 2 XT X Var(β) .

(8.8)

As an explanatory example of what has been explained so far, let us consider the linear single variable model yi = a + bxi + εi ,

i = 1, . . . , n,

.

then ⎛

1 ⎜1 ⎜ .X = ⎜ . ⎝ ..

⎞ x1 x2 ⎟ ⎟ .. ⎟ , . ⎠

β=

 a b

(8.9)

1 xn

.

 −1 XT X =

n



1 xi2 −



xi2 −  − xi

 2 xi



xi n

 ,

(8.10)

and, therefore, we finally obtain the expression for the standard error of the ˆ coefficient .β: .

SE =



ˆ = Var(b)

  −1  σ 2 XT X

22

=

 n



nσ 2  2. xi2 −( xi )

(8.11)

8.2 Regression via Maximum-Likelihood Estimation

201

8.2 Regression via Maximum-Likelihood Estimation The estimation of regression coefficient with maximum-likelihood method starts with the definition of the Gaussian noise simple linear regression model as follows [176]: 1. The distribution of X is arbitrary. 2. If .X = x, then .Y = β0 + β1 x + ε, for some parameters .β0 and .β1 , and some random noisevariable .ε. 3. .ε ∼ N 0, σ 2 and is independent of X. 4. .ε is independent across observations. The response variable Y is independent across observations, conditional on the predictor X, i.e., .Y1 and .Y2 are independent given .X1 and .X2 , as a result of these suppositions. This is a special case of the simple linear regression model; the first two assumptions are the same, but we are assuming much more about the noise variable .ε: it is not just mean zero with constant variance, but it has a particular distribution (Gaussian). On the basis of these assumptions, it is possible   to calculate the conditional probability distribution function of Y for each x, .p y | X = x; β0 , β1 , σ 2 . Given any data set .(x1 , y1 ) , (x2 , y2 ) , . . . (xn , yn ), the probability density, under the model, of observing that data is

.

n n    (y −(β0 +β1 xi ))2 1 − i 2σ 2 . e p yi | xi ; β0 , β1 , σ 2 = √ 2 2π σ i=1 i=1

(8.12)

In multiplying together the probabilities like this, we are using the independence of the .Yi .   We note that any guess at the regression coefficients, say . b0 , b1 , s 2 , gives a probability density:

.

n n   (y −(b +b x ))2  1 − i 0 21 i 2s . e p yi | xi ; b0 , b1 , s 2 = √ 2 2π s i=1 i=1

(8.13)

Equation (8.13) is the likelihood, that is, a function of the parameter values. Usually, it is just as informative, and much more convenient, to work with the log-likelihood, n    L b0 , b1 , s 2 = log p yi | xi ; b0 , b1 , s 2 i=1

.

=

n 

 log p yi | xi ; b0 , b1 , s 2

i=1 n 1  n = − log 2π − n log s − 2 (yi − (b0 + b1 xi ))2 . 2 2s i=1

(8.14)

202

8 Regression

When using the maximum-likelihood technique, we calculate the parameter values that maximize the likelihood, or more precisely, the log-likelihood. This gives the following estimators: βˆ1 = .

n

¯ (yi − y) ¯ i=1 (xi − x) n 2 − x) ¯ (x i=1 i

=

Cov(X, Y ) 2 sX

βˆ0 = y¯ − βˆ1 x¯ σˆ 2 =

1 n

n  

(8.15) 

yi − βˆ0 + βˆ1 xi

2

.

i=1

The least squares estimators and the estimates for the slope and intercept agree completely. This is a unique characteristic of the isolated Gaussian noise assumption. Similar to this, the in-sample mean squared error is precisely .σˆ 2 . The fact that the parameter values are identical to those obtained using least squares may give the impression that we did not benefit significantly from the Gaussian noise assumption. The Gaussian noise premise is crucial because it provides us with a precise conditional distribution for each .Yi , which in turn provides us with a distribution for the estimators called the sampling distribution. Remember that we can write .βˆ0 , .βˆ1 in the form “constant plus sum of noise variables”, i.e., βˆ1 = βˆ0 +

.

n  xi − x εi , 2 nsX i=1

(8.16)

where x=

.

1 1  2 (xi − x)2 . xi , and sX = n−1 n n

n

i=1

i=1

Since .εi are all independent Gaussians, .βˆ1 is also Gaussian, so we can say that 

σ2 ˆ1 ∼ N β1 , .β 2 nsX

 .

(8.17)

The fitted value at an arbitrary point .x, m(x), ˆ is a constant plus a weighted sum of the .ε:   n 1 xi − x¯ .m(x) ˆ = β0 + β1 x + (8.18) εi . 1 + (x − x) ¯ 2 n sX i=1

8.3 Regression Diagnostic

203

Since the .εi are independent Gaussian, a weighted sum of them is also Gaussian, so that    σ2 (x − x) ¯ 2 (8.19) .m(x) ˆ ∼ N β0 + β1 x, . 1+ 2 n sX It is possible to demonstrate that .

nσˆ 2 2 ∼ χn−2 σ2

by manipulating the .εi slightly more intricately. All of these are crucial because they are necessary when performing statistical inference on the parameters, creating confidence intervals, or evaluating hypotheses. These sampling distributions will enable us to provide confidence intervals for the expected values, m(x), as well as prediction intervals (of the form “when .X = x ∗ , Y will be between l and .uwith 95% probability”) or full distributional forecasts when making predictions of new Y ’s.

8.3 Regression Diagnostic

The goal of regression diagnostics, a subset of regression analysis, is to determine whether the estimated model and the assumptions we made about the data and the model are consistent with the data that have been collected. These diagnostics include graphical and numerical tools for assessing the suitability of the assumptions with regard to the data and the model's structure, identifying outliers (extreme points) that may dominate the regression and potentially skew the results, and determining whether collinearity, i.e., strong relationships between the independent variables, is having an impact on the findings. The assumptions of linear regression refer to the model (M), the predictors (P), and the error term (E) [62]. There are two assumptions about the model:

(M1) The regression model is linear in its parameters.
(M2) The specification of the regression model is accurate, i.e., we did not leave out any crucial variables or postulate the incorrect form. In other words, for each point within the range of the regression domain, Equation (8.7) is a reliable approximation of the relationship between the x's and the y.

There are two assumptions about the predictors:

(P1) In repeated sampling, the values of x are fixed. In other words, the values of the x variables are measured precisely. In practice, this assumption may not be met, and the x's may exhibit some random variance. As a result, it is more suitable

to formulate the assumption as follows: the errors of the x's are minimal (negligible) in comparison to the range across which they are measured.
(P2) The x-variables are linearly independent of each other, which means they have no exact linear correlations.

Finally, there are five assumptions about the error term:

(E1) Given a value of xi, the mean of εi is zero, or

E(\varepsilon_i \mid x_i) = 0.    (8.20)

This is known as the "zero mean assumption".
(E2) The conditional variance of εi is equal for all observations,

\mathrm{Var}(\varepsilon_i \mid x_i) = \sigma^2.    (8.21)

This is known as the "homoscedasticity (constant variance) assumption". It implies that

\mathrm{Var}(y_i \mid x_i) = \sigma^2.    (8.22)

(E3) For any two distinct observations i and j, the correlation between εi and εj is zero, indicating that the random errors are uncorrelated. In formula,

\mathrm{Cov}(\varepsilon_i, \varepsilon_j \mid x_i, x_j) = 0.    (8.23)

If this assumption is satisfied, the average value of y depends solely on x and not on ε. It is important to note that being uncorrelated is a weaker requirement than being independent: if two random variables are statistically independent, the correlation coefficient between them is zero, but zero correlation does not imply statistical independence.
(E4) The covariance between εi and xi is zero, Cov(εi, xi) = 0. This assumption is satisfied if x is non-random and assumption (E1) holds.
(E5) The random errors εi have a normal distribution. This is referred to as the "normality assumption". This assumption also implies that the εi and εj are not only uncorrelated but also independent (for two normally distributed variables, zero covariance implies statistical independence); thus, the assumptions about the error term are usually summarized as εi ∼ NID(0, σ²), where "NID" means "normally and independently distributed". Furthermore, the measured y's are independent. Indeed, any linear function of normally distributed variables is itself normally distributed. It should be noted that the ordinary least squares solution does not require this assumption.

For the accurate interpretation of regression analysis results (estimated coefficients, forecasts, confidence intervals, testing of hypotheses, etc.), it is imperative that these assumptions hold. When these assumptions are violated, we encounter

issues with parameter nonlinearity (assumption (M1)), model bias or specification error (assumption (M2)), heteroscedasticity (assumption (E2)), autocorrelation (assumption (E3)), and non-normality of the errors (assumption (E5)). Here, we look at the regression diagnostics used to find these issues. Regression diagnostic plots can be created in many computational environments using library functions. There are four types of diagnostic plots, as follows.

Residuals Versus Fitted It is used to verify the assumption of a linear relationship. A horizontal line with no obvious patterns is a sign of a solid linear relationship.

Normal Q–Q Quantile–quantile plots, often known as Q–Q plots, are a graphical tool that can be used to determine if a collection of data is likely to have come from a theoretical distribution such as the normal or exponential. A normal Q–Q plot can therefore be used to verify the assumption that the residuals are normally distributed. Since it is only a visual inspection and not a complete proof, it is not completely objective. However, it enables us to quickly determine whether our assumption is reasonable and, if not, how the assumption is violated and which data points are involved. The assumption is supported if the points follow a straight line.

Scale Location Also known as spread–location, it is used to verify the homoscedasticity (homogeneity of variance) of the residuals. A horizontal line is a reliable indicator of homoscedasticity.

Residuals vs. Leverage It is used to find outliers and high-leverage points that, when included in or omitted from the analysis, may have an impact on the regression results. An outlier is a point with an exceptional value of the outcome variable. Outliers raise the residual standard error, which can affect how the model should be interpreted. Outliers can be identified by studying the standardized residual, which is the residual divided by its estimated standard error. The standardized residual expresses by how many standard errors an observation deviates from the regression line. According to a standard rule of thumb, potential outliers include observations with standardized residuals greater than 3 in absolute value. A data point has high leverage if it has extreme predictor x values. This can be detected by examining the leverage statistic, or hat-value. A value of this statistic above 2(p + 1)/n indicates an observation with high leverage, where p is the number of predictors and n is the number of observations. Without going into the details, we only recall here that the leverage score for observation xi is defined as

h_{ii} = x_i^T (X^T X)^{-1} x_i.    (8.24)

The leverage score can be interpreted as a measure of distance, where individual observations are compared to the average of all observations [38]. Figures 8.2, 8.3, 8.4, and 8.5 show the diagnostic plots obtained on different data sets and illustrate when the assumptions of the regression can be considered verified and when they cannot.
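As a sketch of how such diagnostics might be obtained in practice (using statsmodels and matplotlib on hypothetical simulated data; this is not the code that produced the figures below), one can draw the four plots and apply the rules of thumb for outliers and high-leverage points:

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=60)
y = 1.0 + 0.7 * x + rng.normal(0, 1.0, size=60)    # hypothetical data

X = sm.add_constant(x)                             # design matrix with an intercept column
fit = sm.OLS(y, X).fit()
infl = fit.get_influence()
std_resid = infl.resid_studentized_internal        # standardized residuals
leverage = infl.hat_matrix_diag                    # hat-values h_ii as in Eq. (8.24)

n, p = X.shape[0], X.shape[1] - 1                  # observations and predictors
print("possible outliers:", np.where(np.abs(std_resid) > 3)[0])
print("high-leverage points:", np.where(leverage > 2 * (p + 1) / n)[0])

fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ax[0, 0].scatter(fit.fittedvalues, fit.resid)
ax[0, 0].set_title("Residuals vs Fitted")
stats.probplot(std_resid, dist="norm", plot=ax[0, 1])
ax[0, 1].set_title("Normal Q-Q")
ax[1, 0].scatter(fit.fittedvalues, np.sqrt(np.abs(std_resid)))
ax[1, 0].set_title("Scale-Location")
ax[1, 1].scatter(leverage, std_resid)
ax[1, 1].set_title("Residuals vs Leverage")
plt.tight_layout()
plt.show()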

Fig. 8.2 In the left plot, we do not see any distinctive pattern, but we see a parabola in the right plot, meaning that the linear model is not appropriate. Indeed, the second plot shows that a nonlinear relationship was not explained by the model and was left in the residuals

Fig. 8.3 The normal Q–Q plot shows if residuals are normally distributed. Only the first plot shows a straight course of the curve, while in the second case a deviation from the straight course is observed, indicating that the assumption on the normality of the residuals is not verified

8.4 How to Assess the Goodness of the Model

There are various indicators of the fit of a linear model to the data. Here we consider the residual standard error, the R², and Student's t-test on the regression coefficients. The residual standard error is defined as

\mathrm{RSE} = \sqrt{ \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{df} },    (8.25)

where yi are the observed values, ŷi are the fitted values, and df denotes the degrees of freedom, calculated as the total number of observations minus the total number of model parameters. The better a regression model fits a data set, the lower the residual standard error.

Fig. 8.4 This plot shows if residuals are spread equally along the ranges of predictors. This is how the homoscedasticity (equal variance) assumption can be verified. A horizontal line with evenly (randomly) spaced points is ideal

Fig. 8.5 The residuals in the left plot seem to be dispersed randomly. In the right plot, instead, as the fitted values approach 5, the residuals start to spread out more along the x-axis; the red smooth line is not horizontal and displays a sharp angle as a result of the residuals spreading wider and wider

On the other hand, the larger the residual standard error, the worse a regression model fits the data set. In a regression model with a small residual standard error, the data points will be closely packed around the fitted regression line; in contrast, in a regression model with a large residual standard error, the data points will be more widely dispersed around it. The R² is defined as

R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2},    (8.26)

where \bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i. R² is a statistical metric that quantifies how much of the variance in the dependent variable can be accounted for by the independent variable. R² can take any value between 0 and 1.
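For instance, assuming y and the fitted values y_hat come from a simple linear regression with two estimated parameters (the numbers below are purely illustrative), RSE and R² can be computed as:

import numpy as np

def rse_and_r2(y, y_hat, n_params):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)                 # residual sum of squares
    rse = np.sqrt(rss / (n - n_params))            # Eq. (8.25), df = n - number of parameters
    r2 = 1.0 - rss / np.sum((y - y.mean()) ** 2)   # Eq. (8.26)
    return rse, r2

y = np.array([1.2, 2.3, 2.9, 4.1, 5.2])            # observed values (illustrative)
y_hat = np.array([1.0, 2.1, 3.1, 4.0, 5.0])        # fitted values (illustrative)
print(rse_and_r2(y, y_hat, n_params=2))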

For instance, an R² of 60% indicates that the regression model accounts for 60% of the variability seen in the target variable. A greater R² typically means the model is better at explaining the variability. A high R² does not, however, always indicate a successful regression model. The nature of the variables included in the model, how the variables are measured, how many data are collected, and how the data are transformed are just a few of the factors that affect how meaningful this statistical measure will be. Therefore, a high R² can occasionally point to regression model issues. Predictive models should generally avoid having low R² values, although a good model might occasionally display a small value.

We finally mention some statistical methods for assessing the goodness of the linear fit. The t-tests are used to examine whether a specific variable in the model is statistically significant. A variable is statistically significant if it significantly affects the model's accuracy and has a strong relationship with the dependent variable. The t-tests also compare the importance of the various factors in the model, which can help determine which variables are most crucial for forecasting the dependent variable. In a linear regression model such as in Eq. (8.1), a t-test is performed for each coefficient βi to test the null hypothesis H0 against the alternative hypothesis H1, as follows:

H_0: \beta_i = 0 \qquad H_1: \beta_i \neq 0.

According to the null hypothesis H0, there is no relationship between y and xi, while according to H1 there is a relationship between y and xi. The one-sample t-test statistic is computed as

t = \frac{\hat{\beta}_i - 0}{SE},    (8.27)

where SE is the standard error of the coefficient estimate. Another method for evaluating the goodness of fit is the F-statistic. The F-test of overall significance compares the specified model with a model containing no independent variables, also known as an intercept-only model; most statistical programs produce this test automatically once the two models to be compared are specified.
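As a sketch, most statistical packages report these quantities as part of the fitted-model summary; with statsmodels, for instance (the simulated data and variable names below are illustrative), the per-coefficient t statistics and the overall F-test against the intercept-only model are available as attributes of the fit:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 2))                      # two hypothetical predictors
y = 1.0 + 2.0 * x[:, 0] + 0.0 * x[:, 1] + rng.normal(size=100)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
print(fit.tvalues)                # t statistics, Eq. (8.27), one per coefficient
print(fit.pvalues)                # corresponding p-values for H0: beta_i = 0
print(fit.fvalue, fit.f_pvalue)   # overall F-test against the intercept-only model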

8.5 Other Types of Regression

Here we briefly mention other types of regression, in particular logistic regression, ridge regression, lasso regression, and Bayesian linear regression.

Logistic Regression It is used when the dependent variable Y is discrete. Logistic regression models the probability of a discrete outcome given an input variable X. The most common form models a binary outcome, such as true/false, yes/no, or 0/1, as a probability. There are many applications of logistic regression in healthcare: the Trauma and Injury Severity Score (TRISS), for instance, is used all over the globe to forecast patient fatality in cases of injury. Logistic regression was used in the development of this model; to forecast patient health outcomes, it makes use of elements such as the revised trauma score, the injury severity score, and patient age. Logistic regression can even be used to forecast a person's likelihood of contracting a specific illness. For instance, based on characteristics such as age, gender, weight, and genetics, diseases such as diabetes and heart disease can be predicted. In logistic regression, the model is fit by maximum likelihood by assuming that the elements of y, yi, follow a binomial distribution, i.e.,

y_i \sim \mathrm{Binom}\!\left( 1, \; \frac{e^{\beta_0 + \beta_1 x_i}}{1 + e^{\beta_0 + \beta_1 x_i}} \right).    (8.28)

This is equivalent to a model where the probability of success π(x) is related to the predictor x by

\log\!\left( \frac{\pi(x)}{1 - \pi(x)} \right) = \beta_0 + \beta_1 x.    (8.29)

In logistic regression, the odds, that is, the probability of success divided by the probability of failure, are transformed using the logit formula

\pi \mapsto \log\!\left( \frac{\pi}{1 - \pi} \right),    (8.30)

which defines the so-called link function. The maximum-likelihood estimation (MLE) method is used for estimating the beta parameters. In order to find the best fit for the log odds, this technique iteratively tests various beta values. The log-likelihood function is evaluated at each of these iterations, and logistic regression aims to maximize this function to discover the most accurate parameter estimates. Once the best coefficient (or coefficients, if there are multiple independent variables) has been identified, the conditional probabilities for each observation can be calculated, recorded, and combined to produce a predicted probability. If the classification is binary, a probability of less than 0.5 predicts 0 and a probability of more than 0.5 predicts 1. It is recommended to assess the model's goodness of fit, or how well it forecasts the dependent variable, after the model has been computed. The Hosmer–Lemeshow test [101] is a popular technique for evaluating model fit.
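A minimal sketch of fitting a logistic regression by maximum likelihood, here with statsmodels on simulated binary data (the data and parameter values are hypothetical):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=200)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x)))        # true success probabilities as in Eq. (8.28)
y = rng.binomial(1, p)                             # binary outcomes

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit()                         # maximum-likelihood estimation of beta0, beta1
print(fit.params)                                  # estimated coefficients on the log-odds scale
print(fit.predict(X)[:5])                          # predicted probabilities pi(x)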

Ridge Regression Ridge regression is a model-tuning method used to analyse data that suffer from multicollinearity, i.e., the existence of correlations between the independent variables in the modelled data. The estimates of the regression coefficients may be inaccurate as a result of multicollinearity. Additionally, it may inflate the standard errors of the regression estimates and reduce the effectiveness of any t-tests on the regression coefficients. It can lead to misleading findings, inflated p-values, and increased model redundancy, which makes prediction ineffective and unreliable. Indeed, when multicollinearity occurs, the least squares estimates remain unbiased but their variances are large, so the predicted values may be far from the actual values. One way out of this situation is to abandon the requirement of an unbiased estimator. In order to pursue this objective, ridge regression reduces the size of the regression coefficients, resulting in close-to-zero coefficients for factors that have little influence on the outcome. The coefficients are shrunk by penalizing the regression model with a penalty term. In 1970, Hoerl and Kennard [97] proposed to add a small constant value λ > 0 to the diagonal entries of the matrix X^T X before taking its inverse. The result is the ridge regression estimator

\hat{\beta}_{\mathrm{ridge}} = (X^T X + \lambda I)^{-1} X^T y.    (8.31)

In (8.31), λ > 0 can be used to adjust the penalty's severity. Choosing a suitable value for λ is crucial. The penalty term is ineffective when λ = 0, and ridge regression then reproduces the conventional least squares estimates. However, as λ approaches infinity, the shrinkage penalty becomes more significant and the ridge regression coefficients approach zero. It should be noted that the scale of the predictors has a significant impact on ridge regression, in contrast to conventional least squares regression. In order to ensure that all the variables are on the same scale, it is therefore preferable to standardize (i.e., scale) the predictors before using ridge regression [28, 107]. The expression X' = X/sd(X), where sd(X) is the standard deviation of X, can be used to standardize a predictor X. This has the effect of giving all standardized predictors a standard deviation of 1, which makes the final fit independent of the scale used to measure the predictors. So, in ridge regression, the X's and Y have to be standardized. Equivalently to formula (8.31), the ridge coefficients minimize a penalized sum of squares

\hat{\beta}_{\mathrm{ridge}} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij} \beta_j \right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 \right\},    (8.32)

which is equivalent to

\hat{\beta}_{\mathrm{ridge}} = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij} \beta_j \right)^2, \quad \text{subject to} \quad \sum_{j=1}^{p} \beta_j^2 \le s,    (8.33)

where \sum_{j=1}^{p} \beta_j^2 is termed the L2 penalty term. Equation (8.33) makes explicit the size constraint on the parameters. There is a one-to-one correspondence between λ in (8.32) and s in (8.33). In the presence of multicollinearity, in least squares regression it can happen that a wildly large positive coefficient on one variable is cancelled by a similarly large negative coefficient on its correlated "cousin" [90]. By imposing a size constraint on the coefficients, as in (8.33), this phenomenon is prevented from occurring. Finally, note that the intercept β0 is left out of the penalty term so that the regression procedure does not depend on the choice of origin for y.
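A minimal numerical sketch of the ridge estimator (8.31), with standardized predictors as recommended above (the data and the value of λ are illustrative):

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)     # nearly collinear predictors
y = 1.0 + X[:, 0] - X[:, 1] + rng.normal(size=50)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize the predictors
yc = y - y.mean()                                  # centring y keeps the intercept out of the penalty

lam = 1.0
p = Xs.shape[1]
beta_ridge = np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ yc)   # Eq. (8.31)
print(beta_ridge)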

Lasso Regression The acronym "lasso" stands for least absolute shrinkage and selection operator. It is a shrinkage method like ridge, with subtle but important differences. The lasso estimate is defined as

\hat{\beta}_{\mathrm{lasso}} = \arg\min_{\beta} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij} \beta_j \right)^2, \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le s,    (8.34)

where \sum_{j=1}^{p} |\beta_j| is termed the L1 penalty term. The constraint \sum_{j=1}^{p} |\beta_j| \le s makes the solutions nonlinear in the yi, and a quadratic programming algorithm is used to compute them. Making s sufficiently small causes some of the coefficients to be exactly zero. If s is chosen larger than s_0 = \sum_{j=1}^{p} |\hat{\beta}_j|, where \hat{\beta}_j = \hat{\beta}_j^{LS} are the least squares estimates, then the lasso estimates coincide with the \hat{\beta}_j's. On the other hand, for s = s_0/2, the least squares coefficients are shrunk by about 50% on average. We refer the reader to [90] for a more in-depth discussion of shrinkage both in ridge and in lasso.
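Because of the L1 constraint there is no closed-form solution; a common sketch is to rely on a library implementation such as scikit-learn's Lasso, whose alpha parameter plays the role of the penalty strength (the data and the value of alpha below are illustrative):

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=100)   # only two predictors matter

Xs = StandardScaler().fit_transform(X)             # lasso is also sensitive to predictor scale
fit = Lasso(alpha=0.1).fit(Xs, y)
print(fit.coef_)        # coefficients of uninformative predictors are typically shrunk to exactly zero
print(fit.intercept_)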

Bayesian Regression This approach is used in place of the other approaches when (i) there is a desire to include in the regression model a priori knowledge useful in obtaining a more realistic estimate of the parameters, or (ii) there is a conspicuous risk of overfitting that we want to avoid. Overfitting happens when a statistical model fits its training data too precisely; when this occurs, the algorithm's goal is lost because it is unable to perform correctly against unseen data. Consider a linear model as in (8.7), where the response variable y is sampled from a normal distribution,

y \sim N(\beta^T X, \sigma^2 I).    (8.35)

The mean of the linear regression is the coefficient array, transposed and multiplied by the predictor matrix; the variance is the square of the standard deviation (multiplied by the identity matrix, because this is a multidimensional formulation of the model). The goal of Bayesian linear regression is to ascertain the posterior distribution of the model parameters rather than to identify a single "best" value of the model

parameters. In addition to the model parameters coming from a distribution, the response is also assumed to be produced from a probability distribution. After collecting the data X, we form the likelihood function

L(y \mid \beta, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(y - X\beta)^T (y - X\beta)}{2\sigma^2} \right).    (8.36)

β and σ² are then calculated as those parameters that maximize the likelihood function (i.e., those values that make the derivative of the logarithm of this function zero). Whereas in the least squares method the parameters are estimated from the data alone, the Bayesian approach allows us to incorporate other information through the use of a prior. We can combine evidence (i.e., our data) and prior knowledge to form the posterior distribution of our parameters as

H(\beta, \sigma^2) = \frac{L(y \mid \beta, \sigma^2) \times P(\beta, \sigma^2)}{L(y)}.    (8.37)

Formula (8.37) says that the posterior distribution of the parameters is the likelihood of the data, L(y | β, σ²), multiplied by the prior probability of the parameters, P(β, σ²), and divided by a normalization constant, L(y). Indeed, L(y) in the denominator is a normalizing constant that ensures the distribution integrates to 1. This is merely an expression of Bayes' theorem, which serves as the cornerstone of Bayesian inference and says that

\Pr(\text{parameters} \mid \text{data}) = \frac{\Pr(\text{data} \mid \text{parameters}) \, \Pr(\text{parameters})}{\Pr(\text{data})},    (8.38)

where Pr(parameters | data) is the posterior distribution of the parameters, Pr(data | parameters) is the likelihood, and Pr(data), i.e., the normalization factor, is obtained by integrating out the parameters from the joint probability Pr(data, parameters); Pr(data) is the marginal probability of the data. The two main advantages of Bayesian linear regression are evident here:

1. Priors are set by the user. Unlike the frequentist approach, which presupposes that all information about the parameters comes from the data, we can include subject knowledge or an educated estimate of what the model parameters should be. If we do not have any such prior values, we can use non-informative priors for the parameters, such as a broad normal distribution.
2. The outcome of Bayesian linear regression is a distribution of potential model parameters based on the data and the prior. With smaller data sets, the posterior distribution will be more dispersed, allowing us to quantify how uncertain we are about the model. As the number of data points rises, the outputs for the parameters converge to the values determined by ordinary least squares,

washing out the prior as it does so. The Bayesian worldview is encapsulated in the formulation of model parameters as distributions: we begin with an initial estimate, our prior, and as we collect more evidence, our model becomes less erroneous. Our intuition naturally leads to Bayesian thinking. We frequently begin with an initial theory, and as we gather information that either confirms or refutes our theories, we alter our worldview (this is how we should ideally reason).
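As a small sketch of these ideas (not part of the original text), assume the noise variance σ² is known and place a Gaussian prior N(0, τ²I) on the coefficients; the posterior is then also Gaussian with a closed form, and its mean moves from the prior towards the ordinary least squares solution as more data are collected (all numbers below are illustrative):

import numpy as np

rng = np.random.default_rng(7)
n = 30
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=n)])   # intercept + one predictor
beta_true = np.array([1.0, 2.0])
sigma2 = 4.0
y = X @ beta_true + rng.normal(0, np.sqrt(sigma2), size=n)

tau2 = 10.0                                        # prior variance: beta ~ N(0, tau2 * I)
p = X.shape[1]

# Posterior under the conjugate Gaussian prior (known sigma2):
#   cov  = (X^T X / sigma2 + I / tau2)^(-1)
#   mean = cov @ X^T y / sigma2
post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
post_mean = post_cov @ (X.T @ y) / sigma2

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)       # ordinary least squares for comparison
print(post_mean, beta_ols)

With only a handful of observations the posterior mean stays close to zero (the prior), while for large n it becomes practically indistinguishable from the least squares estimate.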

8.6 Case Study 1: Regression Analysis of Sweat Secretion Volumes in Cystic Fibrosis Patients

In this section, we report an analysis of medical data on cystic fibrosis performed through multilinear regression. What is summarized in this section has been published in the Proceedings of the 23rd Conference of Open Innovations Association FRUCT [127]. In the following, we briefly introduce the reader to cystic fibrosis and its diagnostic methods. Cystic fibrosis (CF) is a chronic, genetic illness that impacts the abundance and performance of the cystic fibrosis transmembrane conductance regulator (CFTR), a protein found in the apical membrane of the epithelial cells of secretory glands. These membrane proteins regulate the creation of fluids such as mucus, tears, saliva, perspiration, and digestive enzymes. The development of CF is linked to mutations of the CFTR gene, which are inherited as an autosomal recessive disorder in which a person gets one defective CFTR allele from each parent. A patient develops CF when both CFTR alleles are mutated, whereas a single defective allele makes a person a carrier. As a result of the aberrant secretory gland function, patients experience an excessive mucus production that affects the respiratory, digestive, and reproductive systems. The Gibson and Cooke technique, a test for the concentration of electrolytes in sweat utilizing pilocarpine iontophoresis, is the gold standard for diagnosing CF [73]; it measures the sweat chloride concentration (>60 mmol/L in CF;