Catastrophes and Unexpected Behavior Patterns in Complex Artificial Populations 9811625735, 9789811625732

A major objective of this monograph is to present an agent-based simulation of artificial populations. The focus is on p


English Pages 199 [190] Year 2021


Table of contents:
Preface
Acknowledgements
Contents
About the Author
Chapter 1: Catastrophes and Agent-Based Models
1.1 Catastrophes
1.2 Discrete Events and Agent-Based Models
1.2.1 Discrete Events and DEVS
1.2.2 Some Software Tools
1.2.3 The Event Queue
1.2.4 Agent-Based Models
References
Chapter 2: BLUESSS Simulation Package
2.1 BLUESSS Concepts
2.2 BLUESSS Code, Program Structure
2.3 Inheritance
2.4 BLUESSS Code Generators, Continuous and Discrete Simulation Support
2.5 Variance Analysis
References
Chapter 3: Behavior Patterns of an Artificial Society
3.1 Introduction
3.2 Agent-Based Models
3.3 Simulation Tool
3.4 The Model
3.4.1 Process food
3.4.2 Process agent
3.4.2.1 Event moves
3.4.2.2 Event rwalk
3.4.2.3 Event foodseek
3.4.2.4 Event gregarious
3.4.2.5 Event attract
3.4.2.6 Event repro
3.4.2.7 Event dies
3.4.2.8 Event agres (aggression)
3.4.2.9 Event works
3.5 Simulations
3.5.1 Experiment 1
3.5.2 Experiment 2 (unexpected behavior)
3.5.3 Experiment 3-aggression
3.5.4 Experiment 4-Work
3.6 Conclusions
References
Chapter 4: Extended Prey-Predator Model
4.1 Introduction
4.2 Continuous Model
4.2.1 Simple Simulation
4.2.2 Uncertainty and Differential Inclusions
4.3 Agent-Based Simulation
4.3.1 General Remarks
4.3.2 Simulation Tool
4.4 The Model
4.4.1 Resources and Agent Types
4.4.2 Implementation, Processes, and Events
4.4.3 The Food Resource
4.4.4 Agents
4.4.5 Process Static
4.4.6 Process Control
4.5 Simulations
4.5.1 Basic Simulation Mode
4.5.2 Incoming Agents, Low System Isolation
4.5.3 Gregarious Effect
4.5.4 Group Forming-Simulation Mode 2
4.5.5 A Slow Catastrophe; Static Agents
Appendix-Model Data Specification
The Food Resource
Agents
Static Agents (SA) Mode
References
Chapter 5: Stock Market: Uncertainty and Catastrophes
5.1 Introduction
5.2 Continuous Model
5.2.1 Model Equations
5.2.2 Differential Inclusion Solver
5.2.3 The Reachable Set Example
5.3 Agent-Based Model
5.3.1 Processes and Events
5.3.1.1 Process Company
5.3.1.2 Process Agent
5.3.1.3 Event Buy
5.3.1.4 Process Brokers
5.3.1.5 Process monit
5.3.2 Other Elements
5.4 Simulation Tool: BLUESSS Implementation
5.5 The Simulations
5.5.1 Experiment 1
5.5.2 Experiment 2
5.5.3 Experiment 3
5.6 Final Remarks
References
Chapter 6: Epidemics
6.1 Introduction
6.2 Continuous Models
6.2.1 Susceptible-Infectious-Removed Models
6.2.2 Differential Inclusions and Uncertainty
6.2.3 Examples of Reachable Sets
6.3 Agent-Based Model
6.3.1 General Concepts
6.3.2 The Model
6.3.2.1 Agent Events
6.4 Simulations
6.4.1 Fast Propagation
6.4.2 Lower Trip Frequency
6.4.3 Near Cities
6.4.4 Long Epidemics-Adverse Conditions
6.5 Conclusion
References
Chapter 7: Growing Organism and Cancer
7.1 Introduction
7.2 Agent-Based Modeling
7.2.1 How It Works
7.2.2 Simulation Tool
7.3 The Model
7.3.1 Overview
7.3.2 Implementation
7.3.2.1 Process Cell
7.4 Simulations
7.4.1 The Growth of the Organism and Aging
7.4.2 Infection and Defense
7.4.3 The Cancer
7.5 Conclusion
References
Chapter 8: Work, Salary, and Gini
8.1 Introduction
8.2 Inequality and the Gini Coefficient
8.3 Agent-Based Modeling
8.4 Simulation Tool
8.5 The Model
8.5.1 Work, Income, and Object Function
8.5.2 Implementation
8.5.3 Simulations
References
Chapter 9: Waiting Lines
9.1 Introduction
9.2 Queuing Model Generator (QMG)
9.2.1 Overview
9.2.2 QMG Blocks
9.2.3 Additional Entity Actions: The SVOP Function
9.3 Simulations
9.3.1 Simple Model, Useless Statistics
9.3.2 The Bus Stop Paradox
9.3.3 Queue and Server Chain
9.3.4 Conveyors with Feedback
9.4 Conclusion
References
Chapter 10: Simultaneous Events, Semi-Discrete Events, and Chicken
10.1 Introduction
10.2 The Chicken Game
10.3 Simulation and Model Convergence
10.4 Three Body Collision
10.4.1 Compliance Collision
10.4.2 Elastic Collision: Compliance Zero
10.5 Conclusion
References
Index

Evolutionary Economics and Social Complexity Science 27

Stanislaw Raczynski

Catastrophes and Unexpected Behavior Patterns in Complex Artificial Populations

Evolutionary Economics and Social Complexity Science Volume 27

Editors-in-Chief: Takahiro Fujimoto, The University of Tokyo, Bunkyo-Ku, Tokyo, Japan; Yuji Aruka, Kyoto, Kyoto, Japan

More information about this series at http://www.springer.com/series/11930

Stanislaw Raczynski

Catastrophes and Unexpected Behavior Patterns in Complex Artificial Populations

Stanislaw Raczynski Faculty of Engineering Universidad Panamericana Mexico City, Distrito Federal, Mexico

ISSN 2198-4204    ISSN 2198-4212 (electronic)
Evolutionary Economics and Social Complexity Science
ISBN 978-981-16-2573-2    ISBN 978-981-16-2574-9 (eBook)
https://doi.org/10.1007/978-981-16-2574-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

Computer simulation has been an important research tool for more than seven decades, and it evolves together with the evolution of computer hardware. At the very beginning, simulations were used by a few "gurus" and specialists. Then, more user-friendly software tools were developed. In recent years there have appeared so many software packages and simulation languages that nearly everybody can simulate anything. However, more professional simulation tasks cannot be separated from model building. To create valid models, the researcher must master several disciplines, such as mathematics, physics, computer science, and others. Modeling and simulation require from the researcher a certain interdisciplinary and generalized insight into the complex systems to be modeled.

This book is a continuation of the research described in the book "Interacting Complexities of Herds and Social Organizations" (Raczynski S., Springer Nature, 2019). Here, however, we concentrate on the phenomena of catastrophes and of unexpected, spontaneous events that are neither explicitly coded in the model nor provoked by external impulses. A short reference to the known theory of catastrophes is made. The classic examples of catastrophes in that context refer rather to mathematical descriptions of events that occur while solving differential equations. An example of such a catastrophe is described in Chap. 1. In this book we understand "catastrophes" in a more general sense, as events that appear in models of complex populations.

As for complexity, it is not completely clear when a model is complex and when it is not. Complexity should not be confused with system size: huge systems can be quite simple, and some small ones quite complex. The Macmillan dictionary defines complexity as "features of something that make it confusing or difficult to deal with." The Oxford Learner's Dictionary says that complexity is "the state of being formed of many parts; the state of being difficult to understand." There are many similar definitions. In the book by Meyers, "Mathematics of Complexity and Dynamical Systems" (Springer, 2011), we can read: "Complex systems are systems that comprise many interacting parts with the ability to generate a new quality of collective behavior through self-organization, e.g. the spontaneous formation of temporal, spatial, or functional structures." The main topic of the present book is somewhat related to this last definition, namely the spontaneous formation of structures and the generation of macro-behavior patterns. Compared to populations of individuals in the real world, our models are rather simple. However, from the point of view of model building they are not trivial and, like complex systems, they generate structures and interesting behavior patterns.

In Chap. 1 there are some remarks on the classic mathematical approach to catastrophes and an overview of agent-based modeling. Chapter 2 is dedicated to the simulation package BLUESSS, used in the rest of the book. In Chap. 3 we discuss a model of an artificial society in which the individuals consume resources, move, and interact with each other. The simulations show unexpected events that appear spontaneously in the model when it seems to have reached a steady state. Chapter 4 deals with an extended prey–predator system of multiple species, whose dynamics is quite different from that of the classic Lotka–Volterra continuous model. Chapter 5 contains simulations of possible catastrophic events in a stock market model. Chapter 6 describes a model of epidemics, which is itself a kind of catastrophe.

Here I would like to express a personal view on epidemic models. First of all, it should be emphasized that the presented model is not an attempt to simulate the recent COVID-19 epidemic. As stated in Chap. 6, we should appreciate all the effort, reported in numerous publications, that has been devoted to simulating this epidemic. However, we should remember that one of the most serious errors in scientific research is to look for something that does not exist. Before constructing a simulation model, we should first know whether the model exists at all. Many researchers forget this stage of model building, which may lead to wrong and confusing results. In a case such as COVID-19 this is extremely important, because an invalid model can be used to make wrong strategic decisions about the treatment of the epidemic and provoke millions of deaths. It is important to remember that, dealing with a new kind of disease with unknown dynamics, we cannot just modify an existing epidemic model and reuse known modeling paradigms. This is also a question of ethics in the computer simulation profession.

In Chap. 7 you can find a model of a growing organism that evolves and may reveal some degeneration, like a cancer in a living body. There are some remarks on the DNA and RNA mechanisms in living organisms; however, we do not pretend to simulate the extremely complex DNA functions. Instead, we use the abbreviation EOB to denote the genetic information included in each model agent (living cell). Further research on models of this kind could give us hints toward a better understanding of the growth of cells in living organisms. Chapter 8 contains the description of a model of a population of agents who work and obtain salaries. The work effort increases the common pool of goods in the society. The salary may depend on the amount and quality of the work done by the agent, which may result in differences and inequality between agents. The amount of inequality is measured using the Gini inequality index. The increased income motivates the agent to intensify his work. This is a feedback that influences the inequality, the level of common goods, and welfare. The dynamics of the population is simulated.


In Chap. 9 we simulate some examples of populations with waiting lines. It is pointed out that certain queuing models may reveal a considerable, "catastrophic," irregular growth of queues. Finally, in Chap. 10 there are some remarks about possible simultaneous events. This is not a simulation of big populations, but rather of some special cases in which simultaneous events in the model may lead to ambiguity and wrong results. The general conclusion of this chapter is that discrete-event models form a singularity in the space of models. An alternative paradigm, named semi-discrete events, is proposed.

Mexico City, Mexico

Stanislaw Raczynski

Acknowledgements

An updated version of my earlier article, listed below, has been used in Chap. 10 of this book: "Simultaneous events, singularity in the space of models and chicken game," International Journal of Modeling, Simulation, and Scientific Computing, World Scientific Publishing Company, online-ready article, May 2021.



About the Author

Stanislaw Raczynski received his master's degree from the Academy of Mining and Metallurgy (AGH) in Krakow, Poland, and his Ph.D. and habilitation degrees from the same academy, in the area of control theory and optimization methods. He was the head of the Computer Center of the AGH and of the Systems Analysis Group at the AGH. Dr. Raczynski worked as a researcher in the International Research Group in Moscow, USSR, and participated in the activities of the European Workshop on Industrial Computer Systems. He was a visiting professor at the National University of Mexico and then a professor at the Universidad Panamericana in Mexico City. Dr. Raczynski has also been the International Director of the Society for Computer Simulation. He has written three books on computer simulation (Wiley UK, BrownWalker Press, and Springer Nature) and has published more than 140 articles and papers in professional journals and conference proceedings.


Chapter 1

Catastrophes and Agent-Based Models

Abstract Some general remarks on the term catastrophe are presented. A short recall of the concept of catastrophes in mathematics is given, and examples are provided. The catastrophes discussed in this book are understood in somewhat more general terms, quite different from the mathematical point of view. We rather discuss unexpected events that occur during computer simulation and that have not been explicitly inserted in the model algorithm and code. The main modeling tool used here is agent-based modeling. An overview of this methodology and related software is provided in this chapter.

Keywords Catastrophe theory · Discrete-event simulation · DEVS · Agent-based modeling

1.1 Catastrophes

The Merriam-Webster dictionary defines catastrophe as "A momentous tragic event ranging from extreme misfortune to utter overthrow or ruin." Other meanings are defined as "A violent and sudden change in a feature of the earth" or "the final event of the dramatic action especially of a tragedy." The word catastrophe is also used in a somewhat more general sense, denoting a sudden change in the behavior of an object under investigation, without the "tragic" attribute. Thus, we can consider an earthquake to be a catastrophe, as well as a discontinuity in the trajectory of a computer-simulated dynamic system. This last meaning of catastrophe has been adopted by mathematicians to denote strange or irregular behavior of mathematical objects. This meaning was the subject of the theory of catastrophes developed by Thom (1975) and Zeeman (1976). In mathematics, catastrophes are closely connected to the bifurcation phenomenon, where a small change of a model parameter may cause a sudden qualitative change in its behavior; see Luo (1997). A good example in the theory of catastrophes is the cusp catastrophe. Consider the following function:


V(x) = x^4 + ax^2 + bx        (1.1)

Fig. 1.1 Cusp catastrophe; plots of the function V(x) for different values of b

Fig. 1.2 Position P(x) of the global minimum of V(x) as a function of the parameter b

Figure 1.1 shows the graphs of this function for a = −1, with b changing from −0.83 to 0.83 and −1 < x < 1. The graph of the function is continuous, and its shape changes continuously with respect to a and b. It might appear that the other properties of V(x) should also change continuously. However, this is not true. Consider two properties: (a) the number of local minima, and (b) the position of the global minimum with respect to the variable x. These two properties change abruptly when b changes. Observe that the curve for b = 0.8 has one local (and global) minimum, while the curve for b = −0.32 has two local minima. The position of the global minimum changes, with a discontinuity at b = 0, as shown by the function P(x) in Fig. 1.2. Now, if we use the function V(x) as a part of a mathematical model of a dynamic system, we may observe a catastrophic change of model behavior, like the loss of stability, even for very small changes of the parameter b oscillating around zero.


For example, if V(x) is used as a potential function in the equation dV(x)/dx = dx/dt, then the solution can become ambiguous or unstable. Such catastrophes may also occur in ODE models of heartbeat and nerve impulse, or in the behavior of the stock exchange market. The catastrophes in social systems are frequently related to conflict and aggression. In Kadar et al. (2019) we can find a good review of this approach. The authors describe an analytic model that includes violations of the moral order, principles and ideologies, breaches of moral norms, conflict, and aggression. The model reflects "the relationship between various methodologies through which one can examine morality and the moral order in the context of language conflict and aggression." In the numerous publications of Konrad Zacharias Lorenz (for example, Lorenz 1964, 1981, 2002) we can find an exhaustive source of works about aggression, social system behavior, and related catastrophes, known as the Lorenzian Theory of Catastrophes. Lorenz is recognized as the founder of the science of ethology. He asserts that "centrally coordinated sequences of endogenously generated impulses, have evolved phylogenetically and are very resistant to any individual modification. A fixed motor pattern's most important distinction from motor patterns that are not fixed and, simultaneously, the cogent argument for its being genetically programmed, consists in its taxonomic distribution." The Lorenzian theory can be used in advanced agent-based models of social behavior. The catastrophes result from the appearance of aggression in groups of individuals. Lorenz provides the basis for creating models of catastrophes of these types that perhaps could be used to predict future catastrophic events (Weaver 1980). The aggression in the Lorenzian theory can be treated as a kind of catastrophe. For more results on this approach to catastrophes see Kim (1976) and Baron (1977). For newer works on aggression simulation see, for example, Danaf et al. (2015). Computer simulation of aggression is not new. Coe (1964) presents a report on a computer simulation of aggression run on an IBM 650 computer, a machine with a rotating drum memory of up to 4000 words. The simulation programs running on such machines were already quite complex. Of course, the huge growth of hardware performance makes it possible nowadays to carry out larger and more sophisticated simulations. However, the complexity and capability of the software depend not only on the power of the hardware used. Simulation programs that ran on the IBM 650 or ICL 1900 in the sixties were not trivial. We do not intend to provide a strict definition of catastrophe, different from those mentioned above. Roughly speaking, we call "catastrophic" those events in the model behavior that can abruptly change the model state or the behavioral pattern, such as the presence and magnitude of oscillations, if any. This also includes other relevant and unexpected events along the model trajectory. By "unexpected" we mean events that occur spontaneously and which were not explicitly, or in some other hidden way, implanted into the model when it was created.


There are many other examples of mathematical catastrophes provided in the literature. However, this classic mathematical catastrophe theory is not the topic of the present book. Here, we use discrete-event, agent-based models, mainly models of artificial populations of moving objects, instead of models given by ordinary (ODE) or partial differential equations. The ODE models appear in the methodology of system dynamics (SD models), which has been widely used and applied in modeling a variety of real systems during the last six decades (Forrester 1958). Our point is that not everything that happens in the real world can be modeled using the ODE or SD methodology. Thus, we rather use agent-based models, where all the necessary model specification is reduced to a logical description of object behavior and simple arithmetic, with no mathematics at all. In the post-mortem model analysis, the only mathematics is the possible use of statistics to analyze the results.
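Returning to the cusp example of Eq. (1.1), the jump of the global minimum described above is easy to reproduce numerically. The following is a minimal C++ sketch (my illustration, not code from the book and not BLUESSS): it scans V(x) = x^4 − x^2 + bx (that is, a = −1) on a grid of x values for several values of b and prints the position of the global minimum, which flips from about +0.7 to about −0.7 as b crosses zero, even though V itself changes continuously.

```cpp
// Minimal numerical sketch (illustration only): locate the global minimum of
// V(x) = x^4 + a*x^2 + b*x on a grid and watch it jump as b crosses zero.
#include <cstdio>

int main() {
    const double a = -1.0;                               // fixed parameter, as in Fig. 1.1
    for (double b = -0.4; b <= 0.4001; b += 0.1) {
        double bestX = -1.0, bestV = 1e300;
        for (double x = -1.0; x <= 1.0; x += 1e-4) {     // brute-force grid search
            double V = x * x * x * x + a * x * x + b * x;
            if (V < bestV) { bestV = V; bestX = x; }
        }
        std::printf("b = %5.2f   argmin V = %6.3f\n", b, bestX);
        // The printed minimizer flips from roughly +0.7 to roughly -0.7 near
        // b = 0, although V itself varies continuously with b.
    }
    return 0;
}
```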

1.2 Discrete Events and Agent-Based Models

Looking at the annals of the development of computer simulation methodology, we can see a clear division between continuous and discrete modeling and simulation. As for continuous simulation (Raczynski 2003), observe that the only machines that really supported this kind of simulation were the analog computers of the 1950s. In a digital computer nothing is continuous, so any numerical method implemented on a digital computer provides an approximation to the real continuous process. Of course, such methods have reached a high level of sophistication and permit the simulation of continuous systems with satisfactory accuracy. The models used in continuous simulation are based on ordinary or partial differential equations. If the model time is discrete and the model state is a real variable, difference equations are used. The system dynamics (SD) methodology uses ordinary differential equation (ODE) or difference equation models (Forrester 1958). In discrete simulation, both the model time and the model state "jump" from one value to another in a time interval that is, theoretically, of length zero. Some of the recent simulation tools also support combined simulation (see the BLUESSS package, Chap. 2 of this book). BLUESSS is an object-oriented simulation package that carries out discrete-event simulation but also supports continuous models given in the form of ODEs, bond graphs, or signal flow graphs. In Raczynski (2009, 2019; see also Chap. 9) you can find a description of a new simulation tool that uses a graphical user interface almost identical to that used by system dynamics packages. This interface is used to define the model. However, behind the interface, the simulation is carried out using the discrete-event method. This discrete simulation machine is transparent to the user and runs automatically.

1.2.1 Discrete Events and DEVS

By the model time we understand the time variable that is controlled by the simulation program during the simulation run. The real time is the time of our (or the computer's) physical clock. For example, simulating the movement of a galaxy, we simulate several million model-time years; on a fast computer, this simulation may take several minutes of real time. There are many real systems where we can define events that consist in changing the state of the system. For example, the events may describe the start or the end of a service process, the birth or death of a model entity, or taking a place in a waiting line. In many situations, such events can be considered to be executed in a very small interval of time, compared to the total length of the model simulation time. Discrete-event simulation means that we suppose that the model events are discrete, i.e., they are accomplished within a model time interval of length zero. This model simplification makes the simulations very fast. In object-oriented programming we declare several generic code segments called classes. According to these declarations, objects are created at runtime. Each object is equipped with a data set and several methods that perform operations on the data. The Discrete Event System Specification (DEVS) formalism is used to describe models in discrete-event simulation. In the DEVS formalism, an "atomic" model M is defined as follows (Zeigler 1987):

M = < X, S, Y, σ_int, σ_ext, λ, τ >
σ_int : S → S,   σ_ext : Q × S → S,   λ : Q → Y        (1.2)

where X is the input space, S is the system state space, Y is the output space, σ_int is the internal state transition function, σ_ext is the external transition function, and Q is the "total state." Atomic models can be coupled to form a coupled model. The coupled models can also be coupled in a hierarchical way, to form more complex models. The coupled DEVS model is as follows:

coupled DEVS = < X_self, Y_self, D, {M_i}, {I_i}, {Z_i,j}, select >

The sub-index self denotes the coupled model itself, and D is a set of unique component references. The set of components is {M_i | i ∈ D}, I_i is the set of influencees of i, and Z_i,j is the i-to-j output translation function. The select component defines the order of execution for simultaneous events that may occur in the coupled model. This component must be added to the model to avoid ambiguities in the simulation algorithm and to make the model implementation independent.


A huge amount of research has been done on the select algorithms, because the treatment of simultaneous events is a rather difficult task. To treat complex models with variable structure, the Dynamic Structure Discrete Event System Specification (DSDEVS) is used. We will not discuss the DSDEVS formalism here. The use of the DEVS formalism is relevant in big models, where the time of execution, hierarchical model building, and portability are important factors. The Time and Event Management (TEM) is the time clock and event queue management inside the "simulation engine," including the basic queuing model operations provided by the simulation package. The Object Behavior Modeling (OBM) is a set of additional items like user-defined distributions and logical functions, non-typical operations, object attributes, and the general object behavior.
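To connect the DEVS notation of Eq. (1.2) with object-oriented code, here is a minimal, illustrative C++ interface for an atomic model. The names (AtomicDevs, delta_int, delta_ext, output, time_advance) are hypothetical: they correspond to σ_int, σ_ext, λ, and τ respectively, and are not taken from the book or from any particular DEVS library.

```cpp
// A minimal, illustrative C++ interface for an atomic DEVS model in the spirit
// of Eq. (1.2). All names here are invented for the example.
#include <limits>

template <typename X, typename S, typename Y>
struct AtomicDevs {
    virtual ~AtomicDevs() = default;

    // Internal transition (sigma_int): fires when the time advance expires.
    virtual S delta_int(const S& s) = 0;

    // External transition (sigma_ext): fires when an input x arrives after the
    // model has spent elapsed time e in state s; (s, e) plays the role of the
    // "total state" Q.
    virtual S delta_ext(const S& s, double e, const X& x) = 0;

    // Output function (lambda), evaluated just before an internal transition.
    virtual Y output(const S& s) = 0;

    // Time advance (tau): how long the model stays in state s if left alone;
    // infinity denotes a passive state.
    virtual double time_advance(const S& s) {
        return std::numeric_limits<double>::infinity();
    }
};
```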

1.2.2 Some Software Tools

Among good and forgotten tools we should mention a very nice early language named CSL (Control and Simulation Language). It was based on Fortran and already included the concept of a class of objects. We should also recall the language SIMULA and its well-known version SIMULA67 (Dahl and Nygaard 1967). Although it is a tool developed more than 50 years ago, it is still perhaps one of the most advanced and elegant object-oriented languages, and it also supports object-oriented simulation. We will not provide a big survey of simulation languages here. The main (but not all) software tools that should be mentioned are the following:

GPSS (General Purpose Simulation System, 1960), see Gordon (1975). It is a complete and well-designed language for queuing model simulation. Newer versions of GPSS also include a graphical user interface (GUI) that facilitates model definition.

Arena, a modeling system from Systems Modeling Corporation, is a nice and widely used simulation tool. It is equipped with a GUI and animation mechanisms; see Kelton et al. (2004).

Simio, a multi-paradigm software package delivered by Simio LLC. Simio® was created by a team of simulation software developers led by Pegden and Sturrok (2010).

SLAM II by Alan Pritsker (1986), a tool for waiting lines and discrete-event simulation.

ProModel by the ProModel Corporation, a professional tool for waiting lines, manufacturing systems, and discrete-event simulation.

JAVA MODELLING TOOLS (JMT), a suite of applications developed by Bertoli et al. (2010).

SIMUL8, a versatile queuing software tool, a product of the SIMUL8 Corporation; see Hauge and Paige (2001).

SIMAN, a general-purpose discrete-event simulation package; see Pegden and Ham (1982).


The BLUESSS simulation system, developed by the author, will be discussed further on. These are only a few of the software tools. A large list of discrete-event simulation tools can be found at https://web2.uwindsor.ca/math/hlynka/qsoft.html. For a comprehensive summary of classic discrete-event simulation, consult also Hollocks (2006).

1.2.3 The Event Queue

The main contribution of CSL was the implementation of a clock mechanism transparent to the user. This is why a simulation code in an appropriate simulation language is up to 20 times shorter than the equivalent code written in an algorithmic language like Pascal, C, or Fortran. A discrete simulation language is only as good as the algorithm that manages its event queue. This queue should not be confused with a queue we want to simulate, for example, a queue of clients in a mass service system or a buffer in a manufacturing system where a part can wait to be processed. The event queue contains a set of event messages, each of them telling which model event to execute and specifying the time instant when the execution will occur. The advantage of discrete simulation is that the model time jumps to the next (in time) event to execute, instead of advancing continuously. This means that the system (the program which controls the discrete-event simulation) must know which is the next event to execute. There are two ways to achieve this: first, we can keep the event queue always sorted by execution time; second, we can add new event messages at the end of the queue and then look for the nearest event to execute. Both options involve the problem of sorting or scanning. This process is simple and fast if there are few events in the queue. However, if the model is not trivial (has more than, say, two queues and servers), the event queue can grow to hundreds of thousands of events, and the event handling strategy becomes crucial to the whole system performance. Observe that the event queue constantly changes during the simulation. Any event, while being executed, can generate one or more new event messages or cancel some of the existing ones (see Fig. 1.3). Moreover, there are events that cannot be scheduled through the event queue mechanism, being executed due to changes of the model state and not of the model time. Such events are called state-events and must be handled separately.

Fig. 1.3 Internal event queue mechanism

There are three basic strategies in discrete simulation: Activity Scanning (AS), Event Scheduling (ES), and Process Interaction (PI). In this section we treat activity and event as synonyms. More advanced strategies are being developed and can be found in publications on DEVS; see Chow and Zeigler (1994) and Zeigler (1987). Activity Scanning (AS) was the first discrete simulation strategy, developed in the 1950s. One of the first implementations was the language CSL. According to this strategy, the model time is set equal to the time instant of the nearest event. Then, all model activities (events) are scanned, and those which can be executed are executed, the others remaining inactive. Next, the time jumps to the next possible event, and the whole process is repeated. This clock loop stops if no possible events remain in the model. Obviously, an event, while being executed, can schedule itself or other events to be executed in the future, so the event sequence can be long and complicated even if the source program is relatively short. The Event Scheduling (ES) strategy is somewhat more effective. An event queue is created in the computer memory. Every component (event message) of this queue stores the time the event will be executed and the event identifier. So, the only problem is to maintain the event queue sorted according to execution time. If we do this, then we simply take the first event and execute it, without scanning all possible events. This event queue management is transparent (invisible to the user) and works automatically. The user can schedule events, but he/she cannot redefine the model time or manipulate the event queue directly. The most effective event management algorithms are those using binary tree techniques to sort the event queue. The Process Interaction (PI) strategy is more advanced. The model is defined in terms of processes that can run concurrently. The rules of interaction between processes are defined, while the specification of a process includes the necessary event scheduling. PI can be implemented in any object-oriented programming language, and it became the main feature of Simula, Modsim, BLUESSS, and other languages. The three-phase strategy in discrete simulation is a combination of these three strategies. By a state-event we mean an event whose execution depends on the model state rather than on the model time. The three phases are as follows:
1. The model time jumps to the next event.
2. The event(s) scheduled to be executed at this time instant are executed.
3. All state-events are revised; those that can be executed are executed.


Consult Lin and Lee (1993) and O’Keefe (1986).
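To make the Event Scheduling strategy described above concrete, here is a minimal C++ sketch of a time-ordered event queue and clock loop (an illustration only, not BLUESSS code; all names are invented for the example). Each event message stores its execution time and an action; the clock loop repeatedly pops the earliest message, advances the model time to it, and executes it.

```cpp
// A minimal, illustrative event-scheduling (ES) clock loop: the queue keeps the
// earliest event message on top, and the model time jumps from event to event.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct EventMsg {
    double time;                       // scheduled execution time (model time)
    std::function<void()> action;      // what the event does when executed
};
struct LaterFirst {
    bool operator()(const EventMsg& a, const EventMsg& b) const {
        return a.time > b.time;        // min-heap: earliest event on top
    }
};

int main() {
    std::priority_queue<EventMsg, std::vector<EventMsg>, LaterFirst> eventQueue;
    double clock = 0.0;

    // Schedule a few events; an executed event could itself schedule more.
    eventQueue.push({2.5, [] { std::puts("event B"); }});
    eventQueue.push({1.0, [] { std::puts("event A"); }});
    eventQueue.push({4.0, [] { std::puts("event C"); }});

    while (!eventQueue.empty()) {      // clock loop: stop when no events remain
        EventMsg next = eventQueue.top();
        eventQueue.pop();
        clock = next.time;             // model time jumps to the next event
        std::printf("t = %.2f: ", clock);
        next.action();                 // execute the scheduled event
        // A full three-phase engine would now also scan state-events.
    }
    return 0;
}
```

A production engine would replace the stored lambda with an event identifier dispatched to model processes, support cancellation of pending event messages, and add the state-event scan of the three-phase approach.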

1.2.4 Agent-Based Models

The methodological focus of this book is object- and agent-based simulation. No state equations or system dynamics schemes are used. Recall that in discrete object-based modeling we create objects that behave according to user-defined rules and execute their events at discrete moments of the model time. The agent-based models manage objects called agents, which are equipped with a certain "intelligence." They can make decisions, optimize their actions, and interact with each other and with the environment. Agent-based models (ABMs) are a type of microscale model that simulates the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict global, complex phenomena. The individuals in ABM models may be of different types. Although the rules of behavior are the same for individuals of the same type, the behavior is not identical for all of them. This modeling method has many applications, mainly in ecology, biology, and the social sciences. A key notion is that simple behavioral rules (the micromodel) generate complex (macro) behavior. An important central tenet is that the whole is greater than the sum of the parts. Individual agents are typically characterized as rational. They are presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules (Railsback et al. 2006; Bandini et al. 2009). Note the main difference between an object-oriented programming language and a simulation package: the latter, in addition to object creation, provides (or should provide) a "clock" mechanism that automatically manages the model time and the event execution. ABM modeling is supported by many programming and simulation tools. Let us mention only some of the most popular ones: SWARM, developed in 1994 by the Santa Fe Institute (Swarm Development Group 2001), Ascape, developed in 2001 (Parker 2001), Breve-2.7.2 (Klein 2002), the Recursive Porous Agent Simulation Toolkit, released in 2003 (Michael et al. 2006), Cormas, developed in 2004 by VisualWorks (Bommel et al. 2015), MASON (Luke et al. 2005), the MASS package (Tatai et al. 2005), FLAME (Coakley et al. 2006; Holcombe et al. 2013), MATSim of ETH Zürich (Bazzan 2009), and SOARS, developed in 2010 (Tanuma et al. 2005, 2006), among others. ABMs are widely used in modeling the dynamics of organizations. An example of an agent-oriented model, called the BC model, can be found in the article by Krause (2000). In that model, the agent's attributes include "opinions," and the interactions between agents depend on the distance between their opinions in a non-linear way. These interactions can result in an action being taken by the agent. Other examples of models of social structures based on the concept of opinion interactions can be found in Latane and Nowak (1997) and Galam and Wonczak (2000). A similar approach is presented by Chatterjee and Seneta (1977) and Cohen et al. (1986). These works refer to the dynamics of the forming of social groups in accordance with the existing agents' attributes (opinions).


Some quite interesting results, more closely related to the terrorism problem, are described by Deffuant et al. (2002). Some more general concepts of "computational sociology" and agent-based modeling (ABM) can be found in the article by Macy and Willer (2002). Other general recommended readings in the field are Bak et al. (1997), Cioffi-Revilla (1998), Gotts et al. (2003), Axelrod (1997), Epstein and Axtell (1996), and Holland (1998). An interesting contribution to a model of the structure of the Osama Bin Laden organization is included in a Vitech Corporation page (link: see References, Long 2002). Another, ABM-oriented approach can be found in Crowder et al. (2012) and Hughes et al. (2012). In these publications we can find discussions of the potential advantages of the ABM approach through a range of examples and through the identification of opportunities in the field of organizational psychology. Another approach is used by Lustick (2000), where the agents interact on a landscape. It is shown that macro-patterns emerge from micro-interactions between agents. An important conclusion is that such effects are more likely when a small number of exclusivist individuals are present in the population. Simulations of other mechanisms of clustering in agent-oriented models are described by Younger (2003), who deals with the creation of social structures in the process of food and material storage.

References Axelrod R (1997) The dissemination of culture - a model with local convergence and global polarization. J Confl Resolut 41(2):203–226 Bak P, Paczuski M, Shubik M (1997) Price variations in a stock market with many agents. Phys A Statist Mech Appl 246(3):430–453 Bandini S, Manzoni S, Vizzari G (2009) Agent based modeling and simulation: an informatics perspective. J Artif Soc Soc Simul 12(4):4 Baron RA (1977) Aggression: definitions and perspectives. In: Human aggression. Perspectives in social psychology (a series of texts and monographs). Springer, Boston. https://doi.org/10.1007/ 978-1-4615-7195-7_1 Bazzan AIC (2009) Auton Agent Multi-Agent Syst 18(3):342–375. https://doi.org/10.1007/ s10458-008-9062-9 Bertoli M, Casale G, Serazzi G (2010) User-friendly approach to capacity planning studies with Java modelling tools. SIMUTOOLS. https://doi.org/10.4108/ICST.SIMUTOOLS2009.5650 Bommel P, Becu N, Le Page C, Bousquet F (2015) Cormas, an agent-based simulation platform for coupling human decisions with computerized dynamics. In: Hybrid simulation and gaming in the network society series. Translational systems sciences. Springer, Singapore. https://doi.org/ 10.1007/978-981-10-0575-6_27 Chatterjee S, Seneta E (1977) Towards consensus: some convergence theorems on repeated averaging. J Appl Probab 14(1):89–97 Chow AC, Zeigler BP (1994) The simulators of the parallel DEVS formalism. In: Proceedings of the Fifth Annual Conference on AI, Simulation and Planning in High Autonomy Systems Cioffi-Revilla C (1998) Politics and uncertainty: theory, models and applications. Cambridge University Press, Cambridge Coakley S, Smallwood R, Holcombe M (2006) From molecules to insect communities – how formal agent based computational modeling is undercovering new biological facts. http://www.


jams.or.jp/scm/contents/e-2006-7/2006-69.pdf. Scientiae Mathematicae Japonicae Online, e-2006, 765–778 Coe RM (1964) Conflict, interference, and aggression. Behav Sci 9(2):186–197 Cohen JE, Hajnal J, Newman CM (1986) Approaching consensus can be delicate when positions harden. Stoch Process Appl 22(2):315–322 Crowder RM, Robinson MA, Hughes HPN, Sim YW (2012) The development of an agent-based modeling framework for simulating engineering team work. IEEE Trans Syst Man Cybern Part A Syst 42(6):1426–1439 Dahl O, Nygaard B (1967) Simula – an Algol-based simulation language. Commun ACM 9:671–678 Danaf M, Abou-Zeid M, Kaysi I (2015) Modeling anger and aggressive driving behavior in a dynamic choice–latent variable model. Accid Anal Prev 75:105–118., ISSN 0001-4575. https:// doi.org/10.1016/j.aap.2014.11.012 Deffuant G, Amblard F, Weisbuch G, Faure T (2002) How can extremism prevail? A study based on the relative agreement interaction model. J Artif Soc Soc Simul 5(4):27 Epstein JM, Axtell R (1996) Growing artificial societies: social science from the bottom up. Brookings Institution Press, Washington, DC Forrester JW (1958) Industrial dynamics-a major breakthrough for decision makers. Harv Bus Rev 36(4):37–66 Galam S, Wonczak S (2000) Dictatorship from majority rule voting. Euro Phys J B 18(1):183–186 Gordon G (1975) The application of GPSS to discrete system simulation. Prentice-Hall, Englewood Cliffs Gotts NM, Polhill JG, Law ANR (2003) Agent-based simulation in the study of social dilemmas. Artif Intell Rev 9(1):3–92 Holcombe M, Coakley S, Kiran M (2013) Large-scale modelling of economic systems. Compl Syst 22(2):175–191. http://www.complex-systems.com/pdf/22-2-3.pdf Holland JH (1998) Emergence: from chaos to order. Helix Books: Addison-Wesley Publishing Company, Reading Hauge J, Paige K (2001) Learning SIMUL8: the complete guide (and SIMUL8 version 6). PlainVu Publishers (and SIMUL8 Corporation), Bellingham Hollocks B (2006) Forty years of discrete-event simulation—a personal reflection. J Oper Res Soc 57:1383–1399. https://doi.org/10.1057/palgrave.jors.2602128 Hughes HPN, Clegg CW, Robinson MA, Crowder RM (2012) Agent-based modelling and simulation: the potential contribution to organizational psychology. J Occup Organ Psychol 85:487–502 Kadar DZ, Parvaresh V, Ning P (2019) Mortality, morel order, and language conflict aggression. J Lang Aggress 7(1):6–31. https://doi.org/10.1075/jlac.00017.kad Kelton D, Sadowski R, Sadowski D (2004) Simulation with ARENA. McGraw-Hill, New York Kim SS (1976) The Lorenzian theory of aggression and peace research: a critique. J Peace Res 13 (4):253–276. https://doi.org/10.1177/002234337601300401 Klein J (2002) Breve: a 3D environment for the simulation of decentralized systems and artificial life. In: Conference paper: ICAL 2003 Proceedings of the Eighth International Conference on Artificial Life. MIT Press, Cambridge. ISBN/ISSN 0-262-69281-3 Krause U (2000) A discrete nonlinear and non-autonomous model of consensus formation. In: Elaydi S, Ladas G, Popenda J, Rakowski (eds) Communications in difference equations. Gordon and Breach, Amsterdam Latane B, Nowak A (1997) Self-organizing social systems: necessary and sufficient conditions for the emergence of clustering, consolidation and continuing diversity. In: Barnett FJ, Boster FJ (eds) Progress in communication sciences, vol 13. Ablex Publishing Corporation, Norwood. ISBN-13: 978-1567502770 Lin JT, Lee CC (1993) A three-phase discrete event simulation with EPNSim graphs. SIMULATION 60(6):382–392. 
https://doi.org/10.1177/003754979306000603


Long JE (2002) Systems analysis: a tool to understand and predict terrorist activities. Internet communication Vitech Corporation, Vienna. http://www.umsl.edu/~sauterv/analysis/62SLong-INTEL.pdf Lorenz KZ (1964) Ritualized fighting. In: Carthy JD, Ebling FK Jr (eds) The outward handling of aggression. Academic Press, New York Lortenz KZ (2002) On aggression. Psychology Press, New York Lorenz KZ (1981) The centrally coordinated movement or fixed motor pattern. In: The foundations of ethology. Springer, Vienna. https://doi.org/10.1007/978-3-7091-3671-3_6 Luke S, Cioffi-Revilla C, Panait L, Sullivan K (2005) MASON: a multiagent simulation environment. SIMULATION 81(7):517–527 Luo D (1997) Bifurcation theory and methods of dynamical systems. World Scientific, Singapore, p 26. ISBN 981-02-2094-4 Lustick S (2000) Agent-based modeling of collective identity. J Artif Soc Soc Simul 3(1) http:// jasss.soc.surrey.ac.uk/3/1/1.html Macy MW, Willer R (2002) From factors to actors: computational sociology and agent-based modeling. Annu Rev Sociol 28(1):143–166 Michael JN, Nicholson T, Collier JR, Vos JR (2006) Experiences creating three implementations of the repast agent modeling toolkit. ACM Trans Model Comput Simul 16(1):1–25. https://doi.org/ 10.1145/1122012.1122013 O’Keefe RM (1986) The three-phase approach: a comment on strategy-related characteristics of discrete event languages and models. SIMULATION 47(5):208–210 Parker MT (2001) What is Ascape and why should you care? J Artif Soc Soc Simul. http://jasss.soc. surrey.ac.uk/4/1/5.html Pegden CD, Sturrok DT (2010) Introduction to Simio. In: Conference paper: proceedings of the 2010 winter, PA, USA Pegden D, Ham I (1982) Simulation of manufacturing systems using SIMAN. CIRP Ann 31 (1):365–369.,ISSN 0007-8506. https://doi.org/10.1016/S0007-8506(07)63329-0 Pritsker A (1986) Introduction to simulation and SLAM II. Wiley, New York. ISBN:978-0-47020087-2 Railsback SF, Lytinen SL, Jackson SK (2006) Agent-based simulation platforms: review. SIMULATION 82(9):609–623. https://doi.org/10.1177/0037549706073695 Raczynski S (2003) Continuous simulation. In: Encyclopedia of information systems. Academic Press, Elsevier Publication, New York Raczynski S (2009) Discrete event approach to the classical system dynamics. In: Conference paper: Huntsville simulation conference, SCS, Huntsville AL, 2009 Raczynski S (2019) Interacting complexities of herds and social organizations: agent based modeling. Springer Nature, Singapore SWARM Development Group (2001) Swarm simulation system. Electronic Citation Electron 8 (1–10) http://digitalcommons.usu.edu/nrei/vol8/iss1/2 Tanuma H, Deguchi H, Shimizu T (2005) Agent-based simulation: from modeling methodologies to real-world applications, vol 1. Springer, Tokyo Tanuma H, Deguchi H, Shimizu T (2006) SOARS: spot oriented agent role simulator – design and implementation. In: Agent-based simulation: from modeling methodologies to real-world applications. Springer, Tokyo. ISBN 9784431269250 Tatai G, Gulyas L, Laufer L, Ivanyi M (2005) Artificial agents helping to stock up on knowledge. In: Conference paper: 4th international central and eastern European conference on multi-agent system, Budapest, Hungary, ISBN:3-540-29046-X 978-3-540-29046-9. https://doi.org/10. 1007/11559221_3 Thom R (1975) Structural stability and morphogenesis. (trans: Fowler DH). Benjamin-Addison Wesley, New York


Weaver DS (1980) Catastrophe theory and human evolution. J Anthropol Res 36(4):403–410. http://www.jstor.org/stable/3629609. Accessed 8 Jan 2021
Younger SM (2003) Discrete agent simulations of the effect of simple social structures on the benefits of resource. J Artif Soc Soc Simul 6(3)
Zeeman EC (1976) Catastrophe theory. Sci Am 234(4):65–83. https://doi.org/10.1038/scientificamerican0476-65
Zeigler B (1987) Hierarchical, modular discrete-event modelling in an object-oriented environment. SIMULATION 49(5):219–230. https://doi.org/10.1177/003754978704900506

Chapter 2

BLUESSS Simulation Package

Abstract An overview of the BLUESSS simulation package is presented. The package is an object-oriented simulation tool related to the C++ language. It has a clear process/event structure and supports inheritance of object properties. It is pointed out that BLUESSS supports both discrete-event and continuous models. It includes several source code generators for models given by ordinary differential equations, bond graphs, and signal flow graphs, as well as for the discrete-event models of queuing, waiting lines, and manufacturing.

Keywords Computer simulation · Simulation software · BLUESSS package · Discrete event · Waiting lines

2.1 BLUESSS Concepts

BLUESSS (Blues Simulation System) is the tool used in the simulations discussed in this book. The package evolved from the Delphi-based languages PASION and PSM++. The use of this particular package is not necessary; the models described in the following chapters may be simulated using other agent-based tools. This package was used because of its relation to the C++ language.

Important questions, both in teaching and in implementing computer simulation, are as follows:

• Must a simulationist be a programmer?
• Must he/she be a mathematician?

Depending on the role of the simulationist in the whole process of creating and using a simulation program, the answer can be yes or no. It seems that the commercial simulation tools are being developed in order to spare the user any coding and to make all the mathematical and statistical considerations as transparent and simple as possible. To say whether this is a correct tendency, we must first know who the simulationist is. If he/she is a plant engineer, a sociologist, a warehouse manager, or just an amateur, the tool should be "fast and easy" (user of kind 1).


However, if he/she is a professional simulationist and develops simulations in a serious and professionally advanced way, he/she should be able to create and code the necessary algorithms and be aware of the model mathematics (user of kind 2). While teaching computer simulation it is not always clear which kind of user we are addressing. This is an important question, because the contents of a simulation course and the tools we use strongly depend on the student/user kind.

From the very beginning, PASION development was inspired by Simula, an old but excellent and still active language, commonly used by computer science researchers (Dahl and Nygaard 1967). Nowadays, Simula is not very popular. It is difficult to learn and not easy to code in, because of its relation to Algol 60. However, its main concepts of object class and process class are perfectly implemented and can be used in object-oriented simulation. These concepts have been used in PASION, and then in PSM++ and in the BLUESSS package.

It seems that the new discrete-event and general-purpose packages are being designed mostly for users of kind 1. The new software always has a Graphical User Interface (GUI) and offers ready-to-use, encapsulated probability distributions and statistics. However, despite good manuals and additional materials, users of kind 1 frequently commit fundamental errors, like using the Poisson distribution for the interarrival times of a Poisson arrival process (which are, in fact, exponentially distributed). One could say that users of kind 2 can do their simulations in any algorithmic language and need no simulation packages. However, this is not true. A simulation tool should provide ready-to-use mechanisms to avoid unnecessary work (but nothing more).

In general, the question is whether, in our simulations, we need intelligent objects (agents) at all. The answer is yes. Such objects are not needed in academic examples and simple simulations (users of kind 1). However, if we face reality, for example a real manufacturing system, it is quite certain that there will be objects that do not fit into the standard blocks or facilities offered by most of the simulation packages. In such a case, the use of intelligent objects is inevitable.

BLUESSS evolved from the PASION and PSM++ packages, related to Delphi. Some applications and remarks on discrete-event simulation with these packages can be found in Raczynski (1989, 2000, 2004, 2006). The BLUESSS package runs on top of Embarcadero™ C++Builder. The user can be of kind 1 or of kind 2 (programmer skills). Talking about a professional simulationist, we should rather think about users of kind 2. My point is that there are few really professional simulationists who have not mastered C++. BLUESSS is a simple simulation language and has a BLUESSS-to-C++ translator. So, the user can code model events in C++, if necessary, or use one of the BLUESSS code generators to create models without coding. After defining the model, the BLUESSS system generates the BLUESSS source code, translates it into C++, and invokes the C++Builder, which produces the executable program. The event queue in BLUESSS works according to the three-phase discrete simulation strategy; see O'Keefe (1986).
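As a side note on the interarrival-time error mentioned above, the following minimal plain C++ sketch (our own illustration, not part of BLUESSS; the function name is hypothetical) shows the standard way to generate a Poisson arrival process with rate lambda: the interarrival times are drawn from an exponential distribution with mean 1/lambda, not from a Poisson distribution.

#include <random>
#include <vector>

// Generate arrival instants of a Poisson process with rate lambda on [0, horizon].
// The interarrival times are exponential; the *count* of arrivals is Poisson distributed.
std::vector<double> poisson_arrivals(double lambda, double horizon, std::mt19937& rng) {
    std::exponential_distribution<double> interarrival(lambda);
    std::vector<double> times;
    double t = 0.0;
    while ((t += interarrival(rng)) < horizon)
        times.push_back(t);
    return times;
}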

2.2 BLUESSS Code, Program Structure

The following sketch shows the BLUESSS program structure. The reserved BLUESSS words are in boldface.

global declarations
reference declarations
process name1,n1;
functions
- - - - - function declarations - - - - -
endfunctions
event ev1;
- - - - - event body, coded in C++ - - - - -
endev;
process name2,n2;
- - - - - - - - -
start k,f;
- - - - - main program, initialization - - - - -
$

The C++ global declarations can only include type and variable declarations, not function headers or bodies. However, it is not recommended to put any type definitions there. If you want some global functions to be visible from the whole program, you must put them in separate units (*.cpp and *.h) and use the #include directive with the complete path to the unit; in the path, indicate the *.cpp unit. The reference variables and functions sections are optional. In the above sketch, name1 and name2 are the process names. Consult the trigger example to see the declaration and use of reference variables. The number that follows the process name is the maximal number of objects that can be generated at runtime for the corresponding process. The process declarations are translated into C++ class declarations. Events are translated into class functions. You can add a functions section to a process. The difference is that the functions declared in this section are not processed by the BLUESSS translator, but copied into the resulting code. Because of this, you must add to each function the qualifier namex::, where namex is the name of the current process, for example

process something,5;
functions
something::float fun1(TIME){fun1 = sin(TIME*0.1);}
endfunctions


The start word indicates the beginning of the main program. k is the number of repetitions of the simulation run, and f is the final model time. If k is equal to 1, then the executable program will not ask you for the number of repetitions and the value of f at runtime. If you are in the varying-parameter mode (for ODE models), then the number of repetitions is fixed and equal to 25. In the main program you must create at least one object with the newpr instruction and activate at least one of its events using the schedule instruction. The character $ in the first column of the last code line terminates the program code. The words process, functions, endfunctions, event, start, and $ must start at the first column of a separate line. This restriction will probably be relaxed in the next versions of BLUESSS.

The visibility rules for process attributes, events, and functions are simple. Any function, attribute, or event is directly visible from inside the current process. Attributes and events of other processes can be reached using the -> notation. Suppose, for example, that a process named Person has an attribute x and includes an event AnEvent, and that another process, say AMan, also includes an attribute x and an event AnEvent. From the code of any event of Person you can access x and AnEvent directly. To access x and AnEvent of the object of AMan with index equal to 3, you may code AMan[3]->x and AMan[3]->AnEvent, respectively.
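For illustration, a minimal start section might look as follows. This is only a sketch modeled on the cats program of Sect. 2.3; it assumes a process Person with an event Arrives has been declared earlier and uses the p-prefixed reference name convention seen in that example, so the exact details may differ.

start 1,1000.0;
// Sketch only: assumes a process Person with an event Arrives declared earlier.
newpr pPerson;                                    // create one object of process Person
schedule pPerson->Arrives to TIME+Negexp(10.0);   // schedule its first event
$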

2.3 Inheritance

While constructing complex models, it is convenient to add new properties to already existing processes without re-editing the whole program. In BLUESSS, this can be done by using parent/derived processes. This mechanism is applied by adding prefixed process declarations. For example, if PA is the name of an existing process and we wish to create a new one, say PB, having all properties of the process PA (that is, all its attributes and events), then we simply use the name PA/PB in the heading of the process declaration, after the word process. While processing such a declaration, the translator looks for the process PA (parent process) and inserts all the attribute declarations and event descriptions from PA into the new process PB (derived process). The heading of the declaration of the derived process may be, for example,

process PA/PB,25;

The BLUESSS translator has a built-in preprocessor that produces a new source file called NEWFIL.CSM. This file contains the text of the program being translated, in which the derived processes are equipped with the code taken from the parent processes. The name of the corresponding process type does not contain any reference to the parent process; e.g., the name PA/PB becomes PB. Next, the file NEWFIL.CSM is translated into C++ code. The file NEWFIL.CSM remains in the BLUESSS directory and may be used for future processing. BLUESSS permits only one inheritance level.


However, if more inheritance levels are needed, this can be done by renaming NEWFIL.CSM and using it as the file with the parent processes for a new version of the program. If the derived process PB contains an event whose name is equal to the name of one of the events in the parent process, then the old event is overwritten and the new version of the event appears in the derived process.

An example of a model with inheritance is shown in the code of the cats program below. This is a complete BLUESSS program. It is an example of inheritance and of a multiple-object model. It can be used to check the virtual memory management on your computer and the speed of the BLUESSS event queue mechanism. Three processes are defined in the model: CAT, HE, and SHE. CAT includes the following events: to hunt, to eat, to sleep, and to die. HE is a male cat; he inherits all the properties of CAT, but can also fight with other cats. As a result of this event, one of them may die. The process SHE is a female cat; she inherits all the properties of CAT, but she can have several little cats in certain time intervals. In the main program only one object is generated, of type SHE. After some time SHE has little cats, they start their own activities, and the population grows. Note that even with several thousands of cats (some tens of thousands of events in the event queue) the program slow-down is not significant. The program speed is limited rather by the text display procedure. The relevant model instructions are shown in boldface. The other instructions are auxiliary operations like results display and pause/stop items.

BLUESSS programs use some utilities from the auxiliary unit Cutil.csm, included automatically. The following Cutil items are used:

xf0 - a global char variable, defined by keypress or keydown C++ events.
cnv - the canvas of the main program form, used for display.
outlne(...) - displays one text line on the canvas (the screen).
Frandom() - returns a float random variable between 0 and 1.
czys(r,g,b) - clears the screen with color (r,g,b).
pisak(...) - displays a message of up to five text lines.
zaczek() - interrupts the run and waits for a mouse click.
Negexp(s) - returns an exponentially distributed random variable with expected value equal to s.

//Simulation of a population of cats. Process CAT is
// the parent process for the processes HE and SHE.
// HE is a male cat. It inherits the properties of CAT
// and, in addition, can fight with other cats.
// SHE is a female cat and can have little cats
// (the event CATS)
int idk; // a global integer variable, the unique cat identifier
process CAT,1;
String NAME;


event HUNTS;
outlne(cnv,clBlack,"%T ="+FloatToStrF(TIME,ffGeneral,7,7)+" Cat "+NAME+" hunts.");
if(Frandom()>0.2)schedule EATS to TIME+Negexp(0.1);
endev;
event EATS;
Application->ProcessMessages();
if(xf0==char(27))exit(0);
if(xf0==char(32))zaczek();
xf0='?';
outlne(cnv,clBlack,"%T ="+FloatToStrF(TIME,ffGeneral,7,7)+" Cat "+NAME+" eats.");
schedule SLEEPS to TIME+Negexp(0.1);
endev;
event SLEEPS;
outlne(cnv,clBlack,"%T ="+FloatToStrF(TIME,ffGeneral,7,7)+" Cat "+NAME+" sleeps.");
schedule HUNTS to TIME+35+Negexp(15.0);
endev;
event DIES;
outlne(cnv,clBlack,"%T ="+FloatToStrF(TIME,ffGeneral,7,7)+" Cat "+NAME+" dies.");
kill thisobject;
endev;
process CAT/HE,50000;
event FIGHTS;
int n;
find HE n iff n!=INDX; // INDX is the predefined object index, unique inside each process
if(n<0)return;         // no other male cat found
outlne(cnv,clBlack,"%T ="+FloatToStrF(TIME,ffGeneral,7,7)+" Cat "+NAME+" fights with cat "+HE[n]->NAME);
schedule FIGHTS to TIME+Negexp(12.0);
if(Frandom()>0.2)return;
if(Frandom()>0.5)schedule DIES to TIME; else schedule HE[n]->DIES to TIME;
// this cat is fighting with cat n.
// The cat n will die with probability equal to 0.5
// Note that Frandom() returns a different value every time it is invoked
endev;
process CAT/SHE,50000;
event CATS;
int k,j,a,b;
j=random(3)+3;
outlne(cnv,clBlack,"%T ="+FloatToStrF(TIME,ffGeneral,7,7)+" Cat "+NAME+" has "+IntToStr(j)+" little cats.");
for(k=0;k<j;k++){
  idk++;
  if(Frandom()>0.5){
    newpr pHE;
    schedule pHE->HUNTS to TIME+Negexp(100.0);
    schedule pHE->DIES to TIME+Negexp(5000.0);
    schedule pHE->FIGHTS to TIME+Negexp(200.0);
    pHE->NAME=IntToStr(idk);
  }else{
    newpr pSHE;
    schedule pSHE->HUNTS to TIME+Negexp(100.0);
    schedule pSHE->DIES to TIME+Negexp(5000.0);
    schedule pSHE->CATS to TIME+Negexp(1000.0);
    pSHE->NAME=IntToStr(idk);
  }
}
outlne(cnv,clBlack,"%Number of cats = "+IntToStr(N_HE+N_SHE));
schedule CATS to TIME+Negexp(300.0);
endev;
start 1,100000.0;
randomize();
Application->ProcessMessages();
czys(cnv,250,250,250);
pisak(cnv,0,100,clSkyBlue,clBlack,
  "Press space bar to pause/resume Esc key to terminate.",
  "Click to continue","","","");
zaczek();
czys(cnv,250,250,250);
xf0='?';
idk=1;
newpr pSHE;
schedule pSHE->HUNTS to TIME+100.0;
schedule pSHE->DIES to TIME+Negexp(12000.0);
schedule pSHE->CATS to TIME+Negexp(100.0);
pSHE->NAME=IntToStr(idk);
$

BLUESSS produces an executable (*.exe) program. Below is a fragment of the program output. Each cat reports to the screen the event it is currently executing. T is the model time variable TIME.


Looking at the program display during the simulation, it can be observed how fast the simulation program is (the display operations are the most time-consuming part). The program runs, and the population of cats grows. After several seconds we already have tens of thousands of cats generated. Of course, this is only a very simple model, shown here to illustrate the program structure.

2.4 BLUESSS Code Generators, Continuous and Discrete Simulation Support

The following modules (source code generators) are included in the BLUESSS package:

• Queuing Model Generator. Flow diagrams.
• Continuous simulation using signal flow diagrams.
• Continuous simulation using bond graphs.
• Continuous simulation, ordinary differential equations.

Using the QMG module, the user defines the model in an Arena-like style, with no coding at all. As the process of creating the .exe file (a stand-alone, independent executable) passes through the C++ compilation, the BLUESSS QMG module can use all the features of C++ (see Fig. 2.1). In other words, the comparison of BLUESSS with, for example, Arena can be summarized in Table 2.1. Note that the QMG graphical model editor is very simple and can be mastered in 15 minutes of "training," even without consulting any documentation. The objects created in QMG can be equipped with simple abilities (logical variables, additional attributes) or with any complicated decision-making algorithms, like fuzzy logic, iterative optimization algorithms, neural nets, or database consulting.


Fig. 2.1 BLUESSS features

Table 2.1 Creating intelligent objects (Arena vs. BLUESSS)

Use Arena:                                       Why not replace it with:
Create model flowchart with Arena                Use the QMG graphical model editor or code directly in BLUESSS
Learn SIMAN, learn the Arena manual including    Use C++
30 pages of entity attribute and expression
specifications, code the necessary expressions

They can execute external programs or use external files. The object can do everything that can be coded in C++. There are no restrictions on the type and size of its attributes (these can be numbers, strings, arrays, pointers, and/or C++ structures of any kind). If required, the object can communicate through the Internet, sing a song, display an OpenGL image, execute an external program, etc. There are some restrictions on the use of pointers: when the object disappears, it must execute user-provided code to free the memory allocated to its pointer attributes, if any. Otherwise, memory leaks can occur.

To create a QMG flowchart, the user picks up blocks and defines the basic block parameters like inter-arrival times, service times, etc. Then, the simulation can be invoked. The entities (dynamic objects) appear, go through the blocks, and disappear, if required. As stated before, additional entity attributes can be declared, of any available C++ type. The relation between the entities and the C++ entity-related code is very simple. Any entity that enters any of the model blocks simply calls a global C++ function named SVOP. The index of the calling entity, the block identifier, as well as all entity attributes are passed to SVOP as actual parameters. For the assembly operation, SVOP is called by each entering entity and by the new (assembled) one. So, in the SVOP body the user can identify the block/entity pair and code any required action.


Fig. 2.2 Fragment of a QMG model. Automatic calls to the SVOP function

For example, entities can enable or disable model blocks (using the semaphore logic variables) or execute more complicated actions (Fig. 2.2). Suppose, for example, that we need the following actions to be taken:

• If an entity enters queue number 59, its string attribute myname includes the string "dog", and the entity age is greater than 100 model time units (the time spent in the system), then it invokes the external program other.exe.
• If any entity enters the assembly block 42 and the sum S of the lengths of queue 39 and queue 41 exceeds 24, then close (disable) generators 29 and 40. If S is less than or equal to 24, then enable these generators.
• If any entity waits in any queue for more than 20 time units, display a warning message.

In the code below, n is the calling block number, SOURCE is the number of the block where the entity has been created, TIMIN is the model time instant when the entity has been created, TIMQ is the time the entity has been waiting in a queue (if it is currently waiting), and myname is an additional, user-defined entity attribute. TIME is a global variable representing the model time. The function nr returns the queue length. QUEx is the reference to queue block number x, and SEMx is a boolean variable (a semaphore) that enables (if true) or disables (if false) block number x. DisplayWarning is a user-defined C++ function (it may show something on the screen, emit a sound, etc.). The SVOP procedure in this case may be as follows:


Fig. 2.3 BLUESSS animation example. Manufacturing

void SVOP(int n, int SOURCE, float TIMIN, float TIMQ, String* myname)
{
  bool b;
  // Entity in queue 59, named "dog", older than 100 time units: run the external program
  if(n==59 && AnsiPos("dog",*myname)>0 && TIME-TIMIN>100) WinExec("other.exe",1);
  // Assembly block 42: generators 29 and 40 stay enabled only while queues 39 and 41 together hold at most 24 entities
  if(n==42){ b = nr(QUE39)+nr(QUE41) <= 24; SEM29=b; SEM40=b; }
  // Any entity waiting longer than 20 time units triggers the warning
  if(TIMQ>20) DisplayWarning();
}

This is a very simple example. Inside the SVOP function the user can insert any C++ code to define the entity behavior and/or block operations. Queuing and manufacturing models of BLUESSS may use animation; see Fig. 2.3 for an example of a manufacturing animation. As stated before, BLUESSS is a general-purpose simulation package. It contains several modules (source code generators) for queuing/manufacturing models, continuous simulation using ordinary differential equations, signal flow graphs, bond graphs, or combined models. The user can create the source code or use any of the BLUESSS modules to avoid coding. The final product is an independent *.exe file, ready to run. The package structure is shown in Fig. 2.4.


Fig. 2.4 BLUESSS package structure

In BLUESSS, everything (except the code taken from C++ libraries) passes through the BLUESSS source code and through the C++ code (generated automatically). The user can create his/her own code or use the code generators. The options are as follows (see Fig. 2.4):

Queuing models: The queuing module generates the source code, which is translated to C++ and compiled.
ODE (Ordinary Differential Equations): The module receives the right-hand sides of the equations. The rest is done automatically (source code generation, compilation).
Block diagrams and signal flow: The user graphically defines the model structure and the necessary parameters. The module generates the model equations; the rest is done as above.
Bond graphs: The user draws the bond graph model and gives its parameters. The rest is done automatically.
Animator: 2D off-line animation of queuing models is available.

2.5 Variance Analysis

An additional post-mortem statistical analysis can be invoked. This includes the max–min and confidence intervals for the model trajectories, shown as functions of time. This feature, provided by only a few simulation packages, is very useful when simulating queuing and stochastic models. An example of such an analysis is shown in Fig. 2.5. This is the length of a simulated queue. The gray region is where the length of the queue is supposed to belong with probability 0.92. The curve inside the region is the average queue length as a function of model time. The average is taken over a series of repeated simulations. If the gray region is large (large variance), then it can be seen how little informative the average value alone is.

The user can see and modify both the BLUESSS and the C++ code. For queuing models, he/she can also use the SVOP function as described earlier. Although the queuing blocks of BLUESSS are rather simple, the possibility of working on the generated code makes it possible to simulate any required object behavior.

These are only some examples of BLUESSS features. BLUESSS can use any tools available in C++. Interesting animations, both for continuous and discrete-event models, can be created using OpenGL graphics. Observe (Fig. 2.4) that both discrete-event and continuous models result in BLUESSS source code. The only difference is that the continuous models are simulated as a sequence of events with a small time step, each event being a call to one of the available numerical methods for ODEs. This means that at the source code level the user can mix discrete and continuous models in the same simulation program. In Fig. 2.6 you can see a screenshot of a continuous model simulation (a multiple pendulum), animated with OpenGL graphics.

Fig. 2.5 Variance analysis of BLUESSS package. Confidence intervals for the length of a waiting line
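Purely as an illustration of how such a band can be obtained from repeated runs (our own plain C++ sketch, not the package's internal routine): for every time point, the stored trajectories are averaged and the empirical 4% and 96% quantiles are taken, which gives a band of probability 0.92.

#include <algorithm>
#include <vector>

// trajectories[r][t] = value (e.g., queue length) in run r at time index t,
// with all runs sampled on the same time grid.
struct Band { std::vector<double> mean, low, high; };

Band confidence_band(const std::vector<std::vector<double>>& trajectories, double prob = 0.92) {
    const size_t runs = trajectories.size(), steps = trajectories[0].size();
    Band b{std::vector<double>(steps), std::vector<double>(steps), std::vector<double>(steps)};
    for (size_t t = 0; t < steps; ++t) {
        std::vector<double> column(runs);
        double sum = 0.0;
        for (size_t r = 0; r < runs; ++r) { column[r] = trajectories[r][t]; sum += column[r]; }
        std::sort(column.begin(), column.end());
        b.mean[t] = sum / runs;                                         // average over the runs
        b.low[t]  = column[(size_t)((1.0 - prob) / 2.0 * (runs - 1))];  // lower empirical quantile
        b.high[t] = column[(size_t)((1.0 + prob) / 2.0 * (runs - 1))];  // upper empirical quantile
    }
    return b;
}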


Fig. 2.6 BLUESSS simulation of a multiple pendulum

References

Dahl O, Nygaard K (1967) Simula – an Algol-based simulation language. Commun ACM 9:671–678
O'Keefe RM (1986) The three-phase approach: a comment on strategy-related characteristics of discrete event languages and models. SIMULATION 47(5):208–210
Raczynski S (1989) Simulating our immune system. In: Simulation on Microcomputers, The Society for Computer Simulation Int. ISBN 0-911801-43-X
Raczynski S (2000) Alternative mathematical tools for modeling and simulation: metric space of models, differential inclusions and semi-discrete events. In: European Simulation Symposium ESS2000, Hamburg, Germany
Raczynski S (2004) Simulation of the dynamic interactions between terror and anti-terror organizational structures. J Artif Soc Soc Simul 7(2). ISSN 1460-7425
Raczynski S (2006) Modeling and simulation: computer science of illusion (Bargiela A, ed). Wiley, England

Chapter 3

Behavior Patterns of an Artificial Society

Abstract A model of an artificial society is presented. The actions of the individuals in the simulated population include search for food, reproduction, herd (gregarious) instinct, aggression, and more. The reproduction process enables the evolution of the properties of the individuals. The agent-based modeling methodology and the corresponding software are used to simulate the behavior of the population. It is pointed out that the behavior patterns of the whole group can hardly be explained from the properties of the individuals or from the chaotic initial state. Some interesting unexpected events appear in the simulated population.

Keywords Artificial society · Simulation · Agent-based model

3.1 Introduction

One of the most important tenets of systems theory and of the research on group behavior is that the whole is greater than the sum of its components. In other words, the behavior of the group (macro behavior) is more complex than the behavior rules of the individuals that form the group (Railsback et al. 2006; Bandini and Vizzari 2009). The topic of this chapter is the computer simulation of such complex behavior, generated by individuals (called agents in the following) that obey simple rules of action. It is important to note that we equip the agents with action rules that are as simple as possible but that comply with the general rules of herds, as in animal or human agglomerations. Modeling and simulation of artificial societies have been the subject of vast research work. Here, we focus on some phenomena that may be difficult to forecast or to deduce from the properties of the initial society. Namely, we show some unexpected events, or "catastrophes," that occur in a population whose behavior has reached an apparently steady behavioral pattern and that abruptly change the model state. Another observation is that the population never reaches any constant equilibrium and always remains in movement. The simulation experiments we carry out start from an initial population with chaotic movement.


The behavior of populations is the topic of a great number of publications. In Sambrook (2008) we can find an overview of the research and the idea of using fractal methods to investigate global settlement patterns. As for human populations, the research is focused on the behavior of groups that have a certain organizational structure. In Krause (2000) we can find a study of interactions between individuals with "opinions" that depend on the opinions of their neighbors. In Latane and Nowak (1997) and Galam and Wonczak (2000) there is also a discussion of the interaction of agent opinions. Chatterjee and Seneta (1977) and Cohen et al. (1986) also discuss group behavior influenced by the opinions of the agents. A discussion of the terrorism problem and organizations can be found in Deffuant et al. (2002). The influence of the spatial distribution on group behavior is considered by Lustick (2000), where the macro-behavior patterns are determined by the micro-interactions between agents. Younger (2003) deals with clustering of agents in social structures, related to material and food storage. Several examples of models of social structures and organizations are described in the book of Raczynski (2019a). In Raczynski (2019b) we can find a model of an artificial society treated with a methodology similar to that presented here. In that paper, it is pointed out that the herd instinct may arise spontaneously in the simulated population, where the individuals have no a priori defined behavior patterns that might invoke such an instinct. However, the problem statement there is different from the one presented here. In Raczynski (2019b) we deal with a population of "prey–predator" type, while here the model is less complicated. We start with a chaotic set of individuals, as simple as possible, that just move, reproduce, and consume a renewable resource.

The model used here consists of a collection of agents that move over a limited region, consuming a renewable resource (food). This provides an agent with the energy needed to move and to survive. Each agent has a predefined lifetime that is generated at the instant of agent creation, according to some probability distribution. However, the agent may die before this predefined natural death because of lack of food and energy. The other actions of the agent are as follows (a more detailed description is given in the section "The model").

Food Seeking The agents look for food in their neighborhood and follow the direction of greater food amount.

Reproduction In certain time intervals, the agent may create another one that inherits the attributes of the creator (see the evolution process below).

Group Forming Agents attract each other (with a variable intensity), which may result in the appearance of clusters.

Herd (Gregarious) Instinct The decision about the direction of movement includes a component that is a weighted sum of the movement directions of nearby agents.


Aggression An agent may steal (rob) some energy from a nearby one. This may occur when the agent's power (defined in the following section) is greater than the power of the agent being robbed.

Work Agents can "invest" a part of their energy into a spot of the region where they move. If so, the food renewal in that spot is faster, and after some time they can recover even more energy to survive.

Evolution Process The population evolves because, during reproduction, the new agent inherits some of its parameters with small deviations. The evolution is a slow process and results in new populations that may be quite different from the original group.

3.2 Agent-Based Models

A larger overview of agent-based models has already been given in Chap. 1. Here, let us recall the main concepts. In object-oriented programming we declare several generic code segments called classes. According to these declarations, objects are created at runtime. Roughly speaking, each object is equipped with a data set and several methods that perform operations on the data. One of the first and most "elegant" object-oriented programming languages was Simula 67, which provided all the necessary object-oriented mechanisms. Simula ran on the 64 KB (kilobyte) memory machines of the 1960s. Further development resulted in many similar tools, like C++, Delphi, and others.

To simulate the behavior of dynamical models some more features are needed. Namely, a simulation language must provide a clock mechanism that controls the advance of the model time and the execution of events (discrete-event simulation). Such a mechanism was already included in Simula, as a predefined process (class) simulate. In the same decade, other tools were developed; one of the most important seems to be the GPSS simulation package.

In agent-based modeling (ABM) we create individuals named agents, which are objects equipped with certain artificial intelligence. At runtime, a set of agents is created. Each agent is activated and executes its events concurrently with other agents. Agents can interact with each other and with the global environment. Recently, many programming and simulation tools support ABM models. In 1994 the SWARM package was developed (Swarm Development Group 2001). A similar ABM tool, Ascape, was created by the Santa Fe Institute in 2001 (Parker 2001). Other agent-based simulation tools are Breve-2.7.2 (Klein 2002), the Recursive Porous Agent Simulation Toolkit released in 2003 (Michael et al. 2006), Cormas by VisualWorks (Bommel et al. 2015), MASON (Luke et al. 2005), the MASS package (Tatai et al. 2005), the FLAME package and its parallel architecture designed for Graphics Processing Units, FLAMEGPU (Coakley et al. 2006; Holcombe et al. 2013), MATSim by ETH Zürich (Bazzan and Klugl 2009), and the Spot Oriented Agent Role Simulator SOARS (Tanuma et al. 2005).


Recall that the macro-dynamics of large systems can also be simulated using the System Dynamics (SD) approach, which appeared in the early 1960s (Forrester 1961). The SD methodology is completely different from object- and agent-based modeling. In SD the model is described by ordinary differential or difference equations, and no micro-behavior of the individuals is simulated. SD models have been widely used during the last six decades. However, maybe because of the excessive use of SD, a strange conviction arose that everything in the real world can be described by differential equations. On the other hand, the ABM approach provides a more flexible modeling paradigm, starting with the micro-behavior of agents. Observe that ABM modeling needs little or no higher mathematics (an advantage?). In the model described below only simple arithmetic expressions are used.

3.3 Simulation Tool

To simulate our population, we need a software tool that permits us to declare classes of objects and, inside each class, the corresponding data structures and the possible events invoked by the object. At runtime we must be able to create any number of instances of each class and activate them. Object events may be simple ones (for example, "erase an object") or more complex micro-behavior patterns, needed in agent-based modeling. The tool must include a clock mechanism that manages the model event queue and executes the actions (events) of each agent. Consequently, the simulation tool used here is the BLUESSS simulation package (consult Raczynski 2019a, b). Recall that BLUESSS supports the following features: it is object-oriented, permits continuous/discrete models, and includes the clock mechanism and inheritance. BLUESSS has a clear process/event structure. It runs with the Embarcadero C++Builder. The package is extremely flexible; the user has access to any feature available from the C++ code. See Chap. 2 for more information about BLUESSS.

The BLUESSS source code is converted into C++, then compiled and executed. The resulting exe file is a stand-alone Windows application. For more information, consult http://www.raczynski.com/pn/bluesss.htm. In BLUESSS we declare a series of processes (class declarations) and, inside each process, several event descriptions. These events are invoked during the "life" of the corresponding object (or agent) created according to the corresponding process declaration. For example, if we declare a process "car," then we can generate and launch thousands of cars, each of them running its own events, such as "enter the highway," "accelerate," "move," "crash," "stop," etc. What makes BLUESSS useful in agent-based simulation is that the body of the event is coded in C++. Agents can interact with other agents, execute a complicated computational procedure, execute an external program, change their own attributes or those of other objects, or create an OpenGL image. These are features needed in agent-based simulation. BLUESSS is perhaps not as user-friendly as other packages


because it requires some ability in C++ programming. On the other hand, it is extremely flexible and permits the use of anything available from C++.

3.4 The Model

Here, we describe the main elements of the model. The detailed specification of the model parameters with their numerical values can be found in the descriptions of the simulation experiments (Sect. 3.5). As stated before, to denote a variable or a parameter we use words of several characters instead of single letters with subscripts. The necessary expressions are given in C language notation (* and / for multiplication and division, respectively) and not in the form of conventional arithmetic. In data definitions, such as the arrays, and in other operations, we also use the notation of the C language.

The variable step0 is the overall (reference) time step of the model. The event moves of an agent is repeated with a step approximately equal to step0, namely step = step0*(1 + 0.05*rnd), where rnd is a random number uniformly distributed within (-1,1). This is done to avoid simultaneous events in the model. Thus, the time step of each agent is different from the others when they run concurrently. The step value changes in consecutive time steps. In the sequel, by time step we refer to step.

The agents move over a rectangular region R of size 1x1. The region includes 2500 equal rectangular spots, with the food density constant inside each spot (but variable in time). So, the agent coordinates are (x,y), both belonging to the interval [0,1]. A spot is defined by two integer numbers (k,j), with k,j belonging to {0,1,2,...,49}.
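As a small, self-contained illustration of these conventions (our own C++ sketch; the function names are not part of the actual program), the fragment below computes the jittered time step and maps an agent position (x,y) to its spot indices (k,j) on the 50x50 grid.

#include <algorithm>
#include <cstdlib>

// Uniformly distributed random number in (-1, 1); rand() keeps the sketch short.
double rnd11() { return 2.0 * std::rand() / (double)RAND_MAX - 1.0; }

// Jittered time step of an agent: step = step0*(1 + 0.05*rnd), rnd in (-1, 1).
double jittered_step(double step0) { return step0 * (1.0 + 0.05 * rnd11()); }

// Map a position in [0,1]x[0,1] to the spot indices k, j in {0, ..., 49}.
void spot_of(double x, double y, int& k, int& j) {
    k = std::min(49, (int)(x * 50.0));
    j = std::min(49, (int)(y * 50.0));
}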

3.4.1 Process food

The amount of food is given by the array food[50][50], with values ranging from zero (no food) to one (maximal food content). If the food level at a spot is less than one, then it renews every hren time units, where the renewal step is hren = 50*step0. The renewal amount is given by the parameter vren. Thus, food[k][j] += vren in each renewal step. If one or more agents invest in a spot (work action), then the vren parameter is multiplied by the factor (1 + inv[k][j]), where inv is the amount of investment in the spot. The values of inv decay with rate 0.0005 at each renewal step.
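A compact C++ sketch of one renewal step, following our reading of the rules above (the function and variable names are ours, and the decay of inv is interpreted here as multiplicative):

#include <algorithm>

// One renewal step (executed every hren time units) for the 50x50 food array.
// food[k][j] in [0,1]; inv[k][j] is the accumulated investment ("work") in the spot.
void renew_food(double food[50][50], double inv[50][50], double vren) {
    for (int k = 0; k < 50; ++k)
        for (int j = 0; j < 50; ++j) {
            if (food[k][j] < 1.0)   // only spots below the maximum renew
                food[k][j] = std::min(1.0, food[k][j] + vren * (1.0 + inv[k][j]));
            inv[k][j] *= (1.0 - 0.0005);   // investment decays at each renewal step
        }
}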

3.4.2 Process agent

Each agent has the following attributes.

Position (x,y) inside the (1x1) region R.


Actual velocity v = (vx,vy). By vt we denote the normalized (unit-length) direction of movement. The actual velocity is a combination of several partial velocities, described further on:

v = q * pvel * vt, where vt = nrm(vrand*prand + vgreg*pgreg + vfood*pfood + 0.1*vatr*patr)    (3.1)

In the above,
q—overall fixed velocity factor
pvel—actual velocity parameter, variable, may be different for each agent
nrm—function that normalizes the argument vector to length equal to one, preserving the vector direction
vrand—direction of random movement (random walk)
prand—random walk parameter, defines the amount of the random walk component
vgreg—direction of movement due to the herd (gregarious) instinct
pgreg—herd instinct amount
vfood—direction of movement towards a spot with a greater food amount
pfood—food seek parameter
vatr—direction of attraction toward nearby agents, used in group forming
patr—attraction parameter

In the movement event, the agent position change is equal to step*Length(v). A C++ sketch of this computation is given after the parameter list below.

energy—the amount of energy, gained mainly by eating food, necessary to move and survive
power—equal to one for isolated agents, greater than one for an agent that belongs to a group
liftim—agent predefined lifetime, used for the natural death of the agent
pagr—parameter that controls aggression (see the event specifications)
pwork—parameter that controls the amount of work done by the agent
consum—amount of food consumed per time unit. While moving, the agent may consume consum*step of food at the spot.
reprate—expected value of the time interval after which the agent may create a new agent

Note that pvel, prand, pgreg, pfood, patr, pagr, pwork, vrand, vgreg, vfood, and vatr are agent attributes, with values different for each agent. The value of prand is fixed, equal to one, and is not subject to fluctuation during reproduction. This does not mean that the agent moves with a fixed random walk component: the velocity vector vt of (3.1) is normalized to unit length, so when the other components grow, the influence of the random walk component decreases automatically.

There are three global parameters:
lcost—cost of living per time unit
mcost—cost of moving per time step per velocity unit
gfc—the parameter used to calculate the group factor gfactor
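The velocity computation of Eq. (3.1), sketched in C++ (an illustration only; the Vec type and function names are ours): the weighted components are summed, the sum is normalized to unit length, and the result is scaled by q*pvel.

#include <cmath>

struct Vec { double x, y; };

// Normalize to unit length, preserving direction (the nrm function of Eq. (3.1)).
static Vec nrm(Vec v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y);
    return (len > 0.0) ? Vec{v.x / len, v.y / len} : Vec{0.0, 0.0};
}

// v = q*pvel*nrm(vrand*prand + vgreg*pgreg + vfood*pfood + 0.1*vatr*patr)
Vec agent_velocity(double q, double pvel, double prand, double pgreg, double pfood, double patr,
                   Vec vrand, Vec vgreg, Vec vfood, Vec vatr) {
    Vec s{vrand.x * prand + vgreg.x * pgreg + vfood.x * pfood + 0.1 * vatr.x * patr,
          vrand.y * prand + vgreg.y * pgreg + vfood.y * pfood + 0.1 * vatr.y * patr};
    Vec u = nrm(s);
    return Vec{q * pvel * u.x, q * pvel * u.y};
}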


The actions (events) of the agent are as follows.

3.4.2.1 Event moves

While executing this event, the agent may move to a new position according to the following expressions:

x += step*vx; y += step*vy;

Here, vx and vy are the components of the velocity v calculated according to (3.1). If the amount of food at the spot where the agent is located is greater than 0.01, the agent consumes consum*step of food at each step of movement. In the actual model, the consum parameter is fixed, equal to one. Of course, the amount of consumed food cannot be greater than the actual amount of food at the spot. If 0.3 < energy

dx1/dt = a x1 - b x1 x2 - c x1 x4
dx2/dt = -d x2 + e x1 x2 - f x2 x4
dx3/dt = g x3 + h x2 x3 + i x2 x4 - j x3 x4        (4.4)
dx4/dt = -k x4 + l x1 x4 + m x2 x4 + n x3 x4

where a = 0.1, b = 0.002, c = 0.0001, d = 0.3, e = 0.001, f = 0.000015, g = 0.003, h = 0.00001, i = 0.000012, j = 0.000012, k = 0.01, l = 0.00002, m = 0.00004, and n = 0.000015. With these parameters, species of type 1 reproduces with rate a, type 2 eats those of type 1, type 3 reproduces with rate g and eats those of type 2, and species of type 4 eats those of type 1, 2, and 3. Species of type 2 and 4 die with rate d and k, respectively. The initial conditions are x1 = 500, x2 = 50, x3 = 100, and x4 = 20. Let us see only some plots for model (4.4). Figure 4.3 shows the behavior of species 1 and 2.
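For readers who want to reproduce the continuous simulation, here is a minimal stand-alone C++ sketch that integrates system (4.4) with a classical fourth-order Runge–Kutta step. It is our own illustration, not the program used in the book, and the sign pattern follows the reconstruction of (4.4) above (growth and feeding terms positive, death and predation-loss terms negative).

#include <array>
#include <cstdio>

using State = std::array<double, 4>;

// Right-hand side of system (4.4); coefficient values are those given in the text.
static State rhs(const State& x) {
    const double a = 0.1, b = 0.002, c = 0.0001, d = 0.3, e = 0.001, f = 0.000015,
                 g = 0.003, h = 0.00001, i = 0.000012, j = 0.000012,
                 k = 0.01, l = 0.00002, m = 0.00004, n = 0.000015;
    State dx;
    dx[0] =  a*x[0] - b*x[0]*x[1] - c*x[0]*x[3];
    dx[1] = -d*x[1] + e*x[0]*x[1] - f*x[1]*x[3];
    dx[2] =  g*x[2] + h*x[1]*x[2] + i*x[1]*x[3] - j*x[2]*x[3];
    dx[3] = -k*x[3] + l*x[0]*x[3] + m*x[1]*x[3] + n*x[2]*x[3];
    return dx;
}

// Helper: componentwise x + dt*d.
static State axpy(const State& x, double dt, const State& d) {
    State y;
    for (int q = 0; q < 4; ++q) y[q] = x[q] + dt * d[q];
    return y;
}

int main() {
    State x{500.0, 50.0, 100.0, 20.0};            // initial conditions from the text
    const double dt = 0.01;
    for (double t = 0.0; t < 2000.0; t += dt) {   // classical RK4 step
        State k1 = rhs(x);
        State k2 = rhs(axpy(x, 0.5 * dt, k1));
        State k3 = rhs(axpy(x, 0.5 * dt, k2));
        State k4 = rhs(axpy(x, dt, k3));
        for (int q = 0; q < 4; ++q)
            x[q] += dt / 6.0 * (k1[q] + 2.0 * k2[q] + 2.0 * k3[q] + k4[q]);
    }
    std::printf("x1=%g x2=%g x3=%g x4=%g\n", x[0], x[1], x[2], x[3]);
    return 0;
}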


Fig. 4.4 X-Y (phase) plot, amount of type 2 vs type 1

It can be seen that after some initial time interval, where the oscillations seem to stabilize, a "catastrophe" occurs that results in an abrupt change of the behavioral pattern. However, for larger times the system enters apparently stable oscillations. Figure 4.3 depicts the X-Y plot, type 1 being the horizontal axis and type 2 the vertical axis. Figure 4.4 shows how complicated the model behavior may be: it is a fragment of the model trajectory projected onto the plane agent 1–agent 2. We will not discuss more properties of the L-V equations in the continuous case. The main topic of this chapter is the agent-based, not the continuous, simulation. We are looking for irregular, hardly predictable events, treated as catastrophes.

4.3 Agent-Based Simulation

4.3.1 General Remarks

An overview of agent-based simulation can be found in Chap. 1. Here, we only recall some basic concepts. In discrete object-based modeling, we create objects that behave according to user-defined rules, executing their events at discrete moments of the model time. Agent-based models manage objects called agents, equipped with a certain "intelligence." They can take decisions, optimize their actions, and interact with each other. Agent-based models (ABMs) simulate the concurrent operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The agent-based approach consists of describing individuals interacting with one another and with their environment. The individuals may be of different types. Although the rules of behavior are shared by a group of individuals of the same


type, the behavior is not identical for all of them. This modeling method has many applications, mainly in ecology, biology, and the social sciences. A key notion is that simple behavioral rules of the individuals (micro model) generate complex (macro) behavior. An important central tenet is that the whole is greater than the sum of the parts. Individual agents are typically characterized as rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules (Railsback et al. 2006). Consult also Bandini and Vizzari (2009).

In the case of the prey–predator system, agent-based simulation seems to be a quite realistic approach to system modeling. This is an alternative approach, completely different from the continuous L-V model. There is a strange conviction among some simulationists that everything that happens in the real world can be described by differential equations. The point of this book is that this is not exactly true. An agent-based model of the prey–predator system consists of two or more populations of objects (preys and predators) interacting with each other. In a few words, the simulation program generates the corresponding populations according to the generic declarations of the object types. The objects appear in the computer memory and each of them runs its own "life," moving, escaping, devouring other ones, and dying.

4.3.2 Simulation Tool

The BLUESSS (Blues Simulation System) package was used to simulate the prey–predator model. It is the C++ version of the packages PASION and PSM++, developed earlier by the author. See Chap. 2 for a more detailed overview of BLUESSS. Here, we repeat some main concepts for the reader's convenience. The main features of the package are as follows:

• Object-oriented simulation
• Continuous/discrete models
• Clock mechanism
• Inheritance
• Permits the use of all C++ features
• Easy to use, clear process/event structure

The BLUESSS source code is converted into C++ code compatible with the C++Builder, then compiled and executed. The resulting exe file is a stand-alone Windows application. The user can also see and edit the resulting C++ source code. There are several additional modules that work as BLUESSS code generators. The queuing models module receives a user-defined graphical queuing model structure and the necessary data; it produces the BLUESSS source code. The ODE (continuous) module converts the model differential equations into the BLUESSS code. These equations


can be explicitly defined by the user or obtained from the block diagram/signal flow or bond graph modules. A post-mortem variance analysis for stochastic models can be done. The animator module shows animated simulations for queuing and manufacturing models. BLUESSS provides a "clock" mechanism that manages the model time, using an efficient internal event queue algorithm.

BLUESSS can be used to create agent-based models. The general concept is to declare a series of processes and, inside each process, several events that may occur during the "life" of the corresponding objects. The process declarations are generic, and the objects are instances of the processes. For example, if we declare a process "client" as a client at a bank, then we can generate and launch thousands of clients, each of them running its own events, like "enter the bank," "join a waiting line," "occupy teller," and so on. Each event has a body that includes the code for any simple or complicated algorithm of object behavior. So, the user can create objects that represent the model agents and equip them with the desired behavior. The event body is coded in C++. This way, the objects can do anything that is available from the C++ language. Inside the process declaration there are several event definitions. The events can be scheduled for execution, which is controlled by the BLUESSS clock mechanism. Any object can interact with other objects, change its own attributes or those of other objects, execute a complicated computational procedure, execute an external program, display an image, or run external code (e.g., sing a song or communicate over the Internet). This makes BLUESSS a convenient tool for agent-based simulation.

4.4 The Model

4.4.1 Resources and Agent Types

Consider the following scenario. In a rectangular region M, we have a resource (food) that can be eaten by some species and that renews in time. There are species of four types, called agent 1, agent 2, agent 3, and agent 4 in the following. Agent 1 eats the food resource and reproduces. Agent 2 eats those of type 1, in order to get the energy necessary to reproduce. Agent 3 eats food and agents of type 1. Agent 4 eats those of type 1, 2, and 3. The possible actions of an agent include the gregarious (herd) instinct, group forming, seeking food, escape from predators, reproduction, and natural death. The model also includes a background birth process that slowly generates new agents. This may be interpreted as occasional immigration from other areas. This process is slow compared to agent reproduction, but it was added to avoid total extinction of the species. Other additional elements are the "hunters." These are static entities, located in fixed spots inside the region. A hunter can catch a nearby moving agent and consume it. This increases the total energy of the hunter set. This


energy is consumed slowly, and it is necessary for reproduction. This way, the set of hunters can expand and occupy new spots in the region, like urban areas that invade a forest. If there is a lack of food, the hunters' energy and population, as well as the invaded area, shrink. In the following you can find a detailed specification of the model. The model time interval for the simulation experiments varies between 200 and 6000 time units.

4.4.2 Implementation, Processes, and Events

As stated in Sect. 4.3.2, the BLUESSS program consists of a series of process declarations and the main (start) section. Inside each process there are several event codes. At runtime, the process is used to generate a number of instances (objects of the process type). In the start section, the necessary model parameters are defined, and one or more events are activated. Then the model runs: the generated objects execute their events, reproduce or die, and interact with each other.

4.4.3 The Food Resource

The whole region M under consideration is divided into 1600 spots (a rectangular grid 40x40). The food resource is defined as a matrix R[40,40], with possible values 0 < R[k,j] < 1. If a part of the food at a spot is consumed, then the resource begins to renew. The renewal speed is equal to

dR[k,j]/dt = vr R[k,j]        (4.5)

where the normal value of vr, if not stated otherwise, is equal to 1.2 per time unit. This means that the food grows exponentially. The value of R is truncated from above to 1.0. Note that the food consumption by the agents may decrease R to very small values. For such a small initial condition, the food renewal is very slow. The growth curve is extremely dependent on this initial value, and for small values the recovery may take a long time. The events of the food process are initialize (parameter setup) and renew. The renew event re-schedules itself and executes repeatedly.
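A discretized reading of rule (4.5) as a brief C++ sketch (the step length dt and the function name are our own choices): each spot grows exponentially and is truncated at 1.0.

#include <algorithm>

// One renewal step of length dt for the 40x40 food matrix R, following Eq. (4.5):
// dR/dt = vr * R, with R truncated from above to 1.0.
void renew(double R[40][40], double vr, double dt) {
    for (int k = 0; k < 40; ++k)
        for (int j = 0; j < 40; ++j)
            R[k][j] = std::min(1.0, R[k][j] + vr * R[k][j] * dt);
}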

4.4.4 Agents

The agent process has the following attributes. These are its private data attached to the agent. Some of the data can be seen and changed by other model objects.

Type of agent. It may be 1, 2, 3, or 4.
Position (x,y) inside the region M. We have 0 < x < 1 and 0 < y < 1, so the position (1,1) corresponds to the food spot R[40,40].
Time step of movement h. The movement event described further on is repeated every h time units. In that event, the new agent position is defined.
Lifetime. When being created, the agent is given its lifetime, and its natural death is scheduled to the actual time plus td, where td is a random variable with expected value equal to lifetime. In this model, the distribution of td is lognormal. However, the agent may die earlier, for example by starving (energy less than 0.1) or by being devoured by a predator.
Reproduction period r. The agents of type 1 and 3 can reproduce every r time units. Those of type 2, 3, and 4 may reproduce when devouring a prey. See the event "reproduce" for more detail.
Energy E. The energy is necessary to reproduce and survive. The normal (reference) value of the energy is equal to one. The agent dies if its energy becomes lower than 0.1.
Number of neighbors L. This is the number of nearby agents, within the circle of radius 0.05 around the agent.
Overall speed factor sp. This is the actual speed of agent movement.
Movement direction. The direction of movement v = (vx,vy) is calculated as a weighted sum of the following velocities (each one normalized to a unit vector), as explained below.

Random walk—a randomly generated direction vr.
Gregarious movement—the gregarious (herd instinct) direction vg of an agent is defined as follows:

vg = Σn vn / (dn² + 0.1)        (4.6)

where the sum is taken over all neighbor agents within a circle of radius 0.05 around the agent, vn is the velocity of agent n, and dn is the distance between the current agent and agent n. The constant 0.1 is added to decrease the weight for very near agents and to avoid division by zero. The gregarious direction is slightly modified by the direction of mutual attraction. This makes the agents form groups (herds).
Food seek vf. This is the direction towards an adjacent food spot with the maximal amount of food.
Prey seeking and escape direction ve. For the predators, this is the direction towards the nearest prey they can devour. For the prey, it is the direction of escape from the nearest predator.

The total velocity is equal to


v = cr vr + cg vg + cf vf + ce ve        (4.7)

The coefficient cr is fixed, equal to one. The direction vector v, after being calculated by expression (4.7), is normalized to have length equal to the speed attribute sp. So, if the coefficients cg, cf, and ce increase, then the share of the random movement decreases automatically.

The Agent process includes the following events:

Initialize This event is the "constructor" of the agent. It is invoked whenever a new agent is created, and it defines its attributes.

RWalk The random walk direction is defined.

Gregarious The gregarious instinct direction is calculated according to (4.6). The gregarious direction is slightly modified by the direction of mutual attraction. This makes the agents form groups (herds).

Prey-pred In this event the pursuit/escape direction is defined.

Dies The agent disappears from the model. It may be the natural death, or the death provoked by a predator.

Eat-food Agents of type 1 must eat the food to survive. Agents of type 3 can also eat food. Agents attempt to eat food every 30 movement time steps h, on average (uniform distribution inside the interval). Before eating, the agent scans the nearby food spots and selects the one with maximal food (Rkj). The amount of eaten food is calculated as follows. The tentative amount U is set equal to 0.25, and the minimal food remaining is set to W = 0.0001. Then, if U <= Rkj - W, U food units are eaten. If U > Rkj - W, the agent eats Rkj - W units of food. This way, the food amount at the spot never reaches zero, which allows the food to renew. After eating, the agent of type 1 increases its energy by 8 times the amount eaten. Agents of type 3 increase their energy by 2 times the amount eaten. To eat food, the agent must stop for a while. The stop interval is set to 0.1 time units.

Devour Agents of type 2 and 3 can devour those of type 1. Agents of type 4 eat agents of type 1, 2, or 3. To devour a prey, the predator–prey distance must be less than 0.03. After eating the prey, the predator increases its energy by 0.1, and reproduces. The new agent inherits the type and other parameters from the parent agent. Some attributes can be changed a little if the model runs in evolution mode. The position of the new agent is close to the parent.

Reproduce Agents of type 1 and 3 may reproduce every r time units. When initialized, the agent parameter r is defined to be smaller than the agent lifetime. The reproduction is possible if the agent energy is greater than 0.3. Agents of type 2, 3, and 4 can reproduce while devouring a prey. In the reproduction event a new agent of the same type is generated. It inherits all attributes from the parent agent. The new agent is given 40% of the parent energy, and the parent energy decreases by


The new agent is given 40% of the parent energy, and the parent energy decreases by the same amount. However, the parent is given an extra amount of energy, equal to 0.05. This is a small "premium" for reproducing. This way, the total energy of the agents of the same type slightly increases. The position of the new agent is close to the parent.
Moves The direction of movement is calculated according to (4.7). Then, the vector v is normalized to have the length equal to the speed attribute sp. The agent movement consists in changing the position (x, y) as follows: xnew = x + h vx, ynew = y + h vy. To move, the agent must have energy greater than 0.1; if it does not, it dies. The movement has its cost: agents of type 1 lose 0.4h|v| energy units, and the energy loss for the other agents is equal to 0.2h|v|.
Hunted If the "hunters" simulation mode is activated (see Sect. 4.4.5) and the distance of the agent from a hunter spot is less than 0.05, then the agent triggers the event hunt of process Hunters and is eaten by the hunters.
There are several auxiliary events in the agent process, like updating the index of the nearest object, that are not very relevant and will not be described in detail.
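As an illustration only, the following minimal sketch (plain C++ with hypothetical names, not the original BLUESSS event code) shows the core of the Moves event: the direction components are combined according to (4.7), the result is normalized to the speed sp, the position is updated, and the movement cost is subtracted.

    // Sketch of the Moves event logic; names and structure are illustrative only.
    #include <cmath>

    struct Vec2 { double x, y; };

    struct Agent {
        int    type;                  // 1..4
        double x, y;                  // position in the region M
        double sp;                    // speed attribute
        double energy;
        bool   alive;
        double cr, cg, cf, ce;        // coefficients of (4.7)
    };

    // h is the movement time step (0.05 in the Appendix); vr, vg, vf, ve are the
    // random-walk, gregarious, food-seek and pursuit/escape directions.
    void moves(Agent &a, Vec2 vr, Vec2 vg, Vec2 vf, Vec2 ve, double h) {
        if (a.energy <= 0.1) { a.alive = false; return; }     // not enough energy: the agent dies
        Vec2 v = { a.cr*vr.x + a.cg*vg.x + a.cf*vf.x + a.ce*ve.x,
                   a.cr*vr.y + a.cg*vg.y + a.cf*vf.y + a.ce*ve.y };
        double len = std::sqrt(v.x*v.x + v.y*v.y);
        if (len > 0.0) { v.x *= a.sp/len; v.y *= a.sp/len; }  // normalize |v| to sp
        a.x += h*v.x;                                         // xnew = x + h*vx
        a.y += h*v.y;                                         // ynew = y + h*vy
        a.energy -= (a.type == 1 ? 0.4 : 0.2) * h * a.sp;     // movement cost 0.4h|v| or 0.2h|v|
    }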

4.4.5 Process Static

There is only one instance of this process generated. The static agents, called "SA" or "hunters" in the following, occupy fixed spots within the region M of 40x40 available positions. This process has the following attributes:
hp—the time step
P—food per hunter
Th—threshold for P
N—the size of the hunter population
F—amount of food gained
C—number of the region spots (cells) occupied by hunters
Moreover, the hunters object uses two global arrays: int cells[1600][2] and bool st[40][40]. cells is the list of cell coordinates occupied by SAs (in cell coordinates 40x40), and st[k][j]==true means that the spot (k,j) is not occupied by hunters. Note that the hunters are not created as separate objects. In fact, this is not necessary, because what is relevant are only the total food F and the SA population size N, and no events related to any individual "hunter" are executed. If the static process is activated (mode 3), then one spot of the food region is defined as occupied by SAs, and the number of SAs is set equal to 8. A minimal sketch of this occupancy bookkeeping is shown below.
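The sketch uses plain C++; only the arrays cells and st come from the text, and the helper name occupyCell is hypothetical.

    // Global occupancy data of the static agents.
    int  cells[1600][2];   // coordinates of the cells occupied by SAs
    bool st[40][40];       // st[k][j] == true means spot (k,j) is NOT occupied by hunters
    int  C = 0;            // current number of occupied cells

    // Hypothetical helper used when the hunters expand: occupy spot (k,j).
    void occupyCell(int k, int j) {
        if (!st[k][j]) return;        // already occupied by hunters
        st[k][j]    = false;
        cells[C][0] = k;
        cells[C][1] = j;
        ++C;
    }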


Event hunt. This event is invoked by any agent of type 1, 2, 3, or 4 that comes close to a hunter cell. The agent is devoured by the hunters and the total food F is increased by one.
Event consumes. The total food F decreases by 0.001·N, and the value of P is updated to F/N. This event is repeated every hp time units. The population change rate is d = 1 + c(P − Th), with Th = 0.2, and the population is updated as N = d·N each time step hp.
Event expands. It is supposed that each cell should be occupied by eight individuals. So, if N/C becomes greater than eight, then a new cell (k,j) is occupied, with st[k][j] set to false. The cell number C increases by one, and the new cell is registered in the cells array. The event is repeated every two hp time units.
Event shrinks. If N/C becomes less than eight, then one hunter cell is freed and the number of cells decreases. This event repeats every two hp time units.
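The consumes/expands/shrinks logic can be summarized by the sketch below (plain C++). The constants follow the text and the Appendix; the way the rate d is applied (N multiplied by d) and the coefficient c are assumptions, since the text does not give them explicitly.

    // Static-agent (hunter) population update, repeated every hp time units.
    struct Hunters {
        double F;    // total food gained
        double N;    // population size
        int    C;    // number of occupied cells
        double Th;   // food-per-hunter threshold (0.2)
        double c;    // sensitivity of the growth rate to P - Th (assumed model parameter)
    };

    void consumes(Hunters &h) {
        h.F -= 0.001 * h.N;                  // slow food consumption
        double P = h.F / h.N;                // food per hunter
        double d = 1.0 + h.c * (P - h.Th);   // population change rate
        h.N = d * h.N;                       // assumed update: N grows or decays by the rate d
    }

    // Every two hp time units: keep about eight individuals per occupied cell.
    void expandsOrShrinks(Hunters &h) {
        if      (h.N / h.C > 8.0) { /* occupy a new cell, register it in cells[], ++C */ }
        else if (h.N / h.C < 8.0) { /* free one hunter cell, --C */ }
    }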

4.4.6 Process Control

An object of this type performs all necessary global operations over the actual state of the model. It gathers the data about the model parameters and about the changes in the attributes and agent populations as functions of time, displays the main model variables during the simulation, and creates the online animation.

4.5 Simulations

The number of model parameters, simulation modes, and their combinations makes it possible to carry out hundreds of thousands of experiments. The simulation program can run in the following simulation modes:
Mode 0—basic prey–predator with four types of agents and the food resource
Mode 1—gregarious instinct
Mode 2—group forming (herds)
Mode 3—static agents enabled
The normal (reference) set of model parameters can be found in the Appendix to this chapter. If not stated differently, we use these parameters in the simulations. The program provides an online animation: it can be seen how the agents move over the region M and how the food resource renews. Figure 4.5 shows the icons used for agents of type 1, 2, 3, and 4; the short line sections indicate the actual direction of movement.
Fig. 4.5 Icons used to display agent position


Fig. 4.6 An example of animation screen. Gregarious effect enabled

In Fig. 4.6 we can see an example of the animation screen with agents. It is a simulation with the gregarious instinct enabled. At this moment of the model time there are many moving agents. The gregarious effect can be seen: most of the agents, mainly of type 1, move in a "common" direction, adopted from the nearby ones. The green areas indicate the presence of food, and the "no food" spots are white. It can be seen that the prey of type 1 escape from the predators. Looking at the animations, we can see that the prey agents of type 1 are agile; they move fast, looking for food and escaping from the predators. The predators of type 2 also move quickly and reproduce if they catch a prey. Agents of type 3 are rather slow. They eat food as well as those of type 1. These agents reproduce rapidly and, when there is no food or prey to devour, they disappear. The agents of type 4 normally enter into action at the final stage of an episode, quickly devour those of type 1, 2, and 3, and then also disappear. At the beginning of the simulation, initial populations of 200, 50, 20, and 80 agents are generated, for types 1, 2, 3, and 4, respectively. The initial food amount is set equal to 0.1 for all spots.


The short "warm-up" initial interval is as follows. At the beginning, agents of type 1 multiply rapidly, but quickly stop reproducing because of the lack of food. The predators of type 2 and 3 enter into action, and the agents of type 1 disappear. Then, the agents of type 2 and 3 start to die of starvation. Moreover, predators of type 4 multiply and devour those of type 2 and 3. Finally, those of type 4 also die of starvation. After this initial period, there is an interval with few active agents, and then the episodes start to appear. By an episode we mean a process in which one or more agent types grow and which terminates again in a partial extinction of agents. By a catastrophe we understand an episode with an abnormal multiplication of agents (mainly of type 1) that results in a severe lack of food for a certain subsequent time interval.

4.5.1 Basic Simulation Mode

In this mode, the gregarious effect and group forming are disabled, and no static agents are generated. Figure 4.7 shows the plot of one typical episode. Suddenly, the population of type 1 grows. Then the predators of type 2 enter into action and devour those of type 1. In this episode, the predators of type 3 and 4 do not act significantly; their populations consist of only a few agents and are not visible on the plots. The episode is similar to that produced by the continuous L-V equations. However, such episodes repeat with an irregular pattern, and some of them are completely different from the others. In Fig. 4.8 we can see the changes of all four populations in the same simulation run, over the time interval 0–300. The vertical scales are fixed for all four plots. Some curves are truncated when the population grows abnormally, to more than 1000 agents. There are some episodes that can be treated as "catastrophic," marked with C.

Fig. 4.7 An episode (number of agents of type 1 and 2 vs. time, interval 220–260)


Fig. 4.8 Gregarious effect and group forming are disabled: populations N of agent types 1–4 vs. time (0–300); the type 1 population peaks at 2950, and the catastrophic episodes are marked with C

Observe the extremely irregular pattern of agent behavior, compared to the continuous solutions of the L-V equations. Some episodes are similar to those provided by the continuous L-V model. However, some of them, namely the catastrophic growth of the prey population, can hardly be modeled with the L-V equations. Catastrophes with abnormal population growth strongly depend on the spatial distribution of agents. The conditions for a catastrophic episode are as follows.
1. In a part of the region, there is enough food, and there are one or more prey of type 1.
2. There are no predators of type 2 or 3 nearby.
3. Possibly, there are predators of type 4 in the neighborhood.
Note that the predator of type 4 eats agents of type 2 and 3 and not those of type 1.


If, at some moment, the above conditions are satisfied, then the prey of type 1 commence to multiply quickly and may reach a very high population level. Then, the predators also multiply, and finally nearly all agents disappear. After such an event, there is no food left, and the food recuperates very slowly.

4.5.2 Incoming Agents, Low System Isolation

The irregularity of the model behavior shown in the previous section depends considerably on the model's isolation from the outside world. The background birth process (Sect. 4.1) creates new agents as if they were immigrating from outside. Without this process, it would be very probable that a total extinction of the populations occurs just after the first episode. It can be seen that the birth process stabilizes the system. Figure 4.9 shows the population changes with the same parameters as those of Fig. 4.8, but with the birth process ten times more intensive. It is interesting to see that the behavior of the model switches to a different pattern after the episode at time = 125.

4.5.3 Gregarious Effect

Now, we enable the gregarious effect (simulation mode 1). In this mode, the gregarious effect influences the agent movement direction. However, it is not merely the average direction of the neighbors. For example, one prey or a group of prey may detect a nearby predator and start to escape. The gregarious effect helps to propagate the information about this threat to other agents of the same type, making such a collective escape perhaps more efficient. Figure 4.10 depicts the population changes with the gregarious effect. Generally, there is little difference between the plots of Figs. 4.10 and 4.8 (mode 0). Running more simulations, we can observe that in mode 0 the size of the catastrophic growth is greater than in simulation mode 1. The maximal number of agents of type 1 without the gregarious effect is equal to 2950, and with the gregarious effect it is equal to 1102. In other words, the gregarious effect slightly stabilizes the model. However, the overall pattern does not change significantly. This is perhaps due to the fact that both prey and predator follow a similar herd instinct.

4.5.4 Group Forming—Simulation Mode 2

In this mode, an attraction force is added to the gregarious direction component. This force is similar to a gravitational force, taken over the nearby neighbors. As a result, the agents form groups (herds) and tend to move in a common direction. Figure 4.11 shows a screenshot of the animation.

Fig. 4.9 Population changes with an intensive birth process (low isolation): populations N of agent types 1–4 vs. time (0–300)

Being a member of a herd provides some additional advantage: a predator cannot devour a prey that has more than 9 near neighbors located at a distance of less than 0.05 from the agent. Moreover, the predators do not enter the herds, and rather look for prey separated from the herds. The result of the simulation is shown in Fig. 4.12. In this simulation mode the prey of type 1 have an advantage, and it is more difficult for the predators to devour them.


Fig. 4.10 Population changes with the gregarious effect: populations N of agent types 1–4 vs. time (0–300); the type 1 population peaks at 1102

4.5.5 A Slow Catastrophe; Static Agents

This simulation runs in mode 3. The gregarious effect is enabled, and the static agents are present. The mechanism of action of the static agents (SAs) is explained in Sect. 4.4.5. In this simulation, the static process is activated. As stated before, the hunters act only in a close neighborhood around their spot, hunting any agent of type 1, 2, 3, or 4 that approaches the spot (hunter cell). The moving agents do not "understand" the threat and do not detect the nearby hunter.


Fig. 4.11 Animation screen with gregarious effect and group forming

If an agent has been hunted, it is converted into hunters' food, which is distributed among all hunters. With enough food, the SA population grows, and new hunter cells appear. It is supposed that each cell may contain up to eight individuals. The food is consumed slowly, and if there is not enough food, then the SA population decays and the area occupied by SA cells shrinks. This simulation mode requires a long simulation time. Figure 4.13 shows the growth of the hunters' area. We simulate the evolution of the population over the time interval [0, 5000]. The whole simulation could be treated as a "slow catastrophe." When the hunters' area grows, the moving agents have less space to multiply, and many of them are hunted by the static elements. Figure 4.14 depicts the plots of the total population of moving agents and the number of hunter cells. It can be seen that the whole system tends slowly towards a situation with a reduced movement area for the moving agents. Observe another catastrophe that occurs between time = 3500 and 4000. Within this period, the agent population unexpectedly begins to decay, and the hunters' area shrinks.


Fig. 4.12 Simulation mode 2, gregarious effect and group forming: populations N of agent types 1–4 vs. time (0–300); the type 1 population peaks at 2895

After this catastrophe, the number of cells does not recover to its earlier level. The decay of the number of moving agents is caused by the growing probability of being hunted. However, this is not the only cause of the decay. To reproduce and survive, the moving agents need some space to move, and this space is being "robbed" by the hunters, like the devastation of green regions by growing urban areas in an ecological system.


Fig. 4.13 Animation screen with the static process. Early stage (left) and advanced growth

Fig. 4.14 Total population of agents of type 1, 2, 3, and 4 (upper panel) and the number of hunter cells (lower panel) vs. time (0–5000)

Appendix—Model Data Specification
This is the standard data set for the Extended Prey–Predator model. If not stated differently, this data set is used in the simulations.


The Food Resource
Food renewal rate vr = 1.2 food units per time unit. Initial food equal to 0.2 for all spots.

Agents
Time step of movement h = 0.05.
Lifetime = 3.5, 1.5, 1.0, and 1.0 for agents of type 1, 2, 3, and 4, respectively (default).
Overall speed factor sp = 0.6, 0.4, 0.1, and 0.5 for types 1, 2, 3, and 4, respectively.
The agent velocity is equal to v = cr vr + cg vg + cf vf + ce ve, where
vf—food seek direction
ve—prey seeking/escape direction
vg—gregarious movement direction
vr—random walk direction
The default coefficients are as follows.
For type 1: cr = 1.0, cg = 10.0, cf = 5.0, ce = 2.0
For type 2: cr = 1.0, cg = 4.0, cf = 0.0, ce = 4.0
For type 3: cr = 1.0, cg = 4.0, cf = 5.0, ce = 4.0
For type 4: cr = 1.0, cg = 2.0, cf = 0.0, ce = 4.0
Event Eat-food is repeated every 30·h time units. The amount of eaten food is calculated as follows. The tentative amount U is set equal to 0.25, and the minimal food remaining is set to W = 0.0001. If U ≤ Rkj − W, then U food units are eaten; if U > Rkj − W, the agent eats Rkj − W units of food. After eating, an agent of type 1 increases its energy by 8 times the amount eaten; agents of type 3 increase their energy by 2 times the amount eaten. To eat food, the agent must stop for a while. The stop interval is set to 0.1 time units.
Devour. Agents of type 2 and 3 can devour those of type 1. Agents of type 4 eat agents of type 2 or 3. To devour a prey, the predator–prey distance must be less than 0.03. After eating the prey, the predator increases its energy by 0.1 and reproduces. The new agent is given energy equal to 1, and inherits the type and other parameters from the parent agent.
Reproduce parameters (agent types 1 and 3): r—agent reproduction period; r = 1.0 for agents of type 1, r = 1.6 for agents of type 3.


Minimum energy to reproduce = 0.3. The new agent is given 40% of the parent energy, and the parent energy decreases by the same amount. The parent is given an extra amount of energy, equal to 0.05.
Event Moves: agents of type 1 lose 0.4h|v| energy units, and the energy loss for the other agents is equal to 0.2h|v|. To move, the agent must have energy greater than 0.1; if it does not, it dies. The default per-type values are collected in the sketch below.
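The sketch (plain C++, with a hypothetical structure name) is only a restatement of the Appendix values given above.

    // Default agent parameters per type (index 0 corresponds to type 1, ..., index 3 to type 4).
    struct TypeParams {
        double lifetime, sp;        // lifetime and overall speed factor
        double cr, cg, cf, ce;      // velocity coefficients of (4.7)
    };

    const TypeParams DEFAULTS[4] = {
        // lifetime  sp    cr   cg    cf   ce
        {  3.5,      0.6,  1.0, 10.0, 5.0, 2.0 },   // type 1 (prey)
        {  1.5,      0.4,  1.0,  4.0, 0.0, 4.0 },   // type 2
        {  1.0,      0.1,  1.0,  4.0, 5.0, 4.0 },   // type 3
        {  1.0,      0.5,  1.0,  2.0, 0.0, 4.0 }    // type 4
    };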

Static Agents (SA) Mode
If the distance of an agent from a hunter's spot is less than 0.05, then the agent triggers the event hunt of process Hunters. The parameters of the SAs are as follows.
Time step hp = 0.5
Amount of food gained when devouring F = 0.5
Decay rate: the population decays by 0.0002·N·hp each time step hp (food consumption)
Food-per-hunter threshold Th = 0.2

References
Aubin JP, Cellina A (1984) Differential inclusions. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-69512-4. ISBN 978-3-642-69514-8
Bandini S, Vizzari G (2009) Agent based modeling and simulation: an informatics perspective. J Artif Soc Soc Simul 12(4):1–4. ISSN 1460-7425
Borshchev A, Filippov A (2004) From system dynamics and discrete event to practical agent based modeling: reasons, techniques, tools. In: Proceedings of the 22nd international conference of the System Dynamics Society, Oxford, England
Cropp RA, Norbury J (2015) Population interactions in ecology: a rule-based approach to modeling ecosystems in a mass-conserving framework. SIAM Rev 57(3):437–465
Gasull A, Kooij RE, Torregrosa J (1997) Limit cycles in the Holling-Tanner model. Publicacions Matemàtiques 41(1):149–167. https://doi.org/10.5565/PUBLMAT_41197_09. ISSN 0214-1493
Gras R, Devaurs D, Wozniak A, Aspinall A (2009) An individual-based evolving predator-prey ecosystem simulation using a fuzzy cognitive map as the behavior model. Artif Life 15(4):423–463
Hoppensteadt F (2006) Predator-prey model. Scholarpedia 1(10):1563. https://doi.org/10.4249/scholarpedia.1563
Lotka AJ (1910) Contribution to the theory of periodic reaction. J Phys Chem 14(3):271–274
Mangioni SE (2012) A mechanism for pattern formation in dynamic populations by the effect of gregarious instinct. Phys A Stat Mech Appl 391:113–124
McDougall W (1926) The gregarious instinct. In: Introduction to social psychology, revised edn
Nowak MA (2006) Evolutionary dynamics. Harvard University Press. ISBN 9780674023383


Pontriagin LS (1962) The mathematical theory of optimal processes. Wiley Interscience, New York
Raczynski S (2002) Differential inclusion solver. In: Proceedings of the International Conference on Grand Challenges for Modeling and Simulation, The Society for Computer Simulation Int., San Antonio, TX
Raczynski S (2019) Differential inclusions in modeling and simulation. Brown Walker Press, Irvine. ISBN-10 0-262-69281-3
Railsback SF, Lytinen SL, Jackson SK (2006) Agent-based simulation platforms: review and development recommendations. Simulation 82(9):609–623. https://doi.org/10.1177/0037549706073695
Takeuchi Y (1996) Global dynamical properties of Lotka-Volterra systems. World Scientific, Singapore
Tanner JT (1975) The stability and intrinsic growth rates of prey and predator populations. Ecology 56(4):856–867. https://doi.org/10.2307/1936296
Volterra V (1926) Variazioni e fluttuazioni del numero d'individui in specie animali conviventi 2:31–113
Volterra V (1931) Variations and fluctuations of the number of individuals in animal species living together. McGraw-Hill, New York
Williams BA (1933) Gregariousness: a critical examination of the concept of the gregarious instinct. Aust J Psychol Philos 11(1):50–68
Zhang JF, Kooij RE, Torregrosa J (2012) Bifurcation analysis of a modified Holling-Tanner predator-prey model with time delay. Appl Math Model 36:1219–1231

Chapter 5: Stock Market: Uncertainty and Catastrophes

Abstract An agent-based model, as well as a continuous model version, for a stock market with the "bandwagon" effect (gregarious or herd instinct) is presented. For the continuous model, a new approach to uncertainty treatment is applied. The problem of uncertainty is formulated in a deterministic way, using differential inclusions as the main modeling tool. This results in the shape of the reachable set for the model trajectory, namely the possible extreme values of the stock demand and price. The main topic of this chapter is the agent-based model that supports multiple agents and multiple companies. The dynamics of the market is discussed. Behavior patterns such as oscillations and catastrophic events are shown. It is pointed out that unexpected events like abrupt changes of the market index and of the share price can be provoked by the internal interactions of the market actors and not necessarily by external excitations.
Keywords Stock market · Uncertainty · Simulation · Stock market model

5.1 Introduction

The continuous model used here is a simple system dynamics stock market model for short-term market behavior. Models of this kind can provide important hints for financial planning and strategic decisions. Recall that system dynamics (SD) models are mostly continuous and reflect certain global and averaged trajectories of a possible change of the system variables. In models of this kind the human factor is strongly simplified. The behavior of a stock market agent is difficult to model and to predict in such a continuous version. Consequently, the application of the system dynamics modeling methodology to this case, as well as to other systems with a human factor, is doubtful. The other possible approach is agent-based simulation. Anyway, we should remember that not everything that happens in the real world is described by differential equations. The continuous version of the stock market model used in this chapter was taken from the literature, and is discussed rather for historical reasons. It is a simple model of a market with only one stock type.


For this and similar models consult Andresen (1999), Minsky (1982), or Goodwin (1967), to mention only some of the thousands of publications on stock market behavior. Some more qualitative comments on stock modeling and the use of models can be found in less academic sources like Glassman (1998). Recall that in discrete object-based modeling, we create objects that behave according to user-defined rules, executing their events in discrete moments of the model time. The main concepts of object- and agent-based modeling can be found in Chap. 1 of this book. For the reader's convenience, we repeat here the main remarks. Agent-based models manage objects called agents, equipped with certain "intelligence." They can take decisions, optimize their actions, and interact with each other. Agent-based models (ABMs) simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The agent-based approach consists in describing individuals interacting with one another and with their environment. The individuals may be of different types. Although the rules of behavior are shared by a group of individuals of the same type, the behavior is not identical for all of them. This modeling method has many applications, mainly in ecology, biology, and the social sciences. A key notion is that simple behavioral rules (the micro model) generate complex (macro) behavior. An important central tenet is that the whole is greater than the sum of the parts. Individual agents are typically characterized as rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules (Railsback et al. 2006). Consult also Bandini and Vizzari (2009). Agent-based simulation seems to be a quite realistic approach to system modeling. It is an alternative approach, completely different from continuous modeling. There is a strange conviction among some simulationists that everything that happens in the real world can be described by differential equations. The point of this book is that this is not exactly true. In a few words, the agent-based simulation program generates the corresponding populations, according to the generic declarations of the object types. The objects appear in the computer memory, and each of them runs its own "life," moving, escaping, devouring other objects, and dying.

5.2 Continuous Model

The continuous models are not the main topic of this book. They are briefly discussed here to give a different insight into the models and to provide a paradigm to be compared with the agent-based approach. The main problem in marketing, economic, social, and similar ("soft") systems modeling is the lack of exact information. This uncertainty in the model data (initial conditions, parameters, external signals, etc.) and even in the model structure needs special treatment.


The simplest way to get some information about the behavior of a system with uncertainty is to assume that some variables are subject to random changes and to observe the resulting model trajectories. The common opinion is that uncertainty can be treated using stochastic models and probabilistic methods. Note, however, that the very definition of uncertainty has nothing to do with stochastic models. It is an error to identify an uncertain variable with a random variable. An uncertain variable or parameter has a value that may belong to some interval or satisfy some restrictions. It may have no probability distribution and need not be random at all. The approach to uncertainty treatment proposed here is based on differential inclusions and is deterministic.

5.2.1 Model Equations

We consider a simple model of the dynamics of one stock type only. This is an ordinary differential equation (ODE) model like those used in the system dynamics approach. Let p be the current market price, and pr the real value of the stock. We denote by n the current demand for the stock, expressed in number of units. Suppose that this demand is the sum of the following components.
nr—demand due to the agents being informed about the stock value.
ne—demand due to erroneous information. This is the uncertain component of the demand n.
nb—demand due to the agents who observe the price increase/decrease rate and do their trading based on some kind of prediction. The subscript b stands for the "bandwagon effect." This means that a positive or negative price rate attracts increasing or decreasing numbers of agents, respectively. According to Wikipedia, "A bandwagon is a wagon which carries a band during the course of a parade, circus or other entertainment event. The phrase jump on the bandwagon first appeared in American politics in 1848." In ecology, politics, and economy, the bandwagon effect is also referred to as the gregarious or herd instinct. This effect means that an individual looks around and observes what others are doing or where they are moving. Then the individual makes a decision about its own action, influenced by the actions of the others. This effect may disturb the consequences of the supply and demand rules of the market, and even make individuals take irrational and wrong decisions.
To find the model equations, observe the following facts. The demand n depends on the difference between the real and the current stock price. This difference should be expressed in relation to the price, so we assume that the demand can be calculated as follows.

82

5 Stock Market: Uncertainty and Catastrophes

nr = A (pr − p)/p    (5.1)

where A is a constant. This demand, in turn, determines the price growth rate. So, we have

dp/dt = r(t) = B f(n) p    (5.2)

f(n) = n for n > −I,   f(n) = −I for n < −I    (5.3)

where B is a constant and I is the total number of stocks issued. The function f represents saturation. This means that the surplus of stocks (which results in negative demand) cannot be greater than I. The component nb that determines the "bandwagon effect" depends on the price increase rate. This reaction of the agents is not immediate and is subject to some inertia. We shall use here the simplest way to represent this, supposing that

nb(s) = G(s) r(s),   where   G(s) = C / (1 + T s)    (5.4)

Here, s is the complex variable of the Laplace transform, and G(s) is a first-order transfer function. Eq. (5.4) implies the following.

dnb/dt = (C B f(n) − nb)/T    (5.5)

Eqs. (5.2) and (5.5) describe the dynamics of the model. It is a set of two ordinary, nonlinear differential equations that can easily be solved using any continuous simulation tool. In Fig. 5.1 we can see an example of possible changes of the demand during two trading days. This trajectory was obtained using the PSM (http://www.raczynski.com/pn/pn.htm) simulation system with the following parameters: I = 10,000, T = 0.005, pr = 10, A = 476.2, B = 0.00007, C = 14,200. The uncertain (erroneous) component ne was supposed to belong to the interval [−500, 500]. The actual value of the "bandwagon" time constant T may be lower than assumed; the value of 0.005 was chosen to slow down the oscillations and make the example more illustrative. The initial conditions for the trajectories of Fig. 5.1 and for all other figures were p(0) = 8, nb(0) = 0, which means that we start with an undervalued stock, which generates a positive demand. As stated before, the component ne of the estimated demand represents the erroneous data. To obtain the trajectory of Fig. 5.1, this component was assumed to be a random variable taken from the interval [−500, 500]. This is a common approach in uncertainty treatment.
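For readers who want to reproduce trajectories like that of Fig. 5.1 without the PSM package, a minimal explicit-Euler sketch is given below (plain C++). The integration step, the time horizon of two units, drawing ne anew at every step, and taking the total demand as n = nr + ne + nb are my assumptions; Eqs. (5.1)–(5.3) and (5.5) are coded as printed above.

    #include <cstdio>
    #include <cstdlib>

    // Parameters from the text.
    const double I = 10000.0, T = 0.005, pr = 10.0;
    const double A = 476.2,   B = 0.00007, C = 14200.0;

    double f(double n) { return (n > -I) ? n : -I; }   // saturation, Eq. (5.3)

    int main() {
        double p = 8.0, nb = 0.0;                      // initial conditions p(0) = 8, nb(0) = 0
        const double dt = 0.0001;                      // assumed integration step
        for (double t = 0.0; t < 2.0; t += dt) {
            double ne = -500.0 + 1000.0*std::rand()/RAND_MAX;  // uncertain component in [-500, 500]
            double nr = A*(pr - p)/p;                  // Eq. (5.1)
            double n  = nr + ne + nb;                  // total demand (assumed sum of components)
            double dp  = B*f(n)*p;                     // Eq. (5.2)
            double dnb = (C*B*f(n) - nb)/T;            // Eq. (5.5) as printed
            p  += dt*dp;
            nb += dt*dnb;
        }
        std::printf("final price %.4f, bandwagon demand %.1f\n", p, nb);
        return 0;
    }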


Fig. 5.1 Possible changes in demand during two trading days (demand vs. time)

Treating the uncertain parameters as random ones, we can obtain possible model trajectories, or carry out various statistical analyses over sets of hundreds or thousands of integrated trajectories. However, note that uncertainty should not be confused with randomness. First of all, to consider a variable as random, you must know that it really is random. If so, you must know something about its probabilistic properties in order to be able to generate it. In the case of dynamic systems, this means not only the probability distribution, but also the spectral density, mutual correlations, and other properties hardly known in practical applications. On the other hand, if a model variable is said to be uncertain, we only assume some interval to which it belongs, and nothing more. The result of the uncertainty analysis should be the reachable set for the model trajectories. Note that such uncertainty treatment is deterministic. Another reason to treat the uncertain variables in a non-probabilistic way is that such an analysis gives us information about possible extreme values (recall "Murphy's law"). This may also be useful if we expect that the uncertain variables could be intentionally generated to move the system to extreme values (manipulated and false information). In this case the variable has no distribution density or other probabilistic properties. In Eqs. (5.2) and (5.5) there is one uncertain variable, namely ne. In vector form, our model is described by the equation

dx/dt = f(x, ne),   ne ∈ [−500, 500]    (5.6)


where x is the state vector x = (p, nb) and f is a vector-valued function that includes the two right-hand sides of the equations. We do not indicate constant model parameters in the arguments of f. However, ne appears on the right-hand side of (5.6) because it is a variable and not a fixed parameter. Eq. (5.6) can be written as follows.

dx/dt ∈ F(x)    (5.7)

where F is a set defined by f when ne scans all values from the interval [−500, 500]. What we have obtained is a differential inclusion (DI), instead of a differential equation. This is the proper way to treat dynamic uncertainty. The solution to a differential inclusion is the reachable set. It is the set to which all model trajectories must belong. This is exactly what we need as the result of the uncertainty analysis, rather than particular model trajectories. Note that this problem statement is completely deterministic.

5.2.2 Differential Inclusion Solver

A trajectory of a DI is a function that satisfies (5.7) for almost all points within a given time interval. The reachable or attainable set of a DI is the union of the graphs of all possible trajectories of the DI. The solution to a DI is the reachable set. It is exactly the solution to our uncertainty problem. In this very natural way, uncertainty in dynamic system modeling leads to differential inclusions as the corresponding mathematical tool. Note that DIs have been known for about 80 years and that there is a wide literature available on DI theory and applications. The first works were published by Zaremba (1936), who used the terms "contingent" or "paratingent" equations. Later, in 1960–1970, T. Wazewski (1961) and his collaborators published a series of works, referring to the DIs as orientor conditions or orientor fields. As always occurs with new theories, their works received severe criticism, mainly from some physicists who claimed that it was a waste of time to deal with such an abstract and useless theory. Fortunately, the authors did not abandon the idea and developed the elementary theory of differential inclusions. In the decade 1930–1940 such problems as the existence and properties of the solutions to DIs were solved in the finite-dimensional space. After this, many works appeared on DIs in more abstract, infinite-dimensional spaces. Within a few years after the first publications, DIs turned out to be a basic tool in optimal control theory. Recall that the optimal trajectories of a dynamic system are those that lie on the boundary of the system's reachable set. In the works of Pontryagin, see Lee and Markus (1967), and many others, one of the fundamental problems is the properties of the reachable sets. The differential inclusion solver algorithm that calculates the reachable sets has been published in Raczynski (2002).


One could expect that a solution algorithm for a DI might be obtained as some extension of known algorithms for ODEs. Unfortunately, this is not the case. Finding the boundary of the reachable set is not an easy task. I will not discuss more theoretical details about DIs here. A more extended survey can be found in Raczynski (1996). An excellent book on the theoretical background was written by Aubin and Cellina (1984). Other fundamental publications are those of Zaremba (1936) and Wazewski (1961). The DI solver was developed by the author in 2002 (see Raczynski 2002). One of the properties of the reachable set is the fact that if a trajectory reaches a point on the boundary of the RS at the final time, then its entire graph must belong to the RS boundary. This fact is well known and used in optimal control theory. Observe that any trajectory that reaches a point on the boundary of the RS is optimal in a certain sense. Such trajectories can be calculated using several methods, the main one being the Maximum Principle of Pontryagin (consult Lee and Markus 1967). If we can calculate a sufficient number of trajectories that scan the RS boundary, then we can see its shape. We will not discuss here the theoretical issues related to DIs, which are complicated and would need a book rather than a short chapter. In a few words, the solving algorithm generates trajectories that scan the boundary of the reachable set. On each of such trajectories an expression known as the Hamiltonian is maximized with respect to a control variable (in our case ne). By integrating a sufficient number of such trajectories we can obtain a good approximation of the reachable set. The DI solver based on the above principles has been presented in several articles; consult Raczynski (1996) for more detail. Note that our DI solver explores the boundary of the reachable set. It would be an error to try to calculate this set by scanning its interior. One could expect that the reachable set of a DI can be obtained by simple random shooting, that is, by generating ne randomly and then looking for the boundary of the points reached by the resulting trajectories. Unfortunately, this is not the case, except perhaps in some very simple and trivial cases. What we obtain by such primitive random shooting is a cluster of trajectories in a small region that has little to do with the true shape of the reachable set, even with a great number of calculated trajectories.

5.2.3 The Reachable Set Example

Figure 5.2 shows the solution to our differential inclusion at the end of one trading day. The dotted contour shows the boundary of the reachable set, that is, the boundary of the set on the price-demand plane to which the model trajectories must belong. This contour was obtained by storing about 500 model trajectories. To see how useless the primitive random shooting method mentioned before is, the figure also shows the result of such shooting with 10,000 integrated trajectories (a small cluster of pixels inside the reachable set). The random values of ne in this primitive shooting were generated on the boundary of the allowed interval [−500, 500].


Fig. 5.2 The reachable set at the price-demand plane. A small cluster of points inside the set was obtained with a primitive random shooting

When ne is generated randomly from the inside of this interval, the cluster is even smaller. This does not mean that the computing time needed to solve the DI is 20 times smaller compared to primitive shooting; the point is that the primitive shooting provides no solution at all. On the other hand, the DI solver is rather slow, because of the complexity of the algorithm, which needs the Hamiltonian to be maximized on each integration step. In the presented case, about 1 min of computing time was necessary to get the solution, using a 2.5 GHz PC. Figure 5.3 shows a 3D image of the reachable set. The trajectories that determine the reachable set oscillate around its boundary. These nonlinear oscillations suggest that the extreme points of the reachable set boundary are reached when the model enters into some kind of "resonance." This is hardly possible with a purely random excitation, but quite possible when the uncertain parameter is changed intentionally to reach the boundary or extreme points. Figure 5.4 shows some such trajectories, randomly selected. Note that these are not random trajectories; only their selection is random. Each of the trajectories of Fig. 5.4 is a 2D projection of a trajectory that lies on the reachable set boundary. Finally, Fig. 5.5 depicts the projection of the reachable set on the time-demand plane. The model of the stock market dynamics is a good example of uncertainty treatment. It may provide interesting information about possible stock market behavior.


Fig. 5.3 The 3D image of the reachable set
Fig. 5.4 Some randomly selected model trajectories that scan the boundary of the reachable set (price vs. time)

5.3 Agent-Based Model

Our agent-based model is much more complex than the simple continuous model discussed above. Now, we model each individual agent that acts on the stock market, each individual company, and the market brokers. The market model includes the main stock market mechanisms. We select the most important components and events that occur in a real market. The model can be extended by adding more detailed components and their behavior.


Fig. 5.5 The projection of the reachable set on the time-demand plane

Our limited set of components and events allows us to carry out multiple simulation experiments that may provide hints for understanding market behavior. Anyway, the results give us qualitative behavior data rather than quantitative properties of a real market. The purpose of the simulations is to observe dynamic behavioral patterns like random fluctuations, oscillations, and possible unexpected events that may be interpreted as catastrophes or market crashes. It should be noted that these events are generated by the internal interactions of the market components, and not by any external conditions. It is easy to simulate a stock market crash by inserting some external impulse. In our case, no such signals are simulated, and the eventual abnormal behavior depends only on the model itself. Our market works as follows. There is a set of companies that issue stocks (or shares, securities, or equities) that represent ownership in a company. The share owners acquire the right to receive dividends as well as assets in the event of a liquidation. This is done by selling shares to the public through an initial public offering (IPO). Thousands or even millions of individuals can buy and sell the shares. These individuals or traders will be referred to as agents in the following. The agents issue market orders, which are requests made by an agent to purchase or sell a share at the best possible price. Market orders are executed by brokers on behalf of their agents, who want to conclude the transaction at the best price available on the market. A bid is the price at which somebody wishes to buy, and the ask is the price at which somebody wishes to sell. There are several kinds of orders. Here, we consider only the common daily orders that are valid until the end of the trading day.


The order data include the requested or offered volume of shares, the company id, and the bid or ask (for buy or sell orders, respectively). For each order, the brokers look for the best buyer or seller and, if one is found, the order is executed. While issuing an order, the buyer selects the best (in his/her opinion) company. In our model, if a buy or sell order is not executed during a given time interval, then the order is changed, raising the bid or decreasing the ask. In this way, the market is governed by the supply and demand rules. This mechanism results in a certain auto-regulatory behavior, but it does not guarantee that the market reaches a steady state of equilibrium and does not imply market stability. The initial share price for each company is defined in the company initial event. Then, the price changes according to the demand/supply mechanism. This is implemented in the simplest way. If a buy order is not executed during a certain time interval, the buyer increases his bid. On the other hand, if the shares are hardly being sold, then the sellers decrease their ask values. The actual price is defined by the recent successful sell-buy operations. The following model description is given in terms of the agent-based model specifications. Thus, the main terms are processes, agents, and events. A process is a generic code segment that describes the behavior of certain model components. According to the process declaration, the simulation program generates one or more instances of agents of the process type, which are activated and start to execute their events. Inside the process declaration, there is a number of event specifications. The scheduling and execution of events are done automatically, governed by the "simulation engine" provided by the simulation package (in our case the BLUESSS simulation system). The model time is the time we are simulating, not the time of our clocks. In the model, it is represented by the variable TIME (a C++ float type). In the simulations, the time interval is set to 720 time units (which may be interpreted as minutes), corresponding to two trading days.
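Purely as an illustration of the event-queue idea behind such a simulation engine (this is not the BLUESSS implementation), scheduled events can be kept in a priority queue ordered by their model time:

    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Event {
        double time;                           // model time at which the event fires
        std::function<void()> action;
    };
    struct Later {                             // comparator: earliest event on top
        bool operator()(const Event &a, const Event &b) const { return a.time > b.time; }
    };

    int main() {
        double TIME = 0.0;
        std::priority_queue<Event, std::vector<Event>, Later> agenda;
        agenda.push({5.0, []{ std::puts("second event"); }});
        agenda.push({1.0, []{ std::puts("first event"); }});
        while (!agenda.empty()) {              // the "simulation engine" loop
            Event e = agenda.top(); agenda.pop();
            TIME = e.time;                     // advance the model time
            std::printf("TIME = %.1f: ", TIME);
            e.action();
        }
        return 0;
    }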

5.3.1 Processes and Events

5.3.1.1 Process Company

In this model, the company has little activity. At the very beginning, the companies issue shares. In the initial state of the model, the shares are uniformly distributed among the agents. Another company event, executed repeatedly, is the change in company value. Recall that the value of a company depends on financial analysis, discounted cash flow, earnings per share, growth rate, and other factors. However, this event is disabled in the present model because these are mostly external factors, and what we want to do is to simulate market behavior depending on what happens inside and not outside the market. Each company has a stock price attribute that may change during the simulation run. In the current model we simulate a market with 20 companies competing for the agents' share demand. The amount of shares issued by a company varies from 200,000 to 1,000,000.


The share price fluctuates between 1.25 and 7.5. The company attribute G tells how good the company is as a choice for an agent who wants to buy shares. The way G is defined is explained further on.

5.3.1.2 Process Agent

The agent is the fundamental component of the market. On a real market there may be thousands or even millions of agents exchanging shares. Our model is object- and agent-based, and the necessary computer time strongly depends on the number of generated agents. Unless you use a supercomputer, the reasonable number of agents is up to several thousand. Depending on model complexity, models with more than a thousand process instances running concurrently can make the processing time grow to between several minutes and hours. In the current model we generate only 200 agents. Each agent has the following attributes:
hb—the time step for issuing a buy order (by default 0.78)
hs—the time step for issuing a sell order (by default 0.78)
hc—the time step for the control event of process monit (Sect. 5.3.2)
The buy/sell order is a record that includes the following data (a sketch of such a record is given below):
Order type: equal to 1 for a buy order, 2 for a sell order
Company id
Agent id
Max volume to buy or sell
Ask/bid value A—the ask is the tentative share price for a sell order, the bid is the price the buyer is able to pay
Tm—the time instant at which the order was created
The agent executes the following events.
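Before turning to the events, the order record listed above can be pictured as a plain C++ struct (a sketch with hypothetical field names):

    // One market order, as described in the text.
    struct Order {
        int    type;        // 1 = buy order, 2 = sell order
        int    companyId;
        int    agentId;
        int    maxVolume;   // maximal volume to buy or sell
        double A;           // ask (sell order) or bid (buy order)
        double Tm;          // model time at which the order was created
    };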

5.3.1.3 Event Buy

The agent issues a buy order. Before setting up the order, the buyer selects the company. There are two possible selection modes:
1. The agent looks for the best company offer. This depends on the company attribute G, calculated in the control event of the monitor process described in the sequel.
2. The agent is not aware of the price forecast and selects any company; see Markowitz (1952), Chen and Hung (2009), Glassman (1998).
In case 1, first, a prognosis for the share price is made. It uses the data about the history of the price. A second-order approximation curve for the price changes is found using the least squares fitting method. This provides the linear and quadratic tendency coefficients. It is assumed that the company share is good for a buyer when the forecast for the price is good, i.e., the price will increase in the near future (see Fig. 5.6). This is a simple way to select the company.


Fig. 5.6 Price changes and forecast, ta is the actual time instant

In a real market situation, there are many price forecasting and share selection tools based on sophisticated statistical algorithms. Here, we use this simple but reasonable criterion. The other component of the object function G is the price itself. It should be lower than the past price values. When a new buy order is placed, the bid is defined as follows:

A = p (1 + Bg r),

where A is the bid, Bg is the "bid gap," p is the share price, and r is a random number between 0 and 1. In a similar way, for a sell order we have

A = p (1 − Bg r)



In this model, the orders have the IOC (Immediate or Cancel) condition. If no shares are traded within a certain "immediate" interval, then the order is canceled. In this case, a new order is issued with the same parameters and a new bid multiplied by 1 + W. The default value of W is equal to 0.25 (a 25% increase). The sell orders are updated in a similar way, with the coefficient 1 − W. Here, W is the bandwagon factor. All buy and sell orders expire at the end of the trading day. In our case, this occurs at the time instant equal to 390.
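A sketch of the two computations used in the buy event is given below (plain C++, hypothetical function names): the second-order least-squares fit that yields the linear and quadratic tendency coefficients, and the bid formula A = p(1 + Bg r). How many past price samples are fitted is not specified in the text and is left to the caller.

    #include <cmath>
    #include <cstdlib>
    #include <utility>
    #include <vector>

    // Least-squares fit of y ~ a0 + a1*t + a2*t^2 (normal equations, Gauss-Jordan elimination).
    // a1 and a2 are the linear and quadratic tendency coefficients of the price history.
    bool quadFit(const std::vector<double> &t, const std::vector<double> &y,
                 double &a0, double &a1, double &a2) {
        const int n = (int)t.size();
        if (n < 3 || y.size() != t.size()) return false;
        double S[3][4] = {{0.0}};
        for (int i = 0; i < n; ++i) {
            double b[3] = {1.0, t[i], t[i]*t[i]};
            for (int r = 0; r < 3; ++r) {
                for (int c = 0; c < 3; ++c) S[r][c] += b[r]*b[c];
                S[r][3] += b[r]*y[i];
            }
        }
        for (int k = 0; k < 3; ++k) {
            int piv = k;                                   // partial pivoting
            for (int r = k+1; r < 3; ++r) if (std::fabs(S[r][k]) > std::fabs(S[piv][k])) piv = r;
            for (int c = k; c < 4; ++c) std::swap(S[k][c], S[piv][c]);
            if (std::fabs(S[k][k]) < 1e-12) return false;
            for (int r = 0; r < 3; ++r) if (r != k) {
                double m = S[r][k]/S[k][k];
                for (int c = k; c < 4; ++c) S[r][c] -= m*S[k][c];
            }
        }
        a0 = S[0][3]/S[0][0]; a1 = S[1][3]/S[1][1]; a2 = S[2][3]/S[2][2];
        return true;
    }

    // Bid of a new buy order: A = p*(1 + Bg*r), with r uniform in [0, 1].
    double newBid(double p, double Bg) {
        double r = (double)std::rand()/RAND_MAX;
        return p*(1.0 + Bg*r);
    }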

5.3.1.4 Process Brokers

As mentioned before, the brokers manage and execute the agents' orders. In the present model, we do not simulate the activities of each broker as a separate object (instance of process brokers). There is no need to generate multiple brokers, because the only thing they do is scan all orders and select the best option for each order. So only one instance of this process is created. It represents the whole group of brokers. The main event, executed repeatedly, includes a loop over all orders. For each order the best option is selected. If the ask/bid values of the selected sell/buy orders coincide, then the order is executed. The current price of the corresponding company share is set equal to the price of this transaction. Here, we do not simulate the profit the brokers earn from the transactions. If we followed this principle for all transactions, with the ask/bid defined above, then the price of all shares would always decrease. Thus, we define the transaction price as the average between the corresponding ask and bid. The main event of the brokers process is broke, executed every 0.01 time units. In this event, the buy and sell orders are revised, and the transactions that can be executed are executed. This high frequency of repetitions is used because the broke event represents the activities of all market brokers, "encapsulated" in only one instance of the brokers process, while the agents and sellers are simulated as multiple objects.
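The core of the broke event can be sketched as follows (plain C++, hypothetical container and field names). For each buy order the cheapest sell order of the same company is looked up; the condition bid >= ask used here for executing the transaction is an assumption (the text speaks of coinciding ask/bid values), and the transaction price is the average of bid and ask, as described above.

    #include <cstddef>
    #include <vector>

    struct Order { int type, companyId, agentId, maxVolume; double A, Tm; };  // as sketched earlier

    // One pass of the broke event: match buy orders (type 1) with sell orders (type 2).
    void broke(std::vector<Order> &orders, std::vector<double> &sharePrice) {
        for (std::size_t i = 0; i < orders.size(); ++i) {
            if (orders[i].type != 1) continue;                   // take a buy order
            int best = -1;
            for (std::size_t j = 0; j < orders.size(); ++j) {    // cheapest matching sell order
                if (orders[j].type != 2) continue;
                if (orders[j].companyId != orders[i].companyId) continue;
                if (best < 0 || orders[j].A < orders[(std::size_t)best].A) best = (int)j;
            }
            if (best >= 0 && orders[i].A >= orders[(std::size_t)best].A) {
                double price = 0.5*(orders[i].A + orders[(std::size_t)best].A);  // average of bid and ask
                sharePrice[(std::size_t)orders[i].companyId] = price;  // current price = transaction price
                // ... transfer the shares, then remove or reduce both orders ...
            }
        }
    }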

5.3.1.5 Process monit

This is an auxiliary process that manages data collection, statistics, and result display. It has two events: control and show. Only one instance of the monit process is created. The control and show events are executed repeatedly.

5.3.2 Other Elements

During the simulation, the market index is calculated. It is the total value of all shares owned by the agents, using the current share prices. The value of the index is calculated and stored as a function of time, in the control event of process monit. The bandwagon or herd instinct factor is also calculated (see Sect. 5.2.1). In ecology, politics, and economy, the bandwagon effect is also referred to as the gregarious or herd instinct. This effect, as in the continuous model version, means that an individual looks around and observes what others are doing or where they are moving. Then the individual makes a decision about its own action, influenced by the actions of the others. This effect may disturb the consequences of the supply and demand rules of the market, and even make individuals take irrational decisions.


In our model, the bandwagon is a global factor, not related to any particular company. It becomes negative when the overall market index is decreasing. In this case, both buyers and sellers decrease their ask/bid parameters. This provides, as in the continuous model version, a positive feedback that moves the model state out of the eventual steady state. The changes of the bandwagon factor depend on the forecast of the market index. The forecast is done in a similar way to that of the price changes (Sect. 5.3.1). If the forecast is positive, then the bandwagon factor increases by the bandwagon rate Br; otherwise it decreases by the same amount. This operation is performed in the control event of process monit, which repeats every hc time units, with hc = 1.
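A compact sketch of the control event of process monit is given below (plain C++, hypothetical names): the market index is the total value of all shares at the current prices, and the bandwagon factor moves by plus or minus Br depending on the sign of the index forecast obtained in the same way as the price forecast above.

    #include <cstddef>
    #include <vector>

    // Market index: total value of all shares owned by the agents at current prices.
    double marketIndex(const std::vector< std::vector<long> > &holdings,   // holdings[agent][company]
                       const std::vector<double> &sharePrice) {
        double index = 0.0;
        for (std::size_t a = 0; a < holdings.size(); ++a)
            for (std::size_t c = 0; c < sharePrice.size(); ++c)
                index += holdings[a][c]*sharePrice[c];
        return index;
    }

    // Repeated every hc = 1 time unit; forecast > 0 means a positive fitted index tendency.
    void updateBandwagon(double &bandwagon, double forecast, double Br) {
        if (forecast > 0.0) bandwagon += Br;
        else                bandwagon -= Br;
    }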

5.4 Simulation Tool: BLUESSS Implementation

A more detailed overview of the BLUESSS (Blues Simulation System) package has been provided in Chap. 2 of this book. Here, we recall only the main features of the package (consult also Raczynski 2019). BLUESSS is an object-oriented simulation package. It permits continuous/discrete models and includes the clock mechanism and inheritance. A BLUESSS program has a clear process/event structure. It runs with the Embarcadero C++Builder. The package is extremely flexible: the user has access to any feature available from the C++ code. The BLUESSS source code is converted into C++, then compiled and executed. The resulting exe file is a stand-alone Windows application. For more information, consult http://www.raczynski.com/pn/bluesss.htm. In BLUESSS, we declare a series of processes (class declarations) and, inside each process, several event descriptions. These events are invoked during the "life" of the corresponding object (or agent) created according to the process declaration. For example, if we declare a process "car," then we can generate and launch thousands of cars, each of them having its own attributes and running its own events, such as "enter the highway," "accelerate," "move," "crash," "stop," etc. Agents can interact with other agents, execute a complicated computational procedure, execute an external program, change their own attributes or those of other objects, create an OpenGL image, sing a song, or communicate over the internet. These are features needed in agent-based simulation. BLUESSS is perhaps not as user-friendly as other packages because it requires some ability in C++ programming. On the other hand, it is extremely flexible and permits the use of anything available from C++, because the body of each event is coded in C++. The implementation of our model is done by coding the processes and events described in the previous section in the BLUESSS language.


In the main, Start section of the program, NC companies, NG agents, the brokers process, and one monit process instance are generated and activated. The rest of the simulation runs automatically.

5.5 The Simulations

Figure 5.7 shows a screenshot of a simulation run. The upper part of the display shows the state of share ownership. The horizontal axis is the agent index, and the vertical axis represents the companies. Dark spots correspond to a higher amount of company shares owned by the agent. In the lower part we can see the online plot of the overall market index (see Sect. 5.3.2). In each simulation run, more than one and a half million transactions are simulated. In this simulation, both the bandwagon effect and company selection are disabled. As stated before, our market is isolated from the environment. This may appear to be a somewhat unreal condition, but remember that we are looking for unexpected patterns in the market behavior and possible "catastrophes" that result from the market itself, and not from any external or artificial excitation. The "bandwagon" phenomenon also depends only on what happens inside the market and not in its environment. The program generates more than 25 different plots that reflect the behavioral patterns. Below, we show only some of them.

Fig. 5.7 Screenshot of a simulation run

5.5.1 Experiment 1

Let us start with a simulation with the company select mode disabled. The main parameters are AC = 0.15, Bg = 0.02, Br = 0.04, final time 780 time units. In this simulation mode we assume that the investors have little information about the companies and randomly select the shares to buy. The change of the market index is shown in Fig. 5.8. The plot does not reveal any interesting events and rather resembles a random walk process. Figure 5.9 shows the changes of the number of buyers. The plot of the share price for company number six can be seen in Fig. 5.10. In the next experiments, we will see how different the company price fluctuations are when the company select mechanism is enabled.

Fig. 5.8 Simulation without company select option (market index vs. time)

Fig. 5.9 Number of buy orders, experiment 1 (number of buyers vs. time)

Fig. 5.10 Fluctuation of the share price for company no. 6, experiment 1 (price vs. time)

Fig. 5.11 Changes of the market index, experiment 2 (index vs. time)

5.5.2 Experiment 2

In this simulation the company select mode (Sect. 5.3.1) is enabled. Figure 5.11 shows the changes of the overall index of the market. The bandwagon effect and the company select are enabled, AC = 0.15, Bg = 0.02, Br = 0.04, final time 780 time units. If we interpret the time unit as 1 min, this represents 2 days of market activity. We can observe the slow changes of the general tendency and the oscillations caused by the bandwagon effect. In general, this market model, in most simulations, reveals a self-regulation property. In order to stabilize the model, in this experiment the variable A (ask/bid) has its upper limit fixed at 100. In Fig. 5.12 we can see the changes of the corresponding bandwagon factor. In Figs. 5.11 and 5.12 we can see considerable variable fluctuations, and a big decrease of the index between time 300 and 470. The other, local variables,

Fig. 5.12 Fluctuations of the bandwagon factor, experiment 2 (bandwagon factor vs. time)

Fig. 5.13 The average share price for experiment 2 (average price vs. time)

such as the share prices for particular companies, are less regular and include some “catastrophic” events. Figure 5.13 shows the fluctuations of the average share price. Now, let us examine the share price fluctuations for some individual companies. Figure 5.14 depicts the changes of the share price of companies 4, 6, and 17 (out of 20). The plots show several catastrophic increases and decreases of the price. Note that these changes were not provoked by any external conditions. They are the result of the internal interactions between buyers, sellers, and brokers. While the plots provided by experiment 1 may look as expected from the model specifications, the plots of Fig. 5.14 are somewhat more difficult to explain. What makes the share price of a particular company increase so rapidly, then drop to a very low value and recover only after a rather long period? Remember that no external signals are present, and the behavior patterns are generated inside the model. This phenomenon is perhaps invoked by the company forecast and select

Fig. 5.14 Fluctuation of the share price for some selected companies (price vs. time for companies 4, 6, and 17)

activities. Some very vague analogy may be made to the oscillations in prey–predator models, though this is not exactly the case.

5.5.3 Experiment 3

In this simulation, the bandwagon effect is augmented. We have AC = 0.15, Bg = 0.02, Br = 0.1. Figure 5.15 shows the changes of the market index. Now, the fluctuations are quite big, and we have some small crashes like those between time 540 and 620. The patterns of the plots of Figs. 5.14 and 5.17 may suggest that the companies begin to compete with each other for the market, though no such activities have been explicitly defined in the model. Figure 5.16 shows the corresponding variations of the bandwagon factor.


Fig. 5.15 Changes of the market index with augmented bandwagon effect (index vs. time)

Fig. 5.16 Changes in the bandwagon factor, experiment 3 (bandwagon factor vs. time)

Fig. 5.17 An example of changes of the share price for company no. 10 (price vs. time)


The fluctuations of the market index are bigger compared to those of experiment 2. However, the bandwagon itself does not provoke any catastrophic events. For a smaller bandwagon intensity, it seems that this effect rather stabilizes the market.

5.6 Final Remarks

The main topic of this chapter is the agent-based simulation of the stock market. As stated before, this is a model of an artificial population of investors, companies, and market brokers. Thus, the quantitative results are not very relevant. What we are looking for are the behavioral patterns that could provide some qualitative hints for the research of stock market dynamics. The main tenet is that some known market events, like an abrupt price growth or a market crash, may originate inside the market and not necessarily from the influence of external excitations. The agent-based model discussed here is open to any changes and extensions. In future research, more detailed models should be constructed, based on the parameters of a certain real stock market. This may provide new insight into the dynamics of stock exchange markets.

References

Andresen T (1999) The dynamics of long-range financial accumulation and crisis. Nonlinear Dynamics Psychol Life Sci 3(2):161–196
Aubin JP, Cellina A (1984) Differential inclusions. Springer-Verlag, Berlin
Bandini S, Vizzari G (2009) Agent based modeling and simulation: an informatics perspective. J Artif Soc Soc Simul 12(4):4. ISSN 1460-7425
Chen C, Hung W (2009) A new decision-making method for stock portfolio selection based on computing with linguistic assessment. Adv Decision Sci 2009:20. Article ID 897024. https://doi.org/10.1155/2009/897024
Glassman KJ (1998) Trying to crack the code; computer models lag as stock pickers. The Washington Post, October 15, 1998
Goodwin RM (1967) A growth cycle, in capitalism and economic growth. Cambridge University Press, Cambridge
Lee EB, Markus L (1967) Foundations of optimal control theory. Wiley, New York
Markowitz H (1952) Portfolio selection. J Financ 7:77–91
Minsky H (1982) The financial instability hypothesis: an interpretation of Keynes and alternative ‘Standard Theory’, inflation, in recession and economic policy. Wheatsheaf, Sussex
Raczynski S (1996) Differential inclusions in system simulation. Trans Soc Comp Simulation 13(1):47–54


Raczynski S (2002) Differential inclusion solver. In: International conference on grand challenges for modeling and simulation, The Society for Computer Simulation Int., San Antonio, TX, January 2002
Railsback SF, Lytinen SL, Jackson SK (2006) Agent-based simulation platforms: review and development recommendations. Simulation 82(9):609–623. https://doi.org/10.1177/0037549706073695
Wazewski T (1961) Sur une condition equivalente a l'equation au contingent. Bull Acad Polon Sci Ser Math Astronom Phys 9(12):865–867
Zaremba SK (1936) Sur les équations au paratingent. Bull Sci Math 60:139–163

Chapter 6 Epidemics

Abstract Continuous models of an epidemic are discussed, and the agent-based version of the model is presented. For the continuous version, an application of differential inclusions in epidemic spread models is presented. Some of the most commonly used epidemic models are discussed here, and a brief survey of epidemic modeling is given. Simple simulations are carried out. Then, we consider the influence of some uncertain parameters. It is pointed out that the presence of some fluctuating model parameters can be treated by differential inclusions. This provides the reachable sets that may give us an interesting insight into the influence of uncertainty. In the discrete-event agent-based model, we simulate the behavior of the very elemental components, like individuals that can be exposed to the infection, infected, or recovered. The spatial distribution of agents is simulated. We have several clusters of agents (maybe cities), and agents that travel between the clusters. This micro-behavior of the components results in the macro-behavior patterns of the whole system.

Keywords Simulation · Model building · Epidemics · Agent-based modeling

6.1 Introduction

In this chapter, unlike the others, we do not look for catastrophes in the behavior of the epidemic models because an epidemic is itself a catastrophe. In general, the agent-based model provides results similar to those of the continuous models. What is perhaps more interesting is the influence of the territorial distribution of groups of susceptible individuals and the possibility to experiment with changes of individual behavioral patterns. After an epidemic in a human or animal population is detected, an important task is to assess how dangerous it is and which part of the society may be infected. So, the search for models of epidemic spread dynamics has been a topic of research for a long time.

Covid-19: It should be emphasized that the models discussed below are not covid-19 pandemic models. Let me give some personal remarks on this issue. I


appreciate the efforts on covid-19 pandemic simulations that can perhaps result in some useful tools in the future. However, note that the most serious mistake in any scientific work is to look for something that does not exist. The problem of the existence of solutions is well known in mathematics, but somewhat vague in other fields of scientific research. For new epidemics with new forms of bacteria or viruses, the existence of the model is rather doubtful. The worst error is to take a plot of a certain past epidemic and then look for a forecast by using the data of the initial period of the disease and some best-fit method to estimate parameters and generate forecasts. Such curves and forecasts may be (and, in fact, were) used by health organizations and governments to make decisions on future actions. This may result in erroneous disease handling and in thousands of infections and deaths. Anyway, this is a question of ethics in modeling and simulation.

6.2 Continuous Models

6.2.1 Susceptible-Infectious-Removed Models

One of the first classic models of the phenomenon appeared in the late 1920s, in the works of Kermack and McKendrick (1927). The models that have been developed later on are of a similar type: they try to reflect the dynamics of the epidemic using the system dynamics (SD) approach (Forrester 1961) and ordinary differential equations (ODEs). The ODE models are normally given in the form of a set of non-linear ODEs, and their properties are widely discussed and improved in many available works. However, the ODE or SD models can hardly reflect the geospatial issues. The popularity of the SD models arose among modelers mainly due to the strange conviction that any continuously changing variable of the real world can be modeled using ODEs. In fact, this is not true. Some remarks on these issues can be found in Raczynski (2009). This and other deficiencies of ODE and SD modeling have inspired the development of other modeling and simulation tools, such as object- and agent-based simulation; see Obaidat and Papadimitriou (2003), and Perez and Dragicevic (2009). Dargatz and Dragicevic (2006) consider an application of an extended Susceptible-Infectious-Removed model of the “spatio-temporal spread of influenza in Germany. The inhomogeneous mixing of the population is taken into account by the introduction of a network of sub-regions.” A multivariate diffusion process is used to describe the model.

The continuous SD models turn out to be useful and can provide, to some extent, important information about the disease propagation. The basic and most popular is the Susceptible-Infectious-Removed (SIR) model. Figure 6.1 shows the SD scheme of the model. Block S represents the number of susceptibles, i.e., the individuals that can be infected. Block I denotes the number of actually infected, and R is the number of individuals recovered from the disease with immunity, or dead. A modification of the SIR model named SIRS also includes a feedback from block R to S, as shown in Fig. 6.1.


Fig. 6.1 The system dynamics scheme of the SIR and SIRS models

The equations of the SIR model are as follows (r and a are constants):

$$
\begin{aligned}
\frac{dS}{dt} &= -r\,S(t)\,I(t) && (A)\\
\frac{dI}{dt} &= r\,S(t)\,I(t) - a\,I(t) && (B)\\
\frac{dR}{dt} &= a\,I(t) && (C)
\end{aligned}
\tag{6.1}
$$

The coefficient r is the disease contracting rate, and a represents the mean recovery/death rate. Consult Misici and Santarelli (2013) for SIR model simulation. Equation (A) of (6.1) tells us that the rate of decrease of the number of susceptibles is proportional to the product SI, which represents the rate of possible contacts between infected and susceptibles. Equation (B) describes the rate of change of the number of infected, equal to the rate of the S decrease minus a term proportional to I (the individuals that become recovered). The third equation (C) defines the rate of increase of the number of removed. Note that in the SIR model the variables S and I do not depend explicitly on R. So, if we do not need the solution for R(t), we can use only Eqs. (A) and (B).

The SIR model is perhaps too simplified. It has also been criticized for its high sensitivity to variations of the parameter r. If we suppose that some of the recovered people may be infected again, we can use a modification of the SIR model, named SIRS, as follows:

$$
\begin{aligned}
\frac{dS}{dt} &= -r\,S(t)\,I(t) + m\,(N - S(t)) + f\,R(t)\\
\frac{dI}{dt} &= r\,S(t)\,I(t) - (a + m)\,I(t)\\
\frac{dR}{dt} &= a\,I(t) - (m + f)\,R(t)
\end{aligned}
\tag{6.2}
$$

where a, r, m, and f are constants. Note that the term fR(t) in the first equation of (6.2) represents the number of recovered individuals that rejoin the S group. The coefficient f tells us which fraction of R will be incorporated again into the S group. N is the total population. The SEIS model is a SIR modification with an additional group E that represents the exposed or latent period of the disease. The equations are as follows:


$$
\begin{aligned}
\frac{dS}{dt} &= B - r\,S(t)\,I(t) - m\,S(t) + a\,I(t)\\
\frac{dE}{dt} &= r\,S(t)\,I(t) - (e + m)\,E(t)\\
\frac{dI}{dt} &= e\,E(t) - (a + m)\,I(t)
\end{aligned}
\tag{6.3}
$$

where a, r, m, e, and B are constants. There are many modifications of the above models. For epidemics of longer duration, a birth-and-death process is added. Passive immunity is taken into account in the MSIR model, where it is supposed that some individuals are born with immunity. The MSEIR model has the scheme M→S→E→I→R, where M is the number of passively immune individuals. Supposing that the immunity in group R is temporary, we obtain the MSEIRS model M→S→E→I→R→S.

In the article of Ng et al. (2003) we can find a description of a model of a double epidemic. The two superimposed epidemics are considered using a modification of the SIR model. The problem is focused on the Hong Kong SARS (Severe Acute Respiratory Syndrome) epidemic in 2003, caused by two different viruses. The resulting model is given by a system of six differential equations of the first order. Some more complicated models of epidemic dynamics can be found in Gebreyesus and Chang (2015). They propose a multi-compartment model that takes interactions between humans and animals, or between different species of animals, into account. It is a multivariable state-space model that reflects the phenomena of some diseases transmitted from animal to human, such as Ebola, MERS-Coronavirus, bird flu, and tuberculosis. The basic model used in that article is a modification of SIR. The main concept is to define a number of clusters in the populations, where the epidemics are governed by SIR equations, interacting with each other. The model supports the spatial issues, introducing regions like urban and rural. For models of recurrent epidemics, see David (1945).

The above models are deterministic. In order to manage the uncertain elements, stochastic elements were introduced into disease spread models many years ago. Various types of stochastic models are discussed by Allen and Allen (1945), who consider some modifications of the basic SIR model. The mathematical tools include Markov chains and stochastic differential equations. As a result, from these models we can obtain the probability distributions for the final size of the S, I, and R groups, for the disease extinction, disease duration, and other parameters. In the Allen model the S, I, and R are treated as discrete random variables. A detailed mathematical background is given, where the obtained equations describe the probabilities rather than the instant values of the variables. Matlab code examples are given. See also Matis and Kiffe (2000). An overview of various modeling techniques, both deterministic and stochastic, can be found in Daley and Gani (2001).

Our approach to uncertainty is quite different. The proposed approach, using differential inclusions, is deterministic. The variables treated as random in the stochastic models are supposed to be uncertain, and not random. We do not obtain


any probabilistic results, managing rather the possible range of uncertainty given as the resulting attainable sets.

6.2.2 Differential Inclusions and Uncertainty

A differential equation, in the canonical form, has a (scalar or vector) right-hand side, which is a function of the independent variable and of the state variables. The right-hand side of a differential inclusion is a set-valued function F:

$$\frac{dx}{dt} \in F(x, t) \tag{6.4}$$

By a trajectory of the DI over a given time interval I, we mean a function x(t) that obeys the above inclusion almost everywhere on I. The union of the graphs of all possible trajectories x(t) over I is called the reachable or attainable set. We will not treat such a single function as a solution to the DI. The reachable set is a generalization of a single solution to a differential equation. In the following, we will not use the term solution to a DI, referring rather to the reachable sets. Here, we only recall the main concepts. For a more detailed definition and properties of the DIs, consult Aubin and Cellina (1984). In Raczynski (2002) you can find a description of the computer software named the DI solver, which calculates the reachable sets using some concepts of optimal control theory. DIs are closely connected to control systems. Consider a model given in the form of ODEs with some uncertain parameters. If we treat these parameters as control signals that fluctuate within given limits, then we obtain a control system. Now, suppose that the controls scan a certain allowed region. The right-hand sides of the model equations will scan a multi-dimensional set that can be treated as the set F of the corresponding differential inclusion (6.4). A detailed explanation of this procedure is given in the following section, based on particular applications.
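In symbols (using generic notation, not taken from the book): if the uncertain parameters are collected in a control vector u(t) that stays in an admissible set U, and the model is dx/dt = f(x, u, t), then letting u scan the whole set U yields the set-valued right-hand side of the corresponding differential inclusion (6.4):

$$\frac{dx}{dt} \in F(x,t), \qquad F(x,t) = \{\, f(x,u,t) : u \in U \,\}.$$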

6.2.3 Examples of Reachable Sets

The model itself is not the main topic of this book. Our aim is to show an alternative approach to the uncertainty, rather than developing new models. To carry out some simulation experiments we use the basic SIR and the SIRS models, adding uncertainty to some of its parameters. Similar simulations can be carried out with any other model. The model equations are as follows.


Fig. 6.2 Simple simulation of the SIR model

$$
\begin{aligned}
\frac{dS}{dt} &= -r\,[1 + 0.05\,u_1(t)]\,S(t)\,I(t)\\
\frac{dI}{dt} &= r\,[1 + 0.05\,u_1(t)]\,S(t)\,I(t) - a\,[1 + 0.05\,u_2(t)]\,I(t)\\
\frac{dR}{dt} &= a\,[1 + 0.05\,u_2(t)]\,I(t)
\end{aligned}
\tag{6.5}
$$

where u1(t) and u2(t) are functions of time and belong to the interval [−1, 1]. In other words, we assume that the coefficients r (disease contracting rate) and a (mean recovery/death rate) have uncertain values and may change in time within ±5% of their default values. If u1 and u2 scan all their permissible values, then the right-hand side of (6.5) scans a 3-dimensional set, being the right-hand side of the corresponding differential inclusion. We first run a simple simulation with u1 and u2 constant and equal to zero. This converts the model (6.5) into the basic SIR model. The model parameters are as follows: disease contracting rate r = 0.00001, mean recovery/death rate a = 0.6, S(0) = 100,000, I(0) = 50, R(0) = 0, simulation final time = 60. Figure 6.2 shows the results of the simulation. The three curves are normalized to the interval 0–1. The real ranges of the variables are: S between 100,000 and 33,300, I between zero and 9413, and R between zero and 67,700, approximately. Now, permit the parameter r to be uncertain, fluctuating between 0.0000075 and 0.0000125. Let us calculate the reachable set for the model variables. Note that the reachable set at the simulation final time is not very interesting because the variables reach the final steady state. So, we will rather look for the reachable set at a shorter simulation time, equal to 20, see Fig. 6.3. Running the model with both r and a uncertain, we obtain a larger reachable set, as shown in Fig. 6.4. From the resulting images of the reachable sets, we can learn what the limits for possible variable changes are. Note that those are not global maximal and minimal values, but rather the limits at the time instant under consideration. Of course, similar images can be obtained for any other, user-defined time instant. In Fig. 6.5 we can see the 3D image of the reachable set with both r and a uncertain. The coordinates are the time, the number of susceptibles S, and the number of infected.
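A minimal sketch of such a baseline run is given below (this is not the BLUESSS or DI-solver implementation; the forward-Euler scheme and the step size are assumptions made for illustration):

#include <cstdio>

int main() {
    // Basic SIR model (6.1) with the parameters quoted above
    double r = 0.00001;                    // disease contracting rate
    double a = 0.6;                        // mean recovery/death rate
    double S = 100000.0, I = 50.0, R = 0.0;
    const double dt = 0.001, tEnd = 60.0;  // assumed integration step and final time
    for (double t = 0.0; t < tEnd; t += dt) {
        double dS = -r * S * I;
        double dI =  r * S * I - a * I;
        double dR =  a * I;
        S += dS * dt;  I += dI * dt;  R += dR * dt;
    }
    printf("S=%.0f  I=%.0f  R=%.0f\n", S, I, R);
    return 0;
}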


Fig. 6.3 Time-section of the reachable set, uncertain parameter r, time = 20

Fig. 6.4 Reachable set at time = 20. (a) uncertain r, (b) both r and a uncertain

The origin of the coordinate axes is fixed at 0, 100,000, and 50, respectively. It can be seen that the number of infected changes considerably even while the range of the uncertain parameters is between +5 and −5 percent. Similar simulations for the final time equal to 40 are shown in Fig. 6.6, which shows the reachable set time-section for time equal to 40.
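The DI solver computes the reachable set boundary using optimal-control techniques. As a crude illustration only (not the author's algorithm), the time-section at t = 20 can be approximated by sampling many piecewise-constant control functions u1, u2 in [−1, 1] and recording the extreme (S, I) values reached:

#include <cstdio>
#include <cstdlib>

// Crude Monte Carlo approximation of the (S, I) time-section of the reachable set
// of model (6.5) at t = 20, with r and a uncertain within +/-5%. All numerical
// details (step size, number of runs, control switching) are assumptions.
static double rnd() { return 2.0 * rand() / (double)RAND_MAX - 1.0; }  // uniform in [-1,1]

int main() {
    const double r0 = 0.00001, a0 = 0.6, dt = 0.001, tSection = 20.0;
    double Smin = 1e9, Smax = 0.0, Imin = 1e9, Imax = 0.0;
    for (int run = 0; run < 2000; ++run) {
        double S = 100000.0, I = 50.0;
        double u1 = rnd(), u2 = rnd();
        int step = 0;
        for (double t = 0.0; t < tSection; t += dt, ++step) {
            if (step % 1000 == 0) { u1 = rnd(); u2 = rnd(); }  // piecewise-constant controls
            double r = r0 * (1.0 + 0.05 * u1);
            double a = a0 * (1.0 + 0.05 * u2);
            double dS = -r * S * I;
            double dI =  r * S * I - a * I;
            S += dS * dt;  I += dI * dt;
        }
        if (S < Smin) Smin = S;  if (S > Smax) Smax = S;
        if (I < Imin) Imin = I;  if (I > Imax) Imax = I;
    }
    printf("S in [%.0f, %.0f], I in [%.0f, %.0f] at t = 20\n", Smin, Smax, Imin, Imax);
    return 0;
}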


Fig. 6.5 The 3D image of the reachable set, uncertain parameters r and a, time = 20

Fig. 6.6 Shape of the reachable set boundary for time = 40

To see another model version, we use the SIRS model (6.2). In this model we assume that recovered people may be infected again. The equations, with the two controls inserted, are as follows:


Fig. 6.7 Simulation of the SIRS model. Final time = 60

$$
\begin{aligned}
\frac{dS}{dt} &= -r\,[1 + 0.05\,u_1(t)]\,S(t)\,I(t) + m\,(N - S(t)) + f\,R(t)\\
\frac{dI}{dt} &= r\,[1 + 0.05\,u_1(t)]\,S(t)\,I(t) - [a\,(1 + 0.05\,u_2(t)) + m]\,I(t)\\
\frac{dR}{dt} &= a\,[1 + 0.05\,u_2(t)]\,I(t) - (m - f)\,R(t)
\end{aligned}
\tag{6.6}
$$

The controls are used in the same way as in the previous SIR model, supposing that the parameters r and a are uncertain variables. To compare the models, we use similar parameters: disease contracting rate r = 0.00001, mean recovery/death rate a = 0.6, S(0) = 100,000, I(0) = 50, R(0) = 0, simulation final time = 60. In addition, we define m = 0.07, f = 0.04, and N = 100,000. A simple simulation of this model, with final time equal to 60, is shown in Fig. 6.7. As in the previous simulations, the curves are normalized to the interval [0,1]. The ranges of the variables are: S between 30,300 and 100,000, I between 50 and 8808, and R between 0 and 33,200, approximately. Compared to Fig. 6.2, we can observe a substantial difference. The time-section of the reachable set for this model is shown in Fig. 6.8. The section time is equal to 40, and the uncertainty range for r and a is the same as for the SIR model (plus/minus 5%). Note that now all three equations interact with each other, so the simulation of the first two of them cannot be done as in a 2-dimensional model. The shape is more complicated, and the boundary of the reachable set may fold onto itself. A general conclusion from these experiments with the continuous model is that even with small uncertainty in the model parameters, the uncertainty of the resulting model state is rather big. This means that the models hardly provide reliable predictions, even for a short time horizon. Differential inclusions may provide important additional information about the behavior of system dynamics models. In such fields as public health,


Fig. 6.8 Time-section of the reachable set for the SIRS model with uncertainty. Final time equal to 40

management problems, marketing, or economic growth, the models are charged with a great amount of uncertainty. Treating uncertain factors as stochastic variables may be useful, but it is not always possible or conceptually correct. Not everything that is uncertain can be represented by a random variable. Differential inclusions provide such results as the attainable sets, which are deterministic objects, useful while analyzing the dynamics of the model.

6.3 Agent-Based Model

6.3.1 General Concepts

Let us recall the main concepts. A more detailed overview was given in Chap. 1 of this book. In agent-based modeling (ABM) we create individuals named agents, which are objects equipped with a certain artificial intelligence. At runtime, a set of agents is created in the computer memory. Each agent is activated and executes its events concurrently with the other agents. Agents can interact with each other and with a global environment. An event is defined as a piece of code that is executed at a certain model time instant. In 1994 the SWARM ABM package was developed (Swarm Development Group, 2001). A similar ABM tool, Ascape, was created by the Santa Fe Institute in 2001 (Parker, 2001); other agent-based simulation tools are Breve-2.7.2 (Klein, 2002), the Recursive Porous Agent Simulation Toolkit released in 2003 (Michael et al., 2006),


Cormas by VisualWorks (Bommel et al., 2015), MASON (Luke et al., 2005), the MASS package (Tatai et al., 2005), the FLAME package and the parallel architecture designed for Graphics Processing Units, FLAMEGPU (Coakley et al., 2006; Holcombe et al. 2013), MATSim by ETH Zürich (Bazzan 2009), and the Spot Oriented Agent Role Simulator SOARS (Tanuma et al. 2005). Compared to continuous models, given mainly by sets of differential equations, the ABM approach provides a more flexible modeling paradigm, starting with the micro-behavior of agents. Observe that ABM modeling needs little or no higher mathematics. In our case, the model agents represent individuals that can be susceptible to an infection, infected, recovered, or dead. The agents move over a geographic area and are attracted by some spots named cities. They can travel between the cities, which makes the eventual disease propagate.

6.3.2 The Model

The model is implemented with the BLUESSS simulation package. This tool is described in detail in Chap. 2 of this book. However, the model can be coded in any other agent-based simulation package, so we do not enter into a very detailed description of the code, focusing rather on the particular agent behavior. The simulated scene is a rectangular region R with five attraction spots, named cities. Cities are shown on the animation screen as circles with different radii. The agents are attracted to the cities and randomly move inside and around them. At a given time instant one of the agents becomes exposed to a disease. While moving, it may infect other agents, and the epidemic starts to expand. The BLUESSS model consists of a series of process declarations. A process is a generic code that is used to generate multiple instances that represent model agents. Inside the process declaration, there are several code segments called events. Once an agent is activated, it executes the events during its “life.” Agents can interact with each other and with agents of other processes, and so the whole system works, simulating the behavior of an artificial society. Let the main process of the model be named agent. Each agent has its own attributes, which are data “attached” to it. The agent attributes are:

State—a character indicating the actual state of the agent. The values are: S for susceptible, E for exposed, I for infected, R for recovered.
(x,y)—position in the region where the agents can move. We have 0 < x < 1, 0 < y < 1.
(vxr,vyr)—x and y components of the random walk movement.

114

6 Epidemics

(vatrx,vatry)—the direction in which the agent is being attracted. It may be the direction towards the nearest city, or the agent's trip direction.
speed—overall speed of movement.
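Collected in one place, these attributes can be pictured as a record like the following (a C++-style sketch; the actual BLUESSS attribute declarations may differ):

// Sketch of the epidemic-model agent attributes described above
struct EpidemicAgent {
    char  state;          // 'S' susceptible, 'E' exposed, 'I' infected, 'R' recovered
    float x, y;           // position, 0 < x < 1, 0 < y < 1
    float vxr, vyr;       // random-walk velocity components
    float vatrx, vatry;   // attraction (or trip) direction components
    float speed;          // overall speed of movement
};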

6.3.2.1 Agent Events

Init—the initial event, executed only once for each agent. In this event the main attributes are defined, and the necessary events are scheduled for execution.
randomw—the event where the direction of the random movement is defined.
atrac—the event where the attraction direction is defined. If an agent is on a trip, this is the direction towards the destination. The attraction velocity is defined as follows. Let the agent be located at (x,y). For each city the term di is calculated, equal to the distance from (x,y) to the center of city number i, divided by the radius of the city. Then the city with the minimal value of d is selected. The attraction direction (vatrx, vatry) is defined as the direction toward the selected city. This way the cities with greater radius attract more agents.
trip—in this event an agent starts its trip from one city to another.
endtrip—the event invoked by the trip event. It is executed repeatedly, simulating the agent movement. The endtrip event stops when the agent reaches its destination.
moves—agent movement. The movement direction is defined as follows: vx = 0.1*vxr + 0.9*vatrx; vy = 0.1*vyr + 0.9*vatry (we use C++ expression syntax to avoid multiple subscripts in the mathematical terms).
contact—this event occurs when the distance between two agents becomes less than a given threshold (fixed at 0.002 in this model). Within the contact event between agents A and B the following occurs: if agent A is exposed or infected and agent B is not, then agent B becomes exposed (state E). This occurs with a given probability (equal to 0.3 by default). If agent B becomes exposed, the event incub is scheduled to be executed after the disease incubation period. In the event incub, the agent state switches to I (infected).
dies—the agent disappears from the model.

Other processes: Process monit. Only one instance of this process is generated. The created object runs concurrently with all agents of the model. The events of this process are as follows.
init—initial settings.
control—calculates statistics like the population of each city, susceptibles, exposed, infected, recovered, etc. This event also generates one infected agent, after a short warm-up period.
store—stores the model state variables in files on the hard disk.
show—displays the animation screen.
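As an illustration of the atrac, moves, and contact logic described above, consider the following C++ sketch (not the actual BLUESSS event bodies; the City structure and all helper names are assumptions):

#include <cmath>
#include <cstdlib>

struct EpidemicAgent { char state; float x, y, vxr, vyr, vatrx, vatry, speed; };  // as sketched above
struct City { float cx, cy, radius; };

// atrac: select the city with minimal (distance / radius) and point (vatrx, vatry) towards it
void atrac(EpidemicAgent &a, const City *city, int nCities) {
    int best = 0; float dBest = 1e30f;
    for (int i = 0; i < nCities; ++i) {
        float d = std::hypot(a.x - city[i].cx, a.y - city[i].cy) / city[i].radius;
        if (d < dBest) { dBest = d; best = i; }
    }
    float dx = city[best].cx - a.x, dy = city[best].cy - a.y;
    float len = std::hypot(dx, dy); if (len == 0.0f) len = 1.0f;
    a.vatrx = dx / len;  a.vatry = dy / len;
}

// moves: mix the random-walk and attraction directions exactly as quoted in the text
void moves(EpidemicAgent &a) {
    float vx = 0.1f * a.vxr + 0.9f * a.vatrx;
    float vy = 0.1f * a.vyr + 0.9f * a.vatry;
    a.x += a.speed * vx;  a.y += a.speed * vy;
}

// contact: an exposed or infected agent A infects a susceptible agent B with probability 0.3;
// in the real model the incub event would then be scheduled after the incubation period
void contact(const EpidemicAgent &A, EpidemicAgent &B) {
    bool contagious = (A.state == 'E' || A.state == 'I');
    if (contagious && B.state == 'S' && rand() / (double)RAND_MAX < 0.3)
        B.state = 'E';
}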


In the main segment of the program, 500 agents are generated and their corresponding init events are scheduled. Also, one object of type monit is generated and its events activated.

6.4 Simulations

If not stated otherwise, in the following we simulate the evolution of the epidemic during 30 time units. The units can be interpreted as days. The death rate is supposed to be equal to 0.1 (10% of the infected will die). The average disease duration is supposed to be equal to 7, and the incubation period is equal to 3 time units.

6.4.1 Fast Propagation

In this simulation we suppose that the trips between cities are frequent and fast. Namely, each agent changes the city every 10 time units (the trip interval), on average. The moment when the agent starts traveling is defined as a random variable with exponential distribution and expected value equal to the trip interval.
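For example, the delay until an agent's next trip can be drawn as follows (a small sketch using the C++ standard library, not the BLUESSS random-number facilities):

#include <random>

// Time until the next trip: exponentially distributed with expected value tripInterval
// (10 time units in this experiment).
double nextTripDelay(std::mt19937 &gen, double tripInterval) {
    std::exponential_distribution<double> d(1.0 / tripInterval);
    return d(gen);
}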

Fig. 6.9 The warm-up period. Agents are attracted towards the cities


Fig. 6.10 The early stage of epidemics, and agents moving between cities

This means that, from the point of view of disease propagation, we practically have one connected population. In Fig. 6.9 we can see the initial warm-up situation, when the randomly distributed individuals are being attracted towards the five cities. Figure 6.10 shows the situation at the beginning of the disease spread. In the following, the blue icons represent exposed, red means infected, and green is for recovered. For an agent that moves between cities, a small line segment indicates the movement direction. Figure 6.11 shows an advanced stage of the epidemic, when nearly everybody is exposed, infected, or recovered. In Fig. 6.12 we can see the plots of susceptible, exposed, infected, and recovered. In this case, the results do not differ significantly from those of the continuous model version because the territorial distribution of agents is not very significant.

6.4.2 Lower Trip Frequency

Now, let the trip interval be equal to 70. This means that not all agents will move because the final simulation time is equal to 30. However, due to the number of agents (500), there are always some of them traveling. Figure 6.13 shows the situation after 15 time units. Observe that while in the cities B, C, D, and E nearly all agents are exposed or infected, the city A remains with only few exposed. This territorial distribution of agents influences the spread dynamics. Figure 6.14 depicts the plots of the amount of the four types of agents. In


Fig. 6.11 Advanced stage of epidemics, fast propagation

Fig. 6.12 Susceptible, exposed, infected, and recovered, fast trip case

this case, the shape of the plots is quite different from the plots obtained by the continuous, globalized version of the epidemics.


Fig. 6.13 Epidemic spread at time equal to 15, lower trip frequency

Fig. 6.14 Susceptible, exposed, infected, and recovered, lower trip frequency

6.4.3 Near Cities

In this simulation we suppose that the distance between cities is small, and the agents can change city, or interact with agents from other cities, only through their random walk. Unlike the previous simulation, agent trips are disabled. Figure 6.15 shows the


Fig. 6.15 Epidemic spread for near cities

Fig. 6.16 Number of agents of type S, E, I, and R, near cities simulation (S: susceptible, E: exposed, I: infected, R: recovered)

animation screens for times equal to 15 and 25. It can be seen that in some cities the epidemic starts with a considerable delay. Figure 6.16 shows the plots of the S, E, I, and R agents. The shape of the curves is considerably different from the global continuous version. For example, the number of exposed (curve E, dotted line) has two local maxima, because the epidemic, after some decrease, spreads again because of interactions between cities.

6.4.4 Long Epidemics—Adverse Conditions

In this simulation we suppose that the period of immunity is finite, equal to 20 time units, and the final time is equal to 80. The mortality is supposed to be equal to 0.05. The duration of the disease is not fixed as before, but changes from 7 to 11 towards the final time. Figure 6.17 shows the model trajectory. At the beginning, there are strong oscillations of the number of exposed and infected, and then the variables oscillate with a longer period. Looking at the plots, it seems that the epidemic will not terminate, and the number of infected rather tends to a certain steady level. In Fig. 6.18 we can see a separate plot of the global number of susceptibles, and Fig. 6.19 shows the number of infected for city C. Note that this variable does not

Fig. 6.17 Long time or “catastrophic” epidemics

Fig. 6.18 Long time simulation, number of susceptibles


Fig. 6.19 Long time simulation. Infected for city C

tend to zero, and there remains some number of “residual” susceptibles. This makes it possible for the epidemic to repeat. In Fig. 6.19 we can also observe that, after the initial epidemic growth and decay, the variable enters a quite different behavioral pattern.

6.5 Conclusion

In general, both continuous and agent-based simulations of this kind of model provide similar results. In the continuous simulations, interesting results on uncertainty treatment can be obtained by the application of differential inclusions. The agent-based simulation gives us more detailed information, including the spatial distribution of the disease spread. In this kind of simulation we can change the behavior patterns of each individual agent and observe the global system behavior.

References

Allen LJS, Allen EJ (1945) An introduction to stochastic epidemic models. Springer, Berlin
Aubin JP, Cellina A (1984) Differential inclusions. Springer, Berlin. https://doi.org/10.1007/978-3-642-69512-4. ISBN: 978-3-642-69514-8
Bazzan AIC (2009) Auton Agent Multi-Agent Syst 18(3):342–375. https://doi.org/10.1007/s10458-008-9062-9
Bommel P, Becu N, Le Page C, Bousquet F (2015) Cormas, an agent-based simulation platform for coupling human decisions with computerized dynamics. In: Hybrid simulation and gaming in the network society series. Translational systems sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-0575-6_27


Coakley S, Smallwood R, Holcombe M (2006) From molecules to insect communities – how formal agent based computational modeling is uncovering new biological facts. Scientiae Mathematicae Japonicae Online 64:765–778. http://www.jams.or.jp/scm/contents/e-2006-7/2006-69.pdf
Daley DJ, Gani J (2001) Epidemic modelling: an introduction. Cambridge University Press, Cambridge. ISBN: 978-0-521-01467-0
Dargatz C, Dragicevic S (2006) A diffusion approximation for an epidemic model. Collaborative Research Center 386
David JD (1945) A light introduction to modelling recurrent epidemics. In: Mathematical epidemiology series lecture notes in mathematics. Springer, Berlin
Forrester JW (1961) Industrial dynamics. Pegasus Communications, Waltham
Gebreyesus KD, Chang CH (2015) Infectious diseases dynamics and complexity: multicompartment and multivariate state-space modeling. In: Proceedings of the world congress on engineering and computer science 2015, 2, San Francisco. ISBN: 978-988-14047-2-5
Kermack WO, McKendrick AG (1927) Contributions to the mathematical theory of epidemics, part I. In: Proceedings of the Royal Society of Edinburgh, Section A, Mathematics. The Royal Society of Edinburgh, Edinburgh
Klein J (2002) Breve: a 3D environment for the simulation of decentralized systems and artificial life. In: ICAL 2003 proceedings of the eighth international conference on artificial life. MIT Press, Cambridge. ISBN 0-262-69281-3
Luke S, Cioffi-Revilla C, Panait L, Sullivan K (2005) MASON: a multiagent simulation environment. SIMULATION 81(7):517–527
Matis JH, Kiffe TR (2000) Stochastic population models. A compartmental perspective. Springer, Cham
Michael JN, Nicholson T, Collier JR, Vos JR (2006) Experiences creating three implementations of the repast agent modeling toolkit. ACM Trans Model Comput Simul 16(1):1–25. https://doi.org/10.1145/1122012.1122013
Misici L, Santarelli F (2013) Epidemic propagation: an automaton model as the continuous SIR model. Appl Math 4:84–89
Ng TW, Turinici G, Danchin A (2003) A double epidemic model for the SARS propagation. Journal of Negative Results in Biomedicine, BioMed Central 3(19). https://doi.org/10.1186/1471-2334-3-19
Obaidat MS, Papadimitriou GI (2003) Applied system simulation methodologies and applications. Springer, Boston. ISBN: 978-1-4613-4843-6
Parker MT (2001) What is Ascape and why should you care? J Artif Soc Soc Simul. http://jasss.soc.surrey.ac.uk/4/1/5.html
Perez L, Dragicevic S (2009) An agent-based approach for modeling dynamics of contagious disease spread. BioMed Central 8(50). https://doi.org/10.1186/1476-072X-8-50
Raczynski S (2002) Differential inclusion solver. In: International conference on grand challenges for modeling and simulation, The Society for Modeling and Simulation Int., San Antonio, TX
Raczynski S (2009) Discrete event approach to the classical system dynamics. In: Huntsville simulation conference, Huntsville, Alabama, The Society for Modeling and Simulation
SWARM Development Group (2001) Swarm simulation system. Electron 8(1–10):2. http://digitalcommons.usu.edu/nrei/vol8/iss1/2
Tanuma H, Deguchi H, Shimizu T (2005) Agent-based simulation: from modeling methodologies to real-world applications, vol 1. Springer, Tokyo
Tatai G, Gulyas L, Laufer L, Ivanyi M (2005) Artificial agents helping to stock up on knowledge. In: 4th international central and eastern European conference on multi-agent systems, Budapest, Hungary. ISBN 3-540-29046-X, 978-3-540-29046-9. https://doi.org/10.1007/11559221_3

Chapter 7 Growing Organism and Cancer

Abstract Agent-based modeling is used to simulate the behavior of an abstract growing organism. The cells of the organism are “alive” during the simulation: they replicate, change their properties, and die. This keeps the organism in permanent movement. The aging process is simulated as a result of the changes of the genetic information contained in each cell. Two “catastrophic” events are simulated: a bacterial infection that is detected and eliminated by the organism, and a cancer-like disease that cannot be detected and spreads over the whole organism.

Keywords Organism growth · Modeling · Simulation · Agent-based modeling · DNA

7.1 Introduction

In this chapter, we will simulate the growth of an artificial, multi-cellular organism. The reproduction of the cells obeys a mechanism similar to the growth governed by DNA (deoxyribonucleic acid) molecules. The model is implemented using the agent-based simulation method. As in the previous chapters, we do not simulate any real biological system, but rather the behavior of an artificial set of cells. We do not pretend to simulate the extremely complex mechanism of DNA-governed cell growth, so we will not use the acronym DNA, using the name EOB instead. A huge amount of research and an immense number of publications exist about DNA, so a comprehensive overview would rather be the topic of a whole book. Here, we only recall the basic issues. DNA was discovered in 1869. Its functions were not understood until the middle of the last century. The structure of the DNA molecule is a double-helix spiral. In April 1953, the discovery of this fact was published in Nature magazine (Watson and Crick 1953). In their announcement, Watson and Crick say: “It has not escaped our notice that the specific pairing that we have postulated immediately suggests a possible copying mechanism for the genetic material.” Further research confirmed the role of DNA in the cell reproduction


process as a key to defining and maintaining life. A good explanation of the DNA function can be found in the book by Calladine et al. (2004) or Johnson and Lewis (2002). DNA is located inside the nucleus of the cell. The DNA instructions are used to make proteins and for other biological processes. When a cell divides, DNA is transferred to the new cells. DNA sequences are converted into messages that are used to produce proteins. The enzymes that read the information in a DNA molecule transcribe it into an intermediary molecule called messenger ribonucleic acid, or mRNA. The information contained in the mRNA molecule is translated into the “language” that tells the cells how to produce a specific protein. This RNA copy is then decoded by ribosomes, which are particles consisting of RNA and associated proteins whose function is to synthesize proteins. Basically, they convert the genetic code into an amino acid sequence and build protein polymers from amino acid monomers. These proteins implement all of the functions of a living organism and determine the organism's characteristics.

These are only some of the basic functions and mechanisms of DNA and organism growth. We will not enter into more detail because we do not pretend to simulate all these functions. In the model used in the following sections, what we use are the very elemental functions of DNA, called EOB in the following. So, our EOB contains the data that tells how the model cells should reproduce. The cell is simulated as an agent of the agent-based model that may be of different types (see the model description below). Each cell contains, in its attributes, a copy of the EOB that is passed to new replicas. To grow, the cell needs to decode the EOB information, like RNA does in the real organism. This decoded information is used to generate a new cell of the type of the original one, in a certain decoded direction and with other characteristics. While creating a new cell, the EOB information can be slightly changed. These alterations make the organism constantly change and evolve. This is a very simple mechanism, but it contains the basic functions of DNA-governed growth:

• The EOB contains the information necessary to create new replicas.
• During the replication process, the EOB information is decoded and a new replica is created.
• The new replica is not exactly equal to the parent. Its EOB is being altered.

7.2 Agent-Based Modeling

7.2.1 How It Works

For the reader convenience, we repeat the information about the agent-based modeling and the simulation tool. A larger overview of agent-based simulation can be found in Chap. 1. Here, we only recall some basic concepts. In the discrete object-based modeling, we create objects that behave according to the user-defined rules, executing their events in discrete moments of the model time.


The agent-based models manage objects called agents, equipped with certain “intelligence.” They can take decisions, optimize their actions, and interact with each other. Agent-based models (ABMs) simulate the concurrent operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The agent-based approach consists in describing individuals interacting with one another and with their environment. The individuals may be of different types. Although the rules of behavior are shared by a group of individuals of the same type, the behavior is not identical for all of them. This modeling method has many applications, mainly in ecology, biology, and social sciences. A key notion is that simple behavioral rules (micro model) generate complex (macro) behavior. An important central tenet is that the whole is greater than the sum of the parts. Individual agents are typically characterized as rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules (Railsback 2006). Consult also Bandini et al. (2009). In the case of the prey–predator system and growing population models, the agent-based simulation seems to be a quite realistic approach to the system modeling. This is an alternative approach, completely different from the continuous L-V model. There is a strange conviction among some simulationists that everything that happens in the real world can be described by the differential equations. The point of this book is that this is not exactly true. In few words, the ABM simulation program generates the corresponding populations according to the generic declarations of the object types. The objects appear in the computer memory and each of them runs its own “life,” moving, escaping, devouring other ones, and dying. For more information consult: Bazzan (2009), Bommel et al. (2015), Klein (2002), Macy and Willer (2002), SWARM Development Group (2001) or Tanuma et al. (2005).

7.2.2 Simulation Tool

The BLUESSS (Blues Simulation System) package was used to simulate the prey–predator model. It is the C++ version of the packages PASION and PSM++, developed earlier by the author. See Chap. 2 for a more detailed overview of BLUESSS. The main features of the package are as follows.

• Object-oriented simulation
• Continuous/discrete models
• Clock mechanism
• Inheritance
• Permits the use of all C++ features
• Easy to use, clear process/event structure


The BLUESSS source code is converted into C++ code compatible with C++Builder, then compiled and executed. The resulting exe file is a stand-alone Windows application. The user can also see and edit the resulting C++ source code. There are several additional modules that work as BLUESSS code generators. The queuing module receives a user-defined graphical queuing model structure and the necessary data, and produces the BLUESSS source code. The ODE (continuous) module converts the model differential equations into BLUESSS code. These equations can be explicitly defined by the user or obtained from the block diagram/signal flow or bond graph modules. A post-mortem variance analysis for stochastic models can be done. The animator module shows animated simulations for queuing and manufacturing models. BLUESSS provides a “clock” mechanism that manages the model time, using an efficient internal event queue algorithm. For more information consult http://www.raczynski.com/pn/pn.htm. BLUESSS can be used to create agent-based models. The general concept is to declare a series of processes and, inside each process, several events that may occur during the “life” of the corresponding object. The process declarations are generic, and the objects are instances of the processes. For example, if we declare a process “client” as a client at a bank, then we can generate and launch thousands of clients, each of them running its own events, like “enter the bank,” “join a waiting line,” “occupy teller,” and so on. Each event has a body that may include code for any simple or complicated algorithm of object behavior. So, the user can create objects that represent the model agents and equip them with the desired behavior. Inside the process declaration, there are several event definitions. The events can be scheduled for execution, which is controlled by the BLUESSS clock mechanism. Any object can interact with other objects, change its own attributes or those of other objects, execute a complicated computational procedure, execute an external program, display an image, or run external code (e.g., sing a song or communicate via the internet). This makes BLUESSS a convenient tool for agent-based simulation.

7.3 The Model

7.3.1 Overview

We simulate a growing multi-cellular organism. The simulations are carried out over a two-dimensional region, where the cells replicate. The same simulation could be done in 3D, but then the number of necessary cells would be significantly greater, and we would not have such easy insight into what happens inside the organism. The organism has the following properties.


1. The growth is initialized with one initial cell that starts to replicate.
2. The organism is “alive,” which means that the cells constantly replicate and die. On the animation screen, this birth-and-death process can be seen, as the cell images slightly move, appear, and disappear, while the general shape of the whole organism remains the same. In this model, the average cell lifetime is equal to 4 time units, while the final simulation time in most simulations is equal to 400 time units.
3. The cell type, replication, and position are defined by the EOB record contained in each cell.
4. The EOB changes slightly while inherited from the parent cell.
5. There are two types of possible infections:
• A bacterium, being a foreign cell implanted into the organism.
• Cancer cells that grow because of the degeneration of the EOB data.

Each cell has the type attribute that defines its size and display color. The type is also a “stamp” that tells whether the cell is an organism-own object or a foreign body. The own cells are those whose type belongs to the predefined set. In our case, we simulate a simple organism, and its cells can be of types 0, 1, and 2 only. The attributes of the cell are as follows.
Type—can be 0, 1, or 2
(x,y)—cell position inside a square region of size 1x1
(xo,yo)—the position of the initial cell. This attribute permits the cell to “know” its relative position inside the organism
lifetime—the predefined, average lifetime of the cell
reprate—the replication rate. The cell intends to replicate every reprate time units
EOB—the data record that encodes the way the cell can replicate

The EOB attribute is a data structure attached to each cell. It can contain several data items, up to several thousand items for more complicated organisms. The EOB cannot be one global data record used by all cells because it is subject to changes during the replications. Here, we simulate a simple organism, with an EOB that contains few data. The C++ code is as follows.

struct teob{ float dx1,dx2,zx1,zy1,zx2,zy2,ltim,reprat; };

This data, to be used, needs a decoding function. Consider, for example, the following function (code A).

void interp(teob myeob, float x, float y, int *typ, float *alf){
  float d,d1,d2,dd1,dd2,zx1,zx2,zy1,zy2;
  dd1=myeob.dx1; dd2=myeob.dx2;
  zx1=myeob.zx1; zx2=myeob.zx2;
  zy1=myeob.zy1; zy2=myeob.zy2;
  d=sqr(x)+sqr(y); if(d>0.0)d=sqrt(d);
  d1=sqr(x-zx1)+sqr(zy1-y-0.1); if(d1>0.0)d1=sqrt(d1);


  d2=sqr(x+zx2-0.5)+sqr(zy2-y-0.1); if(d2>0.0)d2=sqrt(d2);
  *alf=Frandom()*PI;
  // The comparison operators in the following lines were lost in this copy of the text;
  // the thresholds are assumed to select the cell type from the distances d, dd1, dd2:
  if(d<dd1)*typ=0;
  if(d>=dd1 && d<dd2)*typ=1;
  if(d>=dd2)*typ=2;
  if(d>=dd2 && x>0.0)*alf=acos((zx1-x)/d1);
  ...
}

…the income is greater for those who “work harder” or whose work effectiveness is greater (Wk > 1). Agent k maximizes his object function J, defined below:

$$J_k = v_k I_k - (1 - v_k) W_k \tag{8.5}$$

where vk is a coefficient, 0 ≤ vk ≤ 1. This means that if vk approaches one, then the agent wants to maximize his income. For low values of vk the agent prefers to minimize his work effort. Now, using Eq. (8.4), we obtain

$$J = \frac{vr\,(g + (W - 1)c)}{N} - (1 - v)W \qquad \text{(agent index omitted)}$$

After reordering, we have

$$J = W\left[\,v\left(\frac{rc}{N} + 1\right) - 1\,\right] + \frac{vr}{N}\,(g - c) \tag{8.6}$$

The agent maximizes the value of J by changing the amount of work W. So, if the expression s = v(rc/N + 1) − 1 is greater than zero, then he increases his working effort; otherwise he prefers to decrease it. This change is done within an agent activity obfun, which is executed repeatedly every h time units. The change dW of W is defined as follows.

If J > U then dW = 0.03s, if J …

…>0 && TIME-TIMIN>100) WinExec("other.exe",1);
  if(n==42){ b=nr(QUE39)+nr(QUE41)>24; SEM29=b; SEM40=b;}
  if(TIMQ>20)DisplayWarning();
}

This is a very simple example. Inside the SVOP function the user can insert any C++ code to define the entity behavior and/or block operations. QMG is not a typical tool for agent-based simulation. However, if we use the SVOP function, we can equip the moving objects with simple or more complicated behavior. Even without using SVOP, the entities formed in queues or inside other facilities can make decisions, for example, go to the shortest queue, or leave the queue if the waiting time becomes too long. Queuing and manufacturing models of QMG may use animation. For example, Fig. 9.3 shows the QMG scheme of a simple model of a tollbooth. The conveyors are used to simulate car movements. In Fig. 9.4 we can see a screenshot from the corresponding animation.


Fig. 9.4 Tollbooth model animation

Fig. 9.5 A simple QMG model

9.3 Simulations

9.3.1 Simple Model, Useless Statistics

Let us start with a simple model with two queues and two servers, as shown in Fig. 9.5. The model blocks are as follows:
1. Entity generator, Poisson input flow, inter-arrival time distribution exponential with expected value 1.
2. FIFO (first-in-first-out) queue.
3. Server, service time distribution Erlang rank 2, expected value 1.
4. FIFO queue.
5. Server, service time distribution Erlang rank 2, expected value 1.01.
6. Terminal point.
Note that the average input inter-arrival time is equal to the average service time of server one. However, both queueing theory and simulation tell us that the length of the queue (block 2) will be greater than zero. Figure 9.6 shows the average trajectory (as a function of time) for the first queue, with the maximal and minimal queue length (green area), taken over 200 repetitions of the simulation run, with final simulation model time equal to 580.


Fig. 9.6 Average trajectory and the maximal and minimal queue length for the second queue

Fig. 9.7 Average trajectory and the confidence intervals for the queue of block 2

The yellow curve inside the green area is the average queue length. It can be seen how little information the average queue length carries: the final average length is approximately 18, while the maximal queue length reached over all repetitions is approximately 90. Figure 9.7 depicts the average queue length and the confidence intervals for the same queue, as a function of time. The red area is the confidence interval with confidence level 0.9. Again, we can see how little the average tells us. The variance and the confidence intervals are also of little utility because of the great variance of the queue length. The second queue is a little bit longer, but the plots are similar. In Fig. 9.8 we can see the 3D image of the probability density for this queue. The horizontal axes are the queue length and the time, and the vertical axis is the probability density.
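The strong run-to-run variability can also be checked outside QMG. The sketch below is a minimal stand-alone C++ program (not part of the QMG model) that applies Lindley's recursion to a single queue with Poisson arrivals of mean 1 and Erlang rank-2 service of mean 1, that is, the first queue-server pair above. The waiting times of successive customers drift and fluctuate strongly, which is exactly why the average trajectory says so little.

#include <algorithm>
#include <cstdio>
#include <random>

/* Waiting time W(n) of consecutive customers in an M/E2/1 queue with
   utilization 1, via Lindley's recursion W(n+1) = max(0, W(n) + S(n) - A(n+1)). */
int main(){
    std::mt19937 gen(12345);
    std::exponential_distribution<double> interarrival(1.0); /* mean 1                    */
    std::exponential_distribution<double> phase(2.0);        /* one Erlang phase, mean 0.5 */
    double w = 0.0;
    for(int n = 1; n <= 5000; ++n){
        double s = phase(gen) + phase(gen);   /* Erlang rank 2 service time, mean 1 */
        double a = interarrival(gen);         /* next inter-arrival time            */
        w = std::max(0.0, w + s - a);
        if(n % 1000 == 0) std::printf("customer %d: waited %.2f\n", n, w);
    }
    return 0;
}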


Fig. 9.8 Three-dimensional image of the probability of queue length

Fig. 9.9 Bus stop model

9.3.2 The Bus Stop Paradox

This is a known paradox of queuing theory. Suppose that buses arrive at a bus stop with an average inter-arrival interval of 20 min. However, the buses arrive in a somewhat irregular way, namely as a Poisson input flow. Now, you come to the bus stop at some random time instant. As the buses arrive approximately every 20 minutes, it seems "logical" to expect to wait about ten minutes. However, this is not true: the expected value of your waiting time is equal to 20 min, independently of the moment when you come. The simulation fully confirms this paradox. The model is shown in Fig. 9.9. Block 1 is the passenger generator, block 2 is the passenger queue, and block 3 is the server that represents a fast boarding operation; however, boarding is disabled until a bus arrives. Block 5 is the bus generator, a Poisson input flow with expected inter-arrival time equal to 20. Server 7 is the bus stop, with service time fixed to 1 min. Server 3 is fast enough to let all waiting persons enter the bus (the bus has unlimited capacity).

Fig. 9.10 Bus stop waiting line (queue length as a function of time, with bus arrival instants marked), final time 100

Fig. 9.11 A queue-server chain with feedback

To simulate this, we need to use a semaphore at the input of server 3. The corresponding code in the SVOP function is as follows.

void SVOP(int n, int SOURCE, float TIMIN, float TIMQ){
  if(n==7) SEM3 = true;    /* a bus enters the bus stop (block 7): boarding enabled  */
  if(n==8) SEM3 = false;   /* the bus departs (block 8): boarding disabled again     */
}

Initially, server 3 is disabled (semaphore = false). We can see that the semaphore SEM3 of server 3 is set to true when a bus enters block 7 (the bus arrives), and set to false again when the bus departs. Figure 9.10 depicts the changes of the number of passengers waiting for the bus. Running 200 repetitions of the model with the final simulation time equal to 2000, we obtain as the result an average passenger waiting time equal to 20, with small fluctuations of about plus or minus 1.5 on average between experiments. A possible error is also caused by the initial warm-up period.
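The same effect can be reproduced with a short stand-alone program, independent of QMG. The sketch below generates a Poisson bus process with mean interval 20 and averages the waiting time of passengers arriving at uniformly random instants; the reported average is close to 20, not 10.

#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

/* Buses form a Poisson process with mean interval 20 min; a passenger arrives
   at a uniformly random instant and waits for the next bus. */
int main(){
    std::mt19937 gen(7);
    std::exponential_distribution<double> gap(1.0/20.0);          /* mean 20 min between buses */
    std::uniform_real_distribution<double> uni(0.0, 100000.0);    /* passenger arrival instant */
    std::vector<double> buses;                                    /* bus arrival instants      */
    double t = 0.0;
    while(t < 120000.0){ t += gap(gen); buses.push_back(t); }
    double total = 0.0;
    const int npass = 100000;
    for(int i = 0; i < npass; ++i){
        double arrive = uni(gen);
        auto next = std::lower_bound(buses.begin(), buses.end(), arrive);
        total += *next - arrive;                                  /* wait until the next bus   */
    }
    std::printf("average wait: %.2f min (naive guess: 10)\n", total/npass);
    return 0;
}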

9.3.3 Queue and Server Chain

Let us simulate a chain of four consecutive servers with their corresponding queues, see Fig. 9.11. For some reason, there is a requirement that the length of the last queue (block 8) must not exceed 4. Consequently, the system designer puts a semaphore at the entry to the system (block 1). This semaphore disables generator 1 (the entry door is closed) if the last queue is longer than 4. This is done by the following SVOP code.

Fig. 9.12 Queue changes for blocks 2 and 4 (queue length as a function of time, panels Queue2 and Queue4)

void SVOP(int n, int SOURCE, float TIMIN, float TIMQ){
  /* The end of this statement is truncated in the source; the condition is
     reconstructed from the rule that the last queue (block 8) must not exceed
     4 entities. */
  SEM3 = (nr(QUE8) <= 4);
}