Unifying Themes in Complex Systems Volume VI
Springer Complexity
Springer Complexity is a publication program, cutting across all traditional disciplines of sciences as well as engineering, economics, medicine, psychology and computer sciences, which is aimed at researchers, students and practitioners working in the field of complex systems. Complex Systems are systems that comprise many interacting parts with the ability to generate a new quality of macroscopic collective behavior through self-organization, e.g., the spontaneous formation of temporal, spatial or functional structures. This recognition, that the collective behavior of the whole system cannot be simply inferred from the understanding of the behavior of the individual components, has led to various new concepts and sophisticated tools of complexity. The main concepts and tools - with sometimes overlapping contents and methodologies - are the theories of self-organization, complex systems, synergetics, dynamical systems, turbulence, catastrophes, instabilities, nonlinearity, stochastic processes, chaos, neural networks, cellular automata, adaptive systems, and genetic algorithms. The topics treated within Springer Complexity are as diverse as lasers or fluids in physics, machine cutting phenomena of workpieces or electric circuits with feedback in engineering, growth of crystals or pattern formation in chemistry, morphogenesis in biology, brain function in neurology, behavior of stock exchange rates in economics, or the formation of public opinion in sociology. All these seemingly quite different kinds of structure formation have a number of important features and underlying structures in common. These deep structural similarities can be exploited to transfer analytical methods and understanding from one field to another. The Springer Complexity program therefore seeks to foster cross-fertilization between the disciplines and a dialogue between theoreticians and experimentalists for a deeper understanding of the general structure and behavior of complex systems.
The program consists of individual books, book series such as "Springer Series in Synergetics", "Institute of Nonlinear Science", "Physics of Neural Networks", and "Understanding Complex Systems", as well as various journals.
New England Complex Systems Institute
President Yaneer Bar-Yam New England Complex Systems Institute 24 Mt. Auburn St. Cambridge, MA 02138, USA
For over 10 years, the New England Complex Systems Institute (NECSI) has been instrumental in the development of complex systems science and its applications. NECSI conducts research, education, knowledge dissemination, and community development around the world for the promotion of the study of complex systems and its application for the betterment of society. NECSI was founded by faculty of New England area academic institutions in 1996 to further international research and understanding of complex systems. Complex systems is a growing field of science that aims to understand how parts of a system give rise to the system's collective behaviors, and how it interacts with its environment. These questions can be studied in general, and they are also relevant to all traditional fields of science. Social systems formed (in part) out of people, the brain formed out of neurons, molecules formed out of atoms, and the weather formed from air flows are all examples of complex systems. The field of complex systems intersects all traditional disciplines of physical, biological and social sciences, as well as engineering, management, and medicine. Advanced education in complex systems attracts professionals, as complex systems science provides practical approaches to health care, social networks, ethnic violence, marketing, military conflict, education, systems engineering, international development and terrorism. The study of complex systems is about understanding indirect effects. Problems we find difficult to solve have causes and effects that are not obviously related. Pushing on a complex system "here" often has effects "over there" because the parts are interdependent. This has become more and more apparent in our efforts to solve societal problems or avoid ecological disasters caused by our own actions.
The field of complex systems provides a number of sophisticated tools, some of them conceptual, helping us think about these systems, some of them analytical, for studying these systems in greater depth, and some of them computer based, for describing, modeling or simulating them. NECSI research develops basic concepts and formal approaches as well as their applications to real world problems. Contributions of NECSI researchers include studies of networks, agent-based modeling, multiscale analysis and complexity, chaos and predictability, evolution, ecology, biodiversity, altruism, systems biology, cellular response, health care, systems engineering, negotiation, military conflict, ethnic violence, and international development. NECSI uses many modes of education to further the investigation of complex systems. Throughout the year, classes, seminars, conferences and other programs assist students and professionals alike in their understanding of complex systems. Courses have been taught all over the world: Australia, Canada, China, Colombia, France, Italy, Japan, Korea, Portugal, Russia and many states of the U.S. NECSI also sponsors postdoctoral fellows, provides research resources, and hosts the International Conference on Complex Systems, discussion groups and web resources.
© NECSI
New England Complex Systems Institute Book Series Series Editor Dan Braha
New England Complex Systems Institute 24 Mt. Auburn St. Cambridge, MA 02138, USA
New England Complex Systems Institute Book Series The world around us is full of the wonderful interplay of relationships and emergent behaviors. The beautiful and mysterious way that atoms form biological and social systems inspires us to new efforts in science. As our society becomes more concerned with how people are connected to each other than how they work independently, so science has become interested in the nature of relationships and relatedness. Through relationships elements act together to become systems, and systems achieve function and purpose. The study of complex systems is remarkable in the closeness of basic ideas and practical implications. Advances in our understanding of complex systems give new opportunities for insight in science and improvement of society. This is manifest in the relevance to engineering, medicine, management and education. We devote this book series to the communication of recent advances and reviews of revolutionary ideas and their application to practical concerns.
Unifying Themes in Complex Systems VI Proceedings of the Sixth International Conference on Complex Systems
Edited by Ali Minai, Dan Braha and Yaneer Bar-Yam
Ali A. Minai University of Cincinnati Department of Electrical and Computer Engineering, and Computer Science P.O. Box 210030, Rhodes Hall 814 Cincinnati, OH 45221-0030, USA Email: [email protected] Dan Braha New England Complex Systems Institute 24 Mt. Auburn St. Cambridge, MA 02138-3068, USA Email: [email protected] Yaneer Bar-Yam New England Complex Systems Institute 24 Mt. Auburn St. Cambridge, MA 02138-3068, USA Email: [email protected]
This volume is part of the New England Complex Systems Institute Series on Complexity Library of Congress Control Number: 2008931598
ISBN 978-3-540-85080-9 Springer Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version. Violations are liable for prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com NECSI Cambridge, Massachusetts 2008 Printed in the USA The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
CONTENTS -
VOLUME VI
Introduction
vii
Organization and Program
viii
NECSI Publications
xxiv
PART I: Methods Daniel Polani Emergence, Intrinsic Structure of Information, and Agenthood
3
Susan Sgorbati & Bruce Weber How Deep and Broad are the Laws of Emergence?
11
Victor Korotkikh & Galina Korotkikh On an Irreducible Theory of Complex Systems
19
Jacek Marczyk & Balachandra Deshpande Measuring and Tracking Complexity in Science
27
Val K. Bykovsky Data-Driven Modeling of Complex Systems
34
Tibor Bosse, Alexei Sharpanskykh & Jan Treur Modelling Complex Systems by Integration of Agent-Based and Dynamical Systems Models
42
Yuriy Gulak On Elementary and Algebraic Cellular Automata
50
David G. Green, Tania G. Leishman & Suzanne Sadedin Dual Phase Evolution - A Mechanism for Self-Organization in Complex Systems
58
Jun Wu, Yue-Jin Tan, Hong-Zhong Deng & Da-Zhi Zhu A New Measure of Heterogeneity of Complex Networks Based on Degree Sequence
66
Daniel E. Whitney & David Alderson Are Technological and Social Networks Really Different?
74
Takeshi Ozeki
Evolutional Family Networks Generated by Group-Entry Growth Mechanism with Preferential Attachment and their Features
82
Gabor Csardi, Katherine Strandburg, Laszlo Zalanyi, Jan Tobochnik & Peter Erdi Estimating the Dynamics of Kernel-Based Evolving Networks
90
Pedram Hovareshti & John S. Baras Consensus Problems on Small World Graphs: A Structural Study
98
Thomas F. Brantle & M. Hosein Fallah Complex Knowledge Networks and Invention Collaboration
106
Philip Vos Fellman & Jonathan Vos Post Complexity, Competitive Intelligence and the "First Mover" Advantage
114
Jiang He & M. Hosein Fallah Mobility of Innovators and Prosperity of Geographical Technology Clusters
122
Vito Albino, Nunzia Carbonara & Ilaria Giannoccaro Adaptive Capacity of Geographical Clusters: Complexity Science and Network Theory Approach
130
Philip Vos Fellman Corporate Strategy: An Evolutionary Review
138
Diane M. McDonald & Nigel Kay Towards an Evaluation Framework for Complex Social Systems
146
Kevin Brandt Operational Synchronization
154
Philip Vos Fellman The Complexity of Terrorist Networks
162
Czeslaw Mesjasz Complexity Studies and Security in the Complex World: An Epistemological Framework of Analysis
170
Giuseppe Narzisi, Venkatesh Mysore, Jeewoong Byeon & Bud Mishra Complexities, Catastrophes and Cities: Emergency Dynamics in Varying Scenarios and Urban Topologies
178
Samantha Kleinberg, Marco Antoniotti, Satish Tadepalli, Naren Ramakrishnan & Bud Mishra Systems Biology via Redescription and Ontologies(II): A Tool for Discovery in Complex Systems
186
Hector Sabelli & Lazar Kovacevic Biotic Population Dynamics: Creative Biotic Patterns
194
PART II: Models Rene Doursat The Growing Canvas of Biological Development: Multiscale Pattern Generation on an Expanding Lattice of Gene Regulatory Nets
203
Franziska Matthäus, Carlos Salazar & Oliver Ebenhöh Compound Clustering and Consensus Scopes of Metabolic Networks
211
Robert Melamede Endocannabinoids: Multi-scaled, Global Homeostatic Regulators of Cells and Society
219
Walter Riofrio & Luis Angel Aguilar Different Neurons Population Distribution Correlates with Topologic-Temporal Dynamic Acoustic Information Flow
227
Mark Hoogendoorn, Martijn C. Schut & Jan Treur Modeling the Dynamics of Task Allocation and Specialization in Honeybee Societies
235
Garrett M. Dancik, Douglas E. Jones & Karin S. Dorman An Agent-Based Model for Leishmania major Infection
243
Holger Lange, Bjørn Økland & Paal Krokene To Be or Twice To Be? The Life Cycle Development of the Spruce Bark Beetle Under Climate Change
251
Tibor Bosse, Alexei Sharpanskykh & Jan Treur A Formal Analysis of Complexity Monotonicity
259
Claudio Tebaldi & Deborah Lacitignola Complex Features in Lotka-Volterra Systems with Behavioral Adaptation
267
Gerald H. Thomas & Keelan Kane A Dynamic Theory of Strategic Decision Making Applied to the Prisoner's Dilemma
275
Mike Mesterton-Gibbons & Tom N. Sherratt Animal Network Phenomena: Insights from Triadic Games
283
Simon Angus Endogenous Cooperation Network Formation
291
Khan Md. Mahbubush Salam & Kazuyuki Ikko Takahashi Mathematical Model of Conflict and Cooperation with Non-Annihilating Multi-Opponent
299
Margaret Lyell, Rob Flo & Mateo Mejia-Tellez Simulation of Pedestrian Agent Crowds, with Crisis
307
Michael T. Gastner Traffic Flow in a Spatial Network Model
315
Gergana Bounova & Olivier de Weck Augmented Network Model for Engineering System Design
323
Daniel E. Whitney Network Models of Mechanical Assemblies
331
Jun Yu, Laura K. Gross & Christopher M. Danforth Complex Dynamic Behavior on Transition in a Solid Combustion Model
339
Ian F. Wilkinson, James B. Wiley & Aizhong Lin Modeling the Structural Dynamics of Industrial Networks
347
Leonard Wojcik, Krishna Boppana, Sam Chow, Olivier de Weck, Christian LaFon, Spyridon D. Lekkakos, James Lyneis, Matthew Rinaldi, Zhiyong Wang, Paul Wheeler & Marat Zborovskiy Can Models Capture the Complexity of the Systems Engineering Process?
366
Clement McGowan, Fred Cecere, Robert Darneille & Nate Laverdure Biological Event Modeling for Response Planning
374
Dmitry Chistilin Principles of Self-Organization and Sustainable Development of the World Economy are the Basis of Global Security
382
Walid Nasrallah Evolutionary Paths to Corrupt Societies of Artificial Agents
390
Roxana Wright, Philip Vos Fellman & Jonathan Vos Post Path Dependence, Transformation and Convergence: A Mathematical Model of Transition to Market
398
Kumar Venkat & Wayne Wakeland Emergence of Networks in Distance-Constrained Trade
406
Ian F. Wilkinson, Robert E. Marks & Louise Young Toward Agent-Based Models of the Development and Evolution of Business Relations and Networks
414
Sharon A. Mertz, Adam Groothuis & Philip Vos Fellman Dynamic Modeling of New Technology Succession: Projecting the Impact of Macro Events and Micro Behaviors On Software Market Cycles
422
Manuel Dias & Tanya Araujo Hypercompetitive Environments: An Agent-Based Model Approach
430
V. Halpern Precursors of a Phase Transition in a Simple Model System
438
C. M. Lapilli, C. Wexler & P. Pfeifer Universality Away from Critical Points in a Thermostatistical Model
446
Philip Vos Fellman & Jonathan Vos Post Quantum Nash Equilibria and Quantum Computing
454
PART III: Applications Hiroki Sayama Teaching Emergence and Evolution Simultaneously Through Simulated Breeding of Artificial Swarm Behaviors
463
Ashok Kay Kanagarajah, Peter Lindsay, Anne Miller & David Parker An Exploration into the Uses of Agent-Based Modeling to Improve Quality of Healthcare
471
Neena A. George, Ali Minai & Simona Doboli Self-Organized Inference of Spatial Structure in Randomly Deployed Sensor Networks
479
Abhinay Venuturumilli & Ali Minai Obtaining Robust Wireless Sensor Networks through Self-Organization of Heterogeneous Connectivity
487
Orrett Gayle & Daniel Coore Self-Organizing Text in an Amorphous Environment
495
Adel Sadek & Nagi Basha Self-Learning Intelligent Agents for Dynamic Traffic Routing on Transportation Networks
503
Sarjoun Doumit & Ali Minai Distributed Resource Exploitation for Autonomous Mobile Sensor Agents in Dynamic Environments
511
Javier Alcazar & Ephrahim Garcia Interconnecting Robotic Subsystems in a Network
519
Chad Foster Estimating Complex System Robustness from Dual System Architectures
527
Dean J. Bonney Inquiry and Enterprise Transformation
535
Mike Webb Capability-Based Engineering Analysis (CBEA)
540
Keith McCaughin & Joseph DeRosa Stakeholder Analysis To Shape the Enterprise
548
George Rebovich Jr. Systems Thinking for the Enterprise: A Thought Piece
556
Matt Motyka, Jonathan R.A. Maier & Georges M. Fadel Representing the Complexity of Engineering Systems: A Multidisciplinary Perceptual Approach
564
Dighton Fiddner Policy Scale-free Organizational Network: Artifact or Phenomenon?
572
Hans-Peter Brunner Application of Complex Systems Research to Efforts of International Development
580
Alex Ryan About the Bears and the Bees: Adaptive Responses to Asymmetric Warfare
588
Donald Heathfield Improving Decision Making in the Area of National and International Security - The Future Map Methodology
596
Andrei Irimia, Michael R. Gallucci & John P. Wikswo Jr. Comparison of Chaotic Biomagnetic Field Patterns Recorded from the Arrhythmic Heart and Stomach
604
F. Canan Pembe & Haluk Bingöl Complex Networks in Different Languages: A Study of an Emergent Multilingual Encyclopedia
612
Gökhan Şahin, Murat Erentürk & Avadis Hacinliyan Possible Chaotic Structures in the Turkish Language with Time Series Analysis
618
Index of authors
626
INTRODUCTION The science of complex systems has made impressive strides in recent years. Relative to the opportunities, however, the field is still in its infancy. Our science can provide a unified foundation and framework for the study of all complex systems. In order for complex systems science to fulfill its potential, it is important to establish conventions that will facilitate interdisciplinary communication. This is part of the vision behind the International Conference on Complex Systems (ICCS). For over a decade, ICCS has fostered much-needed cross-disciplinary communication. Moreover, it provides a forum for scientists to better understand universally applicable concepts such as complexity, emergence, evolution, adaptation and self-organization. The Sixth ICCS proved that the broad range of scientific inquiry continues to reveal its common roots. More and more scientists realize the importance of the unifying principles that govern complex systems. The Sixth ICCS attracted a diverse group of participants reflecting wide-ranging and overlapping interests. Topics ranged from economics to ecology, particle physics to psychology, and business to biology. Through plenary, pedagogical, breakout and poster sessions, conference attendees shared discoveries that were significant both to their particular fields and to the overarching study of complex systems. This volume contains the proceedings from that conference. Recent work in the field of complex systems has produced a variety of new analytic and simulation techniques that have proven invaluable in the study of physical, biological and social systems. New methods of statistical analysis have led to a better understanding of patterns and networks. The application of simulation techniques such as agent-based models, cellular automata, and Monte Carlo simulations has increased our ability to understand or even predict the behavior of systems.
The concepts and tools of complex systems are of interest not only to scientists, but also to corporate managers, physicians and policy makers. The rules that govern key dynamical behaviors of biochemical or neural networks apply to social or corporate networks, and professionals have started to realize how valuable these concepts are to their individual fields. The ICCS conferences have provided the opportunity for professionals to learn the basics of complex systems and share their real-world experience in applying these concepts.
Sixth International Conference on Complex Systems: Organization and Program Host: New England Complex Systems Institute
Partial Financial Support: National Science Foundation
Additional Support: Birkhäuser Edward Elgar Publishing Springer
Chairman: Yaneer Bar-Yam - NECSI *
Executive Committee: Ali Minai - University of Cincinnati † Dan Braha - University of Massachusetts, Dartmouth †
* NECSI Co-faculty
† NECSI Affiliate
Program Committee: Russ Abbott - CSU Los Angeles Yaneer Bar-Yam - NECSI * Philippe Binder - University of Hawaii Dan Braha - MIT Jeff Cares - NECSI Director, Military Programs † Irene Conrad - Texas A&M University Kingsville Fred Discenzo - Rockwell Automation Carlos Gershenson - NECSI and Vrije Universiteit Brussel James Glazier - Indiana University Charles Hadlock - Bentley College Nancy Hayden - Sandia National Laboratories Helen Harte - NECSI Organization Science Program † Guy Hoelzer - University of Nevada, Reno Sui Huang - Harvard University †
Mark Klein - MIT † Tom Knight - MIT Michael Kuras - MITRE May Lim - NECSI, Brandeis University and University of the Philippines, Diliman Dwight Meglan - SimQuest Ali Minai - University of Cincinnati † Lael Parrott - University of Montreal Gary Nelson - Homeland Security Institute Doug Norman - MITRE Hiroki Sayama - Binghamton University, SUNY † Marlene Williamson Sanith Wijesinghe - Millenium IT Jonathan Vos Post - Computer Futures, Inc. Martin Zwick - Portland State University
Founding Organizing Committee of the ICCS Conferences: Philip W. Anderson - Princeton University Kenneth J. Arrow - Stanford University Michel Baranger - MIT * Per Bak - Niels Bohr Institute Charles H. Bennett - IBM William A. Brock - University of Wisconsin Charles R. Cantor - Boston University * Noam A. Chomsky - MIT Leon Cooper - Brown University Daniel Dennett - Tufts University Irving Epstein - Brandeis University * Michael S. Gazzaniga - Dartmouth College William Gelbart - Harvard University * Murray Gell-Mann - CalTech/Santa Fe Institute Pierre-Gilles de Gennes - ESPCI Stephen Grossberg - Boston University Michael Hammer - Hammer & Co John Holland - University of Michigan John Hopfield - Princeton University Jerome Kagan - Harvard University * Stuart A. Kauffman - Santa Fe Institute
Chris Langton - Santa Fe Institute Roger Lewin - Harvard University Richard C. Lewontin - Harvard University Albert J. Libchaber - Rockefeller University Seth Lloyd - MIT * Andrew W. Lo - MIT Daniel W. McShea - Duke University Marvin Minsky - MIT Harold J. Morowitz - George Mason University Alan Perelson - Los Alamos National Lab Claudio Rebbi - Boston University Herbert A. Simon - Carnegie-Mellon University Temple F. Smith - Boston University * H. Eugene Stanley - Boston University John Sterman - MIT * James H. Stock - Harvard University * Gerald J. Sussman - MIT Edward O. Wilson - Harvard University Shuguang Zhang - MIT
Session Chairs: Iqbal Adjali - Unilever Yaneer Bar-Yam - NECSI * Dan Braha - University of Massachusetts, Dartmouth † Hans-Peter Brunner - Asian Development Bank Jeff Cares - NECSI Director, Military Programs † Irene Conrad - Texas A&M University Kingsville Joe DeRosa - MITRE Ronald DeGray - Saint Joseph College Fred Discenzo - Rockwell Automation Adrian Gheorghe - Old Dominion University Helen Harte - NECSI Organization Science Program † Guy Hoelzer - University of Nevada, Reno Plamen Ivanov - Boston University Mark Klein - MIT † Holger Lange - Norwegian Forest and Landscape Institute May Lim - NECSI, Brandeis University and University of the Philippines, Diliman Thea Luba - The MediaMetro Society
Joel MacAuslan - S.T.A.R. Corp Gottfried Mayer-Kress - Penn State University Dwight Meglan - SimQuest David Miguez - Brandeis University Ali Minai - University of Cincinnati † John Nash - Princeton University Doug Norman - MITRE Daniel Polani - University of Hertfordshire Jonathan Vos Post - Computer Futures, Inc. Hiroki Sayama - Binghamton University, SUNY † Jeff Schank - University of California, Davis Hava Siegelmann - University of Massachusetts, Amherst John Sterman - MIT Sloan * William Sulis - McMaster University Jerry Sussman - MIT Stephenson Tucker - Sandia National Laboratory Sanith Wijesinghe - Millenium IT Martin Zwick - Portland State University
Logistics and Coordination: Sageet Braha Eric Downes Nicole Durante Luke Evans Debra Gorfine
Konstantin Kouptsov Aaron Littman Greg Wolfe
Subject areas: Unifying themes in complex systems The themes are: EMERGENCE, STRUCTURE AND FUNCTION: substructure, the relationship of component to collective behavior, the relationship of internal structure to external influence, multiscale structure and dynamics. INFORMATICS: structuring, storing, accessing, and distributing information describing complex systems. COMPLEXITY: characterizing the amount of information necessary to describe complex systems, and the dynamics of this information. DYNAMICS: time series analysis and prediction, chaos, temporal correlations, the time scale of dynamic processes. SELF-ORGANIZATION: pattern formation, evolution, development and adaptation.
The system categories are: FUNDAMENTALS, PHYSICAL & CHEMICAL SYSTEMS: spatio-temporal patterns and chaos, fractals, dynamic scaling, nonequilibrium processes, hydrodynamics, glasses, non-linear chemical dynamics, complex fluids, molecular self-organization, information and computation in physical systems. BIO-MOLECULAR & CELLULAR SYSTEMS: protein and DNA folding, bio-molecular informatics, membranes, cellular response and communication, genetic regulation, gene-cytoplasm interactions, development, cellular differentiation, primitive multicellular organisms, the immune system. PHYSIOLOGICAL SYSTEMS: nervous system, neuro-muscular control, neural network models of brain, cognition, psychofunction, pattern recognition, man-machine interactions. ORGANISMS AND POPULATIONS: population biology, ecosystems, ecology. HUMAN SOCIAL AND ECONOMIC SYSTEMS: corporate and social structures, markets, the global economy, the Internet. ENGINEERED SYSTEMS: product and product manufacturing, nano-technology, modified and hybrid biological organisms, computer based interactive systems, agents, artificial life, artificial intelligence, and robots.
Program: Sunday, June 25, 2006 PEDAGOGICAL SESSIONS - Mark Klein and Dan Braha - Session Chairs Albert-Laszlo Barabasi - The Architecture of Complexity: Networks, biology, and dynamics Alfred Hubler - Understanding Complex Systems - From Paradigms to Applications Ali Minai - Biomorphic Systems Lev Levitin - Zipf Law Revisited: A Model of Emergence and Manifestation Felice Frankel - The Visual Expression of Complex Systems: An Approach to Understanding What Questions We Should Ask. Katy Borner - Mapping Science Susan Sgorbati - The Emergent Improvisation Project OPENING RECEPTION Kenneth Wilson - Peter Drucker's Revolutionary Ideas about Education
Monday, June 26, 2006 EMERGENCE - Jonathan Vos Post - Session Chair Judah Folkman - Cancer Irving Epstein - Emergent Patterns in Reaction-Diffusion Systems Eric Bonabeau - Simulating Systems SCIFI TO SCIENCE - Jonathan Vos Post - Session Chair David Brin - Prediction as Faith, Prediction as a Tool: Peering Into Tomorrow's World. FROM MODELING TO REALITY - Hiroki Sayama - Session Chair Ed Fredkin - Finite Nature Eric Klopfer - StarLogo The Next Generation Ron Weiss - Programming Biology NETWORKS - Ali Minai - Session Chair John Bragin, Nicholas Gessler - Design and Implementation of an Undergraduate Degree Program in Social and Organizational Complexity. Jun Wu, Yue-jin Tan, Hong-zhong Deng, Da-zhi Zhu - A new measure of heterogeneity of complex networks based on degree sequence David Green, Tania Leishman, Suzanne Sadedin - The emergence of social consensus in Boolean networks Daniel Whitney - Network Models of Mechanical Assemblies Daniel Whitney, David Alderson - Are technological and social networks really different? Nathan LaBelle, Eugene Wallingford - Inter-package dependency networks in open-source software Gabor Csardi, Tamas Nepusz - The igraph software package for complex network research Gergana Bounova, Olivier de Weck - Augmented network model for engineering system design Tania Leishman, David Green, Suzanne Sadedin - Dual phase evolution: a mechanism for self-organization in complex systems Kurt Richardson - Complexity, Information and Robustness: The Role of Information 'Barriers' in Boolean Networks Andrew Krueger - Inferring Network Connectivity with Combined Perturbations
Takeshi Ozeki - Evolutional family networks generated by group entry growth mechanism with preferential attachment and their features Valeria Prokhotskaya, Valentina Ipatova, Aida Dmitrieva - Intrapopulation changes of algae under toxic exposure John Voiklis, Manu Kapur - Two Walks through Problem Space John Voiklis - An Active Walk through Semantic Space (Poster) David Stokes, V. Anne Smith - A complex systems approach to history: correspondence networks among 17th century astronomers Martin Greiner - Self-organizing wireless multihop ad hoc communication networks Lin Leung - Intelligent International Medical Network: Design and Performance Analysis Durga Katie, Dennis E. Brown, Charles E. Johnson, Raphael P. Hermann, Fernande Grandjean, Gary J. Long, Susan M. Kauzlarich - An Antimony-121 Moessbauer Spectral Study of Yb14MnSb11 HOMELAND SECURITY - Stephenson Tucker - Session Chair Naim Kapucu - Interorganizational coordination in the National Response Plan (NRP): the evolution of complex systems Philip Fellman - The complexity of terrorist networks W. David Stephenson - Networked Homeland Security strategies Gustavo Santana Torrellas - A Knowledge Management Framework for Security Assessment in a Multi Agent PKI-based Networking Environment Dighton Fiddner - Scale-free policy organizations' network: artifact or phenomenon?
Stephen Ho, Paul Gonsalves, Marc Richards - Complex adaptive systems-based toolkit for dynamic plan assessment Donald Heathfield - Improving decision making in the area of national and international security: the future map methodology Steven McGee - Method to Enable a Homeland Security Heartbeat - Heartbeat e9-1-1 Maggie Elestwani - Disaster response and complexity model: the southeast Texas Katrina-Rita response Margaret Lyell, Rob Flo, Mateo Mejia-Tellez - Agent-based simulation framework for studies of cognitive pedestrian agent behavior in an urban environment with crisis MATHEMATICAL METHODS - Daniel Polani - Session Chair G. P. Kapoor - Stability of simulation of highly uneven curves and surfaces using fractal interpolation Val Bykovsky - Data-driven technique to model complex systems Fu Zhang, Benito Fernandez-Rodriguez - Feedback Linearization Control Of Systems With Singularities Nikola Petrov, Arturo Olvera - Regularity of critical objects in dynamical systems James Kar, Martin Zwick, Beverly Fuller - Short-term financial market prediction using an information-theoretic approach Jean-Claude Torrel, Claude Lattaud, Jean-Claude Heudin - Complex Stellar Dynamics and Galactic Patterns Formation Tibor Bosse, Alexei Sharpanskykh, Jan Treur - Modelling complex systems by integration of agent-based and dynamical systems methods Kostyantyn Kovalchuk - Simulation uncertainty of complex economic system behavior Xiaowen Zhang, Ke Tang, Li Shu - A Chaotic Cipher Mmohocc and Its Randomness Evaluation J. Marczyk, B. Deshpande - Measuring and Tracking Complexity in Science Cliff Joslyn - Reconstructibility Analysis as an Order Theoretical Knowledge Discovery Technique Alec Resnick, Jesse Louis-Rosenberg - Dynamic State Networks Matthew Francisco, Mark Goldberg, Malik Magdon-Ismail, William Wallace - Using Agent-Based Modeling to Traverse Frameworks in Theories of the Social
ENGINEERING - Fred Discenzo - Session Chair Bogdan Danila, Andrew Williams, Kenric Nelson, Yong Yu, Samuel Earl, John Marsh, Zoltan Toroczkai, Kevin Bassler - Optimally Efficient Congestion Aware Transport On Complex Networks Orrett Gayle, Daniel Coore - Self-organising text in an amorphous computing environment Javier Alcazar, Ephrahim Garcia - Interconnecting Robotic Subsystems in a Network Chad Foster, Daniel Frey - Estimating complex system robustness from dual system architectures Mark Hoogendoorn, Martijn Schut, Jan Treur - Modeling decentralized organizational change in honeybee societies Jonathan R. A. Maier, Timothy Troy, Jud Johnston, Vedik Bobba, Joshua D. Summers - A Case Study Documenting Specific Configurations and Information Exchanges in Designer-Artifact-User Complex Systems Mo-Han Hsieh, Christopher Magee - Standards as interdependent artifacts: application to prediction of promotion for internet standards Chi-Kuo Mao, Cherng G. Ding, Hsiu-Yu Lee - Comparison of post-SARS arrival recovery patterns Zafar Mahmood, Dr. Nadeem Lehrasab, Muhammad Iqbal, Dr. Nazir Shah Khattak, Dr. S. Fararooy - Eigen Analysis of Model Based Residual Spectra for Fault Diagnostics Techniques Russ Abbott - Emergence explained INNOVATION - Helen Harte - Session Chair Adam Groothuis, Sharon Mertz, Philip Vos Fellman - Multi-agent based simulation of technology succession dynamics Svetlana Ikonnikova - Games the parties of Eurasian gas supply network play: analysis of strategic investment, hold-up and multinational bargaining Koen Hindriks, Catholijn Jonker, Dmytro Tykhonov - Reducing complexity of an agent's utility space for negotiating interdependent issues Md.
Mahbubush Salam Khan, Kazuyuki Ikko Takahashi - Mathematical model of conflict and cooperation with non-annihilating multi-opponent Philip Fellman, Matthew Dadmun, Neil Lanteigne - Corporate strategy: from core competence to complexity-an evolutionary review Adam Groothuis, Philip Vos Fellman - Agent based simulation of punctuated tech nology succession: a real-world example using medical devices in the field of interventional cardiology Sharon Mertz, Adam Groothuis, Philip Fellman - Dynamic Modeling of New Technology Succession : Projecting the Impact of Macro Events and Micro Behaviors on Software Market Cycles Manuel Dias, Tanya Araujo - Hypercompetitive environments: an agent based model approach Diane McDonald, George Weir - A methodology for exploring emergence in learning communities Diane McDonald, Nigel Kay - Towards an evaluation framework for complex social systems Debra Hevenstone - Employer- Employee Matching with Intermediary Parties: a trade off Between Match Quality and Costs? BIOLOGY - Jeff Schank - Session Chair Margaret J. Eppstein, Joshua L. Payne, Bill C. White, Jason H. Moore - A 'random chemistry' algorithm for detecting epistatic genetic interactions Jason Bates - Nonlinear network theory of complex diseases C. Anthony Hunt - Understanding Emergent Biological Behaviors: Agent Based Simulations of In vitro Epithelial Morphogenesis in Multiple Environments
Franziska Matthaeus, Oliver Ebenhoeh - Large-scale analysis of metabolic networks: clustering metabolites by their synthesizing capacities
Martin Greiner - Impact of observational incompleteness on the structural properties of protein interaction networks
Jorge Mazzeo, Melina Rapacioli, Vladimir Flores - Characterizing cell proliferation process in the developing central nervous system
Garrett Dancik, Karin Dorman, Doug Jones - An agent-based model for Leishmania infection
Markus Schwehm, Manuel Poppe - Modelling cytoskeleton morphogenesis with SBTools
Ray Greek, Niall Shanks - Implications of complex systems in biomedical research using animals as models of humans
Robert Melamede - Endocannabinoids: Multi-scaled, Global Homeostatic Regulators of Cells and Society
Holger Sueltmann - Reverse Phase Protein Arrays for protein quantification in biological samples
James Glazier - Cell Oriented Modeling Biological Development using the Cellular Potts Model
POSTERS
Pooya Kabiri - Rankine cycle Power Plant Exergy and Energy Optimization With Graph Model
Mitra Shojania Feizabadi - Tracing the behavior of tumor cells during a course of chemotherapy
Tibor Bosse, Martijn Schut, Jan Treur, David Wendt - Emergence of altruism as a result of cognitive capabilities involving trust and intertemporal decision-making
Dritan Osmani, Richard Tol - The case of two self-enforcing international agreements for environmental protection
Rene Doursat, Elie Bienenstock - How Activity Regulates Connectivity: A Self-Organizing Complex Neural Network
Maciej Swat, James Glazier - Cell Level Modeling Using CompuCell3D
Jure Dobnikar, Marko Jagodic, Milan Brumen - Computer simulation of bacterial chemotaxis
Tuesday, June 27, 2006
SOCIAL SYSTEMS - John Sterman - Session Chair
John Morgan - Global Security
Steven Hassan - Destructive mind control issues: cults, deprogramming and SIA post 9-11
Jay Forrester - System Dynamics
Michael Hammer - Reengineering the Corporation
EDUCATION AND HEALTHCARE - Irene Conrad and Helen Harte - Session Chair
Nastaran Keshavarz, Don Nutbeam, Louise Rowling, Fereidoon Khavarpour - Can complexity theory shed light on our understanding about school health promotion?
Ashok Kay (Kanagarajah), Peter Lindsay, Anne Miller, David Parker - An exploration into the uses of agent-based modeling to improve quality of health care
Alice Davidson, Marilyn Ray - Complexity for human-environment well-being
Thea Luba - Creating, Connecting and Collaborating in SonicMetro: Our on-line complex systems model for Arts Education
Hiroki Sayama - Teaching emergence and evolution simultaneously through simulated breeding of artificial swarm behaviors
Paulo Blikstein, Uri Wilensky - Learning About Learning: Using Multi-Agent Computer Simulation to Investigate Human Cognition
COMPLEX SYSTEMS EDUCATION - Thea Luba - Session Chair
Charles Hadlock - Guns, germs, and steel on the sugarscape: introducing undergraduates to agent based simulation
Nicholas Gessler - Through the looking-glass with ALiCE: artificial life, culture and evolution
PHYSIOLOGY - Plamen Ivanov - Session Chair
Bela Suki - Fluctuations and noise in respiratory and cell physiology
Gottfried Mayer-Kress, Yeou-Teh Liu, Karl Newell - Complexity of Human Movement Learning
Attila Priplata - Noise-Enhanced Human Balance Control
Cecilia Diniz Behn, Emery Brown, Thomas Scammell, Nancy Kopell - Dynamics of behavioral state control in the mouse sleep-wake network
Indranill Basu Ray - Interpreting complex body signals to predict sudden cardiac death - the largest killer in the Western Hemisphere
Bruce West - Fractional Calculus In Physiologic Networks
MEDICAL SIMULATION - Dwight Meglan - Session Chair
Dwight Meglan - Simulation-based Surgery Training
Bryan Bergeron - Interdependent Man-Machine Problem Solving in Serious Games
James Rabinov - New Approaches to Computer-based Interventional Neuroradiology Training
Joseph Teran - Scientific Computing Applications in Biomedical Simulation of Soft Tissues
NETWORKS - Dan Braha - Session Chair
V. Anne Smith - Bayesian network inference algorithms for recovering networks on multiple biological scales: molecular, neural, and ecological
Peter Dodds - Social contagion on networks: groups and chaos
Jun Ohkubo, Kazuyuki Tanaka - Fat-tailed degree distributions generated by quenched disorder
Joseph E. Johnson - Generalized entropy as network metrics
Hugues Berry, Benoit Siri, Bruno Cessac, Bruno Delord, Mathias Quoy - Topological and dynamical structures induced by Hebbian learning in random neural networks
Gabor Csardi, Katherine Strandburg, Laszlo Zalanyi, Jan Tobochnik, Peter Erdi - Estimating the dynamics of kernel-based evolving networks
Michael Gastner - Traffic flow in a spatial network model
Pedram Hovareshti, John Baras - Consensus problems on small world graphs: a structural study
Dalia Terhesiu, Luis da Costa - On the relationship between complex dynamics and complex geometrical structure
HOMELAND SECURITY - Jeff Cares - Session Chair
Giuseppe Narzisi, Venkatesh Mysore, Lewis Nelson, Dianne Rekow, Marc Triola, Liza Halcomb, Ian Portelli, Bud Mishra - Complexities, Catastrophes and Cities: Unraveling Emergency Dynamics
Nancy Hayden, Richard Colbaugh - The complexity of terrorism: considering surprise and deceit
Markus Schwehm, Chris Leary, Hans-Peter Duerr, Martin Eichner - InterSim: A network-based outbreak investigation and intervention planning tool
Gary Nelson - Structure and Dynamics of Multiple Agents in Homeland Security Risk Management
Czeslaw Mesjasz - Complexity Studies and Security in the Complex World: An Epistemological Framework of Analysis
Corey Lofdahl, Darrall Henderson - Coordinating National Power using Complex Systems Simulation
Kevin Brandt - Operational Synchronization
Alex Ryan - About the Bears and the Bees: Adaptive Responses to Asymmetric Warfare
Adrian Gheorghe - Vulnerability Assessment of Complex Critical Infrastructures
EVOLUTION AND ECOLOGY - Guy Hoelzer - Session Chair
Elise Filotas, Lael Parrott, Martin Grant - Effect of space on a multi-species community model with individual-based dynamics
Pascal Cote, Lael Parrott - Application of a genetic algorithm to generate non-random assembly sequences of a community assembly model
Javier Burgos, Julia Andrea Perez - A Methodological guide to environmental prioritizing using hyperbolic laws and strategical ecosystems
Holger Lange, Bjørn Økland, Paal Krokene - Thresholds in the life cycle of the spruce bark beetle under climate change
Lora Harris - Ramet rules: merging mechanistic growth models with an individual-based approach to simulate clonal plants
Tibor Bosse, Alexei Sharpanskykh, Jan Treur - On the complexity monotonicity thesis for environment, behaviour and cognition
Mauricio Rincón-Romero, Mark Mulligan - Hydrological sensitivity analysis to LUCC in Tropical Mountainous Environment
ALIFE AND EVOLUTION - Hiroki Sayama - Session Chair
William Sulis - Emergence in the Game of Life
Predrag Tosic - Computational Complexity of Counting in Sparsely Networked Discrete Dynamical Systems
Rene Doursat - The growing canvas of biological development: multiscale pattern generation on an expanding lattice of gene regulatory networks
Kovas Boguta - Informational fracture points in cellular automata
Christof Teuscher - Live and Let Die: Will there be Life after Biologically Inspired Computation?
Hideaki Suzuki - A molecular network rewiring rule that represents spatial constraint
PHYSICAL SYSTEMS - Jonathon Post - Session Chair
Karoline Wiesner, James Crutchfield - Computation in Finitary Quantum Processes
Wm. C. McHarris - Complexity via Correlated Statistics in Quantum Mechanics
Xiangdong Li, Andis ChiTung Kwan, Michael Anshel, Christina Zamfirescu, Lin Wang Leung - To Quantum Walk or Not
Philip Fellman, Jonathon Post - Nash equilibrium and quantum computational complexity
Ravi Venkatesan - Information Encryption using a Fisher-Schroedinger Model
INNOVATION - Iqbal Adjali - Session Chair
Jeroen Struben - Identifying challenges for sustained adoption of alternative fuel vehicles and infrastructure
Jiang He, M. Hosein Fallah - Mobility of innovators and prosperity of geographical technology clusters
Ian Wilkinson, Robert Marks, Louise Young - Toward Agent Based Models of the Development And Evolution of Business Relations and Networks
Nunzia Carbonara, Ilaria Giannoccaro, Vito Albino - The competitive advantage of geographical clusters as complex adaptive systems: an exploratory study based on case studies and network analysis
Kazuyuki Takahashi, Md. Mahbubush Salam Khan - Complexity on Politics - How we construct perpetual peace -
Philip Fellman, Jonathon Post - Complexity, competitive intelligence and the 'first mover' advantage
Thomas Brantle, M. Hosein Fallah - Complex knowledge networks and invention collaborations
ENGINEERING - Fred Discenzo - Session Chair
Sarjoun Doumit, Ali Minai - Distributed resource exploitation for autonomous mobile sensor agents in dynamic environments
Abhinay Venuturumilli, Ali Minai - Obtaining Robust Wireless Sensor Networks through Self-Organization of Heterogenous Connectivity
Justin Werfel, Yaneer Bar-Yam, Radhika Nagpal - Automating construction with distributed robotic systems
Fred Discenzo, Francisco Maturana, Raymond Staron - Distributed diagnostics and dynamic reconfiguration using autonomous agents
Adel Sadek, Nagi Basha - Self-Learning Intelligent Agents for Dynamic Traffic Routing on Transportation Networks
Fred M. Discenzo, Dukki Chung - Power scavenging enables maintenance-free wireless sensor nodes
Jacob Beal - What the assassin's guild taught me about distributed computing
Predrag Tosic - Distributed Coalition Formation for Sparsely Networked Large-Scale Multi-Agent Systems
SYSTEMS ENGINEERING - Doug Norman - Session Chair
Sarah Sheard - Bridging Systems Engineering and Complex Systems Sciences
Michael Kuras, Joseph DeRosa - What is a system?
Anne-Marie Grisogono - Success and failure in adaptation
Jonathan R. A. Maier, Matt Motyka, Georges M. Fadel - Representing the Complexity of Engineering Systems: A Multidisciplinary Perceptual Approach
Leonard Wojcik, Sam Chow, Olivier de Weck, Christian LaFon, Spyridon Lekkakos, James Lyneis, Matthew Rinaldi, Zhiyong Wang, Paul Wheeler, Marat Zborovskiy - Can Models Capture the Complexity of the Systems Engineering Process?
Richard Schmidt - Synthesis of Systems of Systems [SoS] is in fact the Management of System Design Complexity
LEADERSHIP
Nanette Blandin - Re-conceptualizing leadership for complex social systems
Russ Marion - Multi-Agent Based Simulation of a Model of Complexity Leadership
Chi-Kuo Mao - Principles of Organization Change - A Complex System Perspective
Earl Valencia - Architecting the Next Generation of Technical Leaders
Cory Costanzo, Ian Littlejohn - Early Detection Capabilities: Applying Complex Adaptive Systems Principles to Business Environments
Wednesday, June 28, 2006
EVOLUTION - Jerry Sussman - Session Chair
Edward O. Wilson - New Understanding in Social Evolution
David Sloan Wilson - Rethinking the Theoretical Foundations of Sociobiology
Andreas Wagner - Robustness and Evolution
Charles Goodnight - Complexity and Evolution in Structured Populations
PHYSIOLOGY - Aaron Littman - Session Chair
Madalena Costa, Ary Goldberger, C.-K. Peng - Broken asymmetry of the human heartbeat: loss of time irreversibility in aging and disease
David Garelick - Body sway technology
Helene Langevin - Connective tissue: a body-wide complex system network?
Edward Marcus - Manifestations of Cellular Contraction Patterns on the Cardiac Flow Output
Andrei Irimia, Michael Gallucci, John Wikswo - Comparison of chaotic biomagnetic field patterns recorded from the arrhythmic cardiac and GI systems
Muneichi Shibata - A whole-body metabolism simulation model
LANGUAGE - Adrian Gheorghe - Session Chair
F. Canan Pembe, Haluk Bingol - Complex Networks in Different Languages: A Study of an Emergent Multilingual Encyclopedia
Gokhan Sahin, Murat Erenturk, Avadis Hacinliyan - Search for chaotic structures in Turkish and English texts by detrended fluctuation and time series analyses
PSYCHOLOGY
Jeff Schank, Chris May, Sanjay Joshi - Can robots help us understand the development of behavior?
Irina Trofimova - Ensembles with Variable Structure (EVS) in the modeling of psychological phenomena
Michael Roberts, Robert Goldstone - Human-environment interactions in group foraging behavior
MATHEMATICAL METHODS - Joel MacAuslan - Session Chair
Yuriy Gulak, Haym Benaroya - Nonassociative algebraic structures and complex dynamical systems
Martin Zwick, Alan Mishchenko - Binary Decision Diagrams and Crisp Possibilistic Reconstructability Analysis
Thomas Ray - Self Organization in Real and Complex Analysis
ALIFE AND EVOLUTION - William Sulis - Session Chair
Hiroki Sayama - On self-replication and the halting problem
Gerald H. Thomas, Hector Sabelli, Lazar Kovacevic, Louis Kauffman - Biotic patterns in the Schroedinger's equation and the early universe
Zann Gill - Designing Challenges to Harness C-IQ [collaborative intelligence]
DYNAMICS OF MATERIALS
Sergio Andres Galindo Torres - Computational simulation of the hydraulic fracturing process using a discrete element method
Jun Yu, Laura Gross, Christopher Danforth - Complex dynamic behavior on transition in a solid combustion model
GAME THEORY - John Nash and Adrian Gheorghe - Session Chairs
Paul Scerri, Katia Sycara - Evolutionary Games and Social Networks in Adversary Reasoning
Mike Mesterton-Gibbons, Tom Sherratt - Animal network phenomena: insights from triadic games
Simon Angus - Cooperation networks: endogeneity and complexity
Gerald H. Thomas, Keelan Kane - Prisoner's Dilemma in a Dynamic Game Theory
Andis ChiTung Kwan - On Nash-connectivity and choice set problem
ACADEMIA - Ronald Degray - Session Chair
Hank Allen - Complexity, social physics, and emergent dynamics of the U.S. academic system
Sean Park, Del Harnish - Rethinking Accountability and Quality in Higher Education
ENGINEERING
Neena George, Ali Minai, Simona Doboli - Self-organized inference of spatial structure by randomly deployed sensor networks
May Lim, Dan Braha, Sanith Wijesinghe, Stephenson Tucker, Yaneer Bar-Yam - Preferential Detachment: Improving Connectivity and Cost Trade-offs in Signaling Networks
Martin Greiner - Proactive robustness control of heterogeneously loaded networks
DISEASE
John Holmes - Communicable disease outbreak detection and emergence of etiologic phenomena in an evolutionary computation system
Clement McGowan - Biological Event Modeling for Response Planning
DYNAMICAL METHODS - David Miguez - Session Chair
Christopher Danforth, James Yorke - Making forecasts for chaotic physical processes
Michael Hauhs, Holger Lange - Organisms, rivers, and coalgebras
Burton Voorhees - Emergence of Metastable Mixed Choice in Probabilistic Induction
Cristian Suteanu - Anisotropy and spatial scaling aspects in the dynamics of evolving dissipative systems with discrete appearance
Hector Sabelli, Lazar Kovacevic - Biotic Population Dynamics and the Theory of Evolution
HERBERT SIMON AWARDS - Yaneer Bar-Yam - Session Chair
Kenneth Wilson - Herbert Simon Award presented to Kenneth Wilson for his development of the Renormalization Group
John F. Nash Jr. - Herbert Simon Award presented to John F. Nash Jr. for his analysis of Game Theory
John F. Nash Jr. - Multiplayer Game Theory
Marvin Minsky - Reminiscence of John F. Nash Jr.
Thursday, June 29, 2006
SYSTEMS BIOLOGY - Hava Siegelmann - Session Chair
Naama Barkai - From Dynamic Mechanisms to Networks
Jose Venegas - Anatomy of an Asthma Attack
Luis Amaral - Biological and Social Networks
SYSTEMS ENGINEERING - Doug Norman, Joe DeRosa - Session Chairs
Lou Metzger - Systems Engineering
Tony De Simone - The Global Information Grid
Kenneth Hoffman, Lindsley Boiney, Renee Stevens, Leonard Wojcik - Complex systems landscapes - the enterprise in a socio-economic context
Brian White - On the Pursuit of Enterprise Systems Engineering Ideas
Dean Bonney - Inquiry and Enterprise Transformation
Keith McCaughin, Joseph DeRosa - Stakeholder Analysis To Shape the Enterprise
Carlos Troche - Documenting Complex Systems in the Enterprise
Michael Webb - Capability-Based Engineering Analysis (CBEA)
EVOLUTION AND ECOLOGY - Guy Hoelzer - Session Chair
Claudio Tebaldi, Deborah Lacitignola - Complex Features in Lotka-Volterra Systems with Behavioral Adaptation
Jeffrey Fletcher, Martin Zwick - Unifying the theories of inclusive fitness and reciprocal altruism
Georgy Karev - On mathematical theory of selection: discrete-time models
Suzanne Sadedin - Adaptation and self-organization in spatial models of speciation
Chris Wright - Lotka-Volterra community organization increases with added trophic complexity
SCIENCE FICTION - Jonathon Vos Post - Session Chair
Geoffrey Landis - Science, Science Fiction, & Life in the Universe
Stanley Schmidt - The Symbiosis of Science and Science Fiction
CONCEPTS - Helen Harte - Session Chair
Susan Sgorbati and Bruce Weber - How Deep and Broad are the Laws of Emergence?
Pierpaolo Andriani, Jack Cohen - Innovation in biology and technology: exaptation precedes adaptation
Burton Voorhees - Paradigms of Order
Rush D. Robinett, III, David G. Wilson, Alfred W. Reed - Exergy Sustainability for Complex Systems
Diane McDonald, George Weir - Developing a conceptual model for exploring emergence
Victor Korotkikh, Galina Korotkikh - On an irreducible theory of complex systems
Michael Hülsmann, Bernd Scholz-Reiter, Michael Freitag, Christine Wycisk, Christoph de Beer - Autonomous Cooperation as a Method to cope with Complexity and Dynamics: A Simulation based Analysis and Measurement Concept Approach
Jonathon Post, Philip Fellman - Complexity in the Paradox of Simplicity
Daniel Polani - Emergence, Intrinsic Structure of Information, and Agenthood
Irina Ezhkova - Self-Organizing Architecture of Complex Systems
BIOLOGICAL NETWORKS - Ali Minai - Session Chair
Guy Haskin Fernald, Jorge Oksenberg, Sergio Baranzini - Mutual information networks unveil global properties of IFNγ immediate transcriptional effects in humans
Holger Sueltmann - Delineating breast cancer gene expression networks by RNA interference and global microarray analysis in human tumor cells
Samantha Kleinberg, Marco Antoniotti, Satish Tadepalli, Naren Ramakrishnan, Bud Mishra - Remembrance of experiments past: a redescription approach for knowledge discovery in complex systems
Shai Shen-Orr, Yitzhak Pilpel, Craig Hunter - Embryonic and Maternal Genes have Different 5' and 3' Regulation Complexity
Tinri Aegerter-Wilmsen, Christof Aegerter, Konrad Basler, Ernst Hafen - Coupling biological pattern formation and size control via mechanical forces
Blake Stacey - On Motif Statistics in Symmetric Networks
Benjamin de Bivort, Sui Huang, Yaneer Bar-Yam - Dynamics of cellular level function and regulation derived from murine expression array data
Thierry Emonet, Philippe Cluzel - From molecules to behavior in bacterial chemotaxis
NEURAL AND PHYSIOLOGICAL DYNAMICS - Gottfried Mayer-Kress - Session Chair
Tetsuji Emura - A spatiotemporal coupled Lorenz model drives emergent cognitive process
Steve Massaquoi - Hierarchical and parallel organization, scheduled scaling of error-type signals, and synergistic actuation appear to greatly simplify and robustify human motor control
Michael Holroyd - Synchronizability and connectivity of discrete complex systems
Walter Riofrio, Luis Angel Aguilar - Different Neurons Population Distribution correlates with Topologic-Temporal Dynamic Acoustic Information Flow
Dr. Joydeep Bhattacharya - An index of signal mode complexity based on orthogonal transformation
Konstantin L. Kouptsov, Irina Topchiy, David Rector - Brain synchronization during sleep
David G. Miguez - Experimental steady pattern formation in reaction-diffusion-advection systems
Harikrishnan Parameswaran, Arnab Majumdar, Bela Suki - Relating Microscopic and Macroscopic indices of alveolar destruction in emphysema
Arnab Majumdar, Adriano M. Alencar, Sergey V. Buldyrev, Zoltán Hantos, H. Eugene Stanley, Bela Suki - Branching asymmetry in the lung airway tree
GLOBAL SYSTEMS - Sanith Wijesinghe - Session Chair
Sanith Wijesinghe - Thursday Afternoon Breakout Session on Global Systems
Hans-Peter Brunner - Application of complex systems research to efforts of international development
Iqbal Adjali - An agent-based spatial model of consumer behavior
Kumar Venkat, Wayne Wakeland - Emergence of Networks in Distance-Constrained Trade
Roxana Wright, Philip Fellman, Jonathon Post - Path Dependence, Transformation and Convergence - A Mathematical Model of Transition to Market
Craig Williams - Transaction Costs, Agency Theory and the Complexity of Electric Power Distribution Governance
Mauricio Rincón-Romero - Hydrological catchment as a Complex System for their Environmental Planning
PHYSICAL SYSTEMS - May Lim - Session Chair
David Garelick - Particles traveling faster than the speed of light
Cintia Lapilli, Peter Pfeifer, Carlos Wexler - Universality away from critical points in a thermostatistical model
Sean Shaheen - Molecular Self-Assembly Processes in Organic Photovoltaic Devices
Leila Shokri, Boriana Marintcheva, Charles C. Richardson, Mark C. Williams - Salt Dependent Binding of T7 Gene 2.5 Protein to DNA from Single Molecule Force Spectroscopy
Vivian Halpern - Precursors of a phase transition in a simple model system
Jonathon Post, Christine Carmichael, Philip Fellman - Emergent phenomena in higher-order electrodynamics
Cynthia Whitney - On Seeing the Superluminals
Aziz Raouak - Diffusion And Topological Properties Of Phase Space In The Standard Map
SYSTEMS ENGINEERING II - Doug Norman and Joe DeRosa - Session Chairs
Joyce Williams - Systems Thinking: The 'Softer Side' of Complex Systems Engineering
Thomas Speller, Daniel Whitney, Edward Crawley - System Architecture Generation based on Le Pont du Gard
Michael McFarren, Fatma Dandashi, Huei-Wan Ang - Service Oriented Architectures Using DoDAF
Jeff Sutherland, Anton Victorov, Jack Blount - Adaptive Engineering of Large Software Projects with Distributed/Outsourced Teams
Robert Wiebe, Dan Compton, Dave Garvey - A System Dynamics Treatment of the Essential Tension Between C2 and Self-Synchronization
George Rebovich - Enterprise Systems Engineering: New and Emerging Perspectives
John J. Roberts - Enterprise Analysis and Assessment of Complex Military Command and Control Environments
EVOLUTION AND ECOLOGY II - Holger Lange - Session Chair
Justin Scace, Adam Dobberfuhl, Elizabeth Higgins, Caroly Shumway - Complexity and the evolution of the social brain
Yosef Maruvka, Nadav Shnerb - The Surviving Creatures: The stable state on the species Network
Lauren O'Malley - Fisher Waves and Front Roughening in a Two-Species Invasion Model with Preemptive Competition
Marcelo Ferreira da Costa Gomes, Sebastián Gonçalves - The SIR model with delay
Guy Hoelzer, Rich Drewes, Rene Doursat - Temporal waves of genetic diversity in a spatially explicit model of evolution: heaving toward speciation
GLOBAL SYSTEMS II - Hans-Peter Brunner - Session Chair
Dmitry Chistilin - Global security
Mehmet Tezcan - The EU Foreign Policy Governance As A Complex Adaptive System
Doug Smith - Bifurcation in Social Movements
J. (Janet) Terry Rolfe - A New Era of Causality and Responsibility: Assessing the Evolving Superpower Role of the United States
Walid Nasrallah - Evolutionary paths to a corrupt society of artificial agents
Carlos Puente - From Complexity to Peace
Claudio Tebaldi, Giorgio Colacchio - Chaotic Behavior in a Modified Goodwin's Growth Cycle Model
Bennett Stark - The Global Political System: A Dynamical System within the Chaotic Phase, A Case Study of Stuart Kauffman's Complex Adaptive System Theory
RECONSTRUCTABILITY ANALYSIS - Martin Zwick - Session Chair
Martin Zwick - A Short Tutorial on Reconstructability Analysis
Roger Cavallo - Whither Reconstructability Analysis
Gary Shaffer - The K-systems Niche
Michael S. Johnson, Martin Zwick - State-based Reconstructability Analysis
Mark Wierman, Mary Dobransky - An Empirical Study of Search Algorithms Applied to Reconstructability Analysis
Berkan Eskikaya - An extension of reconstructability analysis with implications to systems science and evolutionary computation
Friday, June 30, 2006
GLOBAL SYSTEMS
Hans-Peter Brunner - Application of complex systems research to efforts of international development
Patrick M. Hughes - Complexity, Convergence, and Confluence
Peter Brecke - Modeling Global Systems
Ricardo Hausmann - International Development
Publications:
Proceedings: Conference proceedings (this volume). On the web at http://necsi.edu/events/iccs6/proceedings.html
Video proceedings are available to be ordered through the New England Complex Systems Institute.
Journal articles: Individual conference articles were published online at http://interjournal.org
Web pages:
The New England Complex Systems Institute: http://necsi.org
The First International Conference on Complex Systems (ICCS1997): http://www.necsi.org/html/ICCS_Program.html
The Second International Conference on Complex Systems (ICCS1998): http://www.necsi.org/events/iccs/iccs2program.html
The Third International Conference on Complex Systems (ICCS2000): http://www.necsi.org/events/iccs/iccs3program.html
The Fourth International Conference on Complex Systems (ICCS2002): http://www.necsi.org/events/iccs/iccs4program.html
The Fifth International Conference on Complex Systems (ICCS2004): http://www.necsi.org/events/iccs/openconf/author/iccsprogram.php
The Sixth International Conference on Complex Systems (ICCS2006): http://www.necsi.org/events/iccs6/index.php
The Seventh International Conference on Complex Systems (ICCS2007): http://www.necsi.org/events/iccs7/index.php
NECSI Wiki: http://necsi.org/community/wiki/index.php/Main_Page
InterJournal - The journal of the New England Complex Systems Institute: http://interjournal.org
Part I: Methods
Chapter 1
Emergence, Intrinsic Structure of Information, and Agenthood
Daniel Polani
Adaptive Systems Research Group, School of Computer Science, University of Hertfordshire, UK
[email protected]
Emergence is a central organizing concept for the understanding of complex systems. Among the manifold mathematical notions that have been introduced to characterize emergence, the information-theoretic ones are of particular interest since they provide a quantitative and transparent approach and generalize beyond the immediate scope at hand. We discuss approaches to characterize emergence using information theory via the intrinsic temporal or compositional structure of the information dynamics of a system. This approach is devoid of any external constraints and is purely a property of the information dynamics itself. We then briefly discuss how emergence naturally connects to the concept of agenthood, which has been recently defined using information flows.
1 Introduction
The concept of emergence is of central importance to understanding complex systems. Although there seems to be quite an intuitive agreement in the community about which phenomena in complex systems are to be viewed as "emergent", similarly to the concept of complexity, it seems difficult to construct a universally accepted precise mathematical notion of emergence. Unsurprisingly, one is thus faced with a broad spectrum of different approaches to defining emergence. The present paper will briefly discuss a number of notions of emergence and then focus on the information-theoretic variants. Due to its universality, information theory spawned a rich body of concepts based on its language. It provides power of quantification, characterization and prediction. The paper will discuss how existing information-theoretic notions of emergence can be connected to issues of intrinsic structure of information and the concept of "agenthood" and thus provide new deep insights into the ramifications, and perhaps the reason why emergence plays such an important role.
2 Some Notions of Emergence
Of the broad spectrum of notions for emergence we will give an overview of a small selection representing a few particularly representative approaches, before concentrating on the information-theoretic notions which form the backbone of the philosophy of the present paper. A category-theoretic notion of emergence has been brought forward in [11]. While having the advantage of mathematical purity, category theory does not lend itself easily to practical use in concrete systems. One of the difficulties is that the issue of identifying the emergent levels of description is exogenous to the formalism. These have to be formulated externally to be verified by the formalism. As is, the approach provides no (even implicitly) constructive way of finding the emergent levels of description. The difficulty of identifying the right levels of description for emergence in a system has brought up the suspicion that emergence would have to be considered only "in the eye of the beholder" [6]. In view of the fact that human observers typically agree on the presence of emergence in a system, it is often felt that it would rather be desirable to have a notion of emergence that does not depend on the observer, but is a property that arises naturally from the dynamics of the system. In an attempt to derive emergent properties of a system, a pioneering effort to describe organizing principles in complex systems is the approach taken by synergetics [4]. The model attempts to decompose nonlinear dynamic systems in a natural fashion. In the vicinity of fixed points, dynamical systems decompose naturally into stable, central and unstable manifolds. Basically, this decomposes a system into fast and slow moving degrees of freedom (fast foliations and slow manifolds). Since the lifetime of the slow degrees of freedom exceeds that of the fast ones, Haken termed the former master modes, as compared to the latter, which he termed slave modes.
The main tenet of synergetics is that these master modes dominate the dynamics of the system. In the language of synergetics, the master modes correspond to emergent degrees of freedom. An information-theoretic approach to the concepts of synergetics is presented in [5]. The synergetics view necessarily couples the concept of emergence to the existence of significantly different timescales. In addition, the applicability of the above decomposition is limited to the neighbourhood of a fixed point. Under certain conditions, however, it is possible to achieve a canonical decomposition of chaotic dynamical systems into weakly coupled or decoupled subsystems even without separate timescales [13]. In addition to the above, a significant number of other approaches exist, of which we will briefly mention a few in §3.4 in relation to the information-theoretic approaches to be discussed in §3.2 and §3.3.
3 Concepts of Emergence
Among the possible formalizations of emergence, the information-theoretic ones are particularly attractive due to the universality of information theory and the power of expression, description and prediction it provides, as well as its potential to provide paths for explicitly constructing the structures of relevance (see e.g. §4).
3.1 Notation
We introduce some notation. For random variables we use capital letters such as $X, Y, Z$, for the values they assume lowercase letters such as $x, y, z$, and for the sets they take values in calligraphic letters such as $\mathcal{X}, \mathcal{Y}, \mathcal{Z}$. For simplicity of notation, we will assume that such a set $\mathcal{X}$ is finite. A random variable $X$ is determined by the probabilities $\Pr(X = x)$ assumed for all $x \in \mathcal{X}$. Similarly, joint variables $(X, Y)$ are determined via $\Pr(X = x, Y = y)$, and conditional variables via $\Pr(Y = y \mid X = x)$. If there is no danger of confusion, we will prefer writing the probabilities in the shorthand forms $p(x)$, $p(x, y)$ and $p(y|x)$ instead of the more cumbersome explicit forms above. Define the entropy of a random variable $X$ by $H(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x)$, and the conditional entropy of $Y$ given $X$ as $H(Y|X) := \sum_{x \in \mathcal{X}} p(x) H(Y|X = x)$, where $H(Y|X = x) := -\sum_{y \in \mathcal{Y}} p(y|x) \log p(y|x)$ for $x \in \mathcal{X}$. The joint entropy $H(X, Y)$ is the entropy of the random variable $(X, Y)$ with jointly distributed $X$ and $Y$. The mutual information of random variables $X$ and $Y$ is defined as $I(X; Y) := H(Y) - H(Y|X) = H(X) + H(Y) - H(X, Y)$. In analogy to (regular) maps between sets, we define a probabilistic map $X \to Y$ via a conditional $p(y|x)$. If the probabilistic map is deterministic, we call it a projection. A given probability distribution $p(x)$ on $\mathcal{X}$ induces a probability distribution $p(y)$ on $\mathcal{Y}$ via the probabilistic map $X \to Y$ in the natural way.
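These definitions translate directly into code. The following is a minimal computational sketch (ours, not part of the original formalism; Python, with distributions represented as dictionaries mapping outcomes to probabilities, and logarithms taken base 2 so that quantities are in bits):

```python
import math
from collections import Counter

def entropy(p):
    """H(X) = -sum_x p(x) log2 p(x) for a distribution given as {x: p(x)}."""
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

def mutual_information(pxy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution {(x, y): p(x, y)}."""
    px, py = Counter(), Counter()
    for (x, y), p in pxy.items():
        px[x] += p
        py[y] += p
    return entropy(px) + entropy(py) - entropy(pxy)

# A fair coin X copied perfectly to Y: I(X;Y) = H(X) = 1 bit.
joint = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(joint))  # 1.0
```

Marginals are obtained by summing the joint distribution, mirroring the definitions above; the mutual information is then computed from the three entropies rather than from the conditional form, which is numerically equivalent.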
3.2 Emergence as Improved Predictive Efficiency
Based on the epsilon-machine concept, a notion of emergence in time series has been developed in [1, 12]: a process emerges from another one if it has a greater predictive efficiency than the second. This means that the ratio between the prediction information (excess entropy) and the complexity of the predicting epsilon-machine is higher in the emerging process than in the original process. This gives a natural and precise meaning to the perspective that emergence should represent a simpler coarse-grained view of a more intricate fine-grained system dynamics. In the following, we will review the technical aspects of this idea in more detail.
For this, we generally follow the line of [12]. Consider a random variable $X$ together with some projection $X \to \hat{X}$. Then define the statistical complexity of the induced variable $\hat{X}$ as $C_\mu(\hat{X}) := H(\hat{X})$. Let random variables $X, Y$ be given, where we wish to predict $Y$ from $X$. Define an equivalence relation $\sim$ on $\mathcal{X}$ of equivalent predictiveness with respect to $Y$ via

$\forall x, x' \in \mathcal{X}: \quad x \sim x' \iff \forall y \in \mathcal{Y}: p(y|x) = p(y|x') \,. \qquad (1.1)$

The equivalence relation $\sim$ induces a partition $\hat{\mathcal{X}}$ of $\mathcal{X}$ into equivalence classes¹. Each $x \in \mathcal{X}$ is naturally a member of one of the classes $\hat{x} \in \hat{\mathcal{X}}$, and thus there is a natural projection of $X$ onto $\hat{X}$. This induces a probability distribution on $\hat{\mathcal{X}}$ and makes $\hat{X}$ a random variable, which is called a causal state. Consider now an infinite sequence $\ldots, S_{-1}, S_0, S_1, \ldots$ of random variables. Furthermore, introduce the notation $S_{[t,t']}$ for the subsequence $S_t, S_{t+1}, \ldots, S_{t'-1}, S_{t'}$, where for $t = -\infty$ one has a left-infinite subsequence and for $t' = \infty$ a right-infinite subsequence. We consider only stationary processes, i.e. processes where $p(s_{[t,\infty)}) = p(s_{[t',\infty)})$ for any $t, t'$. Then, without loss of generality, one can write $\overleftarrow{S} := S_{(-\infty,t)}$ for the past of the process and $\overrightarrow{S} := S_{[t,\infty)}$ for the future of the process, as well as $\overleftrightarrow{S}$ for the whole sequence. Following (1.1), introduce an equivalence between different pasts $\overleftarrow{s}, \overleftarrow{s}'$ in predictiveness with respect to the future $\overrightarrow{S}$ (for a detailed technical treatment of the semi-infinite sequences, see [12]). This induces a causal state $\hat{S}$. Then, in [12] it is shown that, if a realization $\hat{s}$ of the causal state induced by the past $S_{(-\infty,t)}$ is followed by a realization $s$ of $S_{t+1}$, the subsequent causal state induced by $S_{(-\infty,t+1)}$ is uniquely determined. This induces an automaton on the set of causal states, called the ε-machine. The statistical complexity $C_\mu(\hat{S})$ (the entropy of the ε-machine) measures how much memory the process stores about its past. As opposed to that, one can consider the excess entropy of the process, defined by $E = I(\overleftarrow{S}; \overrightarrow{S})$. The excess entropy effectively measures how much information the past of a process contains about the future. It can easily be shown that $E \le C_\mu(\hat{S})$. In other words, the amount the past of the process "knows" at a point about its future cannot exceed the size $C_\mu(\hat{S})$ of the internal memory of the process. Armed with these notions, in [12] the following definition of emergence is suggested: define $E / C_\mu(\hat{S}) \in [0, 1]$ as a measure of predictive efficiency, that is, how much of the internal process memory is used to actually predict what is going to happen. If we consider a derived process induced by a projection applied to each member of the sequence $\overleftrightarrow{S}$, this derived process is then called emergent if it has a higher predictive efficiency than the process it derives from. In particular, emergence is an intrinsic property of the process and does not depend on a subjective observer.

¹ Note that the partition consists of subsets of $\mathcal{X}$. However, we will use the partition itself later as a new state space; therefore the individual equivalence classes $\hat{x}$ are both subsets of $\mathcal{X}$ and states of the new state space $\hat{\mathcal{X}}$.
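The causal-state construction can be sketched computationally: group finite pasts by their conditional next-symbol distribution, estimated empirically from a sample path. The following toy implementation is our illustration, not Shalizi's reconstruction algorithm; it uses exact equality of empirical distributions, whereas practical algorithms use statistical tests, and it truncates pasts to length k:

```python
from collections import defaultdict

def causal_partition(sequence, k):
    """Group length-k pasts by their empirical next-symbol distribution.

    Pasts with identical p(next | past) fall into the same predictive
    equivalence class (a causal state, in the epsilon-machine sense).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(len(sequence) - k):
        past = tuple(sequence[t:t + k])
        counts[past][sequence[t + k]] += 1
    states = defaultdict(list)
    for past, nxt in counts.items():
        total = sum(nxt.values())
        dist = tuple(sorted((sym, c / total) for sym, c in nxt.items()))
        states[dist].append(past)
    return list(states.values())

# A counter modulo 4: each symbol determines its successor exactly,
# so every symbol forms its own causal state (4 classes).
seq = [t % 4 for t in range(1000)]
print(len(causal_partition(seq, 1)))  # 4
```

For the periodic counter every causal state is deterministic, so the statistical complexity equals the excess entropy and the predictive efficiency of the process is 1.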
3.3 Emergent Descriptions
The emergent description model developed by the author in [10] takes an approach that, while related to predictive efficiency, differs from it in some important aspects. In the emergent descriptions model, consider again a sequence $\overleftrightarrow{S}$ of random state variables. Assume that there exists a collection of $k$ probabilistic mappings, each inducing a sequence of random variables $S^{(i)}$, with $i = 1 \ldots k$, forming a decomposition of the original system $\overleftrightarrow{S}$. Then [10] defines $(S^{(1)}, \ldots, S^{(k)})$ to be an emergent description for $S$ if the decomposition $S_t^{(i)}$ fulfils three properties $\forall i = 1 \ldots k$:

1. the decomposition represents the system fully: $I(S_t; S_t^{(1)}, \ldots, S_t^{(k)}) = H(S_t)$;
2. the individual substates are independent from each other: $I(S_t^{(i)}; S_t^{(j)}) = 0$ for $i \neq j$;
3. and they are individually information-conserving through time: $I(S_t^{(i)}; S_{t+1}^{(i)}) = H(S_{t+1}^{(i)})$.

Similarly to the predictive efficiency from §3.2, the emergent description formalism considers the predictivity of a time series, measured by mutual information. However, the emergent description model only deals with a system without a past, unlike the predictive efficiency model, which uses ε-machines and thus includes full causal histories. A much more important difference is that the emergent description model explicitly considers a decomposition of the total dynamical system into individual, independent informational components. Rather than considering the system as an unstructured "bulk", this view perceives it as having an inner informational dynamics and a natural informational substructure. Similar to the emergence notion from §3.2, this substructure is not externally imposed, but rather an intrinsic property of the system. It is, however, not necessarily unique. Fig. 1(a) shows schematically the decomposition into emergent descriptions.
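As a sanity check of the three conditions, consider a system that decomposes trivially: two independent counters modulo 4 with uniform random phases. The short sketch below is our illustration (the toy system and index conventions are our assumptions, not taken from [10]); it represents the joint distribution over $(A_t, B_t, A_{t+1}, B_{t+1})$ explicitly and verifies each criterion by computing mutual informations:

```python
import math
from collections import Counter
from itertools import product

def H(p):  # Shannon entropy in bits of a distribution {outcome: prob}
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(joint, idx):
    m = Counter()
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in idx)] += p
    return m

def I(joint, a, b):  # mutual information between index groups a and b
    return H(marginal(joint, a)) + H(marginal(joint, b)) - H(marginal(joint, a + b))

# Toy system: two independent mod-4 counters with uniform random phases.
# Outcome tuple layout: (A_t, B_t, A_next, B_next).
joint = Counter()
for a0, b0 in product(range(4), range(4)):
    joint[(a0, b0, (a0 + 1) % 4, (b0 + 1) % 4)] += 1 / 16

# Criterion 1: here S_t = (A_t, B_t), so I(S_t; A_t, B_t) = H(S_t) = 4 bits.
assert abs(I(joint, (0, 1), (0, 1)) - 4.0) < 1e-9
# Criterion 2: the substates are independent: I(A_t; B_t) = 0.
assert abs(I(joint, (0,), (1,))) < 1e-9
# Criterion 3: each part predicts its own future: I(A_t; A_next) = H(A_next) = 2 bits.
assert abs(I(joint, (0,), (2,)) - 2.0) < 1e-9
print("all three emergent-description criteria hold")
```

Because the two counters never interact, each component carries exactly half of the joint entropy and conserves it through time, which is the simplest situation in which an emergent description exists exactly.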
Figure 1: (a) Schematic structure of the emergent description decomposition into independent modes. (b) Automaton found in the multiobjective GA search. The two groups of states belong to the two components of the emergent description, and the arrows indicate the stochastic transitions, where darker arrows indicate higher transition probabilities.

The emergent description model has the advantage that it can be explicitly constructed due to its quantitative characterisation (§4). This is a considerable advantage over more conceptual, abstract models (such as, e.g., the category-theoretic approach mentioned in §2).
3.4 Other Related Approaches
Two further related approaches should be mentioned. The authors of [9] suggest emergence as higher-level prediction models for partial aspects of a system, based on entropy measures. This model can be viewed as a simplified version both of the predictive efficiency and of the emergent description model. Compared with Crutchfield/Shalizi's predictive efficiency, it does not consider causal states, and compared with our emergent description model, it does not consider a full decomposition into independent information modes. As opposed to that, [7] constructs a decomposition into dynamical hierarchies based on smooth dynamical systems. This model is close in philosophy to the emergent descriptions approach, except that it is not based on information theory.
4 Constructing Emergent Descriptions
The quantitative character of the emergent description model provides an approach to construct (at least in principle) an emergent description (or at least an approximation) for a given system. Consider a system with 16 states, starting with a uniformly distributed random state and with the deterministic evolution rule $s_{t+1} := s_t + 1 \bmod 16$, i.e. it acts as a counter modulo 16. We attempt to find an emergent description of the system into 2 subsystems of size 4 (these values have been chosen manually), applying a multiobjective Genetic Algorithm (GA) [2] to find projections that maximize system representation (criterion 1) and individual system prediction (criterion 3). With the given parameters, the optimization implicitly also optimizes criterion 2. The multiobjective optimization fully achieves criterion 1 and comes close to maximizing criterion 3². The search is far from fully optimal and can easily be improved upon. However, it provides a proof of principle and demonstrates several issues of relevance that are discussed below. The dynamics of one of the emergent descriptions found is shown in Fig. 1(b). The left automaton, had the GA been fully successful, would have shown a perfect 4-cycle, i.e. a counter modulo 4, with only deterministic transitions; the failure of the GA to find this solution is due to the deceptiveness of the problem. However, the right counter, the lower-level counter, can never be fully deterministic according to the model from §3.3. As in a decadic counter, perfectly predicting the transition to the next state in the right counter would ideally depend on the carry from the left counter; but the independence criterion does not allow the right counter to "peek" into the left, thus always forcing a residual stochasticity. It turns out that this observation has a highly relevant relation to the algebraic Krohn-Rhodes semigroup decomposition [3].
Here, it turns out that the most general decomposition of a semigroup has a particular hierarchical structure: it comprises a high-level substructure (a group or a flip-flop) which does not depend on anything else, and then a successive hierarchy of substructures, each of which may depend on all the structures above it; the simplest example is illustrated by a counter such as the one above³. To incorporate this insight into the emergent description model, one could modify the conditions from §3.3 to respect the possible Krohn-Rhodes structure of the system. Schematically, this would correspond to a decomposition of the kind shown in Fig. 2(a)⁴.

² In fact, the GA fails to find the best solution since the problem is GA-deceptive.
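The carry argument can be made quantitative with an arithmetic analogue. The sketch below is ours, not the GA experiment of the paper: we split the state $s$ into two 2-bit digits by hand ($s = 4 \cdot \mathrm{hi} + \mathrm{lo}$), so the deterministic digit here is the low-order one, without claiming a correspondence to the left/right automata of Fig. 1(b). It computes the conditional entropy of each component's next state given only that component's own current state:

```python
import math
from collections import Counter

def cond_entropy(pairs):
    """H(next | now) in bits from a Counter over (now, next) pairs."""
    total = sum(pairs.values())
    now = Counter()
    for (a, _), c in pairs.items():
        now[a] += c
    return sum(c / total * -math.log2((c / total) / (now[a] / total))
               for (a, _), c in pairs.items())

# Mod-16 counter split into two mod-4 digits: s = 4*hi + lo.
# lo is itself a deterministic mod-4 counter; hi advances only on a carry
# from lo, which an independence-respecting component is not allowed to see.
lo_pairs, hi_pairs = Counter(), Counter()
for s in range(16):  # uniform initial state
    s_next = (s + 1) % 16
    lo_pairs[(s % 4, s_next % 4)] += 1
    hi_pairs[(s // 4, s_next // 4)] += 1

print(cond_entropy(lo_pairs))  # deterministic: 0 bits
print(cond_entropy(hi_pairs))  # residual stochasticity: about 0.81 bits
```

Given only its own digit, the carry-dependent component stays put with probability 3/4 and advances with probability 1/4, so its transitions retain $2 - \frac{3}{4}\log_2 3 \approx 0.81$ bits of irreducible uncertainty, exactly the residual stochasticity forced by the independence criterion.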
Figure 2: (a) Emergent description with hierarchical dependence of states, similar to a Krohn-Rhodes decomposition. (b) Emergent description with state histories.

In the present model system there is, however, a way to recover the independence of modes and maintain an optimal predictiveness. Note that in the emergent description model we completely banished state history. If we readopt it, similar in spirit to component-wise ε-machines, then the components can count individually whether a carry is required or not. The idea is schematically represented in Fig. 2(b).
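A minimal sketch of this idea (ours, not from the paper; it assumes the component's internal phase starts aligned with the global counter): give the carry-dependent digit a private mod-4 step counter as its "history memory", which restores a fully deterministic update without any access to the other component.

```python
# If the high-order component keeps its own step-count memory (its "history"),
# it can predict its carries without peeking at the low-order component.
def hi_with_memory(hi, phase):
    """Deterministic update of the high digit given an internal mod-4 phase."""
    new_hi = (hi + 1) % 4 if phase == 3 else hi
    return new_hi, (phase + 1) % 4

# Verify against the true mod-16 counter, assuming aligned initial phases.
s, hi, phase = 0, 0, 0
for _ in range(64):
    s = (s + 1) % 16
    hi, phase = hi_with_memory(hi, phase)
    assert hi == s // 4
print("memory-augmented component tracks the counter deterministically")
```

The price of restoring determinism is internal memory in each component, which is precisely the trade-off between temporal and compositional resources discussed below.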
4.1 Discussion
We have contrasted the ε-machine approach to characterizing emergence with that of the emergent descriptions. The approaches are in many respects orthogonal: the ε-machine creates a relation between the two full half-axes of the temporal coordinate without any decomposition of the states themselves, while the emergent description approach limits itself to a single time slice, however suitably decomposing the state into independent modes. This approach has, however, been shown to lose some predictivity even in the very simple counter scenario. As a remedy one can introduce either a hierarchical form of emergent descriptions, inspired by the Krohn-Rhodes decomposition, or else aim for an ε-machine-like history memory for the individual modes, which is a kind of marriage of the emergent description and the ε-machine models. In particular, this observation suggests the hypothesis that it might be possible to formulate a trade-off: on the one hand, the memory requirements that a "serial" computation model such as the ε-machine needs to compute the future from the past; on the other hand, the information-processing resources required by a "parallel" computation model such as the hierarchical emergent descriptions, which involves combining information from different components of the decomposition to compute a component's future. It is quite possible that universal trade-offs may exist here, offering the possibility for resource optimization and also for studying generalized forms of ε-machines where computational resources can be shifted more or less freely between temporal and compositional degrees of freedom.

³ It also bears some algebraic relation to the Jacobian decomposition discussed in [7].
⁴ It is evident how to formalize this diagram in the spirit of §3.3.
5 Agenthood
As a final comment, it should be mentioned that in [8] it has been shown that the perception-action loop of an agent acting in an environment can be modeled in the language of information. This is particularly interesting for the above considerations, as the agent/environment system is a generalization of a time series (a time series can be considered an agent without the ability to select an action, i.e. without the capacity for "free will"). Using infomax principles, the above agent/environment system can be shown to structure the information flows into partly decomposable information flows, a process that can be interpreted as a form of concept formation. This gives a new interpretation of the importance of emergence as the archetypical mechanism that allows the formation of concepts in intelligent agents, and may thus provide a key driver of the creation of complexity in living systems.
Bibliography

[1] J. P. Crutchfield. The calculi of emergence: Computation, dynamics, and induction. Physica D, 75:11-54, 1994.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6:182-197, 2002.
[3] A. Egri-Nagy and C. L. Nehaniv. Making sense of the sensory data: coordinate systems by hierarchical decomposition. In Proc. KBS 2006, 2006.
[4] H. Haken. Advanced Synergetics. Springer-Verlag, Berlin, 1983.
[5] H. Haken. Information and Self-Organization. Springer Series in Synergetics. Springer, 2000.
[6] I. Harvey. The 3 Es of artificial life: Emergence, embodiment and evolution. Invited talk at Artificial Life VII, Portland, August 2000.
[7] M. N. Jacobi. Hierarchical organization in smooth dynamical systems. Artificial Life, 11(4):493-512, 2005.
[8] A. S. Klyubin, D. Polani, and C. L. Nehaniv. Organization of the information flow in the perception-action loop of evolved agents. In Proceedings of 2004 NASA/DoD Conference on Evolvable Hardware, pages 177-180. IEEE Computer Society, 2004.
[9] S. McGregor and C. Fernando. Levels of description: A novel approach to dynamical hierarchies. Artificial Life, 11(4):459-472, 2005.
[10] D. Polani. Defining emergent descriptions by information preservation. In Proc. of the International Conference on Complex Systems. NECSI, 2004. Long abstract; full paper under review in InterJournal.
[11] S. Rasmussen, N. Baas, B. Mayer, M. Nilsson, and M. W. Olesen. Ansatz for dynamical hierarchies. Artificial Life, 7:329-353, 2001.
[12] C. R. Shalizi. Causal Architecture, Complexity and Self-Organization in Time Series and Cellular Automata. PhD thesis, University of Wisconsin-Madison, 2001.
[13] S. Winter. Zerlegung von gekoppelten Dynamischen Systemen (Decomposition of Coupled Dynamical Systems). Diploma thesis, Johannes Gutenberg-Universität Mainz, 1996. (In German.)
Chapter 2
How Deep and Broad are the Laws of Emergence?

Susan Sgorbati and Bruce Weber, Bennington College
Abstract Bruce Weber, evolutionary biologist, and Susan Sgorbati, choreographer, have been in a dialogue for the last several years, asking whether there are deep structuring principles that cross disciplines. While both were professors at Bennington College, they developed a series of courses that explored these structuring principles in complex systems. Ideas such as self-organization, emergence, improvisation, and complexity were investigated through the lens of different disciplines and modes of perception. The inquiry was both intellectually and experientially driven. Students were asked to research and write papers, as well as move in the dance studio. Experiments in the studio led Susan Sgorbati to develop research that subsequently resulted in a national tour with professional dancers and musicians, who are participating in a performance as part of this conference.
In this paper we will define concepts we have been using in our work and teaching, focusing on resonances between the different modalities. How to discern when organizing principles have relationships in common, and when they are specific to their systems, seems an important distinction and line of inquiry that could have important implications for analyzing complex systems in a wide range of different environments, from science to art to public policy.
Introduction: Providing a Historical Context

Starting in 1999, Bruce Weber, a biochemist interested in how emergent and self-organizing phenomena in complex chemical and biological systems affect our ideas of the origin and evolution of life, entered into a collaboration with Susan Sgorbati, a dancer interested in emergent improvisation, a form she developed as a set of structuring principles for dance and music. They were able to collaborate in both
teaching and research/creative work over a period of years at Bennington College, in an environment that fostered such interaction. We began by teaching a course on the emergence of embodied mind, based upon reading the scientific writings on consciousness and embodiment of Gerald Edelman, Nobel Laureate and Director of The Neurosciences Institute in La Jolla, who has visited the Bennington campus. Exploring the biological basis of consciousness brought us not only to utilize the conceptual resources of complex systems dynamics (theories of self-organization, emergence and the application of various computational models) but also to devise experiential work for the students involving perception, movement, improvisation, and the contrast of objective and subjective awareness. In addition to continuing this class over several years, we also taught classes on more general aspects of emergent complexity, where we drew heavily on the work of Stuart Kauffman of the Santa Fe Institute and the University of Calgary, who also spent time on the campus. We looked for similar patterns that arose in different types of systems across a wide range of phenomena: physical, chemical, biological, cultural, and aesthetic. We ranged widely over such different subjects in order to ascertain whether there might be a more general paradigm of emergent complexity beginning to affect our culture, as suggested by Mark Taylor in his recent The Moment of Complexity (Taylor 2001). In the experientials that Susan developed for students in our classes, we studied the role of limited, close-range interactions vs. longer-range, global interactions, and also the correlation of constraints and selective factors with the likelihood of observable, global, aesthetic structures emerging. It was interesting to have the students report their subjective experiences during the process of emergence, something about which molecules are mute.
For Susan, the language and concepts of complex systems dynamics in general, and the specific ideas of Edelman and Kauffman in particular, provided a context for discussing emergent improvisational movement forms. Her creative exploration of this science/dance interface has intrigued colleagues at The Neurosciences Institute, where she has been in residence for several weeks in the last four years. The Jerome Robbins Foundation, The Bumper Foundation, The Flynn Center for the Performing Arts and The National Performance Network Creation Fund (The Creation Fund is sponsored by the Doris Duke Charitable Foundation, Ford Foundation, Altria, and the National Endowment for the Arts, a federal agency) have all supported her research.
Defining Key Concepts

We are interested in higher-order structures in complex systems that reveal themselves in scientific and aesthetic observations. The scheme that we explored was based upon the following type of pattern unfolding over time:

individuals → self-organization → ensemble → emergence → complex system

We explored such a sequence in particular systems, such as the BZ reaction, Bénard cells, self-organization in slime molds, and Kauffman's various NK models, where N represents the number of constituents in a system and K the number of ways such constituents are related to each other. In the physical, chemical, and biological systems studied, we saw that self-organization (SO, or perhaps more perspicuously system-organization) and self-structuring can occur spontaneously when a system is held far from equilibrium by flows of matter/energy gradients and the system has
mechanisms for tapping such gradients (Peacocke 1983; Wicken 1987; Casti 1994; Schneider and Sagan 2005). The structures resulting from such SO processes involve an interplay of selective and self-organizing principles, from which higher-order structures can emerge that can constrain activities at the lower levels and allow the completion of at least one thermodynamic work cycle (Kauffman 1993, 1995, 2000; Depew and Weber 1995; Weber and Depew 1996; Weber and Deacon 2000; Deacon 2003; Clayton 2004). Such emergent systems can, under special circumstances, display agency, in that they select activities and/or behaviors that allow them to find gradients and extract work from them (Kauffman 2000). Sufficiently complex chemical systems with agency, boundaries and some form of "molecular memory" show many of the traits of living systems and give clues to the possible emergence of life (Kauffman 2000; Weber 1998, 2000, 2007). Further, Edelman's theory of neuronal group selection similarly invokes an interplay of selective and self-organizational principles giving rise to the emergence of consciousness (Edelman 1987; Edelman and Tononi 2000; Weber 2003). In Edelman's model of how consciousness emerges, there is a central role both for complexity and for a process of reentry that can give rise to coherent neuronal activity in a "dynamic core" (Tononi and Edelman 1998). While exploring these concepts, Susan developed experientials to help students understand the issues through an alternative modality to the experimental and mathematical. This alternative modality is based on the aesthetic idea that important concepts such as agency, movement, embeddedness, memory, topology, and complexity arise in dancers and musicians in an improvisational system. Trying out a series of experiments with students, and then with professional dancers and musicians, based on simple rules and constraints, certain key concepts were formulated as a result of observations.
They are:

1) agency: Individual dancers and musicians exhibit agency, or in this context the choice to move or to create sound. An essential aspect of this agency is the sensation of being "embodied".

2) movement: In this context, movement is the energy force driving the self-organizing system, creating the individual actions, the local interactions, and the global ensemble patterns. Movement is key, as the system would be static without it. Movement is an essential component in any kind of structuring process.

3) embeddedness: The elements of this particular system contain constraints and boundaries in a particular environment. The global behavior is integral to the environment and will alter with any changes in the constraints. Time and space are essential components and will dictate the nature of structuring.

4) memory: Structuring is an act of learning by the elements that are building the shape and patterns. Learning involves memory, reconstructing past experience into present thinking and action. This learning is essentially selectional, choosing certain patterns over others. Edelman speaks of "degeneracy", or many different ways, not necessarily structurally identical, by which a particular output occurs (Edelman and Tononi 2000, 86). The ability to recreate patterns to refine structuring processes increasingly depends on degenerate pathways to find more adaptable solutions to build onto forms.

5) topology: In this way of structuring, a 'metatopology' occurs where the system has the ability to operate on all levels at once (Sgorbati 2006, 209). Scale and amplification are important. According to Terrence Deacon, a topology is "a
constitutive fact about the spatial-temporal relationships among component elements and interactions with intrinsic causal consequences" (Deacon 2003, 282). Three levels of interaction exist at once: the local neighbor interaction, the small-group ensemble locally, and the global collective behavior.

6) complexity: Dynamic compositional structures among dancers and musicians arise when simple rules are followed through improvisation based on certain constraints in the environment. This leads us to speculate that there are three interactive levels of analysis of these complex structures: a systems approach (evolutionary biology), a developmental approach (morphology), and a psychological approach (meaning) as ways of observing structuring principles (Susan Borden, personal communication with Sgorbati, 2006). Complex systems dynamics gives us a language with which to consider and discuss our experiences and the emergence of new aesthetic forms.
Research in Emergent Improvisation - An Aesthetic Idea

The Emergent Improvisation Project is a research project into the nature of improvisation in dance and music. In this context, improvisation is understood to mean the spontaneous creation of integrated sound and movement by performers who are adapting to internal and external stimuli, impulses and interactions. Ordinarily, we think of order and form as externally imposed, composed or directed. In this case, however, new kinds of order emerge, not because they are preconceived or designed, but because they are the products of dynamic, self-organizing systems operating in open-ended environments. This phenomenon, the creation of order from a rich array of self-organizing interactions, is found not only in dance and music but also, as it turns out, in a wide variety of natural settings, when a range of initial conditions gives rise to collective behavior that is both different from and more than the sum of its parts. Like certain art forms, evolution, for example, is decidedly improvisational and emergent, as is the brain function that lies at the heart of what it is to be human. Emergent forms appear in complex, interconnected systems, where there is enough order and interaction to create recognizable pattern, but where the form is open-ended enough to continuously bring in new differentiations and integrations that influence and modify the form. It is by way of these interactions that particular pathways for the development of new material are selected.
In linking the creative work of art-making to the emergent processes evident in nature, there is a basis for a rich and textured inquiry into how systems come together, transform and reassemble to create powerful instruments of communication, meaning and exchange. This project explores the ways in which natural processes underlie artistic expression, along with the possibility that art can help illuminate natural processes. Conversations with scientists, particularly Bruce Weber at Bennington College; Gerald Edelman, Anil Seth, and John Iverson of The Neurosciences Institute; and Stuart Kauffman of The University of Calgary, have introduced Susan to the idea that, in living systems, self-organization produces complex structures that emerge dynamically. This idea resonated with her own work in improvisation and led us to
speculate that there are deep structuring principles that underlie a vast range of phenomena, producing similar evolving patterns in different environments: dancers collecting, birds flocking, visual representations of neuronal networks.
New Forms in Emergent Improvisation

Movement appears to be a fundamental component of all living processes, and we, as dancers, are moving and experiencing our own emergent sense of organization in this process (Sheets-Johnstone 1999). Working in this way with our students led Susan to observe and develop structuring principles for two emergent forms: complex unison and memory. The Complex Unison Form is based on the observation of natural systems, which exhibit self-organizing structuring principles. In this form, open-ended processes are constantly adapting to new information, integrating new structures that emerge and dissolve over time. Complex Unison reveals the progression from closely following groups of individuals in space, to the unified sharing of similar material, and finally to the interplay of that material, which has both a degree of integration and variation, often displaying endlessly adaptive and complex behavior. In the Memory Form, the dancers and musicians create an event that is remembered by the ensemble and then reconstructed over time, revealing memory as a complex structuring process. In this process the dancers and musicians investigate multiple interpretations that draw on signals that organize and carry meaning. In this way, memory of the initial event is a fluid, open-ended process in which the performers are continuously relating past information to present thinking and action. This reintegration of past into present draws on repetition, nonlinear sequencing, and emergence to construct new adaptations. The Memory Form was inspired by Gerald Edelman's concept of "the remembered present".
Notes Toward The Definition of a General Theory of Emergence
Entering into this discussion of a general theory of emergence feels like walking through a minefield. The dangers of generalities, of vague assumptions, of philosophizing about abstractions are everywhere. Artists and scientists have their own languages that describe the concept of emergence. Do the movement patterns of flocks of birds, schools of fish, neuronal networks, and ensembles of dancers and musicians have anything in common? Does our dialogue have something to contribute to our own communities as well as the culture at large? Yaneer Bar-Yam, in his book Dynamics of Complex Systems, states, "Thus, all scientific endeavor is based, to a greater or lesser degree, on the existence of universality, which manifests itself in diverse ways" (Bar-Yam 1997, 1). This suggests that there might be universal principles contained in the concept of emergence that might shed light on structuring principles for many disciplines. Let us make perfectly clear that we are not interested in comparing apples to oranges. Dancers are not molecules. However, unlike molecules, dancers and musicians can relate their subjective experience during the process of emergent complexity. They are aware of what signals are effective in self-organizing structuring processes, and can reflect on multi-level attention spans that participate in these topological structuring processes. From our dialogues in the last several years as well as our
work with students, we believe conversations between artists and scientists about emergence are important, and that a general theory may be possible. It is not simple to define emergence from a scientific or an aesthetic point of view, and clearly harder to encompass both perspectives. One definition is from Terrence Deacon, who in his essay "The Hierarchic Logic of Emergence" states that "Complex dynamical ensembles can spontaneously assume ordered patterns of behavior that are not prefigured in the properties of their component elements or in their interaction patterns" (Deacon 2003, 274). Artists experience their own sense of emergence. Gerald Edelman describes some of the basis for this experience. In his essay "The Wordless Metaphor: Visual Art and The Brain" he states, based upon current theoretical models and experiments, "Because it has no instructional program, but works by selection upon variation, the brain of a conscious animal must relate perception to feeling and value, whether inherited or acquired. These are the constraints -feeling and value- that give direction to selection within the body and brain" (Edelman 1995, 40). Edelman then describes how this complex process of continual recategorization of experience and movement of the body has links to motor features of artistic expression, which we relate to as 'memory'. "The notion of bodily-based metaphor as a source of symbolic expression fits selectionist notions of brain function to a T. As Gombrich has put it, the artist must make in order to match" (Edelman 1995, 41). He concludes the essay by stating, "I hope that artists will be pleased to hear that the process of selection from vast and diverse neural repertoires, giving each of their brains a unique shape, may be a key to what they have already discovered and expressed in their creative work.
The promise of this idea is its ability to account for the individuality of our responses, for the coexistence of logic and ambiguity as expressed in metaphor, and for the actual origins of the silent bodily-based metaphors that underlie artistic expression. When scientific verifications and extensions of these notions occur, we will have a deeper understanding of how artistic expression, in an enduring silence of wordless metaphors, often historically precedes explicit linguistically expressed ideas and propositions. Art will then have a sounder and more expansive link to scientific ideas of our place in nature" (Edelman 1995, 43-47). Whether one is looking at flocks of birds, ensembles of dancers or neuronal networks, certain questions, appropriately framed for the particular instance, seem pertinent. Questions of structure are of extreme importance across disciplines. While humans will always interact from a psychological framework unlike other living systems, all systems appear to need structuring in order to survive. Complex structuring is particularly challenging because of new ways of looking at nonlinear sequencing, communication across distances with spatio-temporal and kinesthetic signaling, analysis of particular constraints within a context, and new investigations into morphological concepts. In this general theory of emergence, movement and structuring principles are key elements. Robert Laughlin (1998 Nobel Prize in Physics) has written, "Nature is regulated not only by a microscopic rule base but by powerful and general principles of organization. Some of these principles are known, but the vast majority are not" (Laughlin 2005, xiv). If the vast majority of principles of organization are not known, it is possible that they remain to be discovered on all levels, scientific as well as artistic.
These structuring principles might be organized in levels of interactive analysis, such that, as in complex systems, we need to see the whole picture at once as well as the individual levels. These levels include, first, the systems approach, where much research is occurring; second, the developmental or morphological approach, where much research has occurred in relation to the development of organisms, but not much related to structuring principles; and third, the psychological approach, where the structuring of meaning and metaphor is integral to emergence and complexity, and can be directly related to social systems and artistic expression (Borden, personal communication with Sgorbati 2006). Thus, in conclusion, we observe some common themes across scientific and artistic disciplines on emergence: It is a property that arises out of self-organizing ensembles. Movement is an essential component of the self-organization. Constraints are necessary, as are boundaries of time and space. Structuring principles dictate the type and nature of the emergence. They are found in a unique ordering that is a relationship between integration and differentiation. In our case, scientists and artists have begun a real conversation about a particular resonance to emergent structures across these disciplines. This theory suggests that living complex dynamical systems may share some unified experiences, while making rigorous distinctions remains critical (for example, molecular interactions are not sentient the way interactions among dancers are). As Edelman suggests in connecting pattern recognition, selection and creativity, it may be that all living systems move toward creative ways to structure themselves in their environment based on a higher degree of adaptability. What may seem destructive to one group may seem perfectly ordered and coherent to another. For the sake of this discussion, rather than put a judgment on order or disorder, it might behoove us to observe and describe the structuring principles we see around us in order to best understand them, to recognize them, and then to determine their efficacy or destructive power.
We might then be able to determine which structures work best within certain constraints, the length of their life spans, and how much learned information is necessary for agents to participate in building them, and gain a deeper appreciation for the beauty in patterns around us. We conclude with a quote from Stuart Kauffman's At Home in the Universe: "The emerging sciences of complexity begin to suggest that the order is not all accidental, that vast veins of spontaneous order lie at hand. Laws of complexity spontaneously generate much of the order of the natural world. It is only then that selection comes into play, further molding and refining ... How does selection work on systems that already generate spontaneous order? ... Life and its evolution have always depended on the mutual embrace of spontaneous order and selection's crafting of that order. We need to paint a new picture" (Kauffman 1995, 8-9). We look forward to continuing our exploration into these matters and to encouraging artists and scientists to engage in this fruitful dialogue.
Bibliography
Bar-Yam, Y. (1997), Dynamics of Complex Systems, Reading, MA: Addison-Wesley.
Casti, J.L. (1994), Complexification: Explaining a Paradoxical World Through the Science of Surprise, New York: HarperCollins.
Clayton, P. (2004), Mind & Emergence: From Quantum to Consciousness, Oxford: Oxford University Press.
Deacon, T.W. (2003), The hierarchic logic of emergence: Untangling the interdependence of evolution and self-organization, in Evolution and Learning: The Baldwin Effect Reconsidered, Cambridge, MA: MIT Press, pp. 273-308.
Depew, D.J. and B.H. Weber (1995), Darwinism Evolving: Systems Dynamics and the Genealogy of Natural Selection, Cambridge, MA: MIT Press.
Edelman, G.M. (1987), Neural Darwinism: The Theory of Neuronal Group Selection, New York: Basic Books.
Edelman, G.M. (1995), The wordless metaphor: Visual art and the brain, in 1995 Biennial Exhibition Catalogue of the Whitney Museum of American Art, New York: Abrams.
Edelman, G.M. and G. Tononi (2000), A Universe of Consciousness: How Matter Becomes Imagination, New York: Basic Books.
Kauffman, S.A. (1993), The Origins of Order: Self-Organization and Selection in Evolution, New York: Oxford University Press.
Kauffman, S.A. (1995), At Home in the Universe: The Search for the Laws of Self-Organization and Complexity, New York: Oxford University Press.
Kauffman, S.A. (2000), Investigations, New York: Oxford University Press.
Laughlin, R.B. (2005), A Different Universe: Reinventing Physics from the Bottom Down, New York: Basic Books.
Peacocke, A.R. (1983), An Introduction to the Physical Chemistry of Biological Organization, Oxford: Oxford University Press.
Schneider, E.D. and D. Sagan (2005), Into the Cool: Energy Flow, Thermodynamics and Life, Chicago: University of Chicago Press.
Sgorbati, S. (2006), Scientifiquement Danse: Quand la Danse Puise aux Sciences et Réciproquement, Bruxelles: Contredanse.
Sheets-Johnstone, M. (1999), The Primacy of Movement, Amsterdam: Benjamins.
Taylor, M.C. (2001), The Moment of Complexity: Emerging Network Culture, Chicago: University of Chicago Press.
Tononi, G. and G.M. Edelman (1998), Consciousness and complexity, Science 282: 1846-1851.
Weber, B.H. (1998), Emergence of life and biological selection from the perspective of complex systems dynamics, in Evolutionary Systems, G. van de Vijver, S. Salthe, and M. Delpos (eds), Dordrecht: Kluwer.
Weber, B.H. (2000), Closure in the emergence of life, in Closure: Emergent Organizations and Their Dynamics, J.L.R. Chandler and G. van de Vijver (eds), Annals of the New York Academy of Sciences, 501: 132-138.
Weber, B.H. (2003), Emergence of mind and the Baldwin effect, in Evolution and Learning: The Baldwin Effect Reconsidered, Cambridge, MA: MIT Press, pp. 309-326.
Weber, B.H. (2007), Emergence of life, Zygon 42: 837-856.
Weber, B.H. and T.W. Deacon (2000), Thermodynamic cycles, developmental systems, and emergence, Cybernetics and Human Knowing 7: 21-43.
Weber, B.H. and D.J. Depew (1996), Natural selection and self-organization: Dynamical models as clues to a new evolutionary synthesis, Biology and Philosophy 11: 33-65.
Wicken, J.S. (1987), Evolution, Information and Thermodynamics: Extending the Darwinian Program, New York: Oxford University Press.
Chapter 3
On an irreducible theory of complex systems Victor Korotkikh and Galina Korotkikh Faculty of Business and Informatics Central Queensland University Mackay, Queensland, 4740 Australia [email protected], [email protected] u.a u
1 Introduction
Complex systems profoundly change human activities of the day and may be of strategic interest. As a result, it becomes increasingly important to have confidence in the theory of complex systems. Ultimately, this calls for clear explanations of why the foundations of the theory are valid in the first place. The ideal situation would be to have an irreducible theory of complex systems, not requiring a deeper explanatory base in principle. But the question arises: where could such a theory come from, when even the concept of spacetime is questioned as a fundamental entity? As a possible answer it is suggested that the concept of integers may take responsibility in the search for an irreducible theory of complex systems [1]. It is shown that self-organization processes of prime integer relations can describe complex systems through the unity of two equivalent forms, i.e., arithmetical and geometrical [1], [2]. Significantly, based on the integers and controlled by arithmetic only, such processes can describe complex systems by information not requiring further simplification. This raises the possibility to develop an irreducible theory of complex systems. In this paper we present results to progress in this direction.
2 Nonlocal Correlations and Statistical Information about Parts of a Complex System
As we consider the correlations between the parts preserving certain quantities of the system, self-organization processes of prime integer relations can be revealed [1], [2].
Let I be an integer alphabet and I^N = {x = x_1...x_N, x_i ∈ I, i = 1, ..., N} be the set of sequences of length N ≥ 2. We consider N elementary parts {P_i, i = 1, ..., N} with the state of P_i in its local reference frame given by a space coordinate x_i ∈ I, i = 1, ..., N, and the state of the elementary parts by a sequence x = x_1...x_N ∈ I^N. A local reference frame of an elementary part P_i is specified by two parameters ε_i > 0 and δ_i > 0, i = 1, ..., N. The parameters are required to be the same, ε = ε_i, δ = δ_i, i = 1, ..., N, i.e., constants, and can not be changed unless simultaneously. It is proved [1] that C(x, x') ≥ 1 of the quantities of a complex system remain invariant, if and only if the correlations between the parts can be defined by a system of C(x, x') Diophantine equations

(m+N)^{C(x,x')-1} Δx_1 + (m+N-1)^{C(x,x')-1} Δx_2 + ... + (m+1)^{C(x,x')-1} Δx_N = 0
...
(m+N)^1 Δx_1 + (m+N-1)^1 Δx_2 + ... + (m+1)^1 Δx_N = 0
(m+N)^0 Δx_1 + (m+N-1)^0 Δx_2 + ... + (m+1)^0 Δx_N = 0     (1)

and an inequality

(m+N)^{C(x,x')} Δx_1 + (m+N-1)^{C(x,x')} Δx_2 + ... + (m+1)^{C(x,x')} Δx_N ≠ 0,

where {Δx_i = x'_i - x_i, x'_i, x_i ∈ I, i = 1, ..., N} are the changes of the elementary parts {P_i, i = 1, ..., N} between the states x' = x'_1...x'_N, x = x_1...x_N, and m is an integer. The coefficients of the system become the entries of the Vandermonde matrix, when the number of the equations is N. This fact is important in order to prove that C(x, x') < N [1]. The equations (1) present a special type of correlations that have no reference to the distances between the parts, local times and physical signals. Thus, according to the description, parts of a complex system may be far apart in space and time and yet remain interconnected with instantaneous effect on each other, but no signaling. The space and non-signaling aspects of the correlations are familiar properties of quantum correlations [3]. The time aspect of the nonlocal correlations suggests an interesting perspective. For the observable Δx_i of an elementary part P_i, i = 1, ..., N, the solutions to the equations (1) may define a set of different possible values. Since there is no mechanism specifying which of them is going to take place, an intrinsic uncertainty about the elementary part exists. At the same time, the solutions can be used to evaluate the probability of the observable Δx_i, i = 1, ..., N, to take each of the measurement outcomes. Thus, the description provides the statistical information about a complex system.
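For intuition, the system (1) can be checked directly on the example used later in Figure 1 (N = 8, m = 0, and the changes Δx_i given by the first eight signs of the Prouhet-Thue-Morse sequence). The following sketch is ours, not from [1]:

```python
# Sketch (ours, not from [1]): evaluate the left-hand sides of system (1)
# and its inequality for N = 8, m = 0 and the PTM changes of Figure 1.

def power_sum(dx, m, k):
    """(m+N)^k dx_1 + (m+N-1)^k dx_2 + ... + (m+1)^k dx_N."""
    N = len(dx)
    return sum((m + N - i) ** k * d for i, d in enumerate(dx))

dx = [+1, -1, -1, +1, -1, +1, +1, -1]   # first eight PTM signs
m = 0
# The equations hold for exponents 0, 1 and 2, so C(x, x') = 3:
assert all(power_sum(dx, m, k) == 0 for k in (0, 1, 2))
# The inequality holds at exponent C(x, x') = 3:
assert power_sum(dx, m, 3) != 0
```

The nonzero value at exponent 3 is exactly the relation +8^3 - 7^3 - 6^3 + 5^3 - 4^3 + 3^3 + 2^3 - 1^3 ≠ 0 quoted in the caption of Figure 1.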
3 Self-Organization Processes of Prime Integer Relations and their Geometrization
Through the Diophantine equations (1) integer relations can be found. Their analysis reveals hierarchical structures, which can be interpreted as a result of self-organization processes of prime integer relations [1], [2].
Figure 1: The left side shows one of the hierarchical structures of prime integer relations, when a complex system has N = 8 elementary parts {P_i, i = 1, ..., 8}, x = 00000000, x' = +1-1-1+1-1+1+1-1, m = 0 and C(x, x') = 3. The hierarchical structure is built by a self-organization process and determines a correlation structure of the complex system. The process is fully controlled by arithmetic. It can not progress to level 4, because arithmetic determines that +8^3 - 7^3 - 6^3 + 5^3 - 4^3 + 3^3 + 2^3 - 1^3 ≠ 0. The right side presents an isomorphic hierarchical structure of geometrical patterns determining the dynamics of the system. On scale level 0 eight rectangles specify the dynamics of the elementary parts {P_i, i = 1, ..., 8}. Under the integration of the function f^[k] the geometrical patterns of the parts at the level k form the geometrical patterns of the parts at the higher level k + 1, where f^[0] = f and k = 0, 1, 2. Through the integrations arithmetic defines how the geometrical patterns must be curved to determine the spacetime dynamics of the parts. All geometrical patterns are symmetrical and their symmetries are all interconnected. The symmetry of a geometrical pattern is global and belongs to a corresponding part as a whole.
Starting with integers as the elementary building blocks and following a single principle, such a self-organization process makes up from the prime integer relations of a level of a hierarchical structure the prime integer relations of the higher level (Figure 1). Notably, a prime integer relation is made as an inseparable object: it ceases to exist upon removal of any of its formation components. By using the integer code series [4] the prime integer relations can be equivalently geometrized as two-dimensional patterns, and the self-organization processes can be isomorphically expressed by certain transformations of the patterns [1], [2]. As it becomes possible to measure a prime integer relation by a corresponding geometrical pattern, quantities of a system the prime integer relation describes can be defined by quantities of the geometrical pattern, such as the area and the length of its boundary curve (Figure 1). In general, the quantitative description of a complex system can be reduced to equations characterizing properties and quantities of corresponding two-dimensional patterns. Due to the isomorphism, in our description the structure and the dynamics of a complex system are united [1], [2]. The dynamics of the parts are determined to produce precisely the geometrical patterns of the system so that the corresponding prime integer relations can be in place to provide the correlation structure of the whole system. If the dynamics of the parts are not fine-tuned, then some of the relationships are not in place and the system falls apart.
4 Optimality Condition of Complex Systems and Optimal Quantum Algorithms
Despite their different origins, complex systems have much in common and are investigated to satisfy universal laws. Our description points out that the universal laws may originate not from forces in spacetime, but through arithmetic. There are many notions of complexity introduced in the search to communicate the universal laws into theory and practice. The concept of structural complexity is defined to measure the complexity of a system in terms of self-organization processes of prime integer relations [1]. In particular, as self-organization processes of prime integer relations progress from a level to the higher level, the system becomes more complex, because its parts at the level are combined to make up more complex parts at the higher level. Therefore, the higher the level self-organization processes progress to, the greater is the structural complexity of a corresponding complex system. Existing concepts of complexity do not in general explain how the performance of a complex system may depend on its complexity. To address the situation we conducted computational experiments to investigate whether the concept of structural complexity could make a difference [5]. A special optimization algorithm, as a complex system, was developed to minimize the average distance in the travelling salesman problem. Remarkably, for each problem the performance of the algorithm was concave. As a result, the algorithm and a problem were characterized by a single performance optimum. The analysis of the performance optima for all problems tested revealed a relationship between the structural complexity of the algorithm and the structural complexity of the problem, approximating it well enough by a linear function [5]. The results of the computational experiments raise the possibility of an optimality condition of complex systems: A complex system demonstrates the optimal performance for a problem, when the structural complexity of the system is in a certain relationship with the structural complexity of the problem.
Remarkably, the optimality condition presents the structural complexity of a system as a key to its optimization. According to the optimality condition, the optimal result can be obtained as long as the structural complexity of the system is properly related with the structural complexity of the problem. From this perspective, the optimization of a system should be primarily concerned with the control of the structural complexity of the system to match the structural complexity of the problem. The computational results also indicate that the performance of a complex system may behave as a concave function of the structural complexity. Once the structural complexity could be controlled as a single entity, the optimization of a complex system would be potentially reduced to a one-dimensional concave optimization, irrespective of the number of variables involved in its description. In the search to identify a mathematical structure underlying optimal quantum algorithms, the majorization principle emerges as a necessary condition for efficiency in quantum computational processes [6]. We find a connection between the optimality condition and the majorization principle in quantum algorithms. The majorization principle provides a local direction for an optimal quantum algorithm: the probability distribution associated to the quantum state has to be step-by-step majorized until it is maximally ordered. This means that an optimal quantum algorithm has to work in such a way that the probability distribution p_{k+1} at step k + 1 majorizes (p_k ≺ p_{k+1}) the probability distribution p_k at step k [6]. Our algorithm also has a direction. It is given by the order of the self-organization processes of prime integer relations and expressed through the structural complexity.
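The majorization relation itself is simple to state computationally. The following sketch (ours, not from [6]) checks that a toy sequence of probability distributions is step-by-step majorized, as the principle requires of an optimal quantum algorithm:

```python
# Illustrative sketch (ours, not from [6]): q majorizes p when, with both
# sorted in decreasing order, every partial sum of q dominates the
# corresponding partial sum of p, and the totals are equal.

def majorizes(q, p, tol=1e-12):
    ps, qs = sorted(p, reverse=True), sorted(q, reverse=True)
    cp = cq = 0.0
    for a, b in zip(ps, qs):
        cp, cq = cp + a, cq + b
        if cq < cp - tol:
            return False
    return abs(cp - cq) <= tol  # equal totals

# A toy "algorithm run": each step concentrates probability further,
# ending in the maximally ordered distribution.
steps = [
    [0.25, 0.25, 0.25, 0.25],
    [0.50, 0.25, 0.125, 0.125],
    [0.70, 0.20, 0.05, 0.05],
    [1.00, 0.00, 0.00, 0.00],
]
assert all(majorizes(steps[k + 1], steps[k]) for k in range(len(steps) - 1))
```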
Importantly, the performance of the algorithm becomes concave, as it tries to work in such a way that the structural complexity C_{k+1} of the algorithm at step k + 1 majorizes (C_k ≺ C_{k+1}) its structural complexity C_k at step k. The concavity of the algorithm's performance suggests efficient means to find optimal solutions [5].
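To illustrate why concavity matters, suppose performance really were a concave function of a single scalar structural complexity C; then its optimum could be bracketed by a one-dimensional search regardless of how many variables the underlying system has. A hypothetical sketch (ours, with a toy performance curve):

```python
# Sketch (ours): one-dimensional concave maximization by ternary search.
def ternary_max(f, lo, hi, iters=200):
    """Maximize a concave f on [lo, hi] by repeatedly shrinking the bracket."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1   # the maximum lies to the right of m1
        else:
            hi = m2   # the maximum lies to the left of m2
    return (lo + hi) / 2

performance = lambda c: -(c - 3.7) ** 2  # toy concave curve, peak at C = 3.7
assert abs(ternary_max(performance, 0.0, 10.0) - 3.7) < 1e-6
```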
5 Global Symmetry of Complex Systems and Gauge Forces
Our description reveals a global symmetry of complex systems, as the geometrical patterns of the prime integer relations appear symmetrical and the symmetries are all interconnected through their transformations. The global symmetry belongs to the complex system as a whole, but does not necessarily apply to its parts. Usually, when a global symmetry is transformed into a local one, a gauge force is required to be added. Because in the description arithmetic fully determines the breaking of the global symmetry, it is clear why the resulting gauge forces exist the way they do and not even slightly different. Let us illustrate the results by a special self-organization process of prime
integer relations [1], [2]. The left side of Figure 1 shows a hierarchical structure of prime integer relations built by the process. It determines a correlation structure of a complex system with states of N = 8 elementary parts {P_i, i = 1, ..., 8} given by the sequences x = 00000000, x' = +1-1-1+1-1+1+1-1 and m = 0. The sequence x' is the initial segment of length 8 of the Prouhet-Thue-Morse (PTM) sequence starting with +1. The self-organization process we consider is only one from an ensemble of self-organization processes forming the correlation structure of the whole system. The right side of Figure 1 presents an isomorphic hierarchical structure of geometrical patterns. The geometrical pattern of a prime integer relation determines the dynamics of a corresponding part of the complex system. Quantities of a geometrical pattern, such as the area and the length of the boundary curve, define quantities of a corresponding part. Quantities of the parts are interconnected through the transformations of the geometrical patterns.
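For reference, the PTM signs used for x' can be generated directly from the parity of the binary digit sum of the index; a small sketch (ours):

```python
# Sketch (ours): the PTM sign sequence; the i-th sign is +1 when the
# number of 1-bits in the binary expansion of i is even, -1 otherwise.
def ptm(n):
    return [1 - 2 * (bin(i).count("1") % 2) for i in range(n)]

assert ptm(8) == [+1, -1, -1, +1, -1, +1, +1, -1]  # the sequence x' of Figure 1
```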
Figure 2: The geometrical pattern of the part (P1 ↔ P2) ↔ (P3 ↔ P4). From above the geometrical pattern is limited by the boundary curve, i.e., the graph of the second integral f^[2](t), t_0 ≤ t ≤ t_4, of the function f defined on scale level 0 (Figure 1), where t_i = iε, i = 1, ..., 4, ε = 1, and it is restricted by the t axis from below. The geometrical pattern is equivalent to the prime integer relation +8^1 - 7^1 - 6^1 + 5^1 = 0 and determines the dynamics. If the part deviates from this dynamics even slightly, then some of the correlation links provided by the prime integer relation disappear and the part decays. The boundary curve has a special property ensuring that the area of the geometrical pattern is given as the area of a triangle: S = HD/2, where H and D are the height and the width of the geometrical pattern. In the figure H = 1 and D = 4, thus S = 2. The property is illustrated in yin-yang motifs.
Starting with the elementary parts at scale level 0, the parts of the correlation structure are built level by level, and thus a part of the complex system becomes a complex system itself (Figure 1). All geometrical patterns characterizing the parts are symmetrical and their symmetries are interconnected through the integrations of the function f. Specifically, we consider whether the description of the elementary parts of a scale level is invariant. At scale level 2 the second integral f^[2](t), t_0 ≤ t ≤ t_4, t_i = iε, i = 1, ..., 4, ε = 1, characterizes the dynamics of the part (P1 ↔ P2) ↔ (P3 ↔ P4). This composite part is made up of the elementary parts P1, P2, P3, P4 and the parts P1 ↔ P2, P3 ↔ P4 changed under the transformations to be at scale level 2 (Figures 1 and 2). The description of the dynamics of the elementary parts P1, P2, P3, P4 and the parts P1 ↔ P2, P3 ↔ P4 within the part (P1 ↔ P2) ↔ (P3 ↔ P4) is invariant relative to their reference frames. In particular, the dynamics of the elementary parts P1 and P2 in a reference frame of the elementary part P1 is specified by
f^[2](t) = f^[2]_{P1}(t_{P1}) = -t_{P1}^2/2! + 2t_{P1} - 1,  t_1 = t_{1,P1} ≤ t_{P1} ≤ t_{2,P1} = t_2.  (2)

The transition t_{P2} = t_{P1} - 2, f^[2]_{P2} = -f^[2]_{P1} + 1 from the coordinate system of the elementary part P1 to a coordinate system of the elementary part P2 shows that the characterization

f^[2]_{P2}(t_{P2}) = t_{P2}^2/2!  (3)
of the dynamics of the elementary part P2 is the same, if we compare (2) and (3). Similarly, the description is invariant when we consider the dynamics of the elementary parts P3 and P4. Furthermore, it can be shown that the description of the dynamics of the parts P1 ↔ P2 and P3 ↔ P4 relative to their coordinate systems is invariant. However, at scale level 3 the description of the dynamics is not invariant. In particular, we consider the dynamics of the elementary parts P1 and P2 changed under the transformations to be at scale level 3 within the part ((P1 ↔ P2) ↔ (P3 ↔ P4)) ↔ ((P5 ↔ P6) ↔ (P7 ↔ P8)). Relative to a coordinate system of the elementary part P1 the dynamics can be specified by (Figure 1)

f^[3]_{P1}(t_{P1}) = t_{P1}^3/3!,  t_{0,P1} ≤ t_{P1} ≤ t_{1,P1},

f^[3]_{P1}(t_{P1}) = -t_{P1}^3/3! + t_{P1}^2 - t_{P1} + 1/3,  t_{1,P1} ≤ t_{P1} ≤ t_{2,P1}.  (4)
The transitions from the coordinate systems of the elementary part P1 to the coordinate systems of the elementary part P2 do not preserve the form (4). For example, if by t_{P2} = t_{P1} - 2, f^[3]_{P2} = -f^[3]_{P1} + 1 the perspective is changed from the coordinate system of the elementary part P1 to a coordinate system of the elementary part P2, then it turns out that the description of the dynamics (4) is not invariant:

f^[3](t) = f^[3]_{P2}(t_{P2}) = t_{P2}^3/3! - t_{P2},

due to the additional term -t_{P2}.
Therefore, at scale level 3 arithmetic determines different dynamics of the elementary parts P1 and P2. Information about the difference could be obtained from observers positioned at the coordinate system of the elementary part P1 and the coordinate system of the elementary part P2 respectively. As one observer would report about the dynamics of P1 and the other about the dynamics of P2, the difference could be interpreted through the existence of a gauge force F acting on the elementary part P2 in the coordinate system to the effect of the term χ(F) = -t_{P2}.
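The piecewise integrals behind this argument can be sanity-checked numerically. The sketch below is our own, assuming (as in Figure 1) that the level-0 function takes the PTM values f = +1 on [0, 1] and f = -1 on [1, 2] with ε = 1: every repeated integral must be continuous where the pieces meet, and differentiating f^[3] must give back f^[2]:

```python
# Our own sanity check (not from the paper): with f = +1 on [0,1] and
# f = -1 on [1,2], repeated integration yields piecewise polynomials whose
# pieces agree at t = 1, since each integral of a bounded f is continuous.
f2a = lambda t: t**2 / 2                      # f^[2] on [0, 1]
f2b = lambda t: -t**2 / 2 + 2*t - 1           # f^[2] on [1, 2]
f3a = lambda t: t**3 / 6                      # f^[3] on [0, 1]
f3b = lambda t: -t**3 / 6 + t**2 - t + 1/3    # f^[3] on [1, 2]

assert abs(f2a(1.0) - f2b(1.0)) < 1e-12   # continuity of f^[2] at t = 1
assert abs(f3a(1.0) - f3b(1.0)) < 1e-12   # continuity of f^[3] at t = 1

# d/dt f^[3] must reproduce f^[2] on each piece (central difference):
eps = 1e-6
for g3, g2, t in ((f3a, f2a, 0.5), (f3b, f2b, 1.5)):
    deriv = (g3(t + eps) - g3(t - eps)) / (2 * eps)
    assert abs(deriv - g2(t)) < 1e-6
```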
In summary, the results can be schematically expressed as follows:

Arithmetic → Prime integer relations in control of correlation structures of complex systems ↔ Global symmetry: geometrical patterns in control of dynamics of complex systems → Not invariant descriptions of parts of complex systems ↔ Gauge forces to restore local symmetries
Bibliography
[1] Victor KOROTKIKH, A Mathematical Structure for Emergent Computation, Kluwer Academic Publishers (1999).
[2] Victor KOROTKIKH and Galina KOROTKIKH, "Description of Complex Systems in terms of Self-Organization Processes of Prime Integer Relations", Complexus Mundi: Emergent Patterns in Nature (Miroslav NOVAK ed.), World Scientific (2006), 63-72, arXiv:nlin.AO/0509008.
[3] Nicolas GISIN, Can Relativity be Considered Complete? From Newtonian Nonlocality to Quantum Nonlocality and Beyond, arXiv:quant-ph/0512168.
[4] Victor KOROTKIKH, Integer Code Series with Some Applications in Dynamical Systems and Complexity, Computing Centre of the Russian Academy of Sciences, Moscow (1993).
[5] Victor KOROTKIKH, Galina KOROTKIKH and Darryl BOND, On Optimality Condition of Complex Systems: Computational Evidence, arXiv:cs.CC/0504092.
[6] Roman ORUS, Jose LATORRE and Miguel MARTIN-DELGADO, Systematic Analysis of Majorization in Quantum Algorithms, arXiv:quant-ph/0212094.
Chapter 4
Measuring and Tracking Complexity in Science
Jacek Marczyk Ph.D., Balachandra Deshpande Ph.D.
Ontonix Srl, Ontonix LLC
[email protected]
1. Introduction
Recent years have seen the development of a new approach to the study of diverse problems in natural, social and technological fields: the science of complexity [Gell-Mann 1994]. The objective of complex systems science is to comprehend how groups of agents, e.g. people, cells, animals, organizations, the economy, function collectively. The underlying concept of complexity science is that any system is an ensemble of agents that interact. As a result, the system exhibits characteristics different from those of each agent, leading to collective behavior [Gell-Mann 1994]. This property is known as emergence [Morowitz 2002]. Moreover, complex systems can adapt to changing environments, and are able to spontaneously self-organize [Sornette 2000]. The dynamics of complex systems tend to converge to time patterns known as attractors [Sornette 2000] and are strongly influenced by the agent inter-relationships, which can be represented as networks [Barabasi 2002]. The topological properties of such networks are crucial for determining the collective behavior of the systems, with particular reference to their robustness to external perturbations or to agent failure [Barabasi, Albert 2000], [Dorogovtsev 2003]. Although the theoretical exploration of highly complex systems is usually very difficult, the creation of plausible computer models has been made possible in the past 10-15 years. These models yield new insights into how these systems function. Traditionally, such models were studied within the areas of cellular automata [Chopard 1998], neural networks [Haykin 1999], chaos theory [Sornette 2000], control theory [Aguirre 2000], non-linear dynamics [Sornette 2000] and evolutionary programming [Zhou 2003]. The practical applications of these studies cover a wide spectrum, ranging from studies of DNA and proteins [Jeong 2001] to computational biology [Dezso 2002], from economics and finance [Mantegna
2000] to ecology [Lynam 1999] and many others. When addressing complexity and complex systems, many researchers illustrate the ways in which complexity manifests itself and suggest mathematical methods for the classification of complex behavior. Subjects such as cellular automata, stochastic processes, statistical mechanics and thermodynamics, dynamical systems, ergodic and probability theory, chaos, fractals, information theory, algorithmic complexity and theoretical biology are consistently covered, but with very few concrete attempts to practically quantify complexity and to track its evolution over time. However, even though complexity is becoming an increasingly important issue in modern science and technology, there are no established and practical means of measuring it. Clearly, measurement constitutes the basis of any rigorous scientific activity. The ability to quantify something is a sine qua non condition for being able to manage it. There also does not exist a widely accepted definition of complexity. Many of the popular definitions refer to complexity as a "twilight zone between chaos and order". It is often maintained that in this twilight zone Nature is most prolific and that only this region can produce and sustain life. Clearly, for a definition to lend itself to practical use, it needs to provide a means for measurement. In order to increase our understanding of complexity and of the behaviour of complex systems, it is paramount to establish rigorous definitions and metrics of complexity. Complexity is frequently confused with emergence. Emergence of new structures and forms is the result of re-combination and spontaneous self-organization of simpler systems to form higher-order hierarchies, i.e. a result of complexity. Amino acids combine to form proteins, companies join to develop markets, people form societies, etc. One can therefore define complexity as an amount of functionality, capacity, potential or fitness.
The evolution of living organisms, societies or economies constantly tends to states of higher complexity precisely because an increase in functionality (fitness) allows these systems to "achieve more", to better face the uncertainties of their respective environments, to be more robust and fit, in other words, to survive better. To track or measure complexity, it is necessary to view it not as a phenomenon (such as emergence), but as a physical quantity such as mass, energy or frequency. There do exist numerous complexity measures, such as the (deterministic) Kolmogorov-Chaitin complexity, which is the smallest length in bits of a computer program that runs on a Universal Turing Machine and produces a certain object x. There are also other measures such as Computational Complexity, Stochastic Complexity, Entropy Rate, Mutual Information, Cyclomatic Complexity, Logical Depth, Thermodynamic Depth, etc. Some of the above definitions are not easily computable. Some are specific to either computer programs, strings of bits, or mechanical or thermodynamic systems. In general, the above definitions cannot be used to treat generic multi-dimensional systems from the standpoint of structure, entropy and coarse-graining. We propose a comprehensive complexity metric and establish a conceptual platform for practical and effective complexity management. The metrics established take into account all the ingredients necessary for a sound and comprehensive complexity measure, namely structure, entropy and data granularity, or coarse-graining. The metric
allows one to relate complexity to fragility and to show how critical threshold complexity levels may be established for a given system. The methodology is incorporated into Ontospace™, a first-of-its-kind complexity management software package developed by Ontonix.
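Of the ingredients named above, entropy under a chosen coarse-graining is the easiest to make concrete. The sketch below is a generic illustration, not the Ontonix implementation; the function name and the equal-width binning scheme are our own assumptions. It shows how the granularity choice directly changes an entropy estimate:

```python
import math
from collections import Counter

def shannon_entropy(samples, bins):
    """Shannon entropy (in bits) of a 1-D sample after coarse-graining
    it into `bins` equal-width cells."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0          # avoid zero width for flat data
    cells = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in cells.values())

data = [0.1, 0.4, 0.35, 0.8, 0.95, 0.2, 0.6, 0.55]
coarse = shannon_entropy(data, bins=2)   # coarse graining: 1.0 bit
fine = shannon_entropy(data, bins=8)     # finer graining: 2.25 bits
```

The same eight samples yield 1.0 bit at two bins and 2.25 bits at eight, which is why any entropy-based complexity measure must fix, and report, its coarse-graining.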
2. Fitness Landscapes and Attractors
The concept of a Fitness Landscape is central to the determination of the complexity of a given system. We define a fitness landscape as a multi-dimensional data set, in which the number of dimensions is determined by the number of system variables or agents (these may be divided into inputs and outputs, but this is not necessary).
Figure 1. Example of a Fitness Landscape. The four views refer to the same data-set and represent different combinations of axes. As one moves within the landscape, different local properties, such as density for example, will be observed in the vicinity of each point. The fitness at every point of the landscape is equated to complexity.
The number of data points in the landscape is equal to the number of measurements or samples that make up the data set. Once the fitness landscape is established (either via measurement or a Monte Carlo simulation, for example), we proceed to identify regions in which the system locally possesses certain properties which may be represented via maps or graphs, which we call modes. It is not uncommon to find tens or even hundreds of modes in landscapes of low dimension (say a few tens). Once all the modes have been obtained we proceed to compute the complexity of each mode as a function of the topology of the corresponding graph, the entropy of each link in the graph and the data granularity. We define data granularity in fuzzy terms and, evidently, this impacts the computation of entropy for each mode. We define fitness at a given point of the landscape to be equal to the complexity of the mode at that point. Since the same modal topology may be found at many points of the landscape, there clearly can exist regions
of equal fitness. We may also define the total fitness landscape complexity as the sum of all the modal complexities.
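To make the three ingredients tangible, the toy sketch below builds a "mode" graph by linking variable pairs whose binned mutual information exceeds a threshold, and scores the mode by the sum of the link values. This is a generic stand-in under our own assumptions (bin count, threshold, function names), not the Ontonix metric:

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys, bins=3):
    # Coarse-grain both variables into equal-width bins and estimate
    # their mutual information (in nats) from the joint histogram.
    def cell(v, lo, hi):
        w = (hi - lo) / bins or 1.0
        return min(int((v - lo) / w), bins - 1)
    n = len(xs)
    bx = [cell(v, min(xs), max(xs)) for v in xs]
    by = [cell(v, min(ys), max(ys)) for v in ys]
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

def mode_complexity(dataset, threshold=0.1):
    # Toy mode score: keep links whose mutual information exceeds the
    # threshold (the graph structure) and sum the link values.
    links = {}
    for a, b in combinations(dataset, 2):
        mi = mutual_information(dataset[a], dataset[b])
        if mi > threshold:
            links[(a, b)] = mi
    return sum(links.values()), links

data = {"x": [0, 1, 2, 3, 4, 5],
        "y": [0, 1, 2, 3, 4, 5],     # copy of x: strongly linked
        "z": [1, 0, 1, 0, 1, 0]}     # unrelated to x and y at this graining
score, links = mode_complexity(data)  # one link, (x, y)
```

Changing `bins` changes both the graph and the score, which mirrors the role granularity plays in the metric described above.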
Figure 2. Examples of modes. The variables (agents) are arranged along the diagonal and significant relationships are determined based on exchanged information and entropy. Red nodes represent hubs. The mode on the left has a complexity of 91.4, while the one on the right has a value of 32.1. Both modes originate from the same landscape.
Examples of modes are shown in Figure 2, where one may also identify hubs, indicated in a darker shade of red, the number of which may be related to numerous properties such as robustness, fragility, redundancy, etc. As one moves across the landscape, the modal topology will change, and so will the hubs.
Figure 3. Example of modal complexity spectrum.
The complexities of all the extracted modes in a given landscape may be plotted in ascending order, forming a complexity spectrum. Flat spectra point to homogeneous landscapes, while in the opposite case they clearly point to cluster-dominated situations. There is a sufficient body of knowledge to sustain the belief that whenever dynamical systems, such as those listed above, undergo a crisis or a spasm, the event is accompanied by a sudden jump in complexity. This is also intuitive. A spasm or collapse implies loss of functionality, or organization. The big question then is: to what maximum levels of complexity can the above systems evolve in a sustainable fashion? In order to answer this question, it is necessary to observe the evolution of complexity in the vicinity of points of crisis or collapse. We have studied empirically the evolution of complexity of numerous systems and have observed that:
- High-dimension systems can reach higher levels of complexity (fitness).
- The higher the graph density, the higher the complexity that can be reached.
- The higher the graph density, the less traumatic the breakdown of structure.
- For dense systems, the life-death curve looks like y(t) = t·A·exp(−k·t^4).
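The quoted life-death curve y(t) = t·A·exp(−k·t^4) can be probed directly: setting its derivative to zero locates the complexity peak at t* = (4k)^(−1/4). A and k are unspecified constants in the text, so the values below are purely illustrative:

```python
import math

def life_death(t, A=1.0, k=1.0):
    # y(t) = t * A * exp(-k * t**4): complexity rises, peaks, then collapses.
    return t * A * math.exp(-k * t**4)

def peak_time(k=1.0):
    # dy/dt = A * exp(-k*t**4) * (1 - 4*k*t**4) = 0  =>  t* = (4k)**(-1/4)
    return (4 * k) ** -0.25

t_star = peak_time()   # about 0.707 for k = 1: the critical-complexity point
```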
The plots in Figure 4 illustrate examples of closed systems (i.e. systems in which the Second Law of Thermodynamics holds) in which we measured how complexity changes versus time. We can initially observe how the increase of entropy actually increases complexity (entropy is not necessarily adverse, as it can help to increase fitness) but at a certain point complexity reaches a peak beyond which even a small increase of entropy inexorably causes the breakdown of structure. The fact that initially entropy actually helps increase complexity (fitness) confirms that uncertainty is necessary to create novelty. Without uncertainty there is no evolution.
Figure 4. Examples of the evolution of complexity versus time for two closed systems. The plot on the left represents a 20-dimensional system, while the one on the right a 50-dimensional one. In the case of the smaller system, the maximum value of complexity that may be reached is approximately 3, while in the case of the larger system the threshold is approximately 12. The corresponding graphs have densities of 0.1 and 0.2, respectively.
In our metric, before the critical complexity threshold is reached, an increase in entropy does generally lead to an increase in complexity, although minor local fluctuations of complexity have been observed in numerous experiments. After structure breakdown commences, an increase in entropy nearly always leads to a loss of complexity (fitness), but at times the system may recover structure locally. However, beyond the critical point, death is inevitable, regardless of the dimensionality or density of the system.
3. A practical application of Complexity measurement: the James Webb Space Telescope
In the past decade numerous Monte Carlo-based software tools for performing stochastic simulation have emerged. These tools were conceived as uncertainty management tools and their goal was to evaluate, for example, the effects of tolerances on scatter and quality of performance, most likely behavior, dominant design variables, etc. An important focus of the users of such tools has been on robust design. However,
simply attempting to counter the effects of tolerances or environmental scatter is not the best way to achieve robust designs. A more efficient way to robustness is via managing the complexity of a given design rather than battling with the uncertainty of the environment in which it operates. After all, the environment (sea, atmosphere, earthquakes, etc.) is not controllable. At the same time, it is risky to try to sustain a very complex situation, scenario or design in an uncertain environment. Robustness requires a balance between the uncertainty of a given environment and the complexity of the actions we intend to take in that environment. Ontonix has collaborated with EADS CASA Espacio on the design and analysis of the James Webb Space Telescope adapter. The component in question is an adapter between a launcher and its payload (satellite) and the objective was to achieve a robust design using complexity principles. Given the criticality of the component and the restrictive and stringent requirements in terms
Figure 5. Two candidate designs are evaluated. The one with the lower complexity metric is chosen because lower complexity in an uncertain environment means greater robustness.
of mass, stiffness, interface fluxes and strength, a stochastic study was performed. Furthermore, the problem was rendered more complex by certain assembly-specific considerations. Given the unique nature of the component in question (no commercially available adapters could have been used) it was necessary to evaluate a broad spectrum of candidate design topologies. Two different design options with the corresponding maps which relate input (design) variables to outputs (performance) are shown in Figure 5. While both solutions offer the same performance, the design on the left has a complexity of 20.1, while the one on the right has 16.8. The design on the right will therefore be less fragile and less vulnerable to performance degradation in an uncertain environment.
4. Conclusions
We propose a comprehensive complexity metric which incorporates structure, entropy and data coarse-graining. Structure, represented by graphs, is determined locally in a given fitness landscape via a perturbation-based technique. The entropy of each mode (graph) is computed based on the data granularity. Finally, fitness at each point of the landscape is defined as complexity. The metric has been applied to a wide variety of problems, ranging from accident analysis of nuclear power plants to gene expression data, from financial problems to the analysis of socio-economic systems. The metric shows how a closed system will reach a certain maximum complexity threshold, after which even a small increase in entropy will commence to destroy structure.
References
M. Gell-Mann. The quark and the jaguar: adventures in the simple and the complex. New York: W.H. Freeman and Co.; 1994.
H.J. Morowitz. The emergence of everything: how the world became complex. New York: Oxford University Press; 2002.
D. Sornette. Critical phenomena in natural sciences: chaos, fractals, selforganization, and disorder: concepts and tools. Berlin; New York: Springer; 2000.
A.-L. Barabasi. Linked: the new science of networks. Cambridge, Mass.: Perseus Pub.; 2002.
A.-L. Barabasi, R. Albert. Statistical mechanics of complex networks. Reviews of Modern Physics. 2002;74(1):47-97.
S.N. Dorogovtsev, J.F.F. Mendes. Evolution of networks: from biological nets to the Internet and WWW. Oxford; New York: Oxford University Press; 2003.
B. Chopard, M. Droz. Cellular automata modeling of physical systems. Cambridge, U.K.; New York: Cambridge University Press; 1998.
L.A. Aguirre, L.A.B. Torres. Control of nonlinear dynamics: where do the models fit in? International Journal of Bifurcation and Chaos. 2000;10(3):667-681.
C. Zhou, W. Xiao, T.M. Tirpak, P.C. Nelson. Evolving accurate and compact classification rules with gene expression programming. IEEE Transactions on Evolutionary Computation. 2003;7(6):519-531.
S.S. Haykin. Neural networks: a comprehensive foundation. 2nd ed. Upper Saddle River, N.J.: Prentice Hall; 1999.
H. Jeong, S.P. Mason, A.L. Barabasi, Z.N. Oltvai. Lethality and centrality in protein networks. Nature. May 3 2001;411(6833):41-42.
Z. Dezso, A.L. Barabasi. Halting viruses in scale-free networks. Phys Rev E Stat Nonlin Soft Matter Phys. May 2002;65(5 Pt 2):055103.
R.N. Mantegna, H.E. Stanley. An introduction to econophysics: correlations and complexity in finance. Cambridge, UK; New York: Cambridge University Press; 2000.
T. Lynam. Adaptive analysis of locally complex systems in a globally complex world. Conservation Ecology. 1999;3(2):13.
Chapter 5
Data-Driven Modeling of Complex Systems
Val K. Bykovsky
Utah State University
[email protected]
The observation of the dependencies between the data and the conditions of the observation always was, and is, a primary source of knowledge about complex dynamics. We discuss direct program-driven analysis of these data dependencies with the goal of building a model directly in the computer and thus predicting the dynamics of the object based on measured data. The direct generalization of data dependencies is a critical step in building data-driven models.
"Theory, described in its most homely terms, is the cataloging of correlations .. ." Nick Metropolis [H83]
1 Introduction
There are two main sources of data dependencies: (1) direct physical experiments that immediately generate data dependencies of interest and (2) indirect, computer or in-silico experiments, a way of using a computer as an experimental setup to get the "measurement data". Such a setup mimics real experiments when they are impossible or very costly. That is an alternative to using a computer as a number cruncher. "Good computing requires a firm foundation in the principles of natural behavior", wrote Metropolis [H83]. The experimentation approach was proposed by S. Ulam and N. Metropolis [M87, B00] at Los Alamos when designing weapon systems, with direct experiments basically impossible.
With "experimental approach", computer experiments is an integral part of understanding complex phenomena. Same idea was a focus of proposed then Monte Carlo (MC) method [M87], a generation of random experimental configurations and their evolution in time. However, it was not just events generation and data collection. N. Metropolis wrote in his historical account of development of the MC method [M87]: "It is interesting to look back over two-score years and note the emergence, rather early on, of experimental mathematics, a natural consequence of the electronic computer. The role of the Monte Carlo method in reinforcing such mathematics seems self-evident." Stressing the hands-on aspect of experimental mathematics, he wrote then: "When shared-time operations became realistic, experimental mathematics came to age. At long last, mathematics achieved a certain parity - the twofold aspect of experiment and theory - that all other sciences enjoy." The idea of bridging a gap between a data source and the model based on the data was also introduced by Metropolis [A86, BOO], and he demonstrated it by two-way coupling a computer to the Navy cyclotron. The importance of cataloging correlations, or dependencies, as a basis for any theory was also stressed by Metropolis [H83]. The same idea of dynamic integration of data and modeling to steer the new measurements was recently (50 years after pioneering work by Metropolis) resurrected in the NSF-sponsored DDDAS program [NSF], "a paradigm whereby application (or simulations) and related measurements become an integrated feedback-driven control system". We are making a step in the same direction with the focus on dynamic data-driven model building, testing and update with an option of new data measured on-demand. The controlled search in the physical configuration space is another focal point. 
Yet another focal point is the persistent framework to be structured by data obtained in the experimentation process, an infrastructure that bridges the gap between the physical object to be explored and the observation system ("explorer") which handles experiments and generates data. The framework also generalizes data dependencies representing the properties of the physical object. Accordingly, it may have a built-in logic to support in-situ data analysis. This way, data/properties turn out to be integrated with the processing logic. Recently, the same problem has been analyzed in the general context
of making databases more dynamic and integrated with the logic of programming languages [G04]. Conceptually, the proposed approach mimics the traditional "from-data-to-model" process that includes human-driven generalization of the dependencies. The proposed method handles data dependencies programmatically and builds an online model (mapping) by their programmatic generalization. Thus, the human generalization of dependencies gets replaced by high-performance programmatic analysis. The data source becomes an integral part of the model-building process and is available for online model testing and update. When a traditional model is built, its connection with the data source gets lost, and the model (symbolic equation) lives its own life, occasionally interacting with the real world through parameters and the initial/boundary conditions. As the proposed approach relies on computer power and flexibility, it is complexity-neutral, a distinctive difference from the traditional approach, which is sensitive to complexity.
2 Measurements and Data Dependencies
Generally, data is a result of the measurement of a property of a physical object, and is best treated as a combination (a pair) of the data and its measurement context, which makes the data unique. With the context attached, a field position in a measurement record need not be its identification any more; data can be located, accessed and interpreted based on its physical attributes, rather than on its position in a record. In particular, the access can be associative, so that all the data with the same tag (property) can be easily accessed, collected, moved, processed, and placed in a database based on its properties. In its turn, processing the data may lead to updating its attributes.
Data Dependencies and State Vector Dynamics. A data record taken as a measurement of an object state at a specific time is actually a state vector, and its change in time describes the object dynamics, which is traditionally described by motion equations. So, analysis of record dynamics, revealing hidden patterns and regularities, can be likened to the analysis of the object dynamics by using the motion equations.
The proposed programmatic approach enhances the concept of data bridging the gap between an experiment and its data. A regular data record gets elevated to the level of a data object with built-in logic designed to validate and test the data, so that the data becomes physical
properties (vs. just numbers). That makes a record a live or active record, a new dimension in data-driven model-building that makes it easy to build, test and update a model. A simple example of linking two data sets into a data pair is given in Fig. 1. The context is a set t of N numbers in the range [0, pi]; the data is the function x = sin(t) in the same range. The linking is done using a simple "linker", the outer product out(t, x(t)) of two vectors. The reconstruction of x is done using the dot function x = dot(t, out). A few data dependencies p = (t, x) can be used to build a mapping between the contexts and the data. This can be done by computing the outer products for each pair p and generalizing them by spatial superimposition. The 4-data-sets (trajectories) case is shown in Fig. 2, where the bottom graph is the reconstructed trajectory x.
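The linking step can be sketched in plain Python (NumPy's outer and dot would do the same). Note that dot(t, out) actually returns (t·t) times x, so a normalization by t·t, which the text leaves implicit, is assumed here to recover x exactly:

```python
import math

N = 100
t = [math.pi * i / (N - 1) for i in range(N)]   # context: N numbers in [0, pi]
x = [math.sin(v) for v in t]                    # data: x = sin(t)

# The "linker": the outer product, out[i][j] = t[i] * x[j].
out = [[ti * xj for xj in x] for ti in t]

# Reconstruction: dot(t, out)[j] = sum_i t[i]*t[i]*x[j] = (t.t) * x[j],
# so dividing by t.t (our assumption) recovers x.
tt = sum(ti * ti for ti in t)
x_rec = [sum(t[i] * out[i][j] for i in range(N)) / tt for j in range(N)]
```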
⟨·⟩ is a proper average; if ⟨p1*p2⟩ = ⟨p1⟩*⟨p2⟩, the particles p1 and p2 do not interact. Then, combinations of variables (fields) can be sought to minimize the interactions, such as d(⟨p1*p2⟩, ⟨p1⟩*⟨p2⟩).
where h > 0 is the integration step size, and
k1 = f(t_i, x_i)
k2 = f(t_i + h/2, x_i + h/2·k1)
k3 = f(t_i + h/2, x_i + h/2·k2)
k4 = f(t_i + h, x_i + h·k3).
To illustrate the proposed approach for simulations based on numerical methods, the system of ordinary differential equations representing the classical Lotka-Volterra model (a predator-prey model) [Morin 1999] is used. This model describes interactions between two species in an ecosystem, a predator and a prey. If x(t) and y(t) represent the numbers of prey and predators, respectively, that are alive in the system at time t, then the Lotka-Volterra model is defined by:
dx/dt = a·x − b·x·y
dy/dt = c·b·x·y − e·y
where a is the per capita birth rate of the prey; b is the per capita attack rate; c is the conversion efficiency of consumed prey into new predators; e is the rate at which predators die in the absence of prey. Now, using the Runge-Kutta method, the classical Lotka-Volterra model is described in the LEADSTO format as follows:
has_value(x, v1) ∧ has_value(y, v2) →→_{0, 0, h, h} has_value(x, v1 + h/6·(k11 + 2·k12 + 2·k13 + k14))
has_value(x, v1) ∧ has_value(y, v2) →→_{0, 0, h, h} has_value(y, v2 + h/6·(k21 + 2·k22 + 2·k23 + k24))
where
k11 = a·v1 − b·v1·v2,
k21 = c·b·v1·v2 − e·v2,
k12 = a·(v1 + h/2·k11) − b·(v1 + h/2·k11)·(v2 + h/2·k21),
k22 = c·b·(v1 + h/2·k11)·(v2 + h/2·k21) − e·(v2 + h/2·k21),
k13 = a·(v1 + h/2·k12) − b·(v1 + h/2·k12)·(v2 + h/2·k22),
k23 = c·b·(v1 + h/2·k12)·(v2 + h/2·k22) − e·(v2 + h/2·k22),
k14 = a·(v1 + h·k13) − b·(v1 + h·k13)·(v2 + h·k23),
k24 = c·b·(v1 + h·k13)·(v2 + h·k23) − e·(v2 + h·k23).
The result of simulation of this model with the initial values x0 = 25 and y0 = 8 and the step size h = 0.1 is given in [Bosse, Sharpanskykh and Treur, 2008].
It is identical to the result produced by Euler's method with a much smaller step size (h = 0.01) for the same example. Although for most cases the Runge-Kutta method with a small step size provides accurate approximations, this method can still be computationally expensive and, in some cases, inaccurate. To achieve a higher accuracy together with minimum computational effort, methods that allow the dynamic (adaptive) regulation of the integration step size are used. Generally, these approaches are based on the fact that the algorithm signals information about its own truncation error. The most commonly used technique for this is step doubling and step halving; see, e.g., [Gear 1971]. Since its format allows the modeller to include qualitative aspects, it is not difficult to incorporate step doubling and step halving into LEADSTO. See [Bosse, Sharpanskykh and Treur, 2008] for an illustration of how this can be done.
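The fourth-order scheme above translates directly into code. The sketch below is our own illustration of the same k1..k4 recipe applied to the Lotka-Volterra right-hand side; the parameter values are assumptions, not those of the cited simulation:

```python
def lotka_volterra(a, b, c, e):
    # dx/dt = a*x - b*x*y (prey), dy/dt = c*b*x*y - e*y (predator)
    def f(state):
        x, y = state
        return (a * x - b * x * y, c * b * x * y - e * y)
    return f

def rk4_step(f, state, h):
    # One fourth-order Runge-Kutta step, following the k1..k4 recipe.
    def shift(s, k, w):
        return tuple(si + w * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(shift(state, k1, h / 2))
    k3 = f(shift(state, k2, h / 2))
    k4 = f(shift(state, k3, h))
    return tuple(s + h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for s, a1, a2, a3, a4 in zip(state, k1, k2, k3, k4))

# Illustrative run: x0 = 25 prey, y0 = 8 predators, step size h = 0.1.
f = lotka_volterra(a=1.0, b=0.1, c=0.5, e=1.0)
state = (25.0, 8.0)
for _ in range(100):
    state = rk4_step(f, state, 0.1)
```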
3 The Predator-Prey Model with Qualitative Aspects
In this section, an extension of the standard predator-prey model is considered, adding some qualitative aspects of behaviour. Assume that the population sizes of both predators and prey within a certain eco-system are externally monitored and controlled by humans. Furthermore, both prey and predator species in this eco-system are also consumed by humans. A control policy comprises a number of intervention rules that ensure the viability of both species. Among such rules could be the following: in order to keep a prey species from extinction, the number of predators should be controlled to stay within a certain range (defined by pred_min and pred_max); if the number of a prey species falls below a fixed minimum (prey_min), the number of predators should also be enforced to the prescribed minimum (pred_min); if the size of the prey population is greater than a certain prescribed bound (prey_max), then the size of the prey species can be reduced by a certain number prey_quota (cf. a quota for a fish catch). These qualitative rules can be encoded into the LEADSTO simulation model for the standard predator-prey case by adding new dynamic properties and changing the existing ones in the following way:
has_value(x, v1) ∧ has_value(y, v2) ∧ v1 < prey_max →→_{0, 0, h, h} has_value(x, v1 + h·(a·v1 − b·v1·v2))
has_value(x, v1) ∧ has_value(y, v2) ∧ v1 ≥ prey_max →→_{0, 0, h, h} has_value(x, v1 + h·(a·v1 − b·v1·v2) − prey_quota)
has_value(x, v1) ∧ has_value(y, v2) ∧ v1 ≥ prey_min ∧ v2 < pred_max →→_{0, 0, h, h} has_value(y, v2 + h·(c·b·v1·v2 − e·v2))
has_value(x, v1) ∧ has_value(y, v2) ∧ v2 ≥ pred_max →→_{0, 0, h, h} has_value(y, pred_min)
has_value(x, v1) ∧ has_value(y, v2) ∧ v1 < prey_min →→_{0, 0, h, h} has_value(y, pred_min)
The result of simulation of this model using Euler's method with the parameter settings a=4, b=0.2, c=0.1, e=8, pred_min=10, pred_max=30, prey_min=40, prey_max=100, prey_quota=20, x0=90, y0=10 is given in Figure 2.
Figure 2. Simulation results for the Lotka-Volterra model combined with qualitative aspects.
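The intervention rules amount to guards around the Euler update, which can be sketched procedurally as follows (the helper and its default step size h are our own; the policy values are those listed for Figure 2):

```python
def step(x, y, p, h=0.05):
    """One Euler step of the predator-prey model with the qualitative
    intervention rules applied as guards (p holds the policy parameters)."""
    # Prey: normal growth, minus the quota whenever x >= prey_max.
    dx = p['a'] * x - p['b'] * x * y
    x_new = x + h * dx - (p['prey_quota'] if x >= p['prey_max'] else 0.0)
    # Predators: reset to pred_min if they exceed pred_max or prey fall
    # below prey_min; otherwise follow the normal dynamics.
    if y >= p['pred_max'] or x < p['prey_min']:
        y_new = p['pred_min']
    else:
        y_new = y + h * (p['c'] * p['b'] * x * y - p['e'] * y)
    return x_new, y_new

policy = dict(a=4, b=0.2, c=0.1, e=8, pred_min=10, pred_max=30,
              prey_min=40, prey_max=100, prey_quota=20)

x1, y1 = step(90.0, 10.0, policy)    # no rule fires: plain Euler update
x2, y2 = step(120.0, 10.0, policy)   # prey above prey_max: quota harvested
x3, y3 = step(30.0, 12.0, policy)    # prey below prey_min: predators reset
```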
4 Analysis in Terms of Local-Global Relations
Within the area of agent-based modelling, one of the means to address complexity is by modelling processes at different levels, from the global level of the process as a whole to the local level of basic elements and their mechanisms. At each of these
levels dynamic properties can be specified, and by interlevel relations they can be logically related to each other; e.g., [Sharpanskykh and Treur 2006]. These relationships can provide an explanation of properties of a process as a whole in terms of properties of its local elements and mechanisms. Such analyses can be done by hand or automatically. To specify the dynamic properties at different levels and their relations, a more expressive language is needed than simulation languages based on causal relationships, such as LEADSTO. To this end, the formal language TTL has been introduced as a super-language of LEADSTO; cf. [Bosse et al. 2006]. It is based on order-sorted predicate logic, and allows including numbers and arithmetical functions. Therefore most methods used in Calculus are expressible in this language, including methods based on derivatives and differential equations. In this section it is shown how to incorporate differential equations in the predicate-logical language TTL that is used for analysis. Further, in this section a number of global and local dynamic properties are identified, and it is shown how they can be expressed in TTL and logically related to each other.
Differential Equations in TTL
A differential equation of the form dy/dt = f(y) with the initial condition y(t0) = y0 can be expressed in TTL on the basis of a discrete time frame (e.g., the natural numbers) in a straightforward manner:
∀t ∀v [ state(γ, t) |= has_value(y, v) ⇒ state(γ, t+1) |= has_value(y, v + h·f(v)) ]
The traces γ satisfying the above dynamic property are the solutions of the difference equation. However, it is also possible to use the dense time frame of the real numbers, and to express the differential equation directly. Thus, x = dy/dt can be expressed as:
∀t, w ∀ε>0 ∃δ>0 ∀t', v, v' [ 0 < dist(t', t) < δ & state(γ, t) |= has_value(x, w) & state(γ, t) |= has_value(y, v) & state(γ, t') |= has_value(y, v') ⇒ dist((v' − v)/(t' − t), w) < ε ]
where dist(u, v) is defined as the absolute value of the difference. The traces γ for which this statement is true are (or include) solutions of the differential equation. Models consisting of combinations of difference or differential equations can be expressed in a similar manner. This shows how modelling constructs often used in DST can be expressed in TTL.
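The discrete-time property can be checked mechanically against a finite trace. The helper below is our own sketch, not part of TTL; it verifies that an Euler-generated trace of dy/dt = −y satisfies the property:

```python
def satisfies_difference_property(trace, f, h, tol=1e-9):
    """Check a finite trace (y-values at t = 0, 1, 2, ...) against the
    property: state(t) |= has_value(y, v) implies
    state(t+1) |= has_value(y, v + h*f(v))."""
    return all(abs(trace[t + 1] - (trace[t] + h * f(trace[t]))) <= tol
               for t in range(len(trace) - 1))

# Build a trace of dy/dt = -y by Euler steps and verify the property holds.
h, y = 0.1, 1.0
trace = [y]
for _ in range(50):
    y = y + h * (-y)
    trace.append(y)
ok = satisfies_difference_property(trace, lambda v: -v, h)
```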
Global and Local Dynamic Properties
Within Dynamical Systems Theory, for global properties of a process more specific analysis methods are known. Examples of such analysis methods include mathematical methods to determine equilibrium points, the behaviour around equilibrium points, and the existence of limit cycles. Suppose a set of differential equations is given, for example a predator-prey model: dx/dt = f(x, y) and dy/dt = g(x, y). Here, f(x, y) and g(x, y) are arithmetical expressions in x and y. Within TTL the following abbreviation is introduced as a definable predicate:
point(γ, t, x, v, y, w) ≡ state(γ, t) |= has_value(x, v) ∧ has_value(y, w)
Equilibrium points
These are points in the (x, y) plane for which, when they are reached by a solution, the state stays at this point in the plane for all future time points. This can be expressed as a global dynamic property in TTL as follows:
has_equilibrium(γ, x, v, y, w) ≡ ∀t1 [ point(γ, t1, x, v, y, w) ⇒ ∀t2 ≥ t1 point(γ, t2, x, v, y, w) ]
occurring_equilibrium(γ, x, v, y, w) ≡ ∃t point(γ, t, x, v, y, w) & has_equilibrium(γ, x, v, y, w)
Behaviour Around an Equilibrium
attracting(γ, x, v, y, w, ε0) ⇔ has_equilibrium(γ, x, v, y, w) & ε0 > 0 & ∀t, v1, w1 [ point(γ, t, x, v1, y, w1) & dist(v1, w1, v, w) < ε0 ⇒ ∀ε>0 ∃t1 ≥ t ∀t2 ≥ t1, v2, w2 ( point(γ, t2, x, v2, y, w2) ⇒ dist(v2, w2, v, w) < ε ) ]
Here, dist(v1, w1, v2, w2) denotes the distance between the points (v1, w1) and (v2, w2) in the (x, y) plane. The global dynamic properties described above can also be addressed from a local perspective.
Local equilibrium property
From the local perspective of the underlying mechanism, equilibrium points are those points for which dx/dt = dy/dt = 0, i.e., in terms of f and g for this case f(x, y) = g(x, y) = 0.
equilibrium_state(v, w) ⇔ f(v, w) = 0 & g(v, w) = 0
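For the predator-prey model, solving f(x, y) = g(x, y) = 0 gives the trivial equilibrium at the origin and the non-trivial one at x* = e/(c·b), y* = a/b, which can be verified numerically (the parameter values are taken from the earlier simulation settings; the helper names are our own):

```python
def f(x, y, a=4.0, b=0.2):
    return a * x - b * x * y          # dx/dt

def g(x, y, b=0.2, c=0.1, e=8.0):
    return c * b * x * y - e * y      # dy/dt

def is_equilibrium(x, y, tol=1e-6):
    # Local condition: f(x, y) = g(x, y) = 0.
    return abs(f(x, y)) < tol and abs(g(x, y)) < tol

x_star, y_star = 8.0 / (0.1 * 0.2), 4.0 / 0.2   # (400, 20)
```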
Local property for behaviour around an equilibrium:
attracting(γ, x, v, y, w, δ, ε0, d) ⇔ has_equilibrium(γ, x, v, y, w) & ε0 > 0 & 0 < δ .
In contrast, the scale-free network with power-law degree distribution implies that vertices with only a few edges are numerous, but a few nodes have a very large number of edges. The presence of a few highly connected nodes (i.e. 'hubs') is the most prominent feature of scale-free networks and indicates that scale-free networks are heterogeneous. The heterogeneity leads to many unique properties of scale-free networks. For example, Albert et al. [?] demonstrated that scale-free networks possess the robust-yet-fragile property, in the sense that they are robust against random failures of nodes but fragile to intentional attacks. Moreover, it has been found that homogeneous networks are more synchronizable than heterogeneous ones, even though the average network distance is larger [?]. Consequently, the measurement and analysis of heterogeneity is important and desirable for research into the behaviours and functions of complex networks. Several measures of heterogeneity have been proposed. Nishikawa et al. [?] quantified the heterogeneity of complex networks using the standard deviation of the degree, σ. Solé et al. [?] proposed the entropy of the remaining degree distribution q(k) to measure heterogeneity. Wang et al. [?] measured the heterogeneity of complex networks using the entropy of the degree distribution P(k). With these measures, the most heterogeneous network is the one obtained for P(1) = P(2) = ... = P(N − 1), and the most homogeneous network is the one obtained for P(k0) = 1 and P(k) = 0 (k ≠ k0), i.e. a regular network. However, these conventional measures are not in agreement with the normal meaning of heterogeneity within the context of complex networks. For example, we are generally inclined to believe that a random network is quite homogeneous, but this is not the case with the measures above.
In addition, a star network is generally considered to be very heterogeneous because of the presence of the single hub node, yet the star network is quite homogeneous under the conventional measures. In this paper, we first present a new measure of heterogeneity called the entropy of the degree sequence (EDS) and compare it with conventional measures. Then we investigate the heterogeneity of scale-free networks using EDS.
2
Entropy of degree sequence
A complex network can be represented as a graph G with N vertices and M edges. Assume that G is an undirected, simple, connected graph. Let d_i be the degree of vertex v_i. As shown in Figure 1, we sort all vertices in decreasing order of degree and obtain a degree sequence D(G) = (D_1, D_2, ..., D_N), where D_1 ≥ D_2 ≥ ... ≥ D_N. Since different vertices may have the same degree, we group all vertices into N − 1 groups according to their degree. That is, the degree of the vertices in the s-th group is N − s. Let f_s be the number of vertices in the s-th group, namely the frequency of vertices with degree N − s. Let s_i be the index of the group in which vertex v_i is located, and r_i be the global rank of v_i among all vertices in decreasing order of degree. Let P(k) be the degree distribution, i.e., the probability that a randomly chosen vertex has degree k. Let d = f(r) be the degree-rank function, which gives the relationship between the degree and the rank in the degree sequence D(G) and is non-stochastic, in the sense that no underlying probability distribution need be assumed for the sequence.
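As a concrete illustration of this grouping (our sketch, not from the paper), the following Python snippet builds the sorted degree sequence D(G) and the group frequencies f_s for a toy star graph:

```python
def degree_groups(degrees):
    """Return (D, f) where D is the degree sequence sorted in decreasing
    order and f[s] is the number of vertices in group s (i.e., with
    degree N - s)."""
    N = len(degrees)
    D = sorted(degrees, reverse=True)          # D_1 >= D_2 >= ... >= D_N
    f = {s: sum(1 for d in degrees if d == N - s) for s in range(1, N)}
    return D, f

# star on 5 vertices: the hub has degree 4, the four leaves have degree 1
degrees = [4, 1, 1, 1, 1]
D, f = degree_groups(degrees)
print(D)      # [4, 1, 1, 1, 1]
print(f[1])   # 1: one vertex of degree N - 1 = 4 (the hub)
print(f[4])   # 4: four vertices of degree N - 4 = 1 (the leaves)
```

Note that empty groups simply have f_s = 0 (here f_2 = f_3 = 0).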
Figure 1: Sorting all vertices in decreasing order of degree and grouping them into groups according to degree.

To measure the heterogeneity of complex networks, we define the entropy of the degree sequence (EDS) as

EDS = − Σ_{i=1}^{N} I_i ln I_i,    (1.1)

where

I_i = D_i / Σ_{j=1}^{N} D_j.    (1.2)

Substituting equation (1.2) into equation (1.1), we have

EDS = ln(Σ_{j=1}^{N} D_j) − (Σ_{i=1}^{N} D_i ln D_i) / (Σ_{j=1}^{N} D_j).    (1.3)
Obviously, the maximum value of EDS is EDS_max = ln N, obtained for I_i = 1/N, i.e., D_1 = D_2 = ... = D_N. Note that D_i > 0 and D_i is an integer, so the minimum value EDS_min = ln(4(N − 1))/2 occurs when D(G) = (N − 1, 1, ..., 1). The maximum value of EDS corresponds to the most homogeneous network, i.e., a regular network, and the minimum value of EDS corresponds to the most heterogeneous network, i.e., a star network. The normalized entropy of the degree sequence (NEDS) can be defined as

NEDS = (EDS_max − EDS) / (EDS_max − EDS_min).    (1.4)
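The two extremes are easy to check numerically. The following sketch (ours, not the authors'; Python, with natural logarithms as in the definitions above) evaluates EDS and NEDS for a regular network and a star:

```python
import math

def eds(D):
    """Entropy of the degree sequence, Eqs. (1.1)-(1.2):
    EDS = -sum_i I_i ln I_i with I_i = D_i / sum_j D_j."""
    total = sum(D)
    return -sum((d / total) * math.log(d / total) for d in D)

def neds(D):
    """Normalized EDS, Eq. (1.4): 0 for a regular network, 1 for a star."""
    N = len(D)
    eds_max = math.log(N)                  # I_i = 1/N, regular network
    eds_min = math.log(4 * (N - 1)) / 2    # D = (N-1, 1, ..., 1), star
    return (eds_max - eds(D)) / (eds_max - eds_min)

N = 1000
regular = [4] * N                  # any k-regular degree sequence works
star = [N - 1] + [1] * (N - 1)
print(round(neds(star), 4))          # 1.0
print(round(abs(neds(regular)), 4))  # 0.0
```

For the star, EDS = (1/2) ln 2 + (1/2) ln(2(N − 1)) = ln(4(N − 1))/2, which is exactly the EDS_min above, so NEDS evaluates to 1.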
For comparison, we present the definition of the entropy of the remaining degree distribution (ERD) of [8],

ERD = − Σ_k q(k) ln q(k),    (1.5)

where q(k) = (k + 1)P(k + 1) / Σ_j jP(j), and the entropy of the degree distribution (EDD) of [9],

EDD = − Σ_{k=1}^{N−1} P(k) ln P(k).    (1.6)
For ERD or EDD, the maximum value is ln(N − 1), obtained for P(k) = 1/(N − 1) (∀k = 1, 2, ..., N − 1), which corresponds to the most heterogeneous network, and the minimum value is 0, obtained for P(k_0) = 1 and P(k) = 0 (k ≠ k_0), which corresponds to the most homogeneous network. To be consistent with NEDS, we define the normalized entropy of the remaining degree distribution (NERD) and the normalized entropy of the degree distribution (NEDD) as

NERD = (ERD − ERD_min) / (ERD_max − ERD_min)    (1.7)

and

NEDD = (EDD − EDD_min) / (EDD_max − EDD_min).    (1.8)

Then a network becomes more heterogeneous as NEDS/NERD/NEDD increases. We use NEDS, NERD, NEDD and the standard deviation of degree σ to measure: (a) a regular network with N = 1000; (b) a random network (ER model) with N = 1000 and connection probability p = 0.3; (c) a star network with N = 1000; (d) a scale-free network (Barabási-Albert model) with N = 1000 and m_0 = m = 3. The results are shown in Table 1. With NERD or NEDD, the order of heterogeneity is: random network ≻ scale-free network ≻ star network ≻ regular network. With the standard deviation of degree σ, the order is: star network ≻ random network ≻ scale-free network ≻ regular network. With NEDS, the order is: star network ≻ scale-free network ≻ random network ≻ regular network. We are generally inclined to believe that a scale-free network is more heterogeneous than a random network, and that a star network is very heterogeneous because of the presence of the single hub node. Our measure is therefore in agreement with the usual meaning of heterogeneity in the context of complex networks, unlike the conventional measures.
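The star-network NEDD entry of Table 1 can be reproduced directly from equations (1.6) and (1.8). A small check (ours, assuming natural logarithms, EDD_min = 0 and EDD_max = ln(N − 1)):

```python
import math

# Star network with N = 1000: P(1) = (N-1)/N (leaves), P(N-1) = 1/N (hub).
N = 1000
P = {1: (N - 1) / N, N - 1: 1 / N}

# Entropy of the degree distribution, Eq. (1.6).
edd = -sum(p * math.log(p) for p in P.values())

# Normalization per Eq. (1.8), with EDD_min = 0 and EDD_max = ln(N - 1).
nedd = edd / math.log(N - 1)
print(round(nedd, 4))   # 0.0011, the star-network NEDD entry of Table 1
```

The tiny value illustrates the paper's point: by the conventional entropy measure, the star network is judged nearly maximally homogeneous.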
3
Heterogeneity of scale-free networks
To obtain the EDS of scale-free networks, we first present a theorem on the relationship between the degree distribution P(k) and the degree-rank function f(r).
Table 1: Results of comparisons between the conventional measures and ours.

                      NEDS     NERD     NEDD     σ
Regular network       0        0        0        0
Random network        0.0004   0.6451   0.6480   26.2462
Scale-free network    0.1178   0.4304   0.2960   6.8947
Star network          1        0.0502   0.0011   31.5595
Theorem 1. If the degree distribution of a network is P(k), the degree-rank function of the network is d = f(r) = N − T*, where T* satisfies ∫_1^{T*} P(k = N − s) ds = r/N.

PROOF. Since v_i is located in the s_i-th group, we obtain

Σ_{s=1}^{s_i} f_s ≥ r_i  and  Σ_{s=1}^{s_i − 1} f_s < r_i.    (1.9)

Namely, s_i is the minimum T that satisfies Σ_{s=1}^{T} f_s ≥ r_i, i.e., s_i = T_min(Σ_{s=1}^{T} f_s ≥ r_i). Noting that f_s = N · P(k = N − s), we obtain

s_i = T_min(Σ_{s=1}^{T} P(k = N − s) ≥ r_i/N).    (1.10)

Then

d_i = N − s_i = N − T_min(Σ_{s=1}^{T} P(k = N − s) ≥ r_i/N).    (1.11)

Assuming that P(k) is integrable, with a continuous approximation for the degree distribution, equation (1.11) can be written as

d_i = N − T_min(∫_1^{T} P(k = N − s) ds ≥ r_i/N).    (1.12)

Note that P(k = N − s) ≥ 0, hence ∫_1^{T} P(k = N − s) ds is an increasing function of T, leading to

∫_1^{T_min} P(k = N − s) ds = r_i/N.    (1.13)

Using equation (1.13), equation (1.12) can be expressed as

d = f(r) = N − T*,    (1.14)

where T* satisfies

∫_1^{T*} P(k = N − s) ds = r/N.    (1.15)

This proves the theorem. ∎

For scale-free networks with power-law degree distribution P(k) = Ck^{−λ}, where λ is the scaling exponent, substituting P(k) = Ck^{−λ} into equation (1.15), we have

∫_1^{T*} C(N − s)^{−λ} ds = r/N.    (1.16)

Solving equation (1.16) for T*, we have

T* = N − [ ((λ − 1)/(CN)) r + (N − 1)^{−λ+1} ]^{−1/(λ−1)}.    (1.17)

Substituting equation (1.17) into equation (1.14), we obtain the degree-rank function of scale-free networks as follows:

d = f(r) = [ ((λ − 1)/(CN)) r + (N − 1)^{−λ+1} ]^{−1/(λ−1)}.    (1.18)
Note that the scaling exponent of most real-world scale-free networks ranges between 2 and 3. We have (N − 1)^{−λ+1} → 0 as N → ∞ when λ > 2. Then equation (1.18) simplifies to

d = f(r) = C_1 r^{−α},    (1.19)

where C_1 = (CN/(λ − 1))^{1/(λ−1)} and α = 1/(λ − 1). We call α the degree-rank exponent of scale-free networks. Substituting equation (1.19) into equation (1.3), where the constant C_1 cancels, we obtain the EDS of scale-free networks as follows:

EDS = ln(Σ_{j=1}^{N} j^{−α}) + α (Σ_{i=1}^{N} i^{−α} ln i) / (Σ_{j=1}^{N} j^{−α}).    (1.20)

With a continuous approximation for the sums, we obtain the EDS of scale-free networks as a function of the degree-rank exponent α:

EDS = α N^{1−α} ln N^{1−α} / ((1 − α)(N^{1−α} − 1)) + ln((N^{1−α} − 1)/(1 − α)) − α/(1 − α).    (1.21)
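The continuous approximation in equation (1.21) can be checked against a direct evaluation of the entropy of the sequence D_i ∝ i^{−α} (the prefactor C_1 cancels in EDS). A numerical sketch (ours):

```python
import math

def eds_discrete(alpha, N):
    """EDS computed directly from the sequence D_i = i**(-alpha)."""
    D = [i ** -alpha for i in range(1, N + 1)]
    total = sum(D)
    return -sum((d / total) * math.log(d / total) for d in D)

def eds_closed_form(alpha, N):
    """Continuous approximation of Eq. (1.21)."""
    A = N ** (1 - alpha)
    return (alpha * A * math.log(A)) / ((1 - alpha) * (A - 1)) \
        + math.log((A - 1) / (1 - alpha)) - alpha / (1 - alpha)

alpha, N = 0.5, 10 ** 5   # lambda = 3 corresponds to alpha = 1/(lambda - 1) = 0.5
direct = eds_discrete(alpha, N)
approx = eds_closed_form(alpha, N)
print(abs(direct - approx) / direct)   # small relative error at large N
```

For these parameters both evaluations give EDS ≈ 11.2, with a relative discrepancy well below one percent, as expected of the continuum limit.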
Substituting α = 1/(λ − 1) into equation (1.21), we obtain the EDS of scale-free networks as a function of the scaling exponent λ:

EDS = N^{(λ−2)/(λ−1)} ln N^{(λ−2)/(λ−1)} / ((λ − 2)(N^{(λ−2)/(λ−1)} − 1)) + ln((λ − 1)(N^{(λ−2)/(λ−1)} − 1)/(λ − 2)) − 1/(λ − 2).    (1.22)

Substituting equation (1.22) into equation (1.4), we can obtain the NEDS of scale-free networks. In Figure 2, we show plots of NEDS versus λ ∈ (2, 3) for different N.
We find that a scale-free network becomes more heterogeneous as λ decreases. Moreover, the plots of NEDS for different N overlap with each other, which indicates that NEDS is independent of N, and thus NEDS is a suitable measure of heterogeneity for scale-free networks of different sizes.
Figure 2: Plots of NEDS versus λ ∈ (2, 3) for N = 10^3, 10^4, 10^5. NEDS decreases as λ increases and is independent of N.
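The trend shown in Figure 2 can be verified numerically by substituting α = 1/(λ − 1) into equation (1.21) and normalizing via equation (1.4). A small sketch (ours):

```python
import math

def neds_scale_free(lam, N):
    """NEDS of a scale-free network: Eq. (1.21) with alpha = 1/(lambda - 1),
    normalized per Eq. (1.4)."""
    alpha = 1 / (lam - 1)
    A = N ** (1 - alpha)
    eds = (alpha * A * math.log(A)) / ((1 - alpha) * (A - 1)) \
        + math.log((A - 1) / (1 - alpha)) - alpha / (1 - alpha)
    eds_max = math.log(N)
    eds_min = math.log(4 * (N - 1)) / 2
    return (eds_max - eds) / (eds_max - eds_min)

N = 10 ** 4
values = [neds_scale_free(lam, N) for lam in (2.2, 2.5, 2.8)]
print(values)   # strictly decreasing: heterogeneity grows as lambda shrinks
```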
4
Conclusion
Many unique properties of complex networks are due to their heterogeneity. The measurement and analysis of heterogeneity is important for studying the behaviours and functions of complex networks. In this paper, we have proposed the entropy of the degree sequence (EDS) and the normalized entropy of the degree sequence (NEDS) to measure the heterogeneity of complex networks. The minimum value of EDS corresponds to the most heterogeneous network, i.e., a star network, and the maximum value of EDS corresponds to the most homogeneous network, i.e., a regular network. We have measured different networks using the conventional measures and ours. The comparison shows that EDS is in agreement with the usual meaning of heterogeneity in the context of complex networks. We have studied the heterogeneity of scale-free networks using EDS and derived an analytical expression for the EDS of scale-free networks. We have demonstrated that scale-free networks become more heterogeneous as the scaling exponent decreases. We have also demonstrated that the NEDS of scale-free networks is independent of network size, which indicates that NEDS is a suitable and effective measure of heterogeneity.
Bibliography

[1] ALBERT, R., and BARABÁSI, A.-L., "Statistical mechanics of complex networks", Rev. Mod. Phys. 74 (2002), 47-97.
[2] STROGATZ, S.H., "Exploring complex networks", Nature 410 (2001), 268-276.
[3] ERDŐS, P., and RÉNYI, A., "On random graphs", Publ. Math. 6 (1959), 290-297.
[4] WATTS, D.J., and STROGATZ, S.H., "Collective dynamics of 'small-world' networks", Nature 393 (1998), 440-442.
[5] BARABÁSI, A.-L., and ALBERT, R., "Emergence of scaling in random networks", Science 286 (1999), 509-512.
[6] ALBERT, R., JEONG, H., and BARABÁSI, A.-L., "Error and attack tolerance of complex networks", Nature 406 (2000), 378-382.
[7] NISHIKAWA, T., MOTTER, A.E., LAI, Y.C., and HOPPENSTEADT, F.C., "Heterogeneity in Oscillator Networks: Are Smaller Worlds Easier to Synchronize?", Phys. Rev. Lett. 91 (2003), 014101.
[8] SOLÉ, R.V., and VALVERDE, S.V., "Information Theory of Complex Networks", Lect. Notes Phys. 650 (2004), 189.
[9] WANG, B., TANG, H.W., GUO, C.H., and XIU, Z.L., "Entropy Optimization of Scale-Free Networks' Robustness to Random Failures", Physica A 363 (2005), 591.
Chapter 10
Are technological and social networks really different?

Daniel E. Whitney
Engineering Systems Division
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]

David Alderson
Operations Research Department
Naval Postgraduate School
Monterey, CA 93943
[email protected]
The use of the Pearson coefficient (denoted r) to characterize graph assortativity has been applied to networks from a variety of domains. Often, the graphs being compared are vastly different, as measured by their size (i.e., number of nodes and arcs) as well as their aggregate connectivity (i.e., degree sequence D). Although the hypothetical range for the Pearson coefficient is [-1, +1], we show by systematically rewiring 38 example networks while preserving simplicity and connectedness that the actual lower limit may be far from -1, and also that when restricting attention to graphs that are connected and simple, the upper limit is often very far from +1. As a result, when interpreting the r-values of two different graphs it is important to consider not just their direct comparison but their values relative to the possible ranges for each respectively. Furthermore, network domain ("social" or "technological") is not a reliable predictor of the sign of r. Finally, we show that networks with observed r < 0 are constrained by their D to have a range of possible r which is mostly < 0, whereas networks with observed r > 0 suffer no such constraint. Combined, these findings say that the most minimal network structural constraint, D, can explain observed r < 0, but that network circumstances and context are necessary to explain observed r > 0.
1
Introduction
Newman [1] observed that the Pearson degree correlation coefficient r for some kinds of networks is consistently positive while for other kinds it is negative. Several explanations have been offered [2, 3]. In this paper we offer a different explanation based on embedding each subject network in the set of all networks sharing the subject network's degree sequence (denoted here as D). Our primary contribution is to show, with 38 example networks from many domains, that the degree sequence for simple and connected graphs dictates in large part the values of r that are possible. More precisely, we show that, although D does not necessarily determine the observed value of r, it conclusively determines the maximum and minimum values of r that each subject network could possibly have, found by rewiring it while preserving its D, its connectedness, and its simplicity. Approaching the problem this way reveals interesting properties of D that affect the range of possible values of r. In particular, networks with observed r < 0 have a smaller range that is all or mostly < 0, while for networks with observed r > 0 the range covers most of [-1, +1]. After studying these properties and their underlying mathematics, we ask if the alternate wirings are semantically feasible, in an effort to see how the domain of each network might additionally constrain r.¹
2
Observed data and mathematical analysis
Table 1 lists the networks studied and their properties of interest. The values of r_max and r_min were obtained by systematically rewiring each subject network while preserving connectivity and degree sequence. This type of rewiring procedure was used previously by Maslov et al. [4], who argued that graph properties such as assortativity only make sense when the graph of interest is compared to its "randomized" counterpart. The message of this paper is similar in spirit, but focuses on empirical evidence across a variety of domains. The networks in Table 1 are listed in ascending order of r. It should be clear from this table that one finds networks of various types, such as "social," "biological," or "technological," having positive or negative values of r. This indicates that networks do not "naturally" have negative r, nor is any special explanation needed for why social networks have positive r. All empirical conclusions drawn from observations are subject to change as more observations are obtained, but this is the conclusion we draw based on our data. In Table 1, the kinds of networks, briefly, are as follows: social networks are coauthor affiliations or clubs; mechanical assemblies comprise parts as nodes and joints between parts as edges; rail lines comprise terminals, transfer stations or rail junctions as nodes and tracks as edges; food webs comprise species as nodes and predator-prey relationships as arcs; software call graphs comprise subroutines as nodes and call-dependence relationships as arcs; Design Structure

¹No causality is implied. The domain may well provide the constraints that shape D. The present paper does not attempt to assign a causal hierarchy to the constraints.
Matrices (DSMs) [9] comprise engineering tasks or decisions as nodes and dependence relationships as arcs; voice/data-com systems comprise switches, routers and central offices as nodes and physical connections (e.g., wire or fiber) as arcs; electric circuits comprise circuit elements as nodes and wires as arcs; and air routes comprise airports or navigational aids as nodes and flight routes as arcs.

In Table 1 we introduce the notion of elasticity, defined here as e = |r_max − r_min|/2, which reflects the possible range of r relative to the maximum range [−1, 1] over all networks having the same degree sequence. We call a degree sequence with large e elastic, while a degree sequence with small e is called rigid. The vastly different observed ranges for possible values of r can be explained by a closer look at the respective degree sequences for each network and the way in which they constrain graph features as a whole. In the remainder of this paper, when referring to the degree sequence D of a graph, we mean a sequence {d_1, d_2, ..., d_n}, always assumed to be ordered d_1 ≥ d_2 ≥ ... ≥ d_n without loss of generality. The average degree of the network is simply ⟨d⟩ = n^{−1} Σ_{i=1}^{n} d_i. For the purposes of this paper, we define the Pearson coefficient (known more generally as the correlation coefficient [5]) as

r = [ m^{−1} Σ_{(i,j)} d_i d_j − d̄² ] / [ m^{−1} Σ_{(i,j)} (d_i² + d_j²)/2 − d̄² ],    (1.1)

where m is the total number of links in the network, the sums run over the edges (i, j), and d̄ = m^{−1} Σ_{(i,j)} (d_i + d_j)/2 = Σ_k d_k²/(2m). Here, d̄ is the average degree of a node seen at the end of a randomly selected edge. Observe that d̄ ≠ ⟨d⟩. In fact, d̄ = (n^{−1} Σ_i d_i²)/(n^{−1} Σ_j d_j) = ⟨d²⟩/⟨d⟩, so d̄ is a measure of the amount of variation in D. Figure 1 shows that the most rigid D are characterized by a few dominant nodes of relatively high degree, with the remaining vast majority of nodes having relatively low degree, equivalent to a small supply of available edges and implying a small value of ⟨d⟩. This gives D a rather "peaked" appearance. By comparison, the more elastic D have a more gradually declining degree profile. The importance of d̄ in determining r can easily be seen from Equation 1.1. Positive r is driven by having many nodes with d_i > d̄ that can connect to one another. However, for networks with large d̄, there are typically fewer such nodes, and thus many more connections in which d_i > d̄ but d_j < d̄. The implication is that for highly variable D in which there are only a few dominant high-degree nodes larger than d̄, most connections in the network will be of this latter type, and r will likely be negative. This line of reasoning is suggestive rather than conclusive, yet as a heuristic it succeeds in distinguishing the rigid D from the elastic D studied here. The observed values of r, r_max, and r_min from Table 1 are plotted in Figure 2. The range [r_min, r_max] provides the background against which the observed r should be compared, not [−1, +1]. When the observed r < 0, the whole range is
Network
Karate Club
"Erdos Network" (Tirole)
"Erdos Network" (Stiglitz)
Scheduled Air Routes, US
Littlerock Lake* food web
Grand Piano Action 1 key
Santa Fe coauthors
V8 engine
Grand Piano Action 3 keys
Abilene-inspired toynet (Internet)
Bike
Six speed transmission
"HOT"-inspired toynet (Internet)
Car Door* DSM
Jet Engine* DSM
TV Circuit*
Tokyo Regional Rail
FAA Nav Aids, Unscheduled
Mozilla, 19980331* software
Canton food web*
Mozilla, all components*
Munich Schnellbahn Rail
FAA Nav Aids, Scheduled
St. Marks* food web
Western Power Grid
Unscheduled Air Routes, US
Apache software call list*
Physics coauthors
Tokyo Regional Rail + Subways
Traffic Light controller* (circuit)
Berlin U- & S-Bahn Rail
London Underground
Regional Power Grid
Moscow Subways
Nanobell (telephone)
Broom food web*
Company directors
Moscow Subways + Regional Rail
n
m
34 93 68 249 92
78 149 85 :1389 997 92 198 367 242 896 208 244 1049 2128 6:l9 1050 204 7635 4077 697 4129 65 4444 221 6594 5384
71
118 243 177 886 131 143 1000 649 60 329 147 2669 811 102 1187 50 1787 48 4941 900 62
365
346 191 300 133 255 III 75 92 139 1658 2589 51 82 104 121 82 223 6731 50775 129 204 145
⟨d⟩ | fraction of nodes with d_i > d̄ | r_min | r | r_max
4.5882 0.1471 -0.806 -0.4756 -0.0139 3.204 0.0645 -0.6344 -0.4412 0.OHl7 2.50 0.0882 -0.6417 -0.4366 -0.0528 27.22 -0.39 -0.3264 10.837 0.337 2.59 0.197 -0.7262 -0.3208 0.8955 3.3559 0.0593 -0.5098 -0.2916 0.1412 3.01 0.0122 -0.2932 -0.269 -0.1385 2.73 0.2034 -0.5375 -0.227 0.7461 2.023 0.0158 -0.2300 -0.2239 -0.0379 3.1756 0.0458 -0.435 -0.2018 0.18 3.4126 0.1 -0.3701 -0.1833 0.3431 2.098 0.0170 -0.1847 -0.1707 -0.009 3.279 -0.1590 -0.1345 10.65 6.383 0.018 -0.109 2.775 0.3401 -0.8779 -0.0911 0.6820 5.72 -0.0728 5.0271 0.0259 -0.0499 6.833 0.157 -0.0694 3.4785 -0.0393 2.6 -0.8886 -0.0317 0.4870 0.34 4.974 -0.0166 4.602 0.146 -0.0082 2.6691 0.2022 -0.69 0.0035 0.9 11.96 0.0045 5.88 0.007 4.7724 0.1517 -0.652 0.0159 0.553 3.1414 0.4188 -0.8864 0.0425 0.8467 1.9173 0.0614 2.96 0.48 -0.778 0.0957 0.5051 0.413 -0.9257 0.0997 0.7589 3.02 3.117 0.1695 -0.5095 0.1108 0.8096 3.216 0.1765 -0.9958 0.1846 0.7758 2.327 0.2196 2.623 0.2301 15.09 0.1703 -0.65 0.2386 0.89 -0.8970 0.2601 0.7641 3 0.4191
"elasticity" IT max - Tminl
2 0.396 0.3073 0.2945
0.8108 0.325 0.07735 0.6418 0.096 0.3075 0.3565 0.08785
0.7799
0.6878 0.795 0.6025 0.8665 0.6415 0.8423 0.6596 0.8613 0.77 0.8:105
Table 1: Networks Studied and Some of Their Properties, Ordered by Increasing Pearson Degree Correlation r. Each network is simple, connected, and undirected unless marked *. In the case of the physics coauthors, company directors, and software, only the largest connected component is analyzed. Table omissions correspond to cases where only summary statistics (and not the entire network) were available or where the network was directed (complicating the calculation and interpretation of d̄). Social networks were obtained from published articles and data available directly from researchers; their definitions of node and edge were used. The Santa Fe researchers data were taken from Figure 6 of [7]. Air route and navigational aid data were taken from FAA databases. Mechanical assemblies were analyzed using drawings or exploded views of products. DSM data were obtained by interviewing participants in the design of the respective products. Rail and subway lines were analyzed based on published network maps available in travel guides and web sites. Food web data represent condensation to trophic species. Software call list data were analyzed using standard software analysis tools. The traffic light control circuit is a standard ISCAS89 benchmark circuit. "Nanobell" is a modern competitive local exchange carrier operating in one state with a fiber optic loop network architecture; its positive value of r reflects this architecture. The regional Bell operating company (RBOC) that operates in the same state has a legacy copper wire network that reflects the tree-like architecture of the original AT&T monopoly, and in this state its network's r is -0.6458. This statistic is based on ignoring all links between central offices. Adding 10% more links at random between known central offices brings r up to zero. The RBOC would not divulge information on these links for competitive reasons.
wholly or mostly < 0. When the observed r > 0, the whole range approximates [−1, +1]. Networks of all types may be seen across the whole range of r in this figure.
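The edge-based form of Equation 1.1 is straightforward to evaluate. The following sketch (ours, not the authors' code) computes r for a star graph, the maximally disassortative case, and checks the identity d̄ = ⟨d²⟩/⟨d⟩:

```python
def pearson_r(edges, deg):
    """Degree-degree Pearson coefficient (Equation 1.1), one term per
    undirected edge (i, j)."""
    m = len(edges)
    sp = sum(deg[i] * deg[j] for i, j in edges) / m
    sm = sum((deg[i] + deg[j]) / 2 for i, j in edges) / m   # this is d-bar
    sq = sum((deg[i] ** 2 + deg[j] ** 2) / 2 for i, j in edges) / m
    return (sp - sm ** 2) / (sq - sm ** 2)

# star on 4 nodes: hub 0 joined to leaves 1..3; every edge pairs degree 3 with 1
edges = [(0, 1), (0, 2), (0, 3)]
deg = {0: 3, 1: 1, 2: 1, 3: 1}
print(pearson_r(edges, deg))   # -1.0 for any star

# d-bar = <d^2>/<d>, which exceeds the plain average degree <d> = 1.5
d_bar = sum(d * d for d in deg.values()) / sum(deg.values())
print(d_bar)                   # 2.0
```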
[Figure 1 shows the degree profiles (number of nodes versus fraction of nodes) of two networks. Left panel, V8 engine: only 1.2% of nodes have d_i > d̄, and nodes having d_i > d̄ mostly connect with nodes having d_j < d̄ because there are so few of them; there are only a few connections among nodes having d_i > d̄. Right panel, physics coauthors: 15.2% of nodes have d_i > d̄, and nodes having d_i > d̄ can connect with nodes having d_j > d̄ and d_j < d̄.]
Figure 1: Degree Profiles of Two Networks in Table 1. A greater fraction of nodes have d_i > d̄ in the physics coauthors (right) than in the V8 engine (left), consistent with increasing elasticity.
Figure 2: Relationship Between r and its Range.
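The rewiring procedure used to obtain r_max and r_min can be sketched as a simple hill-climb: repeatedly propose random double-edge swaps, which preserve every degree exactly, and keep a swap only if the graph stays simple and connected and r moves in the desired direction. A minimal illustration (ours, not the authors' implementation; being a stochastic search, it only bounds r_max from below):

```python
import random

def pearson_r(edges, deg):
    """Degree-degree Pearson coefficient (Equation 1.1), one term per edge."""
    m = len(edges)
    sp = sum(deg[i] * deg[j] for i, j in edges) / m
    sm = sum((deg[i] + deg[j]) / 2 for i, j in edges) / m
    sq = sum((deg[i] ** 2 + deg[j] ** 2) / 2 for i, j in edges) / m
    return (sp - sm ** 2) / (sq - sm ** 2)

def connected(edges, nodes):
    """Depth-first search check that the edge set spans all nodes."""
    adj = {v: set() for v in nodes}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def estimate_r_max(edges, deg, steps=2000, seed=0):
    """Greedy double-edge swaps (a,b),(c,d) -> (a,c),(b,d): degrees are
    preserved; a swap is kept only if the graph stays simple and
    connected and r increases."""
    rng = random.Random(seed)
    edges = list(edges)
    nodes = set(deg)
    edge_set = {frozenset(e) for e in edges}
    best = pearson_r(edges, deg)
    for _ in range(steps):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, c)) in edge_set or frozenset((b, d)) in edge_set:
            continue  # swap would create a multi-edge
        removed = {frozenset((a, b)), frozenset((c, d))}
        trial = [e for e in edges if frozenset(e) not in removed] + [(a, c), (b, d)]
        if not connected(trial, nodes):
            continue  # connectedness must be preserved
        r = pearson_r(trial, deg)
        if r > best:
            best, edges = r, trial
            edge_set = {frozenset(e) for e in edges}
    return best, edges

# usage sketch: a 3 x 4 grid graph (12 nodes, 17 edges)
grid = [(r * 4 + c, r * 4 + c + 1) for r in range(3) for c in range(3)] \
     + [(r * 4 + c, (r + 1) * 4 + c) for r in range(2) for c in range(4)]
deg = {}
for i, j in grid:
    deg[i] = deg.get(i, 0) + 1
    deg[j] = deg.get(j, 0) + 1
r0 = pearson_r(grid, deg)
r_best, rewired = estimate_r_max(grid, deg, steps=500, seed=1)
```

Negating the acceptance test (keep swaps that decrease r) gives the analogous lower-bound search for r_min.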
3
Domain analysis
The preceding data and indicators lead us to a striking conclusion: in some cases, whether a network has r < 0 or r > 0 may simply be a function of the network's degree sequence D itself. For example, if the entire range of allowable r is negative, then no domain-specific "explanation" is required to justify why the network has r < 0. Networks with rigid D are obviously more constrained than those with elastic D, and why a particular network having an elastic D gives rise to a particular r-value when a large range is mathematically possible remains an important question. It then makes sense to ask whether the mathematical range of possible values of r is in fact plausible for a functioning system. Stated another way: do the domain-specific features of the system necessarily constrain the network to the observed (or nearby) r-values?
For the mechanical assemblies, the answer is that not all values of r within the possible range correspond to functioning systems. The rewired bikes are not different bikes, but meaningless snarls of spokes, pedals, wheels, brake cables, and so on. These networks are not only constrained by rigid D, they are functionally intolerant of the slightest rewiring. But the rewired coauthor networks, even at their extremes of positive and negative r, represent plausible coauthorship scenarios. A negative-r scenario could arise in classic German university research institutes, where each institute is headed by a professor whose name is on every paper that the institute publishes. Some of the coauthors go on to head their own institutes and ultimately have many coauthors themselves, while the majority of the others go into industry and publish few papers after graduating. The result is a network with relatively few high-degree nodes connected to many low-degree nodes and only one, if any, connection to other high-degree nodes, leading to negative r. The fact that coauthorship and other social networks have been found with negative r shows that such plausible scenarios might actually exist. The opposite scenario could be observed at a large research institute devoted to biomedical research, where huge efforts by many investigators are needed to create each publication, and there are often 25 or 30 coauthors on each paper.² If such groups produce a series of papers, the result will be a coauthor network with positive r. The same may be said of the Western Power Grid, where the observed connections are no more necessary than many other similar hookups. In [8] it was shown that a communication network with a power-law degree sequence could be rewired to have very different r-values and structure, and that the different structures could display very different total bandwidth capacity.
While all these networks are feasible (i.e., they could be built from existing technology), engineering and economic criteria preclude some as "unrealistic" (i.e., prohibitively high cost and/or poor performance). Interestingly, the observed structure strongly resembles the planned form of the AT&T long distance network as of 1930 [6]. Large mechanical assemblies like the V8 and the walker have a few high-degree nodes that support the large forces and torques that occur in these devices. The six speed transmission and the bike similarly support large forces and torques but have a larger number of load-bearing parts and consequently fewer edges impinging on those parts. Assemblies with rigid parts are severely restricted in the allowed magnitude of ⟨d⟩ by their need to avoid over-constraint in the kinematic sense [10]. Elastic parts do not impose the severe mechanical constraint on their neighbors that rigid ones do, so the limit on ⟨d⟩ is not as severe. The entries in Table 1 bear this out. In the bike, the parts that create elasticity are the spokes, while in the transmission they are thin clutch plates. Both kinds of parts appear in large numbers and connect to parts like wheel rims, hubs, and main foundation castings without imposing undue mechanical constraint. For these reasons the bike and the transmission have less peaked D, larger d̄, and offer more options for rewiring. Nonetheless, all of these rewirings are implausible

²For all 55 reports published in Science in the summer and fall of 2005, the average number of authors is 6.9 with a standard deviation of 6.
and are not observed in practice. Transportation networks may be tree-like or mesh-like, depending on the constraints and objectives under which they were designed or evolved, as the case may be. It is easy to show that regular trees have negative r while meshes have positive r. Planned urban rail and subway systems increasingly include circle lines superposed on a closely knit mesh, tending to push r toward positive values. If a simple grid is rewired to have respectively minimum and maximum r, we can easily imagine geographic constraints that make the rewired versions plausible, as shown in Figure 3.
[Figure 3 panels: Rural Roads, r = 0.2941; Region Divided by a River, r = -0.7647; Old City on Hill, New Suburbs Outside, r = 0.7351.]
Figure 3: Three Road Systems. Left: a simple grid, typical of roads in Iowa or Nebraska. Center: the grid rewired to have minimum degree correlation, reflecting roads in a region divided by a large river or mountain range. Right: the grid rewired to have maximum degree correlation, reflecting an old European city as a citadel on high ground surrounded by suburbs with a geographically constrained road system.
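The claim that regular trees have negative r while meshes have positive r is easy to verify with the Pearson formula of Section 2. A small sketch (ours) using a depth-3 complete binary tree and a 3 × 4 grid:

```python
def pearson_r(edges, deg):
    """Degree-degree Pearson coefficient (Equation 1.1), one term per edge."""
    m = len(edges)
    sp = sum(deg[i] * deg[j] for i, j in edges) / m
    sm = sum((deg[i] + deg[j]) / 2 for i, j in edges) / m
    sq = sum((deg[i] ** 2 + deg[j] ** 2) / 2 for i, j in edges) / m
    return (sp - sm ** 2) / (sq - sm ** 2)

def degrees(edges):
    deg = {}
    for i, j in edges:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    return deg

# complete binary tree with 15 nodes (root 1; node i has children 2i, 2i+1)
tree = [(i, 2 * i) for i in range(1, 8)] + [(i, 2 * i + 1) for i in range(1, 8)]
# 3 x 4 grid: 12 nodes, 17 edges
grid = [(r * 4 + c, r * 4 + c + 1) for r in range(3) for c in range(3)] \
     + [(r * 4 + c, (r + 1) * 4 + c) for r in range(2) for c in range(4)]

print(pearson_r(tree, degrees(tree)))   # negative (= -81/157, about -0.516)
print(pearson_r(grid, degrees(grid)))   # positive (= 1/8)
```

The tree's hub-and-leaf edges drive r below zero, while the grid's many interior degree-3 and degree-4 adjacencies pull r above zero, matching the left and center/right panels of Figure 3 in sign.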
4
Conclusions
This paper studied simple connected networks of various types and investigated the extent to which their degree sequences determined the observed value of r or the range of mathematically feasible values of r that they could exhibit. We found that certain characteristics of D, mainly a few dominant high-degree nodes, small ⟨d⟩, and large d̄ relative to ⟨d⟩, give rise to observed r < 0 and constrain r to a narrow range comprising mostly negative values. It is then of domain interest to understand why a particular network has a degree sequence D with these characteristics. For the rigid assembly networks, this can be traced to the fact that they must support large forces and torques and that they have a few high-degree parts that perform this function while supporting the rest of the parts. For rigid social networks like the Karate Club and the Tirole and Stiglitz coauthorship networks, it can be traced to the presence of one or a few dominant individuals who control the relationships represented in the network. For networks whose D does not have these restrictive characteristics, the observed value of r, while usually > 0, may not mean anything from either a mathematical point of view (because a wide range of r of both signs is mathematically feasible) or from a domain point of view (because other rewirings with very different r exist or are plausible). Thus, our findings contradict the
claim made in [3], namely that "Left to their own devices, we conjecture, networks normally have negative values of r. In order to show a positive value of r, a network must have some specific additional structure that favors assortative mixing." The examples in this paper disaffirm such generalizations and suggest instead that the observed r for any network should not be compared to [−1, +1] but rather to the allowed range of r for that network, based on its D. Similar arguments based on graph-theoretic properties are made in [11].
5
Acknowledgments
The authors thank J. Park, J. Davis, and J. Dunne for sharing and explaining physics coauthor, company director, and food web data, respectively, and S. Maslov, G. Bounova and M.-H. Hsieh for sharing and explaining valuable Matlab routines. The authors also thank J. Noor, K. Steel, and K. Tapia-Ahumada for data on the regional power grid, P. Bonnefoy and R. Weibel for data on the US air traffic system, C. Vaishnav, J. Lin and D. Livengood for several telephone networks, N. Sudarsanam for the traffic light controller, C. Baldwin and J. Rusnak for Apache and Mozilla call list data, and C. Rowles and Eric McGill for DSM data. The authors thank C. Magee and L. Li for valuable discussions.
Bibliography
[1] Newman, M., 2003, The structure and function of complex networks, SIAM Review 45, 167.
[2] Maslov, S., and Sneppen, K., 2002, Specificity and Stability in Topology of Protein Networks, Science 296, 910-913.
[3] Newman, M. E. J., and Park, J., 2003, Why Social Networks are Different from Other Types of Networks, Physical Review E 68, 036122.
[4] Maslov, S., Sneppen, K., and Zalianyzk, A., 2004, Detection of topological patterns in complex networks: correlation profile of the internet, Physica A 333, 529-540.
[5] Eric W. Weisstein, Correlation Coefficient, From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/CorrelationCoefficient.html
[6] Fagen, Ed., 1975, A History of Engineering and Science in The Bell System, The Early Years (1875-1925), Bell Telephone Laboratories.
[7] Girvan, M., and Newman, M. E. J., 2002, Community Structure in Social and Biological Networks, PNAS 99, 12, 7821-7826.
[8] Li, L., Alderson, D., Doyle, J. C., and Willinger, W., 2006, Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications, Internet Mathematics 2, 4, 431-523.
[9] Steward, D. V., 1981, Systems Analysis and Management: Structure, Strategy, and Design, PBI (New York).
[10] Whitney, D. E., 2004, Mechanical Assemblies and their Role in Product Development, Oxford University Press (New York).
[11] Alderson, D., and Li, L., 2007, Diversity of graphs with highly variable connectivity, Phys. Rev. E 75, 046102.
Chapter 11

Evolutional family networks generated by group-entry growth mechanism with preferential attachment and their features

Takeshi Ozeki
Department of Electrical and Electronics Engineering, Sophia University, 7-1 Kioicho, Chiyodaku, Tokyo, 102-8554, Japan
[email protected]

The group-entry growth mechanism with preferential attachment generates a new class of scale-free networks, for which the exact asymptotic connectivity distribution and generation function are derived. These networks evolve from aristocratic networks to egalitarian networks, with asymptotic power-law exponent γ = 2 + M depending on the size M and the topology of the constituent groups. The asymptotic connectivity distribution fits the numerical simulation very well, even in the region of smaller degrees. It is then demonstrated that small networks can be analysed to find their growth-mechanism parameters using asymptotic connectivity distribution templates in the region of smaller degrees, where it is easy to satisfy a statistical level of significance. This approach is believed to open a new search for scale-free networks in the real world. As an example of an evolutional family network in the real world, the Tokyo Metropolitan Railway Network is analysed.
1 Introduction

The science of scale-free networks [Barabasi 2002, Buchanan 2002, Newman 2006] is expected to provide powerful methods to analyse the network characteristics of complex organizations in real-world systems, such as epidemics on the Internet [Newman 2006, Ch. 4] and the dependability of social infrastructure networks [Mitchell 2003]. Evolutional network modelling is desirable for analysing how such characteristics depend on network topology. The Watts-Strogatz small world [Watts 1998] evolves from regular lattice networks to Erdos-Renyi random networks as the probability of randomly rewiring links is adjusted [Erdos 1960]. The Watts-Strogatz small world, having a fixed number of nodes, is treated as a static network. On the other hand, the scale-free network of Barabasi and Albert (the BA model) introduces the concept of growing networks with preferential attachment [Barabasi 1999]. One characterization of a network is its connectivity distribution P(k), the probability that a node has k degrees (that is, k links). In scale-free networks based on the BA model, the connectivity distribution follows a power law, P(k) ~ k^{-γ}, with exponent γ = 3. Analyses of real-world complex networks have revealed various scale-free networks with various exponents, as covered in the references [Barabasi 2002, Buchanan 2002, Newman 2006]. For example, it is well known that social infrastructure networks such as power grids, as egalitarian networks, follow a power law with exponent 4 [Barabasi 1999]. Many models have been proposed to generate the larger exponents needed to fit such real-world networks [10-17]: Dorogovtsev et al. [Dorogovtsev 2000] modified the preferential attachment probability to Π(k) ∝ am + k and derived the exact asymptotic solution of the connectivity distribution, showing the wide range of exponents
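As a reference point for the growth models discussed in this chapter, the plain BA rule can be sketched as follows. This is a generic illustration using the standard urn trick, not the group-entry model proposed here; all names are our own:

```python
import random

def ba_degrees(n=3000, m=2, seed=42):
    """Barabasi-Albert growth: each new node links to m distinct
    existing nodes chosen with probability proportional to degree.
    Urn trick: a node appears in `urn` once per unit of degree, so
    a uniform draw from the urn is a preferential draw."""
    rng = random.Random(seed)
    deg = {i: m for i in range(m + 1)}   # complete seed graph on m+1 nodes
    urn = [i for i in range(m + 1) for _ in range(m)]
    for v in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(urn))
        for u in chosen:
            deg[u] += 1
            urn.append(u)
        deg[v] = m
        urn.extend([v] * m)
    return deg

deg = ba_degrees()
```

A histogram of `deg.values()` decays roughly as k^{-3}, the BA exponent quoted above, with a few large hubs (the "aristocratic" extreme of the spectrum discussed in this chapter).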
γ = a + 2, where a is the attractiveness and m is the degree of a new node. A similar model modifying the probability of preferential attachment has been successfully applied to analyse the power-law behaviour of betweenness as a measure of network load distribution, for instance [Goh 2001]. The BA model with modified preferential attachment probability is practical for fitting the exponents of real-world networks, but much work is needed to identify the physical causes of such a preferential attachment probability. In particular, it is difficult in such models to find parametric relations with network topology. In this report, "evolutional family networks" generated by "a group-entry growth mechanism" with preferential attachment are proposed. This is a modification of the BA model in "the growth mechanism": the basic BA model assumed "one node" joining the network at each time step. This is the first model to generate a new kind of scale-free network with exponents depending on the topological parameters of the constituent groups. One can picture this new growth mechanism through Granovetter's famous social world [Granovetter 1973], i.e., a social world that grows by the entry of a family or a group. The family members are initially connected strongly with
Fig. 5: Tokyo Metropolitan Networks (connectivity distribution vs. number of links k)
As a real-world network analysis using the template, the Tokyo metropolitan railway system described in reference [21] is analysed by a statistically combined growth mechanism with line and loop family networks. Fig. 5 depicts the connectivity distribution of the central part of the Tokyo Metropolitan Railway System, in which the numbers of stations and links are 736 and 1762, respectively. The number of links is counted topologically: that is, we count the number of links between Tokyo and Kanda as 1, even though we
have three double railways between them, for example in reference [22]. The fit with N = 3 and δ = 0.75 is excellent, which suggests that the growth mechanism coincides with that of the evolutional family networks: in the construction of a railway system, a number of stations forming a group with a line or loop topology are installed simultaneously. The exponent measured at degree 8 is 4, which coincides with that of the power grids of northern America [Barabasi 1999]. The number of nodes in the constituent line and loop networks is N = 3, which is reasonable for the central part of Tokyo given its complexity. Thus the Tokyo metropolitan railway network is a real-world network well suited to fitting by the modified evolutional family network model with loop topology.
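As a rough illustration of the group-entry idea, the toy sketch below grows a network family-by-family. The entry rules are our own simplifying assumptions (each family of M nodes fully meshed internally, one member attached preferentially), not necessarily the exact mechanism analysed in this chapter:

```python
import random

def family_network(groups=200, M=3, seed=0):
    """Toy group-entry growth: each incoming family of M nodes is
    fully meshed internally (assumed full-mesh topology), and one
    member attaches to an existing node chosen with probability
    proportional to its degree."""
    rng = random.Random(seed)
    # start from a single fully meshed family
    edges = [(i, j) for i in range(M) for j in range(i + 1, M)]
    deg = [M - 1] * M
    for _ in range(groups - 1):
        base = len(deg)
        new = list(range(base, base + M))
        for x in range(M):                 # internal full mesh
            for y in range(x + 1, M):
                edges.append((new[x], new[y]))
        deg.extend([M - 1] * M)
        # preferential attachment of one member to the old network
        total = sum(deg[:base])
        r = rng.uniform(0, total)
        acc = 0.0
        for i in range(base):
            acc += deg[i]
            if acc >= r:
                break
        edges.append((new[0], i))
        deg[new[0]] += 1
        deg[i] += 1
    return deg, edges

deg, edges = family_network()
```

Every node enters with at least M - 1 internal links, which is why the connectivity distribution of such networks is well behaved in the region of smaller degrees, the region the template-fitting approach exploits.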
6 Discussions

The asymptotic connectivity distribution function of the evolutional family network satisfies the conditions of a statistical distribution function. Network parameters such as the average degree ⟨k⟩, the standard deviation σ, the clustering coefficient [Satorras 2004], and the network diameter of the evolutional family networks are listed in Table 1, calculated by numerical simulation with N₀ = 1000. The standard deviation σ for M = 1 is infinite, corresponding to the BA model. The regularity of the network increases as M increases. The clustering coefficient increases as M increases and is approximated by (1 - 2/M) for larger M. The diameter in Table 1 is the number of hops needed to reach all nodes from the largest hub node, which is compared with the generation-function method of approximately counting the number of neighbours to estimate the average path length in reference [Newman 2001].

[Table 1: Full-mesh family network features (σ/⟨k⟩, clustering coefficient, and diameter for M = 1, ..., 5)]

These characteristic parameters suggest that the evolutional family networks evolve from the Barabasi-Albert scale-free network to a new class of scale-free networks characterized by a larger clustering coefficient, small diameter, and high regularity, with σ/⟨k⟩ decreasing as M increases. The evolutional family network is thus a new class of scale-free networks, distinct from the Watts-Strogatz small worlds even at the higher regularity of larger constituent family size M.
7 Conclusion

The group-entry growth mechanism with preferential attachment generates a new class of scale-free networks, for which the exact asymptotic connectivity distribution and generation function are derived. They evolve from aristocratic networks to egalitarian networks with asymptotic power-law exponent γ = 2 + M, depending on the size M and the topology of the constituent groups. The asymptotic connectivity distribution fits the numerical simulation very well, even in the region of smaller degrees. It is then demonstrated that small networks can be analysed to find their growth-mechanism parameters using asymptotic connectivity distribution templates in the region of smaller degrees, where it is easy to satisfy a statistical level of significance. This approach is believed to open a new search for scale-free networks in the real world. As an example of an evolutional family network in the real world, the Tokyo Metropolitan Railway Network is analysed.
Bibliography
[1] Barabasi, A. L., "Linked", A Plume Book (2002)
[2] Buchanan, M., "Nexus", Norton & Company Ltd., New York (2002)
[3] Newman, M., Barabasi, A. L., and Watts, D. J., "The Structure and Dynamics of Networks", Princeton Univ. Press (2006)
[4] Chapter 4, pp. 180-181, ibid.
[5] Mitchell, William J., "Me++", MIT Press (2003)
[6] Watts, D. J., and Strogatz, S. H., Collective dynamics of "small-world" networks, Nature 393, 440-442 (1998)
[7] Erdos, P., and Renyi, A., Publ. Math. Inst. Acad. Sci. 5, 17 (1960)
[8] Barabasi, A. L., and Albert, R., Emergence of scaling in random networks, Science 286, 509 (1999)
[9] Barabasi, A. L., Albert, R., and Jeong, H., Mean field theory of scale-free random networks, Physica A 272, 173 (1999)
[10] Barabasi, A. L., Ravasz, E., and Vicsek, T., Deterministic scale-free networks, Physica A 299, 559-564 (2001)
[11] Arenas, A., Diaz-Guilera, A., and Guimera, R., Communication in networks with hierarchical branching, Phys. Rev. Lett. 86, 3196 (2001)
[12] Albert, R., and Barabasi, A. L., Topology of evolving networks, Phys. Rev. Lett. 85, 5234-5237 (2000)
[13] Mathias, N., and Gopal, V., Small worlds: How and why, Phys. Rev. E 63, 021117 (2001)
[14] Dorogovtsev, S. N., Mendes, J. F. F., and Samukhin, A. N., Structure of growing networks with preferential linking, Phys. Rev. Lett. 85, 4633-4636 (2000)
[15] Krapivsky, P. L., Redner, S., and Leyvraz, F., Connectivity of growing random networks, Phys. Rev. Lett. 85, 4629 (2000)
[16] Goh, K. I., Kahng, B., and Kim, D., Universal behavior of load distribution in scale-free networks, Phys. Rev. Lett. 87, 278701 (2001)
[17] Alava, M. J., and Dorogovtsev, S. N., Complex networks created by aggregation, Phys. Rev. E 71, 036107 (2005)
[18] Granovetter, M., The strength of weak ties, American Journal of Sociology 78, 1360-1380 (1973)
[19] Newman, M. E. J., Strogatz, S. H., and Watts, D. J., Scientific collaboration networks, Phys. Rev. E 64, 026118 (2001)
[20] Moriguchi, S., Udagawa, K., and Ichimatsu, S., The Mathematical Formula II, Iwanami Pub. Co., Tokyo (1956)
[21] Rail map of Tokyo (Shoubunsha Publications, 2004), ISBN 4-398-72008-1
[22] The number of stations with k = 1 includes some stations counted as end stations at the zone boundary for the central part of Tokyo.
[23] Satorras, R. P., and Vespignani, A., "Evolution and Structure of the Internet", Cambridge Univ. Press (2004)
Chapter 12
Estimating the dynamics of kernel-based evolving networks

Gabor Csardi
Center for Complex Systems Studies, Kalamazoo, MI, USA, and Department of Biophysics, KFKI Research Institute for Particle and Nuclear Physics of the Hungarian Academy of Sciences, Budapest, Hungary
[email protected]
Katherine Strandburg
DePaul University College of Law, Chicago, IL, USA
Laszlo Zalanyi
Department of Biophysics, KFKI Research Institute for Particle and Nuclear Physics of the Hungarian Academy of Sciences, Budapest, Hungary
Jan Tobochnik
Department of Physics and Center for Complex Systems Studies, Kalamazoo College, Kalamazoo, MI, USA
Peter Erdi
Center for Complex Systems Studies, Kalamazoo College, Kalamazoo, MI, USA
In this paper we present the application of a novel methodology to scientific citation and collaboration networks. This methodology is designed for understanding the governing dynamics of evolving networks and relies on an attachment kernel, a scalar function of node properties that stochastically drives the addition and deletion of vertices and edges. We illustrate how the kernel function of a given network can be extracted from the history of the network and discuss other possible applications.
1 Introduction
The network representation of complex systems has been very successful. The key to this success is universality in at least two senses. First, the simplicity of representing complex systems as networks makes it possible to apply network theory to very different systems, ranging from the social structure of a group to the interactions of proteins in a cell. Second, these very different networks show universal structural traits such as the small-world property and the scale-free degree distribution [16, 3]. See [1, 13] for reviews of complex network research. Usually it is assumed that the life of most complex systems is defined by some, often hidden and unknown, underlying governing dynamics. These dynamics are the answer to the question 'How does it work?', and a fair share of scientific effort is devoted to uncovering them. In the network representation the life of a (complex or not) system is modeled as an evolving graph: sometimes new vertices are introduced to the system while others are removed, new edges are formed, others break, and all these events are governed by the underlying dynamics. See [5, 12, 2] for data-driven network evolution studies. This paper is organized as follows. In Section 2 we define a framework for studying the dynamics of two types of evolving networks and show how these dynamics can be measured from data. In Section 3 we present two applications, and finally in Section 4 we discuss our results and other possible applications.
2 Modeling evolving networks by attachment kernels
In this section we introduce a framework in which the underlying dynamics of evolving networks can be estimated from knowledge of the time dependence of the evolving network. This framework is a discrete-time model, where time is measured by the different events happening in the network. An event is a structural change: vertex and/or edge additions and/or deletions. The interpretation of an event depends on the system we're studying; see Section 3 of this paper for two examples. The basic assumption of the model is that edge additions depend on some properties of the vertices of the network. Such a property can be a structural one, such as the degree of a vertex or its clustering coefficient, but also an intrinsic one, such as the age of a person in a social network or her yearly net income. The model is independent of the meaning of these properties. The vertex properties drive the evolution of the network stochastically through an attachment kernel, a function giving the probabilities for any new edges which might be added to the network. See [9] for another possible application of attachment kernels. In this paper we specify the model framework for two special kinds of networks, citation and non-decaying networks; more general results will be published in forthcoming publications.
2.1 Citation networks
Citation networks are special evolving networks. In a citation network, at each time step (event) a single new node is added to the network together with its edges (citations). Edges between "old" nodes are never introduced, and there are no edge or vertex deletions either. For simplicity let us assume that the attachment kernel A(·) depends on only one property of the potentially cited vertices, their degree. (The formalism can be generalized easily to include other properties as well.) We assume that the probability that at time step t an edge e of a new node will attach to an old node i with degree d_i is given by
P[e cites i] = A(d_i(t)) / Σ_{k=1}^{t} A(d_k(t))    (1)
The denominator is simply the sum of the attachment kernel function evaluated at every node of the network in the current time step. With this simple equation the model framework for citation networks is defined: we assume that in each time step a single new node is attached to the network and that it cites other, older nodes with the probability given by (1). For a given citation network we can use this model to estimate the form of the kernel function from data about the history of the network. In this paper we only give an overview of this estimation process; please see [7] for the details. Based on (1), the probability that an edge e of a new node at time t cites an old node with degree d is given by
P[e cites a d-degree node] = P_e(d) = A(d) N_d(t) / S(t),    S(t) = Σ_{k=1}^{t} A(d_k(t))    (2)

N_d(t) is the number of d-degree nodes in the network in time step t. From here we can extract the A(d) kernel function:

A(d) = P_e(d) S(t) / N_d(t)    (3)
If we know S(t) and N_d(t), then by estimating P_e(d) from the network data we have an estimate for A(d) via (3), and by doing this for each edge and degree d, in practice we can obtain a reasonable approximation of the A(d) function for most d values. (Of course we cannot estimate A(d) for those degrees which were never present in the network.) It is easy to calculate N_d(t), so the only piece missing for the estimation is S(t); however, this is defined in terms of the measured A(d) function itself. We can use an iterative approach to make better and better approximations of A(d) and S(t). First we assume that S_0(t) = 1 for each t and measure A_0(d), which can be used to calculate the next approximation of S(t), S_1(t), yielding A_1(d) via the measurement, etc. It can be shown that this procedure is convergent, and in practice it converges quickly: after five iterations the difference between the successive A_n(d) and A_{n+1}(d) estimates is very small.
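The iteration just described can be sketched in code. The bookkeeping below is our own simplification (see [7] for the authors' actual procedure): we grow a synthetic citation network with a known linear kernel and then recover A(d), up to an overall constant, by alternating the S_n(t) and A_n(d) estimates:

```python
import random
from collections import defaultdict

def simulate_citation_net(T=1200, m=3, a=1.0, seed=0):
    """Grow a citation network whose true kernel is A(d) = d + a:
    each new node cites m distinct old nodes preferentially."""
    rng = random.Random(seed)
    deg = [0]                      # node 0 starts the network
    history = []                   # per step: (N_d(t) snapshot, cited degrees)
    for t in range(1, T):
        weights = [d + a for d in deg]
        total = sum(weights)
        ndt = defaultdict(int)
        for d in deg:
            ndt[d] += 1
        cited = set()
        while len(cited) < min(m, len(deg)):
            r = rng.uniform(0, total)
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    break
            cited.add(i)
        history.append((dict(ndt), [deg[i] for i in cited]))
        for i in cited:
            deg[i] += 1
        deg.append(len(cited))     # the new node enters with its out-edges
    return history

def estimate_kernel(history, iters=5):
    """Estimate A(d) via the iteration in the text: start from
    S_0(t) = 1 and refine A_n(d) and S_n(t) alternately.
    The result is only defined up to a multiplicative constant."""
    cites = defaultdict(int)       # citations received while at degree d
    for _, cited_degs in history:
        for d in cited_degs:
            cites[d] += 1
    A = defaultdict(lambda: 1.0)
    for _ in range(iters):
        denom = defaultdict(float)
        for ndt, cited_degs in history:
            s = sum(n * A[d] for d, n in ndt.items())   # S(t)
            for d, n in ndt.items():
                denom[d] += len(cited_degs) * n / s
        A = defaultdict(lambda: 1.0,
                        {d: cites[d] / denom[d] for d in cites if denom[d] > 0})
    return A
```

For this synthetic history the recovered kernel should grow roughly linearly in d, e.g. A(10)/A(3) near 11/4 up to sampling noise.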
2.2 Non-decaying networks
Non-decaying networks are more general than citation networks because connections can be formed between older nodes as well. It is still true, however, that neither edges nor nodes are ever removed from the network. Similarly to the previous section, we assume that the attachment kernel depends on the degree of the vertices, but this time on the degree of both vertices involved in the potential connection. The probability of forming an edge between nodes i and j in time step t is given by
P[i and j will be connected] = A(d_i(t), d_j(t)) / [ Σ_{k=1}^{N(t)} Σ_{l≠k}^{N(t)} (1 - a_{kl}(t)) A(d_k(t), d_l(t)) ]    (4)
The denominator is the sum of the attachment kernel function applied to all possible (not yet realized) edges in the network. a_{kl}(t) is 1 if there is an edge between nodes k and l in time step t and 0 otherwise. Using an argument similar to that of the previous section, we can estimate A(d*, d**) via

P[e connects d* and d** degree nodes] = P_e(d*, d**) = A(d*, d**) N_{d*,d**}(t) / S(t)    (5)
N_{d*,d**}(t) is the number of not yet realized edges between d*- and d**-degree nodes in time step t, and

S(t) = Σ_{k=1}^{N(t)} Σ_{l≠k}^{N(t)} (1 - a_{kl}(t)) A(d_k(t), d_l(t))    (6)
A(d*, d**) = P_e(d*, d**) S(t) / N_{d*,d**}(t)    (7)
S(t) can be approximated using an iterative approach similar to that introduced in the previous section.
3 Applications
In this section we briefly present results for two applications of the model framework and measurement method. For other applications and details see [6, 7].
3.1 Preferential attachment in citation networks
The preferential attachment model [3, 8] provides a mechanism that generates the scale-free degree distribution often found in various networks. In our framework for citation networks it simply means that the kernel function depends linearly on the degree:
A(d) = d + a,    (8)
where a is a constant. Using our measurement method, it is possible to measure the degree-based kernel function for various citation networks and check whether they evolve based on this simple principle. Let us first consider the network of high-energy physics papers from the arXiv e-print archive. We used data for papers submitted between January 1992 and July 2003, which included 28632 papers and 367790 citations among them. The data is available online at http://www.cs.cornell.edu/projects/kddcup/datasets.html. This dataset and other scientific citation networks are well studied; see [15, 10] for examples. First we applied the measurement method based on the node degree to this network and found that, indeed, the attachment kernel of the network is close to the one predicted by the preferential attachment model, that is,
A*(d) = d^0.85 + a    (9)

gives a reasonably good fit to the data. See the measured form of the kernel in Fig. 1. The small exponent for d is in good agreement with the fact that the degree distribution of this network decays faster than a power law. Next, we applied the measurement method using two properties of the potentially cited nodes: their degree and their age, the latter simply defined as the difference between the current time step and the time step when the node was added. We found that the two-variable A(d, a) attachment kernel has the form

A(d, a) = (d^1.14 + 1) a^{-β},    (10)

where β > 0 is the measured aging exponent. This two-variable attachment kernel gives a better understanding of the dynamics of this network: the citation probability increases about linearly with the degree of the nodes and decreases as a power law with their age. Note that both effects were already present in the degree-only A* attachment kernel; this is why the preferential attachment exponent was smaller there (0.85 < 1.14). Similar results were obtained for the citation network of US patents granted between 1975 and 1999, containing 2,151,314 vertices and 10,565,431 edges:

A**_patent(d, a) = (d^1.2 + 1) a^{-1.6}.    (11)
These two studies show that the preferential attachment phenomenon can be present in a network even if the network does not have a power-law degree distribution, because another process, aging in our case, prevents nodes from gaining very many edges.
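The interplay described here, attachment that is preferential in degree but suppressed by aging, is easy to reproduce in a toy simulation. The sketch below is our own, and its exponent is an arbitrary choice, not a measured value:

```python
import random

def grow(T=1000, m=3, beta=0.0, seed=0):
    """Citation growth with kernel A(d, age) = (d + 1) * age^(-beta).
    beta = 0 is pure preferential attachment; beta > 0 adds aging.
    Returns the final degree of every node."""
    rng = random.Random(seed)
    deg = [0]
    for t in range(1, T):
        # node i was added at step i, so its age at step t is t - i
        w = [(deg[i] + 1) * (t - i) ** (-beta) for i in range(t)]
        total = sum(w)
        cited = set()
        while len(cited) < min(m, t):
            r = rng.uniform(0, total)
            acc = 0.0
            for i, wi in enumerate(w):
                acc += wi
                if acc >= r:
                    break
            cited.add(i)
        for i in cited:
            deg[i] += 1
        deg.append(min(m, t))
    return deg

hub_pa = max(grow(beta=0.0))    # pure preferential attachment
hub_aged = max(grow(beta=1.6))  # the same rule with strong aging
```

With aging switched on, the largest hub stays far smaller even though attachment is still preferential in degree, and the degree distribution correspondingly loses its heavy power-law tail.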
3.2 The dynamics of scientific collaboration networks
In this section we briefly present the results of applying our methods to a non-decaying network: the cond-mat collaboration network. In this network a node is a researcher who published at least one paper in the arXiv cond-mat archive between
Figure 1: The two measured kernel functions for the HEP citation network. The left plot shows the degree-dependent kernel, the right the degree- and age-dependent kernel. On the right plot four sections along the degree axis are shown for different vertex ages.
1970 and 1997 (this is the date when the paper was submitted to cond-mat, not the actual publication date, but most of the time these two are almost the same). There is an edge between two researchers/nodes if they have published at least one paper together. The data set contains 23708 papers, 17636 authors and 59894 edges. We measured the attachment kernel for this network based on the degrees of the two potential neighbors. See Fig. 2 for the A_cond-mat(d*, d**) function.
[Figure 2: the measured A_cond-mat(d*, d**) kernel]
λ_2, λ_3, ..., λ_r are the other eigenvalues of F, ordered such that λ_1 = 1 > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_r|, and m_2 is the algebraic multiplicity of λ_2, then
(6)

where O(f(t)) represents a function of t such that there exist α, β ∈ R, with 0 < α ≤ β < ∞, such that αf(t) ≤ O(f(t)) ≤ βf(t) for all t sufficiently large. This shows that the convergence of the consensus protocol is geometric, with
relative speed equal to the SLEM. We denote by μ = 1 - SLEM(G) the spectral gap of a graph, so graphs with larger spectral gaps converge more quickly. For the general case where topology changes are also included, Blondel et al. [1] showed that the joint spectral radius of a set of reduced matrices, derived from the corresponding F matrices, determines the convergence speed. For Σ a set of finite n x n matrices, the joint spectral radius is defined as:
ρ(Σ) = lim sup_{k→∞} max_{A_1,...,A_k ∈ Σ} ||A_1 A_2 ... A_k||^{1/k}    (7)

Calculation of the joint spectral radius of a set of matrices is a mathematically hard problem and is not tractable for large sets of matrices. Our goal is to find network topologies which result in good convergence rates. Switching over such topologies will also result in good convergence speed. We limit our scope to the case of fixed topology here and examine the conjecture that small-world graphs have high convergence speed.
3 Convergence in small world graphs
Watts and Strogatz [12] introduced and studied a simple tunable model that can explain the behavior of many real-world complex networks. Their small-world model takes a regular lattice and replaces the original edges by random ones with some probability 0 ≤ φ ≤ 1. It is conjectured that dynamical systems coupled in this way would display enhanced signal propagation and global coordination compared to regular lattices of the same size. The intuition is that the short paths between distant parts of the network cause high-speed spreading of information, which may result in fast global coordination. We examine this conjecture. In this study, we use a variant of the Newman-Moore-Watts [6] improved form of the φ-model originally proposed by Watts and Strogatz. The model starts with a ring of n nodes, each connected by undirected edges to its nearest neighbors up to a range k. Shortcut links are added, rather than rewired, between randomly selected pairs of nodes, with probability φ per link on the underlying lattice; thus there are typically nkφ shortcuts. Here we actually force the number of shortcuts to be equal to nkφ (comparable to the Watts φ-model). In our study, we considered different initial rings (n, k) = (100, 2), (200, 3), (500, 3), (1000, 5) and generated 20 samples of small-world graphs G(φ) for 50 different φ values chosen on a logarithmic scale between 0.01 and 1. These choices of (n, k) were selected for comparison with the results of [7]. In Figures 1 and 2, we depict the gain in spectral gap of the resulting small-world graphs with respect to the spectral gap of the base lattice. Below we discuss only the results for the cases (500, 3) and (1000, 5); the others follow a similar pattern. Some important observations and comments follow:
1. In the low range of φ (0 < φ < 0.01) there is no spectral gap gain observed and the SLEM is almost constant; a drastic increase in the spectral gap is observed around φ = 0.1.
102 ,."
ten
s»
.
."
en
100
,
' lOO
e»
,."
500
""
ee
..
en
"" lOO
lOO '00 0
0
10'
10"'
IO~
,if
Figure 1: Spectral gap gain for (n,k) = (500,3)
..
10. '
,if
Figure 2: Spectral gap gain for (n, k) = (1000, 5)
2. Simulations show that small-world graphs possess good convergence properties as far as consensus protocols are concerned. Some analytical results are included in the next section, but the complete analysis is the subject of future work. The results show that adding nkφ shortcuts to a 1-D lattice dramatically improves the convergence properties of consensus schemes for φ ≳ 0.1. For example, in a (500, 3) lattice, by randomly adding 150 edges we can on average increase the spectral gap by a factor of approximately 100. However, our aim is to find a more clever way of adding edges, so that after adding 150 edges to a (500, 3) lattice we get a much larger increase in the spectral gap. To formulate this problem, we consider a dynamic graph which evolves in time starting from a 1-D lattice G_0 = C(n, k). Let us denote the complete graph on n vertices by K_n. Also, denote the complement of a graph G = (V, E) (which is the graph with the same vertex set but whose edge set consists of the edges not present in G) by Ḡ, so E(Ḡ) = E(K_n) \ E(G). If we denote the operation of adding an edge to a graph by A, the dynamic graph evolution can be written as:
G(t + 1) = A(G(t), u(t)),    u(t) = e(t + 1) ∈ E(Ḡ(t)),    G(0) = G_0,    t = 0, 1, 2, ..., nkφ - 1    (8)
So the problem to solve is:

min_{e(1),...,e(nkφ) ∈ E(Ḡ(t))}  max[λ_2(F(nkφ)), -λ_N(F(nkφ))]
subject to: (8)    (9)

where F(nkφ) = D(G(nkφ))^{-1} A(G(nkφ)). We now mention some observations which are useful for building a framework to study this problem.
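Problem (9) can be explored numerically by brute force. The myopic greedy sketch below is our own, not the authors' method: at each step it tries every complement edge and keeps the one minimising the resulting SLEM; with C(16, 2) and two added edges it corresponds to the setting of Figure 4:

```python
import numpy as np
from itertools import combinations

def ring(n, k):
    """C(n, k): ring with self-loops; each node is linked to its k
    nearest neighbours on each side."""
    A = np.eye(n)
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = A[i, (i - j) % n] = 1
    return A

def slem(A):
    """Second largest eigenvalue modulus of F = D^{-1} A."""
    F = A / A.sum(axis=1, keepdims=True)
    return np.sort(np.abs(np.linalg.eigvals(F)))[::-1][1]

def greedy_add(A, count):
    """Myopic version of problem (9): at each of `count` steps, try
    every complement edge and keep the one minimising the SLEM."""
    n = len(A)
    for _ in range(count):
        best, best_s = None, 2.0
        for i, j in combinations(range(n), 2):
            if A[i, j] == 0:
                A[i, j] = A[j, i] = 1   # tentatively add the edge
                s = slem(A)
                if s < best_s:
                    best, best_s = (i, j), s
                A[i, j] = A[j, i] = 0   # undo
        i, j = best
        A[i, j] = A[j, i] = 1
    return A

A0 = ring(16, 2)
A2 = greedy_add(A0.copy(), 2)   # the C(16, 2) + 2 shortcuts case
```

Greedy search is only a heuristic for (9), since, as discussed below, the SLEM does not change monotonically with edge additions; exhaustive search over edge pairs is feasible at this size.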
3.1 Spectral analysis
The choice of G_0 = C(n, k), a regular 1-D lattice with self-loops, means that (possibly after re-labelling vertices) the adjacency matrix of the graph can be written as a circulant matrix:

    | a_1      a_2   a_3   ...  a_n     |
A = | a_n      a_1   a_2   ...  a_{n-1} |  = circ[a_1, a_2, ..., a_n]    (10)
    | a_{n-1}  a_n   a_1   ...  a_{n-2} |
    |  ...                      ...     |
    | a_2      a_3   a_4   ...  a_1     |

in which:

a ≜ [a_1, a_2, ..., a_n] = [1, ..., 1, 0, ..., 0, 1, ..., 1],    (11)

with k+1 leading ones, n - 2k - 1 zeros, and k trailing ones.
Circulant matrices have a special structure which gives them special properties. All entries on a given diagonal are the same, and each row is determined from the previous row by a shift to the right (modulo n). Consider the n x n permutation matrix Π = circ[0, 1, 0, ..., 0]. Then for any circulant matrix we can write A = circ[a_1, a_2, ..., a_n] = a_1 I + a_2 Π + ... + a_n Π^{n-1}. For a vector a = [a_1, a_2, ..., a_n], the polynomial P_a(z) = a_1 + a_2 z + a_3 z^2 + ... + a_n z^{n-1} is called the representer of the circulant. The following theorem, based on [2], states how to calculate the eigenvalues of circulants.
Theorem 3.1 [2] Let ω = e^{2πi/n} be the n-th root of unity. The eigenvalues of A = circ[a_1, a_2, ..., a_n] are given by λ_i = P_a(ω^{i-1}), where i = 1, 2, ..., n.
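Theorem 3.1 is easy to check numerically: build a circulant from an arbitrary vector and compare the representer values P_a(ω^{i-1}) with a direct eigenvalue computation (the vector below is an arbitrary choice of ours):

```python
import numpy as np

n = 8
a = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])  # arbitrary vector

# circ[a_1, ..., a_n]: row i is the previous row shifted right (mod n),
# i.e. A[i, j] = a[(j - i) mod n]
A = np.array([[a[(j - i) % n] for j in range(n)] for i in range(n)])

w = np.exp(2j * np.pi / n)                  # n-th root of unity
# lambda_i = P_a(w^{i-1}) = sum_m a_{m+1} (w^{i-1})^m
eig_thm = np.array([np.sum(a * w ** (i * np.arange(n))) for i in range(n)])
eig_dir = np.linalg.eigvals(A)
```

The two computations agree as multisets of complex numbers; note that for a general (non-palindromic) a the circulant is not symmetric and the eigenvalues are genuinely complex.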
The main result concerning the spectral properties of G_0 follows.
Proposition 3.1 The corresponding F matrix of G_0 = C(n, k) is circulant. Furthermore, its SLEM has multiplicity at least 2.

Sketch of Proof: Since G_0 = C(n, k) is (2k+1)-regular (including the self-loop), F = D^{-1} A = (1/(2k+1)) A. So F is circulant, F = circ((1/(2k+1)) a), where a is as in (11). The representer of this circulant is

P_a(z) = (1/(2k+1)) (1 + z + ... + z^{k-1} + z^k + z^{n-k} + z^{n-k+1} + ... + z^{n-1})    (12)
So the eigenvalues of this matrix are λ_i = P_a(ω^{i-1}). It is easy to show that λ_1 = 1, and moreover it is a simple eigenvalue because the underlying graph is connected. Since for integers A and B, ω^{An+B} = ω^B, it follows that λ_2 = λ_n, λ_3 = λ_{n-1}, and so on. In the case that n is odd, apart from λ_1 = 1, all eigenvalues come in pairs. In the case that n is even, it can be shown that λ_{n/2+1} is the only
Figure 3: Adding a shortcut to (1000, 5). The line tangent to the curve shows the SLEM before the new edge.

Figure 4: The optimal topology; adding 2 shortcuts to C(16, 2)
eigenvalue, apart from λ_1, which can be single; however, direct calculation shows that it is equal to ±1/(2k+1), which is clearly less than λ_2 = λ_n. A simple geometric argument shows that SLEM = λ_2 = λ_n = (1/(2k+1)) [1 + 2Re(ω) + 2Re(ω^2) + ... + 2Re(ω^k)] < 1 and λ_i ≤ λ_2 for i ∈ {2, ..., n-1}. This shows that for the case k << n, which is the case we are most interested in, as n → ∞ two of the non-unity eigenvalues approach 1. This explains the slow convergence of consensus protocols when the diameter is large.
4 Simulation results: The effect of adding one and two shortcuts
We ran a set of simulations with different purposes based on (8). A counterintuitive result is that the SLEM does not change monotonically with the addition of edges. Specifically, in cases when n is even, adding an edge will increase the SLEM except in the case where a vertex is connected to the farthest vertex from it, that is, i is connected to i + n/2 (modulo n). In this case one of the multiplicities of the SLEM is decreased but the other multiplicity is not changed. Figures 3 and 4 illustrate this effect. The dotted line tangent to the curves shows the SLEM of the original graphs. The more distant the two joined vertices, the smaller the increase in the SLEM. Adding two shortcuts can, however, decrease the SLEM. It is worthwhile to mention that in all of our simulations, for a given n, shortcuts that reduced the diameter of the graph more resulted in a larger spectral gap. For example, for the case of adding 2 shortcuts to G0 = C(16, 2), Figure 4 shows the optimal topology. The analysis of this conjecture is the subject of future work.
In this paper, we presented a structural study on the convergence of consensus problems on small world graphs. Simulations and some preliminary analytical
results were presented. Our future work focuses on the analytical study of small world phenomena in consensus problems.
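The one-shortcut experiment of Section 4 can be sketched numerically as follows. This is an assumed reconstruction (F is taken as D^(-1) A with self-loops, which may differ in detail from the paper's equation (8)); it only reports the SLEM before and after adding a chord:

```python
import numpy as np

def ring_with_self_loops(n, k):
    # Adjacency of C(n, k): self-loop plus k neighbours on each side
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(-k, k + 1):
            A[i][(i + d) % n] = 1.0
    return A

def slem(A):
    F = A / A.sum(axis=1, keepdims=True)         # row-stochastic F = D^-1 A
    mags = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
    return float(mags[1])                        # second largest eigenvalue modulus

n, k = 16, 2
A = ring_with_self_loops(n, k)
base = slem(A)

A_far = A.copy()
A_far[0, n // 2] = A_far[n // 2, 0] = 1.0        # antipodal shortcut i -> i + n/2
after = slem(A_far)
assert 0.0 < base < 1.0 and 0.0 < after < 1.0
```

Varying which chord is added (near versus antipodal) lets one reproduce the qualitative comparisons shown in Figure 3.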
Bibliography

[1] V. Blondel, J. Hendrickx, A. Olshevsky, and J. Tsitsiklis. Convergence in multiagent coordination, consensus and flocking. Proceedings 44th IEEE Conference on Decision and Control, pp. 2996-3000, 2005.

[2] P. J. Davis. Circulant Matrices. Wiley, 1979.

[3] L. Fang and P. Antsaklis. On communication requirements for multi-agent consensus seeking. Proceedings of Workshop NESC05, Lecture Notes in Control and Information Sciences (LNCIS), Springer, 331:53-68, 2006.

[4] A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6):988-1001, 2003.

[5] T. Jiang and J. S. Baras. Autonomous trust establishment. Proceedings 2nd International Network Optimization Conference, Lisbon, Portugal, 2005.

[6] M. E. J. Newman, C. Moore, and D. J. Watts. Mean-field solution of the small-world network model. Physical Review Letters, 84:3201-3204, 2000.

[7] R. Olfati-Saber. Ultrafast consensus in small-world networks. Proceedings American Control Conference, 4:2371-2378, 2005.

[8] R. Olfati-Saber and R. M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9):1520-1533, 2004.

[9] W. Ren, R. W. Beard, and T. W. McLain. Coordination variables and consensus building in multiple vehicle systems. Lecture Notes in Control and Information Sciences, 309:171-188, Springer Verlag, 2004.

[10] E. Seneta. Nonnegative Matrices and Markov Chains. Springer, 1981.

[11] T. Vicsek, A. Czirok, E. Ben-Jacob, I. Cohen, and O. Shochet. Novel type of phase transition in a system of self-driven particles. Physical Review Letters, 75:1226-1229, 1995.

[12] D. J. Watts and S. H. Strogatz. Collective dynamics of small-world networks. Nature, 393:440-442, 1998.

[13] D. J. Watts. Small Worlds: The Dynamics of Networks Between Order and Randomness. Princeton University Press, 1999.

[14] L. Xiao, S. Boyd, and S. Lall. A scheme for robust distributed sensor fusion based on average consensus. Proceedings International Conference on Information Processing in Sensor Networks, pp. 63-70, 2005.
Chapter 14
Complex Knowledge Networks and Invention Collaboration

Thomas F. Brantle†
Wesley J. Howe School of Technology Management
Stevens Institute of Technology
Castle Point on Hudson
Hoboken, NJ 07030 USA
[email protected]

M. Hosein Fallah, Ph.D.
Wesley J. Howe School of Technology Management
Stevens Institute of Technology
Castle Point on Hudson
Hoboken, NJ 07030 USA
[email protected]

Knowledge and innovation flows, as characterized by the network of invention collaboration, are studied; their scale-free power law properties are examined, along with their importance to understanding technological advancement. This area of research, while traditionally investigated via statistical analysis, may be further examined via complex networks. It is demonstrated that the invention collaboration network's degree distribution may be characterized by a power law, where the probability that an inventor (collaborator) is highly connected is statistically more likely than would be expected via random connections and associations, with the network's properties determined by a relatively small number of highly connected inventors (collaborators) known as hubs. Potential areas of application are suggested.
1. Introduction
Presently we are constructing ever increasingly integrated and interconnected networks for business, technology, communications, information, and the economy. The vital nature of these networks raises issues regarding not only their significance and consequence but also the influence and risk they represent. As a result it is vital to understand the fundamental nature of these complex networks. During the past several years advances in complex networks have uncovered amazing similarities among such diverse networks as the World Wide Web [Albert et al. (1999)], the Internet [Faloutsos et al. (1999)], movie actors [Amaral et al. (2000)], social [Ebel et al. (2002)], phone call [Aielo et al. (2002)], and neural networks [Watts and Strogatz (1998)]. Additionally, over the last few decades we have experienced what has come to be known as the information age and the knowledge economy. At the center of this phenomenon lies a complex and multifaceted process of continuous and far-reaching innovation advancement and technological change [Amidon (2002), Cross et al. (2003), and Jaffe and Trajtenberg (2002)]. Understanding this process and what drives technological evolution has been of considerable interest to managers, researchers, planners and policy makers worldwide. Complex networks offer a new approach to analyze the information flows and networks underlying this process.

† Corresponding Author
1.1 Knowledge and Innovation Networks
Today, nations and organizations must look for ways of generating increased value from their assets. Human capital and information are the two critical resources. Knowledge networking is an effective way of combining individuals' knowledge and skills in the pursuit of personal and organizational objectives. Knowledge networking is a rich and dynamic phenomenon in which existing knowledge is shared and evolved and new knowledge is created. In addition, in today's complex and constantly changing business climate, successful innovation is much more iterative, interactive and collaborative, involving many people and processes. In brief, success depends on effective knowledge and innovation networks. Knowledge collaboration and shared innovation, where ideas are developed collectively, result in a dynamic network of knowledge and innovation flows, in which several entities and individuals work together and interconnect. These networks ebb and flow, with knowledge and innovation the source and basis of technological advantage. Successful knowledge and innovation networks bring about the faster development of new products and services, better optimization of research and development investments, closer alignment with market needs, and improved anticipation of customer needs resulting in more successful product introductions, along with superior competitor differentiation [Skyrme (1999), Amidon (2002), and Cross et al. (2003)]. This paper discusses knowledge and innovation flows as represented by the network of patents and invention collaboration (inventors and collaborators) and attempts to bridge recent developments in complex networks to the investigation of technological and innovation evolution.
The recent discovery of small-world [Watts and Strogatz (1998)] and scale-free [Barabasi and Albert (1999)] network properties of many natural and artificial real world networks has stimulated a great deal of interest in studying the underlying organizing principles of various complex networks, which has led in turn to dramatic advances in this field of research. Knowledge and innovation flows, as represented by the historical records of patents and inventors, are addressed, with future application to technology and innovation management.
1.2 Gaussian Statistics to Complex Networks
Patents have long been recognized as a very useful and productive source of data for the assessment of technological and innovation development. A number of pioneering efforts and recent empirical studies have attempted to conceptualize and measure the process of knowledge and innovation advancement, as well as the impact of the patenting process, patent quality, litigation and new technologies on innovation advancement [Griliches (1990), Jaffe and Trajtenberg (2002), Cohen and Merrill (2003)]. However, these studies have primarily relied upon traditional (Gaussian) statistical data analysis. Complex networks should reveal new associations and relationships, thus leading to an improved understanding of these processes. Recent studies in complex networks have shown that a network's structure may be characterized by three attributes: the average path length, the clustering coefficient, and the node degree distribution. Watts and Strogatz (1998) proposed that many real world networks have large clustering coefficients with short average path lengths, and networks with these two properties are called "small world." Subsequently it was proposed by Albert et al. (1999) and Barabasi and Albert (1999) that many real world networks have power law degree distributions, with such networks denoted as "scale free." Specifically, scale free networks are characterized by a power law degree distribution in which the probability that a node has k links is proportional to k^(-γ) (i.e., P(k) ~ k^(-γ)), where γ is the degree exponent. Thus, the probability that a node is highly connected is statistically more significant than in a random network, with the network's properties often being determined by a relatively small number of highly connected nodes known as hubs. Because the power law is free of any characteristic scale, networks with a power law node degree distribution are called scale free.
[Albert and Barabasi (2002), Newman (2003), and Dorogovtsev and Mendes (2003)] In contrast, a random network [Erdos and Renyi (1959)] is one where the probability that two nodes are linked is no greater than the probability that two nodes are associated by chance, with connectivity following a Poisson (or Normal) distribution. The Barabasi and Albert (BA) (1999) model suggests two main ingredients of self-organization within a scale-free network structure, i.e., growth and preferential attachment. They highlight the fact that most real world networks continuously grow by the addition of new nodes, which are then preferentially attached to existing nodes with large numbers of connections, a.k.a. the rich-get-richer phenomenon. Barabasi et al. (2002) and Newman (2004) have also previously studied the evolution of the social networks of scientific collaboration, with their results indicating that these networks may generally be characterized as having small world and scale free properties.
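The two BA ingredients, growth and preferential attachment, can be sketched in a few lines. This is an illustrative toy run (parameters chosen arbitrarily, not from the paper); the endpoint-list trick makes uniform sampling equivalent to degree-proportional attachment:

```python
import random
random.seed(1)

m = 2                                   # edges added per new node
targets = [0, 1, 0, 1]                  # list of edge endpoints: each node appears
degree = {0: 2, 1: 2}                   # once per unit of degree it carries
for new in range(2, 2000):              # growth: nodes arrive one at a time
    chosen = set()
    while len(chosen) < m:              # preferential attachment: uniform draw from
        chosen.add(random.choice(targets))  # the endpoint list = degree-weighted draw
    degree[new] = 0
    for t in chosen:
        degree[new] += 1
        degree[t] += 1
        targets += [new, t]

avg = sum(degree.values()) / len(degree)
hub = max(degree.values())
assert hub > 5 * avg                    # a few rich-get-richer hubs dominate
```

The average degree stays near 2m while the maximum degree grows far beyond it, which is the hub structure the text describes.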
1.3 Invention, Knowledge and Technology
Patents provide a wealth of information and a long time-series of data about inventions, inventors, collaborators, prior knowledge, and assigned owners. Patents and the inventions they represent have several advantages as a technology indicator. In particular, patents and patent citations have long been recognized as a very rich and fertile source of data for studying the progress of knowledge and innovation, providing a valuable tool for public and corporate technology analysis, as well as planning and policy decisions [Griliches (1990), Jaffe and Trajtenberg (2002),
Cohen and Merrill (2003)]. Nevertheless, patents and invention collaboration have undergone limited investigation, thus offering a very rich information resource for knowledge and innovation research that is even less well studied and yet to be fully exploited [Jaffe and Trajtenberg (2002)]. A companion paper analyzes patents and patent citations from a complex networks perspective [Brande and Fallah (2007)].
2 Invention Collaboration
Patents and invention collaboration data contain relevant information allowing the possibility of tracing multiple associations among patents, inventors and collaborators. Specifically, invention collaboration linkages allow one to study the respective knowledge and innovation flows, and thus construct indicators of the technological importance and significance of individual patents, inventors and collaborators. An item of particular interest is the connections between patents and invention collaborators. Thus, if inventor A collaborates with inventor B, it implies that inventor A shares or transfers a piece of previously existing knowledge with inventor B, and vice versa, along with the creation of new knowledge as represented by the newly patented invention. As a result, not only is a flow of knowledge shared between the respective invention collaborators, but an invention link or relationship between the individual collaborators is established per the patented invention. The supposition is that invention collaboration is and will be informative of the relationships between inventors and collaborators as well as of knowledge and innovation. The construction of the invention collaboration network is discussed, along with its bearing on knowledge and innovation. Next, summary statistics, probability distributions and finally the power law degree distribution are analyzed.
2.1 Bipartite Graphs and Affiliation Networks
An invention collaboration network, similar to the movie actor network [Watts and Strogatz (1998)], may be constructed for invention collaboration where the nodes are the collaborators, and two nodes are connected if two collaborators have coauthored a patent and therefore co-invented the invention. This invention affiliation or collaboration relationship can be easily extended to three or more collaborators. The relationship can be completely described by a bipartite graph or affiliation network where there are two types of nodes, with the edges connecting only nodes of different types. A simple undirected graph is called bipartite if there is a partition of the set of nodes so that both subsets are independent sets. Collaboration necessarily implies the presence of two constituents, the actors or collaborators and the acts of collaboration denoted as the events. So the set of collaborators can be represented by a bipartite graph, where collaborators are connected through the acts of collaboration. In bipartite graphs, direct connections between nodes of the same type are impossible, and the edges or links are undirected. Figure 1 provides a bipartite graph or affiliation network representation with two sets of nodes, the first set labeled "patents" which connect or relate the second set labeled "invention collaborators" who are linked by the shared patent or invention. The two-mode network with three patents, labeled PA, PB and PC, and seven patent or invention collaborators, C1 to C7, with the edges joining each patent to the respective
collaborators is on the left. On the right we show the one-mode network or projection of the graph for the seven collaborators. It is noted that singularly authored patents would not be included in the bipartite graph and resulting invention collaboration network.
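A minimal sketch of this two-mode-to-one-mode projection, using the hypothetical patents PA, PB, PC and collaborators C1-C7 of Figure 1:

```python
from itertools import combinations

patents = {                      # patent -> its invention collaborators
    "PA": ["C1", "C2", "C3"],
    "PB": ["C3", "C4", "C5"],
    "PC": ["C5", "C6", "C7"],
}

# One-mode projection: two collaborators are linked if they share a patent
edges = set()
for collabs in patents.values():
    for u, v in combinations(sorted(collabs), 2):
        edges.add((u, v))

assert ("C1", "C2") in edges     # co-inventors on PA are linked
assert ("C1", "C4") not in edges # C1 and C4 never co-invented
```

A singly-authored patent would contribute no pairs, which is why such patents drop out of the collaboration network.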
Figure 1 - Invention Collaboration Bipartite Graph or Affiliation Network

2.2 Knowledge and Innovation Flows
Patents and invention collaboration constitute a documented record of knowledge transfer and innovation flow, signifying the fact that two collaborators who coauthor a given patent, or equivalently co-invent said invention, may well indicate knowledge and innovation flowing between the respective collaborators along with the creation of new knowledge and innovation as represented by the new invention. The patent invention link and collaboration knowledge and innovation flow is illustrated in Figure 2 and can be easily extended to three or more collaborators. Thus, knowledge and innovation information made publicly available by the patent has not only flowed to the invention, but has significantly influenced the invention's collaborators. Several network measures may be applied to the collaboration network in order to both describe the network and examine the relationship between, and the importance and significance of, individual inventors and collaborators [Newman (2004)].
Figure 2 - Knowledge & Innovation Flows and Patent Invention Links

2.3 Patents, Inventors and Data
The invention collaboration network is constructed using the inventor data provided by the NBER (National Bureau of Economic Research) patent inventor file [Jaffe and Trajtenberg (2002)]. This file contains the full names and addresses of the inventors for patents issued from the beginning of 1975 through the end of 1999, comprising a twenty-five year period of patent production and invention collaboration. This includes approximately 4.3M patent-inventor pairs, 2.1M patents and 1.04M inventors.
2.4 Invention Collaboration Distribution

2.4.1 Power Law Degree Distribution
Figure 3 provides the probability degree distribution for the invention collaboration network on logarithmic scales. It may be seen that the best-fit line for this distribution follows a power law with an exponent of 2.8. Hence it is concluded that a power law provides a reasonable fit to the data. It is noted that a truncated power law distribution with an exponential cutoff may provide a more suitable representation, with an associated improvement in the explanation of total variance (R² ≈ 1.0). This systematic deviation from a power law distribution indicates that the highest-collaborating inventors are collaborating less often than predicted and, correspondingly, the lowest-collaborating inventors are collaborating more often than predicted. A reasonable rationale for this deviation is that many networks in which aging occurs show a connectivity distribution that possesses a power law regime followed by an exponential or Gaussian decay [Amaral et al. (2000)].
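The log-log fitting procedure behind Figure 3 can be sketched on synthetic data. This is an assumption-laden stand-in for the NBER counts (degrees sampled from a continuous power law and discretized); the exponent is estimated by least squares on logarithmic scales:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_true = 2.8
# Inverse-transform sampling from a Pareto tail gives p(x) ~ x^(-gamma_true)
u = rng.random(200000)
degrees = np.floor((1 - u) ** (-1 / (gamma_true - 1))).astype(int)
degrees = degrees[degrees >= 1]

ks, counts = np.unique(degrees, return_counts=True)
mask = counts >= 50                        # drop sparse, noisy tail bins
slope, _ = np.polyfit(np.log10(ks[mask]), np.log10(counts[mask]), 1)
gamma_hat = -slope                         # fitted degree exponent
assert 2.0 < gamma_hat < 3.5               # recovers an exponent near 2.8
```

Binning and cutoff choices shift the estimate slightly, which is one reason maximum-likelihood estimators are often preferred to log-log regression in practice.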
[Log-log plot: probability (log10) versus collaboration count (log10), with linear best fit]
Figure 3 - Invention Collaboration: Collaborators Per Inventor

The improved fit of the truncated power law with exponential cutoff model may be attributed to a distinction between the objectives of invention patenting and scientific publishing. Patent invention collaboration entails the sharing of patent rights, further dividing any potential economic rewards and financial gains, which might have a minimizing, or at least optimizing, effect on any incentive to increase the number of collaborators. It would be expected that inventors would evaluate and weigh the potential technical contribution against the economic and financial impact of the prospective collaboration on the invention and its shared ownership. With respect to scientific publication this objective is much less of a consideration. For the patent invention collaboration network the degree exponent of the number of patent invention collaborators is approximately 2.8. Thus, it is demonstrated that the number of invention collaborators roughly follows a power law distribution. That is, the number of collaborators per inventor falls off as k^(-γ) for some constant γ ≈ 2.8, implying that some inventors account for a very large number of collaborations,
while most inventors collaborate with just a few additional collaborators. These results are consistent with the theoretical and empirical work concerning scale free networks, where a degree exponent of 2 < γ < 3 is predicted for very large networks under the assumptions of growth and preferential attachment.
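The claim that hubs are "statistically more significant than in a random network" can be made concrete with a toy comparison (synthetic distributions, not the patent data): a power law and a Poisson distribution with the same mean put very different mass on high-degree nodes.

```python
import numpy as np
from math import exp, factorial

gamma = 2.8
k = np.arange(1, 100001)
pk = k.astype(float) ** (-gamma)
pk = pk / pk.sum()                         # normalised power-law pmf
mean_deg = float((k * pk).sum())           # mean degree (about 1.5 here)

cut = 20                                   # "hub" threshold for this toy example
tail_power = float(pk[k >= cut].sum())
# Poisson (random-network) model with the same mean degree
tail_poisson = 1.0 - sum(exp(-mean_deg) * mean_deg**j / factorial(j)
                         for j in range(cut))

assert tail_power > tail_poisson           # hubs far likelier under the power law
```

Under the Poisson model a degree-20 node is essentially impossible, while the power law still assigns it non-negligible probability, which is the hub phenomenon described above.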
3 Summary, Discussion and Conclusions
Knowledge and innovation, as typified by the network of patents and invention collaboration, and the significance of this network to the advancement of technology are discussed. This area of research, while traditionally investigated via statistical analysis, may be further advanced via complex network analysis. The scale free power law property of the invention collaboration network is presented, where the probability of an inventor or collaborator being highly connected is statistically more significant than would be expected via random connections or associations, with the network's properties determined by a relatively small number of highly connected inventors and collaborators known as hubs. Immediate areas of potential application and continued investigation include technology clusters and knowledge spillover [Saxenian (1994), Porter (1998), Jaffe et al. (2000), Jaffe and Trajtenberg (2002)] and patent quality, litigation and new technology patenting [Cohen and Merrill (2003), Lanjouw and Schankerman (2004)]. Analyses of invention collaboration and application to these areas from a complex network perspective should provide a deeper understanding of their underlying structure and evolution, which may influence both private and public policy decision making and planning initiatives. Significant effort and research has been invested in the organization, development and progression of knowledge and innovation, and its impact on technology advancement. Complex network analysis offers tremendous potential for providing a theoretical framework and practical application to the role of knowledge and innovation in today's technological and information driven global economy.
References
[1] Aielo, W., Chung, F., and Lu, L. (2002). Random Evolution of Massive Graphs. In Abello, J., Pardalos, P.M., and Resende, M.G.C., eds., Handbook of Massive Data Sets (pp. 97-122). Dordrecht, The Netherlands: Kluwer Academic Publishers.
[2] Albert, R. and Barabasi, A.L. (2002). Statistical Mechanics of Complex Networks. Reviews of Modern Physics, 74, 47-97.
[3] Albert, R., Jeong, H. and Barabasi, A.L. (1999). Diameter of the World-Wide Web. Nature, 401, 130-131.
[4] Amaral, L.A.N., Scala, A., Barthelemy, M., and Stanley, H.E. (2000). Classes of Small World Networks. Proceedings National Academy Sciences, USA, 97(21), 11149-11152.
[5] Amidon, D.M. (2002). The Innovation SuperHighway: Harnessing Intellectual Capital for Collaborative Advantage. Oxford, UK: Butterworth-Heinemann.
[6] Barabasi, A.L. and Albert, R. (1999). Emergence of Scaling in Random Networks. Science, 286, 509-512.
[7] Barabasi, A.L., Jeong, H., Neda, Z., Ravasz, E., Schubert, A., and Vicsek, T. (2002). Evolution of the Social Network of Scientific Collaborations. Physica A, 311, 590-614.
[8] Brande, T.F. and Fallah, M.H. (2007). Complex Innovation Networks, Patent Citations and Power Laws. Proceedings of PICMET '07, Portland International Conference on Management of Engineering & Technology, August 5-9, 2007, Portland, OR.
[9] Cohen, W.M. and Merrill, S.A., eds. (2003). Patents in the Knowledge-Based Economy. Washington, DC: The National Academies Press.
[10] Cross, R., Parker, A. and Sasson, L., eds. (2003). Networks in the Knowledge Economy. New York, NY: Oxford University Press.
[11] Dorogovtsev, S.N. and Mendes, J.F.F. (2003). Evolution of Networks: From Biological Nets to the Internet and WWW. Oxford, Great Britain: Oxford University Press.
[12] Ebel, H., Mielsch, L.I. and Bornholdt, S. (2002). Scale-Free Topology of E-mail Networks. Physical Review E, 66, 035103.
[13] Erdos, P. and Renyi, A. (1959). On Random Graphs. Publicationes Mathematicae, 6, 290-297.
[14] Faloutsos, M., Faloutsos, P. and Faloutsos, C. (1999). On Power-Law Relationships of the Internet Topology. Computer Communication Review, 29(4), 251-262.
[15] Griliches, Z. (1990). Patent Statistics as Economic Indicators: A Survey. Journal of Economic Literature, 28(4), 1661-1707.
[16] Hall, B., Jaffe, A. and Trajtenberg, M. (2005). Market Value and Patent Citations. Rand Journal of Economics, 36, 16-38.
[17] Jaffe, A. and Trajtenberg, M., eds. (2002). Patents, Citations, and Innovations: A Window on the Knowledge Economy. Cambridge, MA: MIT Press.
[18] Jaffe, A., Trajtenberg, M. and Fogarty, M. (2000). Knowledge Spillovers and Patent Citations: Evidence from a Survey of Inventors. American Economic Review, Papers and Proceedings, 90, 215-218.
[19] Lanjouw, J. and Schankerman, M. (2004). Patent Quality and Research Productivity: Measuring Innovation with Multiple Indicators. Economic Journal, 114, 441-465.
[20] Newman, M.E.J. (2003). The Structure and Function of Complex Networks. SIAM Review, 45, 167-256.
[21] Newman, M.E.J. (2004). Who is the Best Connected Scientist? A Study of Scientific Co-authorship Networks. In Ben-Naim, E., Frauenfelder, H., and Toroczkai, Z., eds., Complex Networks (pp. 337-370). Berlin: Springer.
[22] Porter, M.E. (Nov.-Dec. 1998). Clusters and the New Economics of Competition. Harvard Business Review, 78, 77-90.
[23] Saxenian, A. (1994). Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Cambridge, MA: Harvard University Press.
[24] Skyrme, D.M. (1999). Knowledge Networking: Creating the Collaborative Enterprise. Oxford, UK: Butterworth-Heinemann.
[25] Watts, D.J. and Strogatz, S.H. (1998). Collective Dynamics of Small-world Networks. Nature, 393, 440-442.
Chapter 15
Complexity, Competitive Intelligence and the "First Mover" Advantage

Philip Vos Fellman
Southern New Hampshire University

Jonathan Vos Post
Computer Futures, Inc.

In the following paper we explore some of the ways in which competitive intelligence and game theory can be employed to assist firms in deciding whether or not to undertake international market diversification and whether or not there is an advantage to being a market leader or a market follower overseas. In attempting to answer these questions, we take a somewhat unconventional approach. We first examine how some of the most recent advances in the physical and biological sciences can contribute to the ways in which we understand how firms behave. Subsequently, we propose a formal methodology for competitive intelligence. While space considerations here do not allow for a complete game-theoretic treatment of competitive intelligence and its use with respect to understanding first and second mover advantage in firm internationalization, that treatment can be found in its entirety in the on-line proceedings of the 6th International Conference on Complex Systems at http://knowledgetoday.org/wiki/index.php/ICCS06/89.
1 Agent-Based Modeling: Mapping the Decision-maker

Agent-based modeling, particularly in the context of dynamic fitness landscape models, offers an alternative framework for analyzing corporate strategy decisions, particularly in the context of firm internationalization and new market entry. While we agree with Caldart and Ricart (2004) that corporate strategy is not yet a mature discipline, it is not our intention to claim that agent-based modeling is either a complete discipline or that it can solve all of the problems of corporate strategy. However, agent-based modeling can help solve a number of problems which pure empirical research, or research which depends upon homogeneity assumptions, cannot readily solve. Working with "local rules of behavior" (Berry, Kiel and Elliot 2002; Bonabeau 2002), agent-based modeling adopts a "bottom up" approach, treating each individual decision maker as an autonomous unit. It is a methodology sharing features with Nelson and Winter's evolutionary economics insofar as it depends on computational power to model multiple iteration interactions. As Edward Lorenz demonstrated as far back as 1963, even simple "couplings" between agents can give rise to very complex phenomena. Agent-based modeling also gets around excessive assumptions of homogeneity because it allows each agent to behave according to its own rules. Additionally, agent preferences can evolve and agent-based systems are capable of learning. Agent-based modeling can capture emergent behaviors as well as recognize phenomena which appear random at the local level, but which produce recognizable patterns at a more global level (Bonabeau, 2002).
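The "local rules produce global patterns" idea can be illustrated with a deliberately tiny toy model (not a model drawn from the strategy literature): agents on a ring each follow one local rule, yet the population self-organizes into stable blocks.

```python
import random
random.seed(7)

n = 100
def local_majority(s):
    # each agent copies the majority of itself and its two ring neighbours
    return [1 if s[(i - 1) % n] + s[i] + s[(i + 1) % n] >= 2 else 0
            for i in range(n)]

state = [random.choice([0, 1]) for _ in range(n)]
walls0 = sum(state[i] != state[(i + 1) % n] for i in range(n))  # initial disorder
for _ in range(100):                     # synchronous local updates, no coordination
    state = local_majority(state)
walls = sum(state[i] != state[(i + 1) % n] for i in range(n))

assert walls <= walls0                   # local rules smooth global disorder
assert local_majority(state) == state    # an ordered fixed point has emerged
```

No agent knows the global configuration, yet an orderly macroscopic pattern emerges, which is the essential point Bonabeau (2002) makes about agent-based models.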
2 Modeling the Firm and Its Environment

Industry and firm specific factors have recently come to be seen as playing increasingly important roles in the performance of international market entrants. Comparing earlier regression studies which found country specific factors to be a greater determinant of risk than industry specific factors, Cavaglia, Brightman and Aked have developed a factor model to compare country vs. industry effects on securities returns in a more contemporary setting and obtained a conclusion quite different from the conventional wisdom. In general, they found that industry specific factors were a more important determinant of extraordinary returns and investor exposure than country factors (Cavaglia et al, 2000). In this context, traditional tools such as linear regression analysis may not be very useful for dealing with heterogeneous actors, or in settings where industry or firm specific factors play a dominant role. Caldart and Ricart (2004), for example, cite the overall ambiguousness of studies not just on international diversification but, following Barney (1991), on diversification strategies in general; they conclude that "as a result of the persistence of mixed results on such important lines of research, there is increasing agreement around the idea that new approaches to the study of the field would be welcomed".
2.1 Kauffman's Dynamic Fitness Landscape Approach

The choice of Kauffman's approach is not random to this study. Earlier work by McKelvey (1999) demonstrates both strong practical management applications stemming from Kauffman's work as well as strong theoretical connections to the strategy literature, established through McKelvey's application of NK Boolean Dynamic Fitness Landscapes to Michael Porter's value chain. Fundamental properties of Kauffman's model include, first, a type of complexity with which we are already familiar, one which is easily described by agent-based modeling: A complex system is a system (whole) comprising numerous interacting entities (parts), each of which is behaving in its local context according to some rule(s) or force(s). In responding to their own particular local contexts, these individual parts can, despite acting in parallel without explicit inter-part coordination or communication, cause the system as a whole to display emergent patterns, orderly phenomena and properties, at the global or collective level.
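A minimal NK-landscape sketch makes the model concrete. All parameters here are hypothetical (N = 8 sites, K = 2 neighbours, contributions drawn uniformly at random), with an adaptive one-bit-flip walk standing in for a firm searching the rugged landscape:

```python
import random
random.seed(3)

N, K = 8, 2
table = {}                              # lazily generated fitness-contribution lookup

def fitness(genome):
    # each site's contribution depends on itself and its K right neighbours
    total = 0.0
    for i in range(N):
        key = (i, tuple(genome[(i + j) % N] for j in range(K + 1)))
        if key not in table:
            table[key] = random.random()
        total += table[key]
    return total / N

genome = [random.randint(0, 1) for _ in range(N)]
f0 = fitness(genome)
f = f0
for _ in range(200):                    # adaptive walk: keep only improving flips
    i = random.randrange(N)
    trial = genome[:]
    trial[i] ^= 1                       # flip one bit
    ft = fitness(trial)
    if ft > f:
        genome, f = trial, ft

assert 0.0 <= f <= 1.0                  # fitness is a mean of uniform contributions
assert f >= f0                          # the walk never moves downhill
```

Increasing K increases epistatic coupling and makes the landscape more rugged, which is the tunable feature McKelvey exploits in mapping the model onto the value chain.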
2.2 Firm Rivalry and Competitive Intelligence

One might legitimately ask why the authors have taken such a long way around in getting to the role of competitive intelligence in firm competition. The answers are essentially that: (a) Real business is tricky and messy; (b) So real businesses are led by people who make satisficing and ad hoc arguments; (c) By dint of repetition over generations, these ad hoc arguments became the received wisdom; (d) Formal methods, be they statistical or game-theoretic or whatever, began to be applied to business; (e) However, these were mostly dressing up the ad hoc received wisdom in pretty new clothes; (f) Since both mathematicians and business theorists complained about this, a new generation of more sophisticated models sprang up, which we survey; (g) However, a closer look shows that they suffer from the sins of the original ad hocracy, plus flaws in the methodology as such; (h) Hence, in conclusion, most everything in the literature that we have surveyed is a dead end, except for a few gems which we think deserve attention; (i) This is even more germane when we admit to Competitive Intelligence as a meta-industry, so that players of business games know things about each other which classically they cannot know; further, businesses have employees or consultants who know formal methods such as Game Theory, and so model each other in ways deeper than classical assumptions admit; and these problems will grow more acute as computation-intensive methods such as agent-based models are used by all players in a game, so this is a bad time to sweep problems and meta-problems under the rug; hence (j) we attempt a synthesis in Complex Systems.
3.0 Competitive Intelligence: An Introductory Definition
McGonagle and Vella (1990), among the earliest authors seeking to define competitive intelligence (CI) in a formal fashion, define CI programs as:
"A formalized, yet continuously evolving process by which the management team assesses the evolution of its industry and the capabilities and behavior of its current and potential competitors to assist in maintaining or developing a competitive advantage." A Competitive Intelligence Program (CIP) tries to ensure that the organization has accurate, current information about its competitors and a plan for using that information to its advantage (Prescott and Gibbons 1993). While this is a good "boilerplate" operational definition, it does not address the deeper question of how we determine whether performing the activities of competitive intelligence does, in fact, convey a competitive advantage to the firms so engaged. Even at the simplest level of the process, such general definitions provide little guidance for evaluating either the type or the magnitude of the value creation involved. The evaluative challenge becomes even more complex when it becomes necessary to assess the value of accurate competitive intelligence for firms whose advantage in international markets is subject to network externalities. Under these circumstances, measuring the role and performance of competitive intelligence becomes one of the field's central problems. Can competitive intelligence raise a firm's overall fitness? Can competitive intelligence be used to drive a competitive advantage in product and market diversification? In the following section, we suggest a meta-framework (Competitive Intelligence Meta-Analysis, or CIMA) which lays out the criteria that a formal system of competitive intelligence must meet if it is to accomplish these goals.
4.0 Competitive Intelligence Meta-Analysis
Where we diverge from virtually all previous authors is in insisting that the process by which CI is defined be formal rather than ad hoc. This is an inherent relational property shared between CI and CIMA. An underlying goal of formalizing this process is to ensure that CI evolves adaptively and remains a robust element of value creation within the firm. As we have already argued, without such rigorously grounded first principles, the present, largely ad hoc, definition of CI may lead those attempting to pursue competitive advantage through CI in an unrewarding, self-limiting, or even value-destructive direction. Another question whose answer will frame much of the subsequent discourse is whether CI can actually be developed into an academic discipline that can prosper in business schools or university academic departments. For some areas of CI, especially those related to traditional security concerns, this may not be so great a problem. However, in high-technology fields, particularly where intellectual property rights are at issue or where the success or failure of a move toward internationalization depends on obtaining accurate information about existing network externalities and their likely course of development, the approach to the CI discipline is absolutely critical.
4.1 Are there "Standard" Tools and Techniques for all Competitive Intelligence? In a fashion similar to that faced by traditional intelligence analysts, each business organization's CI unit must also face the complexities of choice in sources of
raw data. In its simplest form, the question may revolve around whether the organization should use government sources, online databases, interviews, surveys, drive-bys, or on-site observations. On a more sophisticated level, the organization must determine, to draw from the analysis of Harvard Professor and former Chairman of the National Intelligence Council Joseph Nye, whether the issues it addresses are "secrets" to be uncovered or "mysteries" which require extended study of a more academic nature. In addition, although government sources have the advantage of low cost, online databases are preferable for faster turnaround; and whereas surveys may provide enormous amounts of data about products and competitors, interviews are preferred for getting a more in-depth perspective from a limited sample. These breakdowns indicate essential strategies for CI countermeasures and counter-countermeasures. One methodology for assessing the effects of CI countermeasures and counter-countermeasures is Game Theory. For example, we can look, in simplified terms, at a matrix of offensive and defensive strategies, such as:

Game Theory Matrix for CI Unit and Target

                            Target Uses Counter    Target Doesn't Use Counter
  Unit Uses Tactic          Mixed Results          Good Data
  Unit Doesn't Use Tactic   Bad Data               Mixed Results
The matrices grow larger when one considers various CI tactics, various countermeasures, and various counter-countermeasures. Once numerical weights are assigned to outcomes, there are situations in which the matrix leads formally to an optimum statistical mixture of approaches. Imagine the benefit to a CI unit of knowing precisely how much disinformation it should optimally disseminate in a given channel. This is a powerful tool in the arsenal of the sophisticated CI practitioner.
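As a sketch of how such an optimum mixture is computed, consider a 2x2 zero-sum version of the matrix above. The numerical weights below are purely hypothetical assumptions of ours, not values from the CI literature; they are chosen so that neither side has a dominant pure strategy, which is exactly when a mixed strategy is optimal.

```python
# Hypothetical payoffs to the CI unit (zero-sum: the target receives the
# negative). Rows: unit uses tactic / doesn't. Columns: target uses
# counter / doesn't. The numbers are illustrative assumptions only.
a, b = -1.0, 2.0   # use tactic:  detected and fed disinformation / good data
c, d = 1.0, 0.0    # no tactic:   target wastes effort on counters / status quo

# With no dominant row or column, the unit's optimal probability p of
# using the tactic follows from the indifference condition
#   p*a + (1-p)*c = p*b + (1-p)*d
denom = a - b - c + d
p = (d - c) / denom              # Pr(unit uses tactic)
q = (d - b) / denom              # Pr(target uses counter)
value = (a * d - b * c) / denom  # expected payoff (value of the game) to the unit
print(p, q, value)               # 0.25 0.5 0.5
```

Read off the result: the unit should deploy the tactic a quarter of the time; the target should counter half the time; and against any target behavior the unit then expects no worse than 0.5. That guaranteed expectation is what the text means by "precisely how much disinformation" to disseminate.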
5.0 The Formal CI Process: Sources of Error
Beyond this, the firm's CI group, whether internal or external, must address not only the problems of raw data collection but, far more importantly, the transformation of that data into a finished intelligence product. This requires stringent internal controls, not only with respect to assessing the accuracy of the raw data, but also a variety of tests to assess the accuracy and reliability of the analysis in the finished product. Throughout the processes of collection, transformation, production, and dissemination, rigorous efforts are needed to eliminate false confirmations and disinformation. Equally strong efforts are required to check for omissions and anomalies. Critical to this process is a clear understanding of first principles and an evaluative process which is not merely consistent but which embodies a degree of appropriateness that serves to maximize the firm's other value-creating activities.
5.1 Omissions
Omission, the apparent lack of cause for a business decision, makes it hard to execute a plausible response. While most omissions are accidental, there is a growing body of evidence that in a rich organizational context "knowledge disavowal" -- the intentional omission of facts which would lead to a decision less than optimally favorable to the person or group possessing those facts -- plays a significant role in organizational decision making. Following the framework of rational choice theory and neo-institutionalism, it is clear that every business decision has both proximate and underlying causes. Because historical events can often be traced more directly to individual decision makers than conventional wisdom would have us believe, avoiding omission must take two forms. The first approach is basically a scientific problem, typically expressed in applied mathematics as the twin requirements of consistency and completeness. The second approach to omissions is more complex and requires an organizational and evaluative structure that will, to the greatest extent practicable, eliminate omissions which arise out of either a psychological or an organizational bias.
5.2 Institutional Arrangements
Between the profound and persistent influences of institutional arrangements and the often idiosyncratic nature of corporate decision making, even decisions perceived as merely the decision maker's "seat-of-the-pants" ad hoc choices generally have far broader causes and consequences. The organizational literature is replete with examples of "satisficing", unmotivated (cognitive) bias, and motivated biases (procrastination, bolstering, hypervigilance, etc.) which weigh heavily on decision makers facing difficult choices and complex tradeoffs. In particular, bolstering is especially pernicious because complex psychological pressures often lead decision makers toward excessive simplification of decisions and toward a focus which relies almost exclusively on the desired outcome rather than on the more scientific joint evaluation of probability and outcome.
5.3 Additional Problems
Other problems associated with the decision process include framing and inappropriate representativeness. All of these tendencies must be rigorously guarded against during both the collection and the production of competitive intelligence. Again, to the extent practicable, the finished product of CI should be presented in a way which minimizes the possibility of its misuse in the fashions indicated above. To some extent these problems are inescapable, but there is a host of historical precedents in traditional intelligence which offer guidelines for avoiding some of the more common organizational pitfalls.
5.4 Anomalies
Following the above arguments, anomalies -- those data that do not fit -- must not be ignored any more than disturbing data should be omitted. Rather, anomalies often require a reassessment of the working assumptions of the CI system or process (McGonagle & Vella, 1990). In traditional intelligence, anomalies are often regarded as important clues about otherwise unsuspected behavior or events. In the same fashion, anomalies may represent important indicators of change in the business environment. There is also a self-conscious element to the methodology of dealing with anomalies. In the same way that anomalous data can lead researchers to new conclusions about the subject matter, the discovery of anomalies can also lead the researcher to new conclusions about the research process itself. It is precisely the anomalies that lead to a change of paradigm in the scientific world. Since conclusions drawn from data must be based on that data, one must never be reluctant to test, modify, and even reject one's basic working hypotheses. Any failure to test and reject what others regard as an established truth can be a major source of error. In this regard, it is essential to CI practice (collection, analysis and production) that omissions and anomalies in the mix of information (and disinformation) available in the marketplace be dealt with in an appropriate fashion.
6.0 Countermeasures
Similarly, it is essential that CI counter-countermeasures incorporate an appropriate methodology for dealing with anomalies and omissions (intentional or accidental). An important element of this process is developing the institutional capability to detect the "signatures" of competitors' attempts to alter the available data. CI countermeasures should be designed with the primary goal of eliminating the omissions and anomalies which occur in the mix of information and disinformation available in the marketplace. Included in this category are:
False Confirmation: False confirmation occurs when one source of data appears to confirm the data obtained from another source. In the real world, there is NO final confirmation, since one source may have obtained its data from the second source, or both sources may have received their data from a third common source.
Disinformation: The data gathered may be flawed because of disinformation, defined as incomplete or inaccurate information designed to mislead the organization's CI efforts. A proper scientific definition of disinformation requires the mathematics of information theory, which the authors detail elsewhere.
Blowback: Blowback happens when the company's disinformation or misinformation directed at the target competitor contaminates its own intelligence channels or CI information. In this scenario, ALL information gathered may be inaccurate, incomplete, or misleading.
Bibliography
[1] Arthur, W. Brian, "Inductive Reasoning and Bounded Rationality", American Economic Review, May 1994, Vol. 84, No. 2, pp. 406-411.
[2] Barney, J., "Firm Resources and Sustained Competitive Advantage", Journal of Management, 1991, Vol. 17, No. 1, 99-120.
[3] Berry, L., Kiel, D. and Elliott, E., "Adaptive agents, intelligence, and emergent human organization: Capturing complexity through agent-based modeling", PNAS, 99, Supplement 3, 7187-7188.
[4] Bonabeau, Eric, "Agent Based Modeling: Methods and Techniques for Simulating Human Systems", PNAS, Vol. 99, Supplement 3, 7280-7287.
[5] Cavaglia, J., Brightman, C., Aked, M., "On the Increasing Importance of Industry Factors: Implications for Global Portfolio Management", Financial Analysts Journal, Vol. 56, No. 5 (September/October 2000): 41-54.
[6] Caldart, A., and Ricart, J., "Corporate strategy revisited: a view from complexity theory", European Management Review, 2004, Vol. 1, 96-104.
[7] Gal, Y. and Pfeffer, A., "A language for modeling agents' decision making processes in games", Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, ACM Press, New York, 2003.
[8] Kauffman, Stuart, The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, 1993.
[9] McGonagle, J.J. & Vella, C.M., Outsmarting the Competition: Practical Approaches to Finding and Using Competitive Information, Sourcebooks, 1990.
[10] McKelvey, Bill, "Avoiding Complexity Catastrophe in Coevolutionary Pockets: Strategies for Rugged Landscapes", Organization Science, Vol. 10, No. 3, May-June 1999, pp. 294-321.
[11] Malhotra, J., "An Analogy to a Competitive Intelligence Program: Role of Measurement in Organizational Research", in Click & Dagger: Competitive Intelligence, Society of Competitive Intelligence Professionals, 1996.
[12] Prescott, J.E. & Gibbons, P.T. (1993), "Global Competitive Intelligence: An Overview", in Prescott, J.E., and Gibbons, P.T., Eds., Global Perspectives on Competitive Intelligence, Alexandria, VA: Society of Competitive Intelligence Professionals.
[13] Ruigrok, W., and Wagner, H. (2004), "Internationalization and firm performance: Meta-analytic review and future research directions", Academy of International Business, Stockholm, Sweden, July 11, 2004.
[14] Sato, Y., E. Akiyama, and J.D. Farmer, "Chaos in Learning a Simple Two-Person Game", Proceedings of the National Academy of Sciences, 99 (7), pp. 4748-4751 (2002).
Chapter 16
Mobility of Innovators and Prosperity of Geographical Technology Clusters: A longitudinal examination of innovator networks in the telecommunications industry
Jiang He¹
Wesley J. Howe School of Technology Management, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030, USA
[email protected]
M. Hosein Fallah, Ph.D.
Wesley J. Howe School of Technology Management, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030, USA
[email protected]
Abstract
Knowledge spillovers have long been considered a critical element for the development of technology clusters by facilitating innovations. Based on patent co-authorship data, we construct inventor networks for two geographical telecom clusters - New
¹ Corresponding Author
Jersey and Texas - and investigate how the networks evolved longitudinally as the technology clusters were undergoing different stages of their lifecycles. The telecom industry in the former state had encountered a significant unfavorable environmental change, largely due to the breakup of the Bell System and the evolution of the telecom industry. Meanwhile, the telecom cluster of Texas has been demonstrating a growing trend in innovation output and is gradually replacing New Jersey's leadership in telecom innovation as measured by number of patents per year. We examine differences and similarities in the dynamics of the innovator networks for the two geographical clusters over different time periods. The results show that TX's innovator networks became significantly better connected and less centralized than those of NJ in the later years of the time series, while the two clusters were experiencing different stages of their lifecycles. By using network visualization tools, we find that the overwhelming power of the Bell System's entities in maintaining the NJ innovator network lasted a very long time after the breakup of the company. In contrast, the central hubs of TX's networks are much less important in maintaining the networks.
Key words: Social network, Technology clusters, Innovation, Telecommunications R&D
1. Introduction
Clustering has become one of the key drivers of regional economic growth by promoting local competition and cooperation. The impact of clustering on business competitiveness and regional prosperity has been well documented (Porter, 1998). Identifying the extent to which technology spillovers are associated with regional economic growth is an area of active research. In this study, the authors provide a new approach to monitoring cluster evolution by conducting a longitudinal analysis of the dynamics of inventor networks. The paper focuses on the telecom sectors of New Jersey and Texas. For almost a century, New Jersey was the leader in telecommunications innovation, due to the presence of Bell Laboratories. With the break-up of AT&T and the passage of the 1996 Telecommunications Act that drove the de-regulation of the US telecommunications market, New Jersey's telecom sector went through a period of rapid change. Since the industry downturn of 2000, NJ's telecommunications sector has been experiencing a hard time. While NJ is struggling to recover from the downturn, we've observed that some other states, such as Texas, have been able to pull ahead and show greater growth (He and Fallah, 2005) as measured by the number of telecom patents. It seems that New Jersey's telecommunications cluster is currently stuck in a stagnant state. The analysis of inventor networks within the telecom industry can provide further insight into the evolution of NJ's telecom cluster and the influence of such networks on the cluster's performance.
2. Inventor Networks
Complex networks are often quantified by three attributes: clustering coefficient, average path length, and degree distribution. The clustering coefficient measures the cliquishness of a network, conceptualized as the likelihood that any two nodes connected to the same node are also connected with each other. The average path length measures the typical separation between any two nodes. The degree distribution maps the probability of finding a node with a given number of edges. Following the discovery of the "small world" network phenomenon, characterized by short average path length and a high degree of clustering (Watts and Strogatz, 1998), many empirical studies showed that small-world properties are prevalent in many actual networks, such as airline transportation networks and patent citation networks. Dense and clustered relationships encourage trust and close collaboration, whereas distant ties act as bridges for fresh and non-redundant information to flow (Fleming et al., 2004). There is evidence that the rate of knowledge diffusion is highest in small-world networks (Bala and Goyal, 1998; Cowan and Jonard, 2003; Morone and Taylor, 2004). In order to test the relationship between knowledge transfer networks and regional innovation output, Fleming et al. (2004) analyzed co-authorship network data from US patents of the period between 1975 and 2002. Their results are inconsistent with the generally believed proposition that "small world" networks are associated with a high level of innovation. It appeared that decreased path length and component agglomeration are positively related to future innovation output; however, clustering, in that study, had a negative impact on subsequent patenting. In fact, the existing empirical literature is not rich enough to illustrate the role of knowledge spillovers, created by inventors' mobility and/or collaboration, in promoting the development of clusters.
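The three attributes can be computed directly from an adjacency-list representation. The sketch below (a toy five-node undirected graph of our own, pure Python, following the standard definitions) illustrates the per-node clustering coefficient, the BFS-based average path length, and the degree sequence:

```python
from collections import deque

# Toy undirected network as adjacency sets (edges: A-B, A-C, B-C, B-D, D-E).
G = {
    'A': {'B', 'C'},
    'B': {'A', 'C', 'D'},
    'C': {'A', 'B'},
    'D': {'B', 'E'},
    'E': {'D'},
}

def clustering(G, v):
    """Fraction of pairs of v's neighbors that are themselves linked."""
    nbrs = list(G[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in G[nbrs[i]])
    return 2 * links / (k * (k - 1))

def avg_path_length(G):
    """Mean shortest-path length over connected node pairs, via BFS."""
    total, pairs = 0, 0
    for s in G:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in G[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for v, d in dist.items() if v != s)
        pairs += len(dist) - 1
    return total / pairs

avg_clust = sum(clustering(G, v) for v in G) / len(G)
degree_dist = sorted(len(G[v]) for v in G)
print(avg_clust, avg_path_length(G), degree_dist)
```

A "small world" in the Watts-Strogatz sense is precisely a network whose average clustering stays high while its average path length stays close to that of a random graph of the same size and density.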
In this study, we will investigate the evolution of telecom inventor networks of New Jersey versus Texas, and examine their significant differences which may explain the differences in innovation growth and cluster development of the two states.
3. Data and analysis approach
In this study we map the network of telecom innovators using patent co-authorship data. We believe patent co-authorship data is a good quantitative indicator of knowledge exchange. As the majority of patents are delivered by teams instead of independent individuals, it is reasonable to assume that co-authors know each other and that technical information exchange occurs in the process of innovation. Secondly, patent data can reflect the mobility of inventors, as long as they create patents at different firms or organizations. Cooper's study (2001) suggests that a higher rate of job mobility corresponds to greater innovation progress, because parts of the knowledge generated by a mobile worker can be utilized by both firms involved. The data for this study was originally collected from the United States Patent and Trademark Office (USPTO) and organized by Jaffe et al. (2002). This dataset provided us with categorized patent information covering the period between 1975 and 1999. The objective of our study is to analyze the dynamics of inventor networks for different geographical clusters over time. For this study, we selected the telecom
patents granted to inventors in New Jersey and Texas between 1986 and 1999 for analysis. We consider a patent to belong to either New Jersey or Texas as long as one or more inventors of the patent were located within that state. Patents belonging to both states were deleted for this initial study (they account for 0.9% of the total number of patents). For each state, we investigated how the inventor network evolved over time by moving a 3-year window. The patent dataset enables us to develop a bipartite network (upper portion of Fig. 1) which consists of two sets of vertices: patent assignees and patent inventors. This type of affiliation network connects inventors to assignees, not assignees to assignees or inventors to inventors, at least not directly. Bipartite networks are difficult to interpret, as network parameters such as degree distribution have different meanings for the different sets of vertices. In order to make the bipartite network more meaningful, we transform it into two one-mode networks. Figure 1 illustrates an example of this transformation from bipartite to one-mode networks. The network analysis tool Pajek was used to explore the patent network and visualize the analysis results. For the one-mode network of assignees, a link between two nodes means that the two organizations share at least one common inventor. In this network, a link indicates that one or more inventors have created patents for both of the organizations during that time frame. In practice this happens when an inventor who creates a patent for company A joins a team in company B, or moves to company B and creates a new patent that is assigned to company B. In either scenario, one can assume there would be a knowledge spillover due to R&D collaboration or job movement.
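The projection onto the assignee side can be sketched in a few lines. The records below are hypothetical assignee-inventor pairs invented for illustration (the paper's actual data comes from the USPTO/Jaffe dataset); two assignees are linked exactly when their inventor sets intersect:

```python
from itertools import combinations

# Hypothetical patent records as (assignee, inventor) pairs.
records = [
    ('AssigneeA', 'inv1'), ('AssigneeA', 'inv2'),
    ('AssigneeB', 'inv2'), ('AssigneeB', 'inv3'),
    ('AssigneeC', 'inv4'),
]

# Bipartite structure: the set of inventors appearing on each assignee's patents.
inventors = {}
for assignee, inv in records:
    inventors.setdefault(assignee, set()).add(inv)

# One-mode projection onto assignees: link two assignees iff they share
# at least one inventor (here, inv2 links AssigneeA and AssigneeB).
edges = {frozenset((a, b))
         for a, b in combinations(inventors, 2)
         if inventors[a] & inventors[b]}
print(sorted(tuple(sorted(e)) for e in edges))
```

The projection onto inventors is symmetric: link two inventors when they appear on a patent of the same assignee (or, for co-authorship proper, on the same patent).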
4. Findings and interpretation
As described, using the Pajek software package, we constructed two sets of one-mode networks for each geographical cluster with a longitudinal approach. This paper focuses on the one-mode networks which consist of organizations, in which a tie between any two nodes indicates that at least one patent inventor is shared by both assignees during the window period. Before proceeding to examine the network structures and the level of inventors' mobility, we had noticed from the original patent dataset that some assignees actually represent entities associated with one large organization. The Bell System's multiple entities form linkages between their innovators via patent co-authorship, and those linkages could account for a considerable portion of the linkages over the whole network (He and Fallah, 2006). Since this kind of linkage is not directly associated with the spontaneous job mobility of innovators, which is the focus of our study, we regarded them as noise for our interpretation and therefore treated multiple entities of an organization as one unit by reassigning a new unique name to them.
Figure 1: Transformation of a two-mode network to one-mode networks

Figure 2 shows the average node degree of all vertices for each state over different three-year window periods. A higher degree of vertices implies a denser network, which in this case indicates that inventors are more likely to move from one organization to multiple other ones. Based on this figure, it appears the NJ network was better connected in the earlier years, but the advantage lessened gradually as the time window moved ahead, and NJ finally fell behind TX in the last period of observation. The situation of NJ in patent network connectedness during the early years may correspond to the regulatory adjustment of the telecom industry. The 1984 Bell System Divestiture broke up the monopoly of AT&T, which led to some redistribution of employees, including some of the R&D people. The telecom deregulation created a substantial undeveloped new market which could be exploited with new technologies. As new start-ups emerge in a cluster, the job mobility among organizations can be expected to grow also. As can be seen in Figure 2, the connectedness of NJ's patent network experienced a dynamic change during the period between 1986 and 1993. The network of TX maintained a low level of connectedness in that period because there was very little telecom R&D work in that state. Starting from 1993, the networks in both NJ and TX demonstrated a significant growing trend in connectedness. Indeed, largely due to the further opening of the telecom market, that was also a period in which the total patent output for both states was growing rapidly (Figure 3). In terms of network structure, the major difference between the NJ network and the TX one is the level of degree centralization². We observe that the NJ network is more centralized than that of TX, especially in the later period of observation (Figure 4).
² Degree centralization is defined as the variation in the degrees of vertices divided by the maximum degree variation which is possible in a network of the same size (De Nooy, 2005). Put differently, a network is more centralized when the vertices vary more with respect to their centrality; a star topology network is an extreme case, with degree centralization equal to one.

Based on our analysis, we noticed that, compared with the counterpart of
TX, the main component of the NJ network always accounts for a larger portion of the total connectivity, and the difference becomes more significant in the later periods. This may correspond to the disappearance of many of the start-ups that emerged in the mid to late 1990s. Based on the network measurement of overall connectedness, though the NJ network also shows a growing trend after 1993, we conclude that the growth largely corresponds to the size growth of the main component rather than a balanced growing network. Figures 5 and 7 visualize the one-mode networks of assignees for NJ and TX, respectively (window period of 1997-1999). Figures 6 and 8 correspondingly show the main components extracted from the parent networks. Interestingly, we notice that the "Bell Replace" node, which represents the entities of the old Bell System, is the key hub maintaining the main component of the NJ network (Figure 6).
Figure 2: Mean of node degree - NJ vs. TX
Figure 3: Total number of patents per year - NJ vs. TX
Figure 5: NJ's one-mode network of assignees - 1997-1999
Figure 6: Main component extracted from Fig. 5
Figure 7: TX's one-mode network of assignees - 1997-1999
Figure 8: Main component extracted from Fig. 7
We interpret the highly centralized network structure as a weakness of NJ's telecom industry which may explain the cluster's performance in innovation output. As a majority of job movements in the cluster originate from a common source, the diversity of the knowledge transferred in such a network is limited compared to a network in which innovators move around more frequently via a variety of random routes. Also, considering the history and background of AT&T and the change of regulatory environment, there is a good possibility that the AT&T-hubbed patent
network may, to a large extent, correspond to a firm-level adjustment resulting from the corporate fragmentation, rather than the macro-dynamics of the labor market; the latter scenario is a more desirable attribute for encouraging cluster development.
5. Conclusions and future work
Our results illustrate that patterns of job mobility may be predictive of the trend in cluster development. The study suggests that, compared with New Jersey, Texas telecom inventors were more frequently changing their employers, starting their own businesses, and/or joining other teams from different organizations. The latter scenario may often result from formal collaborations between organizations, such as contracted R&D projects. Either way, these types of ties increase the possibility of technical information flowing within the industry cluster, though the two classifications of ties may vary in their capability for knowledge transfer. One limitation of the network analysis is that, based on the patent dataset itself, the proportion of connections corresponding to each type of tie cannot be explicitly measured, so future research may benefit from interviewing the inventors to further investigate their motivations or duties involved with the connected patents.
References
Bala, V. and S. Goyal, "Learning from neighbors," Review of Economic Studies, 65, 595-621, 1998.
Cooper, D.P., "Innovation and reciprocal externalities: information transmission via job mobility," Journal of Economic Behavior and Organization, 45, 2001.
Cowan, R. and N. Jonard, "The dynamics of collective invention," Journal of Economic Behavior and Organization, 52, 4, 513-532, 2003.
Cowan, R. and N. Jonard, "Invention on a network," Structural Change and Economic Dynamics, in press.
De Nooy, W., A. Mrvar and V. Batagelj, Exploratory Social Network Analysis with Pajek, Cambridge University Press, 2005.
Jaffe, A.B. and M. Trajtenberg, Patents, Citations, and Innovations: A Window on the Knowledge Economy, MIT Press, 2002.
He, J. and M.H. Fallah, "Reviving telecommunications R&D in New Jersey: can a technology cluster strategy work," PICMET 2005.
He, J. and M.H. Fallah, "Dynamics of inventors' network and growth of geographic clusters," PICMET 2006.
Morone, P. and R. Taylor, "Knowledge diffusion dynamics and network properties of face-to-face interactions," Journal of Evolutionary Economics, 14, 3, 327-351, 2004.
Porter, M.E., "Clusters and the new economics of competition," Harvard Business Review, Nov./Dec. 1998.
Watts, D.J. and S.H. Strogatz, "Collective dynamics of 'small-world' networks," Nature, 393, 440-442, 1998.
Chapter 17
Adaptive capacity of geographical clusters: Complexity science and network theory approach
Vito Albino, Nunzia Carbonara, Ilaria Giannoccaro
DIMEG, Politecnico di Bari
Viale Japigia 182, 70126 Bari, Italy
This paper deals with the adaptive capacity of geographical clusters (GCs), a relevant topic in the literature. To address it, the GC is considered as a complex adaptive system (CAS). Three theoretical propositions concerning GC adaptive capacity are formulated by using complexity theory. First, we identify three main properties of CASs that affect adaptive capacity, namely interconnectivity, heterogeneity, and level of control, and define how the values of these properties influence adaptive capacity. Then, we associate these properties with specific GC characteristics, so obtaining the key conditions that give GCs their adaptive capacity and thereby assure their competitive advantage. To test these theoretical propositions, a case study on two real GCs is carried out. The considered GCs are modeled as networks where firms are nodes and inter-firm relationships are links. Heterogeneity, interconnectivity, and level of control are treated as network properties and measured by using the methods of network theory.
1 Introduction
Geographical clusters (GCs) are geographically defined production systems, characterized by a large number of small and medium-sized firms involved at various phases in the production of a homogeneous product family. These firms are highly specialized in a few phases of the production process, and
integrated through a complex network of inter-organizational relationships [Porter 1998]. The literature on GCs is quite rich and involves different streams of research, such as social sciences, regional economics, economic geography, political economy, and industrial organization. These studies have mainly provided key notions and models to explain the reasons for GC competitive success [Krugman 1991; Marshall 1920; Sabel 1984]. However, in the recent competitive context the foregoing studies do not explain why some GCs fail while others do not, or why some GCs evolve by assuming different structures to remain competitive while others do not. They in fact adopt a static perspective, restricting the analysis to the definition of a set of conditions explaining GC competitive advantage in a particular context. In addition, they focus on the system as a whole rather than on the single components (firms), observe phenomena only after they have already happened at the system level, and describe them in terms of cause-effect relations by adopting a top-down approach. Our intention is to overcome these limitations by adopting a different approach. We look at GC competitive advantage not as the result of a set of pre-defined features characterizing GCs but as the result of two different capabilities, namely adaptability and co-evolution of GCs with the external environment. The higher the GC adaptive and co-evolution capabilities, the higher the GC competitive success. In fact, if GCs possess the conditions that allow them to adapt and co-evolve with the environment, they will modify themselves so as to be more successful in that environment. In this way, GCs have competitive advantage not because they are characterized by a fixed set of features but because they are able to evolve, exhibiting features that are the spontaneous result of adaptation to the environment.
This result is not known a priori, but emerges from the interactions among the system components and between them and the environment. This approach is consistent with the perspective adopted in the paper to study GCs by using complexity science [Cowan et al. 1994], which studies complex adaptive systems (CASs) and explains the causes and processes underlying emergence in CASs. Once GCs have been recognized as CASs, CAS theory on co-evolution is used to look for GC features that allow the adaptability of GCs in "high velocity" environments [Eisenhardt 1989]. In particular, three theoretical propositions regarding the GC structural conditions supporting GC adaptability are formulated. To test these theoretical propositions, a case study on two real GCs is carried out. The considered GCs are modeled as networks where firms are nodes and inter-firm relationships are links. Heterogeneity, interconnectivity, and level of control are considered as network attributes and thus measured by using the methods of network theory. In the next section we give a brief review of CASs, show that GCs possess relevant properties of CASs, and, based on CAS theory, derive the propositions on GC co-evolution in "high velocity" environments. Then, we present the research methodology and the empirical evidence.
2 The complexity science approach to GC competitive advantage
Complexity science studies CASs and their dynamics. CASs consist of evolving networks of heterogeneous, localized and functionally integrated interacting agents. These agents interact in a non-linear fashion, can adapt and learn, thereby evolving and developing a form of self-organization that enables them to acquire
collective properties that each of them does not have individually. CASs have adaptive capability and co-evolve with the external environment, modifying it and being modified by it [Axelrod and Cohen 1999; Gell-Mann 1994]. During the 1990s, there was an explosion of interest in complexity science as it relates to organizations and strategy. Complexity science offers a number of new insights that can be used to seek new dynamic sources of competitive advantage. In fact, applications of complexity science to organization and strategy identify key conditions that determine the success of firms in changing environments, associated with their capacity to self-organize, create a new order, learn and adapt [Levy 2000]. Complexity science is used in this study to identify the conditions of GCs that enable them to adapt to the external environment. The basic assumption of this study is therefore that GCs are CASs, given that they exhibit various properties of CASs, such as the existence of different agents (e.g. firms and institutions), non-linearity, different types of interactions among agents and between agents and the environment, distributed decision making, decentralized information flows, and adaptive capacity [Albino et al. 2005]. In what follows, three theoretical propositions concerning GC adaptive capacity are formulated by using CAS theory.
Interconnectivity. CAS theory identifies the number of interconnections within the system as a critical condition for self-organization and emergence. Kauffman [1995] points out that the number of interconnections among agents of an ecosystem influences the adaptive capacities of the ecosystem. He uses the NK model to investigate the rate of adaptation and level of success of a system in a particular scenario. The adaptation of the system is modeled as a walk on a landscape. During the walk, agents move by looking for positions that improve their fitness, represented by the height of that position.
A successful adaptation is achieved when the highest peak of the landscape is reached. The ruggedness of the landscape influences the rate of adaptation of the system. When the landscape has a very wide global optimum, the adaptive walk will lead toward the global optimum. In a rugged landscape, given that there are many, little-differentiated peaks, the adaptive walk will be trapped on one of the many suboptimal local peaks. By using the concept of tunable landscape and the NK model, Kauffman [1995] demonstrates that the number of interconnections among agents (K) influences the ruggedness of the landscape. As K increases, the ruggedness rises and the rate of adaptation decreases. Therefore, in order to assure the adaptation of the system to the landscape, the value of K should not be high. This result has been widely applied in organization studies to model organizational change and technological innovation [Rivkin and Siggelkow 2002]. In organization studies the K parameter has an appealing interpretation, namely, the extent to which components of the organization affect each other. Similarly, it can be used to study the adaptation of GCs, by considering that the level of interconnectivity of GCs is determined by the social and economic links among the GC firms. When the number of links among firms is high, the behavior of a particular firm is strongly affected by the behavior of the other firms. On the basis of the discussion above, we formulate the following proposition:
Proposition 1. A medium number of links among GC firms assures the highest GC adaptive capacity.
Heterogeneity. Different studies on complexity highlight that variety destroys variety. As an example, Ashby [1956] suggests that successful adaptation requires a system to have an internal variety that at least matches
environmental variety. Systems having agents with the appropriate requisite variety will evolve faster than those without. The same topic is studied by Allen [2001] and LeBaron [2001]. Their agent-based models show that novelty, innovation, and learning all collapse as the nature of agents collapses from heterogeneity to homogeneity. Dooley [2002] states that one of the main properties of a complex system that supports evolution is diversity. Such a property is related to the fact that each agent is potentially unique, not only in the resources it holds, but also in the behavioral rules that define how it sees the world and how it reacts. In a complex system, diversity is the key to survival. Without diversity, a complex system converges to a single mode of behavior. Referring to firms, the concept of agent heterogeneity can be associated with the competitive strategy of firms. This in fact results from the resources that the firm possesses and defines the behavior rules and the actions of firms in the competitive environment [Grant 1998]. Therefore, we assume that:
Proposition 2. The greater the differentiation of the competitive strategies adopted by GC firms, the higher the GC adaptive capacity.
Level of control. The governance of a system is a further important characteristic influencing CAS self-organization and adaptive behaviors. Le Moigne [1990] observes that CASs are not controlled by a hierarchical command-and-control center and manifest a certain form of autonomy. The latter is necessary to allow evolution and adaptation of the system. A strong control orientation tends to produce tall hierarchies that are slow to respond [Carzo and Yanousas 1969] and invariably reduce heterogeneity [Jones 2000]. The presence of "nearly" autonomous subunits characterized by weak but not negligible interactions is essential for the long-term adaptation and survival of organizations [Simon 1996].
The level of control in GCs is determined by the governance of the GC organizational structure. The higher the degree of governance, the higher the level of control exerted by one or more firms on the other GC firms. Therefore, we assume that:
Proposition 3. A medium degree of governance of the GC organizational structure improves the GC adaptive capacity.
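The NK-model argument behind Proposition 1 can be made concrete in a few lines of code. The sketch below is a minimal illustration under assumed parameters (N = 10 loci, a circular neighbourhood, and illustrative K values), not the model or data used in this chapter: it builds random NK fitness landscapes and counts their local optima, showing how ruggedness, and hence the number of suboptimal peaks on which an adaptive walk can be trapped, grows with K.

```python
# Minimal NK-landscape sketch: ruggedness (number of local optima) rises with K.
# N, K and the random seed are illustrative choices, not values from the chapter.

import random

def make_landscape(N, K, rng):
    """Random NK fitness function: locus i's contribution depends on
    itself and its next K neighbours on a circular genome."""
    tables = [{} for _ in range(N)]
    def fitness(state):
        total = 0.0
        for i in range(N):
            key = tuple(state[(i + j) % N] for j in range(K + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()  # lazily drawn contribution
            total += tables[i][key]
        return total / N
    return fitness

def count_local_optima(N, K, rng):
    f = make_landscape(N, K, rng)
    count = 0
    for s in range(2 ** N):  # enumerate all bit strings of length N
        state = [(s >> i) & 1 for i in range(N)]
        base = f(state)
        # a local optimum beats every one-bit-flip neighbour
        if all(f(state[:i] + [1 - state[i]] + state[i + 1:]) < base
               for i in range(N)):
            count += 1
    return count

rng = random.Random(1)
optima = {K: count_local_optima(10, K, rng) for K in (0, 4, 9)}
for K, c in optima.items():
    print(f"K={K}: {c} local optima")
```

With K = 0 the loci are independent, so the landscape has a single peak that any adaptive walk reaches; as K approaches N - 1 the number of local optima multiplies, which is the ruggedness Kauffman's argument turns on.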
3 Research methodology
The proposed theoretical propositions have been supported by the results of an empirical investigation. The empirical investigation, adopting a multiple-case study approach [Yin 1989], involves three in-depth case studies on real GCs. To address the purpose of our research we selected the GCs on the basis of their competitive advantage and modeled each GC as a network where firms are nodes and inter-firm relationships are links. In particular, we chose three cases characterized by a different degree of success in the competitive scenario and, by using the methods of network theory, we measured the GC structural characteristics identified in the three theoretical propositions as network attributes. We considered two Italian industrial districts¹: the leather sofa district of Bari-Matera and the agro-industrial district of Foggia, both localized in Southern Italy. The leather sofa district of Bari-Matera has been analyzed in two
¹ Industrial districts are a specific production model characterized by the agglomeration of small- and medium-sized firms integrated through a complex network of buyer-supplier relationships and managed by both cooperative and competitive policies.
different years, 1990 and 2000, which correspond to different stages of its lifecycle, Development and Maturity respectively [Carbonara et al. 2002]. The case studies were preceded by an exploratory phase intended to delineate the characteristics of the considered GCs. This phase involved two stages: 1) the collection of data and qualitative information from firm annual reports and the reports of Trade Associations and Chambers of Commerce; 2) a survey of the firms operating in the two GCs, based on the Cerved database (the electronic database of all the Italian Chambers of Commerce). The analysis of the data collected during stage one was used to evaluate the competitive performance of the three cases. In particular, for each case we measured the percentage ratio between export and turnover (Table 1). Through the survey it has been possible to define the dimension of the considered GCs in terms of number of firms. Subsequently, a sample of firms has been selected within each GC. A reputational sampling technique [Scott 1991] has been used rather than a random one. To do this, we asked a key GC informant to select a sample of firms based on their reputations as both main players of the GC and active players in the network. This sampling technique ensures that the selected sample better represents the population. The three networks that model the considered GCs are: 1) the network of the agro-industrial district of Foggia; 2) the network of the leather sofa district of Bari-Matera in the Development stage; 3) the network of the leather sofa district of Bari-Matera in the Maturity stage. We labeled these three networks "alfa-net", "beta-net", and "gamma-net", respectively.

Table 1: Geographical clusters' dimension and competitive performance.

                         Agro-industrial      Leather sofa district of      Leather sofa district of
                         district of Foggia   Bari-Matera, Development      Bari-Matera, Maturity
                                              Stage (1990)                  Stage (2000)
Number of firms          140                  101                           293
Export/turnover (%)      33%                  60%                           77%
In particular, we selected 66 firms active in the alfa-net, 43 in the beta-net, and 58 in the gamma-net. These samples represent 47 percent, 43 percent, and 20 percent of the respective GC's total firm population. The data on each firm of the three networks have been collected through interviews with the managers of the firms and through questionnaires. In particular, we collected network structure data by asking respondents to indicate with which other sample firms they have business exchanges. We then used these data to build the network of business inter-firm relationships characterizing each considered GC.
4 The network analysis
Testing the three theoretical propositions against the empirical study requires measuring the three features of the GC organizational structure, namely heterogeneity, interconnectivity, and level of control. To this aim we first operationalized the three GC structural features in terms of network attributes and then measured the identified network attributes by using the methods of network theory. In particular, we used the following set of measures: network density, network heterogeneity, and network centrality. The test of Proposition 1 has been based on a simple measure of the network structure, network density (ND), defined as the proportion of possible linkages that are
actually present in a graph. The network density is calculated as the ratio of the number of linkages present, L, to its theoretical maximum in the network, n(n-1), with n being the number of nodes in the network [Borgatti and Everett 1997]:

ND = L / (n(n-1))

To test Proposition 2 we performed an analysis of the heterogeneity of the coreness of each actor in the network. By coreness we refer to the degree of closeness of each node to a core of densely connected nodes observable in the network [Borgatti and Everett 1999]. Using actor-level coreness data, we calculated the Gini coefficient, which is an index of network heterogeneity. Finally, to test Proposition 3 we used an index of network centrality: the average normalized degree centrality (Average NDC). The degree centrality of a node is defined as the number of edges incident upon that node. Thus, degree centrality refers to the extent to which an actor is central in a network on the basis of the ties that it has directly established with other actors of the network. It is measured as the sum of the linkages of node i with the other nodes j of the network:

DC(n_i) = Σ_j x_ij

Due to the different sizes of the three networks, a normalized degree centrality NDC has been used [Giuliani 2005]. This is the degree centrality DC(n_i) standardized by (n-1):

NDC(n_i) = DC(n_i) / (n-1)
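These measures can be computed directly from an edge list. The following is a minimal sketch in plain Python; the five-node network and its ties are invented toy data, not the alfa-, beta-, or gamma-net, and whereas the paper computes the Gini coefficient over coreness scores, here it is applied to degree scores purely to show the computation:

```python
# Toy computation of network density, degree centrality, and a Gini coefficient.
# Nodes and edges are invented illustration data, not the paper's networks.

nodes = ["a", "b", "c", "d", "e"]
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")}  # undirected ties

n = len(nodes)
L = 2 * len(edges)  # each undirected tie counts in both directions

# Network density: ND = L / (n * (n - 1))
ND = L / (n * (n - 1))

# Degree centrality DC(i) = number of ties of node i, normalized by (n - 1)
def degree(i):
    return sum(1 for e in edges if i in e)

NDC = {i: degree(i) / (n - 1) for i in nodes}
average_NDC = sum(NDC.values()) / n

def gini(values):
    """Gini coefficient of a list of non-negative actor-level scores."""
    xs = sorted(values)
    m = len(xs)
    cum = sum((k + 1) * x for k, x in enumerate(xs))  # rank-weighted sum
    return (2 * cum) / (m * sum(xs)) - (m + 1) / m

print(round(ND, 3),
      round(average_NDC, 3),
      round(gini([degree(i) for i in nodes]), 3))
```

A denser network raises ND toward 1, a more unequal distribution of scores raises the Gini coefficient toward 1, and a network dominated by one highly connected firm raises the average normalized degree centrality.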
Once we have operationalized the GC structural features in terms of network properties, we applied network analysis techniques using the UCINET software [Borgatti et al. 2002] to represent the three networks and to compute the identified network attributes (Table 2).

Table 2: Measures of the three network attributes.

                                        Alfa-net   Beta-net   Gamma-net
Network density                         0.0138     0.0321     0.0236
Gini coefficient of coreness            0.041      0.22       0.24
Average normalized degree centrality    2.751      5.648      4.295
Each network attribute represents a structural GC characteristic. In particular, we used: 1) the network density to measure GC interconnectivity, 2) the Gini coefficient to measure GC heterogeneity, and 3) the average normalized degree centrality to measure the level of control inside the GC. As regards the competitive performance of each GC, we used the percentage ratio between export and turnover. Export measures are usually adopted to evaluate competitiveness at different levels, namely country, industry, firm and product [Buckley et al. 1988]. The percentage ratio between export and turnover can therefore be considered a good proxy to compare the competitive advantage of different firms and/or systems of firms. Results are summarized in Table 2 and confirm the three propositions.
5 Conclusion
This paper has used complexity science concepts to make new contributions to the theoretical understanding of Geographical Cluster (GC) competitive advantage. Complexity science has been used as a conceptual framework to investigate the reasons for the success of GCs. This approach is particularly valuable given that it allows the limits of traditional studies on GCs to be overcome. In particular, GC competitive advantage is not the result of a set of pre-defined features characterizing GCs, but the result of dynamic processes of adaptability and evolution of GCs with the external environment. Therefore, GC success is linked to the system's adaptive capacity, which is a key property of complex adaptive systems (CASs). Using CAS theory, the key conditions of GCs that give them adaptive capacity have been identified, namely the number of links among GC firms, the level of differentiation of the competitive strategies adopted by GC firms, and the degree of governance of the GC organizational structure. CAS theory has then been used to identify the values these variables should take to increase the system's adaptive capacity. In this way, three theoretical propositions concerning GC adaptive capacity have been formulated. The proposed theoretical propositions have been supported by the results of an empirical investigation. In particular, the empirical investigation, adopting a multiple-case study approach, involves three in-depth case studies on real GCs. The three cases were analyzed by using the methods of network theory. We measured the GC structural characteristics identified in the three theoretical propositions as network attributes.
The empirical results have confirmed the theoretical propositions, showing that: (i) a medium network density assures the highest performance, measured in terms of the percentage ratio between export and turnover, (ii) the higher the network heterogeneity, measured by the Gini coefficient, the higher the GC performance, and (iii) a medium value of network centrality, measured by the average normalized degree centrality, determines the highest GC performance.
Bibliography
[1] Albino, V., Carbonara, N., & Giannoccaro, I., 2005, Industrial districts as complex adaptive systems: Agent-based models of emergent phenomena, in Industrial Clusters and Inter-Firm Networks, edited by Karlsson, C., Johansson, B., & Stough, R., Edward Elgar Publishing, 73-82.
[2] Allen, P.M., 2001, A complex systems approach to learning, adaptive networks, International Journal of Innovation Management, 5, 149-180.
[3] Ashby, W.R., 1956, An Introduction to Cybernetics, Chapman & Hall (London).
[4] Axelrod, R., & Cohen, M.D., 1999, Harnessing Complexity: Organizational Implications of a Scientific Frontier, The Free Press (New York).
[5] Borgatti, S.P., & Everett, M.G., 1999, Models of core/periphery structures, Social Networks, 21, 375-395.
[6] Borgatti, S.P., Everett, M.G., & Freeman, L.C., 2002, UCINET 6 for Windows, Harvard: Analytic Technologies.
[7] Buckley, P.J., Pass, C.L., & Prescott, K., 1988, Measures of International Competitiveness: A critical survey, Journal of Marketing Management, 4 (2), 175-200.
[8] Carbonara, N., Giannoccaro, I., & Pontrandolfo, P., 2002, Supply chains within industrial districts: a theoretical framework, International Journal of Production Economics, 76 (2), 159-176.
[9] Carzo, R., & Yanousas, J.N., 1969, Effects of flat and tall structure, Administrative Science Quarterly, 14, 178-191.
[10] Cowan, G.A., Pines, D., & Meltzer, D. (eds.), 1994, Complexity: Metaphors, Models, and Reality, Proceedings of the Santa Fe Institute, Vol. XIX, Addison-Wesley (Reading, MA).
[11] Dooley, K.J., 2002, Organizational Complexity, in International Encyclopedia of Business and Management, edited by M. Warner, Thompson Learning (London).
[12] Gell-Mann, M., 1994, The quark and the jaguar, Freeman & Co (New York).
[13] Giuliani, E., 2005, The structure of cluster knowledge networks: uneven and selective, not pervasive and collective, Working Papers, Copenhagen Business School.
[15] Grant, R.M., 1998, Contemporary Strategy Analysis: Concepts, Techniques, Applications, Blackwell (Oxford).
[16] Jones, G.R., 2000, Organizational Theory, Addison-Wesley (Reading, MA).
[17] Kauffman, S.A., 1993, The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press (New York/Oxford).
[18] Kauffman, S.A., 1995, At home in the universe: the search for laws of self-organization and complexity, Oxford University Press (New York).
[19] Krugman, P.R., 1991, Geography and Trade, MIT Press (Cambridge, MA).
[21] Le Moigne, J.L., 1990, La Modélisation des Systèmes Complexes, Dunod (Paris).
[22] LeBaron, B., 2001, Financial market efficiency in a coevolutionary environment, Proceedings of the Workshop on Simulation of Social Agents: Architectures and Institutions, Argonne National Laboratory and The University of Chicago, 33-51.
[25] Levy, D.L., 2000, Applications and Limitations of Complexity Theory in Organization Theory and Strategy, in Handbook of Strategic Management, edited by Rabin, J., Miller, G.J., & Hildreth, W.B., Marcel Dekker (New York).
[27] Marshall, A., 1920, Principles of Economics, Macmillan (London).
[28] McKelvey, B., 2004, Toward a 0th Law of Thermodynamics: Order Creation Complexity Dynamics from Physics and Biology to Bioeconomics, Journal of Bioeconomics, 6, 65-96.
[30] Piore, M., & Sabel, C.F., 1984, The Second Industrial Divide, Basic Books (New York).
[31] Porter, M., 1998, Clusters and the new economics of competition, Harvard Business Review, 76, 77-90.
[32] Rivkin, J.W., & Siggelkow, N., 2002, Organizational Sticking Points on NK Landscapes, Complexity, 7 (5), 31-43.
[33] Scott, J., 1991, Social network analysis, Sage (London).
[34] Simon, H.A., 1996, The sciences of the artificial, MIT Press (Cambridge, MA).
[35] Yin, R., 1989, Case Study research: Design and methods, Sage (Newbury Park, CA).
Chapter 18
Corporate Strategy: An Evolutionary Review
Philip V. Fellman
Southern New Hampshire University
As Richard Rumelt indicates in his book "Fundamental Issues in Strategy: A Research Agenda", corporate strategy is a relatively recent discipline. While pioneers in the field like B.H. Liddell Hart and Bruce Henderson (who later founded the Boston Consulting Group and created the famous BCG Growth-Share Matrix) began their research during the Second World War, the modern field of business strategy as an academic discipline, taught in schools and colleges of business, emerged rather later. Rumelt provides an interesting chronicle in the introduction to his volume by noting that historically corporate strategy, even when taught as a capstone course, was not really an organized discipline. Typically, depending on the school's location and resources, the course would be taught either by the most senior professor in the department or by an outside lecturer from industry. The agenda tended to be very much instructor-specific and idiosyncratic rather than drawing in any systematized fashion upon the subject matter of an organized discipline.
1.0 Corporate Strategy as the Conceptual Unifier of the Business
One of the most important early thinkers on corporate strategy was Harvard professor Kenneth R. Andrews. Perhaps his most famous work is his book "The Concept of Corporate Strategy",¹ which has also appeared in other places as both a brief book chapter² and a rather longer audiotape lecture. Andrews primarily conceptualized corporate strategy as a unifying concept, centered around pattern.³ While many of his notions regarding the corporation may seem a bit dated, as they did, after all, reach their final form before the majority of landmark decisions in corporation law and corporate governance which emerged out of the merger mania of the 1980s, and far before the electronic commerce and internet revolution of the next decades, their foundations, as with the work of Hart and Henderson, are nonetheless essentially sound. Andrews' most important contribution to the literature is most likely his emphasis on "pattern" as the relational foundation of the corporate activities which we call strategy. However, strategy is often emergent in practice. Henry Mintzberg, in a now famous series of articles on "Emergent Strategies", made some very convincing arguments that for most corporations, strategy is more likely to be a pastiche of structures which emerge (like undocumented, patched operating system code) as the corporation grows, rather than a monolithic set of policy statements formulated at the birth of the corporation.⁴
2.0 Corporate Pattern as the Core of Corporate Strategy
Nonetheless, Andrews' recognition of pattern as a defining variable in corporate strategy is a major contribution to the literature. Among other things, in defining strategic tradeoffs and in evaluating the strategic behavior of a business, it allows the analyst to determine how coherent that overall pattern is. Questions like "does this corporate activity make sense within the broader context of our organizational mission?" fit very neatly into the "coherence test" and in a very real way anticipate the concepts of cross-subsidization and strategic fit among the different activities in a corporation's value chain which are so well expressed by Michael Porter at the end of the 1990s.⁵
¹ Kenneth R. Andrews, "The Concept of Corporate Strategy", Richard Irwin, Inc. (New York: 1980).
² See "Readings in the Strategy Process", ed. Henry Mintzberg, Prentice Hall (New Jersey: 1998), and "The Concept of Corporate Strategy", Careertapes Enterprises, 1991.
³ Ibid. No. 6: "Corporate strategy is the pattern of decisions in a company that determines and reveals its objectives, purposes or goals, produces the principal policies and plans for achieving those goals, and defines the business the company is to pursue, the kind of economic and human organization it intends to be and the nature of the economic and non-economic contribution it intends to make to its shareholders, employees, customers and communities..."
⁴ See Henry Mintzberg, "The Design School: Reconsidering the Basic Premises of Strategic Management", Strategic Management Journal, vol. 11/3, 1990, 171-195. For a grand summation of Mintzberg's views, see "View from the top: Henry Mintzberg on strategy and management" (interview by Daniel J. McCarthy), The Academy of Management Executive, August 2000, v14 i3 p31.
⁵ See Michael Porter, "What is Strategy?", Harvard Business Review, November-December 1996.
While Andrews draws a strong distinction between strategic planning and strategy implementation, the work of later authors like Mintzberg and Drucker⁶ demonstrates rather clearly that this is probably not the best way of viewing the strategy process. As the next section of this brief paper discusses, this was the earliest conceptual break which Richard Rumelt, Andrews' "star student" and a leader of the first generation of professional strategists, made with Andrews. While Andrews explicitly argued that breaking up a company's activities into a sequence of activities would undermine the company's ability to formulate a coherent strategy, Rumelt argues that in order to assess the appropriateness of corporate strategy, a reduction to basic elements is a necessary prerequisite.
3.0 Appropriateness and Strategy Evaluation: The Next Generation
Richard Rumelt, by comparison, is an analyst of an entirely different sort. As mentioned above, Rumelt was a member of the first group of strategy analysts to actually train and specialize in the field of corporate strategy as the discipline to which their careers would be devoted. In his introduction to "Fundamental Issues in Strategy", Rumelt emphasizes that corporate strategy, as a relatively new discipline, has roots in many other fields, particularly within the social sciences. Among the fields which Rumelt cites are history, economics, sociology, political science, and applied mathematics. With the more recent work of Theodore Modis, Michael Lissack, J. Doyne Farmer and Brian Arthur, we might also wish to add physics, pure mathematics, computer science and population biology. In any case, Rumelt offers a variety of unique and powerfully phrased arguments designed to pinpoint not just the general issues facing the modern, large corporation, but to uncover the "first principles" of corporate strategy. From the outset, Rumelt aims at building a different kind of model than Andrews. Where Andrews is largely descriptive and illustrative, Rumelt is analytical and evaluative. In fact, the particular piece whose insights we draw from here is entitled "Evaluating Business Strategy"⁷ and focuses, among other things, on whether a particular conventional methodology for measuring corporate performance is, in fact, the appropriate set of measures to be used. Also, where Andrews is an early strategist, seeking to define in his own way, if not the first principles of strategy, then the basic model of corporate strategy, and is ultimately locked into a model which by its very nature becomes increasingly dated with the passage of time, Rumelt is not only an early researcher in the discipline, but also a contemporary author.
In the article in question, Rumelt begins with the rather bold statement that "Strategy can neither be formulated nor adjusted to changing circumstances without a process of strategy evaluation." Like Gordon Donaldson, his first target is the "shopping list strategy" model which is often so central to the preferences of Andrews' "top management" and which holds as its sacred cow the value of growth. Foreshadowing later, prize-winning explanations by Michael Jensen and others, Rumelt is already raising the fundamental issue of "value creation" vs. simple growth models.⁸ Rumelt further argues that "strategy evaluation is an attempt to look beyond the obvious facts regarding the short-term health of a business, and instead to appraise those more fundamental factors and trends that govern success in the chosen field of endeavor."⁹ In choosing those factors, Rumelt identifies four critical areas: consistency, consonance, feasibility and advantage. For the purposes of this paper, and the construction of the 3-C test (Consistency, Coherence and Consonance), we will focus primarily upon Rumelt's first two categories and treat feasibility and advantage only in passing. This is not to argue that feasibility and advantage are not important, but merely to point out that arguments about feasibility tend to be case-specific and arguments about competitive advantage have probably been developed at greater length by Michael Porter, and then by Hamel and Prahalad, than by anyone else. This is not to slight Rumelt's efforts in this area, but simply to point the reader to those areas of Richard Rumelt's work which are his strongest, and presumably both his most enduring and his most useful for immediate practical application.

⁶ See for example "Beyond capitalism", an interview with management expert and author Peter Drucker by Nathan Gardels, New Perspectives Quarterly, Spring 1998, v15 i2 p4(9); see also "'Flashes of genius:' Peter Drucker on entrepreneurial complacency and delusions... and the madness of always thinking you're number one", Inc. (Special Issue: The State of Small Business), interview by George Gendron, May 15, 1996, v18 n7 p30(8).
⁷ Richard Rumelt, "Evaluating Business Strategy", in Readings in the Strategy Process, ed. Henry Mintzberg, Prentice Hall (New Jersey: 1998).
4.0 The Process View

In the early part of his essay, Rumelt defines four key characteristics:10

Consistency: The strategy must not present mutually inconsistent goals and policies.
Consonance: The strategy must represent an adaptive response to the environment and to the critical changes occurring within it.
Advantage: The strategy must provide for the creation and/or maintenance of a competitive advantage in the selected area of activity.
Feasibility: The strategy must neither overtax valuable resources nor create unresolvable subproblems.

In explaining consistency, Rumelt argues that a first reading gives the impression that such bold flaws are unlikely to be found at the lofty levels of corporate strategy formulation and implementation (although he does not define them quite so separately as does Andrews). Rumelt relates his consistency argument to the implicit organizational coherence suggested by Andrews and even goes so far as to argue that the purpose of consistent organizational policy is to create a coherent pattern of
8 The most lucid explanation of this process is probably given in "The Search for Value: Measuring the Company's Cost of Capital", by Michael C. Ehrhardt, Harvard Business School Press, 1994. 9 Ibid. No. 12. 10 Ibid.
organizational action. Rumelt then offers three diagnostic techniques for determining if there are structural inconsistencies in an organization's strategy:11

1. If problems in coordination and planning continue despite changes in personnel and tend to be issue-based rather than people-based.
2. If success for one organizational department means, or is interpreted to mean, failure for another department, either the basic objective structure is inconsistent or the organizational structure is wastefully duplicative.
3. If, despite attempts to delegate authority, operating problems continue to be brought to the top for the resolution of policy issues, the basic strategy is probably inconsistent.
We can see from the very phrasing of Rumelt's arguments that he raises powerful issues. Also, in addressing consistency as a first principle of corporate strategy, he has changed the approach to top management. In Andrews' world, the top management was the critical locus of corporate strategic planning. Among other things, as Gordon Donaldson points out, they had probably not yet learned that corporations do not necessarily possess the inalienable right to dream the impossible dream.12 In fact, corporations dreaming the impossible dream go bankrupt on a regular basis. Donaldson makes a very strong case that corporations following even moderately inconsistent goals are bound both to under-perform in the short term and to threaten the stability of their financial position in the long term.13 Thus the consistency issue is one of tremendous importance.
4.1 Organizational Learning: A First Cut

At the same time, it is a very topical issue, closely related to process re-engineering, organizational learning and modern organizational design. One of the fundamental goals of all these enterprises is to get rid of these inconsistencies, which Rumelt, using mathematical phraseology in his more recent writings, would call the result of in-built zero-sum game strategies. What was standard modus operandi for the post-war, industrial, large-scale, divisionally organized corporation has long since turned into a competitive disadvantage in the modern demand-driven world. As Evans and Wurster note, the creation of new technologies is often the precursor to the disappearance of entire industries.14 When a company or a cluster of companies is built around structural inconsistencies, no matter how great the length of historical common practice, extinction is all but assured. In their words, a company's greatest assets may overnight become the company's greatest liabilities. If Rumelt's
" Ibid. Gordon Donaldson " Financial Goals and Strategic Consequences", Harvard Business Review, MayJune, 1985. 13 Ibid. 14 "Strategy and the New Economics of Information", Philip Evans and Thomas Wurster, Harvard Business Review, September-October, 1997. 12
arguments are to be taken seriously, then consistency appears to be the first thing which one should look for in understanding corporate strategy.
5.0 Re-examining the Development and Evolution of the Corporation

While earlier authors, such as Porter,15 argued that the investment in and ownership of core technologies was a corporation's principal strength, others, such as Kenichi Ohmae, have argued at least as convincingly that as technologies increase the rapidity with which they evolve, and as research and development costs mount exponentially, a strategy for managing a cluster of proprietary, licensed, and joint-venture technologies is the most cost-effective way of securing a continually improving competitive position. What is interesting to note is that both groups of authors, that is, the group following Porter's arguments16 as well as the groups centered around Ohmae and Hamel and Prahalad,17 indicated that even though technologies are rapidly evolving and a company has to stake out a credible future position now, the time horizon for strategic planning has, in fact, expanded, and that the ten-year strategic plan is often more important than the annual review.18 In the context of the above typologies, one can see where the role of the central strategic planning group may be more rather than less important for 21st-century corporations. The big difference between these groups and their predecessors, as has already been hinted at, is that through a flatter management structure and a more empowered work force, feedback loops can be built into the planning process from the near, intermediate and far "ends" of the business (to borrow Porter's "value-chain" terminology) in order to avoid costly future mistakes by a group which is excessively distant from daily operations and daily changes in the marketplace.
15 Porter treats the subject in a variety of fashions, most notably in his articles for the Harvard Business Review, "How Competitive Forces Shape Strategy" (1980), The Competitive Advantage of Nations (1989), "Capital Disadvantage: America's Failing Capital Investment System" and "What is Strategy" (1996). 16 See Slater, Stanley F., and Olson, Eric M., "A Fresh Look at Industry and Market Analysis", Business Horizons, January-February, 2002, for an analysis of the ways in which Michael Porter's Five Forces Model (Porter, 1979) can be modified to reflect subsequent globalization and the evolution of technology, in which they develop an "augmented model for market analysis" (pp. 15-16). In particular, they incorporate the additional features of "complementary competition" and "complementors", as well as "composite competition", "customers", "market growth" and "market turbulence". 17 Gary Hamel and C.K. Prahalad, "The Core Competence of the Corporation", Harvard Business Review, May-June, 1990. 18 In a similar vein to No. 29, above, an article from Fast Company (Issue 49, August 2001, pp. 108 ff.) by Jennifer Reingold, entitled "Can C.K. Prahalad Still Pass the Test?", explores some of the ways in which Prahalad has adapted the strategies based around the concept of core competence (which he and Gary Hamel developed in the 1980s and 1990s) for the 21st century. In particular, Prahalad explains that his newer approach is based around universal inter-connectivity and breaking the mold of the print-driven society. In describing his new approach, for which he has drawn on the Sanskrit word "Praja", meaning "the assembly" or "the common people" (alternatively, "the people in common"), he argues that the mass availability of data and the ability of consumers to personalize their experience over the internet is a fundamental driver of changes so profound that he calls them "cosmic" (with reference to their paradigmatic scope).
6.0 Strategy and the Future

Complexity strategist Lissack draws on the literature of population biology to describe the competitive environment as a "fitness landscape". Following Kauffman, "a fitness landscape is a mountainous terrain showing the location of the global maximum (highest peak) and global minimum (lowest valley)", where the height of a feature is a measure of its fitness.19 What is interesting about the fitness landscape model is that it is a dynamic rather than a static model. Kauffman argues that real fitness landscapes and environments are not fixed but are constantly changing. The change is a product of the activities of the various species on the landscape. In this sense there is a remarkable degree of congruity between the fitness landscape metaphor and many models of product/market competitiveness. Kauffman and Lissack describe these changes as "deformations" and ascribe them to the same kinds of forces which Michael Porter describes as "jockeying for position". In more precise terms, Kauffman and Macready argue, "real fitness landscapes in evolution and economies are not fixed, but continually deforming. Such deformations occur because the outside world alters, because existing players and technologies change and impact one another, and because new players, species, technologies, or organizational innovations enter the field." Kauffman and Macready argue that a jagged fitness landscape means that solutions tend to be local and incremental rather than global and sweeping. Because we are considering a complex metaphor here, it is important to specify that "local" and "global" are mathematical terms referring to the character of the competitive environment, not geographical terms. In drawing metaphors from the literature of complexity theory, Lissack also uses the concept of a "basin of attraction", familiar to those who have read some chaos theory through the notion of "strange attractors".
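The distinction between local and global optima on a jagged landscape can be made concrete with a small sketch. The landscape function below is a made-up illustration (not Kauffman's NK model): three peaks of differing heights, with a greedy "adaptive walk" that only ever steps uphill and therefore stalls on whichever peak is nearest.

```python
def rugged_fitness(x):
    """Toy one-dimensional 'rugged landscape': three peaks of differing
    heights, so a greedy search can stall on a locally optimal peak.
    The peak positions and heights are invented for illustration."""
    peaks = {2: 3.0, 5: 7.0, 9: 5.0}   # position -> peak height
    return max(h - 2 * abs(x - p) for p, h in peaks.items())

def hill_climb(x, steps=20):
    """Greedy adaptive walk: step to a neighbouring position only if it
    is strictly fitter -- the 'local and incremental' search described
    by Kauffman and Macready."""
    for _ in range(steps):
        best = max((x - 1, x + 1), key=rugged_fitness)
        if rugged_fitness(best) <= rugged_fitness(x):
            break   # no fitter neighbour: stuck on a (possibly local) peak
        x = best
    return x

print(hill_climb(1))   # -> 2: trapped on the low local peak
print(hill_climb(7))   # -> 5: reaches the global peak
```

The walker starting at x = 1 ends on the local peak at x = 2 even though a far higher peak exists at x = 5, which is exactly the sense in which local, incremental solutions dominate on jagged landscapes.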
A major portion of Lissack's argument is first devoted to clearing up the common misconception that a strange attractor is a thing. An attractor, in the mathematical sense, defines a solution basin - a place where solutions to particular problems or iterated equations are likely to occur. What is important from a business standpoint, and where the attractor metaphor has value in relation to the concept of the fitness landscape, is the attractor's passive nature: an attractor (chaotic or regular) pulls participants on the fitness landscape into particular solution basins. A simple example of this is the "network" effect visible in the growth of the internet. In a networked commerce system, the value of the network is proportional to the number of participants in the network.20
7.0 Conclusion: Strategic Evolution

In this sense, Lissack argues that modern strategy must become something very different from the earlier models which relied on control and control mechanisms.

19 Stuart Kauffman, At Home in the Universe, Oxford University Press, 1995; also Kauffman, S. and Macready, W., "Technological Evolution and Adaptive Organizations", Complexity, Vol. 1, No. 2, 26-43. 20 See Carl Shapiro and Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy (Harvard Business School Press, 1998).
He argues that not only are strategy and control different but that their relationship must change as well. Under such conditions, he argues, strategy is as much an attempt to understand control (hence the earlier notion of "sussing out the market") as it is to exercise control, noting:21 Thus in a complex world, strategy is a set of processes for monitoring the behaviors of both the world and of the agents of the organization, observing where attractors are and attempting to supply resources and incentives for future moves. Command and control are impossible (at least in the aggregate), but the manager does retain the ability to influence the shape of the fitness landscape. Lissack fine-tunes this analysis by citing Kauffman's maxim that "adaptive organizations need to develop flexible internal structures that optimize learning. That flexibility can be achieved, in part, by structures, internal boundaries and incentives that allow some of the constraints to be ignored some of the time. Properly done, such flexibility may help achieve higher peaks on fixed landscapes and optimize tracking on a deforming landscape."22 Both Kauffman and Lissack propose a variety of techniques for moving firms off local optima and getting companies to higher peaks on both static and deforming fitness landscapes, although the scope of this paper is too brief to consider these approaches, such as "simulated annealing" and "patches", in detail. Suffice it to say that not only have chaos and complexity theory expanded our view of the strategy process and moved the consideration of corporate strategy from statics to dynamics, but they have also opened up an entirely new range of strategic possibilities for responding to the uncertainties which are an intrinsic part of the strategy process.
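Although the paper leaves simulated annealing aside, its core idea can be sketched in a few lines: accept every uphill move, but occasionally accept downhill moves as well, with a probability that shrinks as a "temperature" parameter cools, so the search can escape a local peak early on and settle onto higher ground later. The landscape function and all parameter values below are illustrative assumptions, not taken from Kauffman or Lissack.

```python
import math
import random

def fitness(x):
    # Same style of invented rugged landscape: a low local peak at x = 2,
    # the global peak at x = 5, and another local peak at x = 9.
    peaks = {2: 3.0, 5: 7.0, 9: 5.0}
    return max(h - 2 * abs(x - p) for p, h in peaks.items())

def simulated_annealing(x, temp=5.0, cooling=0.95, steps=400, seed=42):
    """Accept uphill moves unconditionally; accept a downhill move with
    probability exp(delta / temp). As temp cools, downhill moves become
    rare and the search settles, having had a chance to escape local peaks."""
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        candidate = x + rng.choice((-1, 1))
        delta = fitness(candidate) - fitness(x)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            x = candidate
        if fitness(x) > fitness(best):
            best = x
        temp *= cooling
    return best

# Greedy search from x = 1 stalls on the local peak at x = 2;
# annealing from the same start can cross the valley to higher ground.
print(simulated_annealing(1))   # typically the global peak at x = 5
```

The contrast with a pure hill-climber is the point: the temporary tolerance of downhill moves is what lets the firm (or search) leave a locally optimal position in pursuit of a higher peak.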
In our own work at Southern New Hampshire University, we have employed these kinds of complex adaptive systems research tools to study technology succession and strategies for dealing with complex market adaptation and changes. While this work is too lengthy to cite in detail here, it can be found at length in both the on-line proceedings of the 5th, 6th and 7th International Conferences on Complex Systems as well as in InterJournal, a publication of the New England Complex Systems Institute.23
21 Michael R. Lissack, "Chaos and Complexity - What does that have to do with management?", Working Papers, Henley Management College, U.K. (2000). 22 Ibid. No. 26. 23 See Fellman, Post, Wright and Dasari, "Adaptation and Coevolution on an Emergent Global Competitive Landscape", InterJournal Complex Systems, 1001, 2004; Mertz, Groothuis and Fellman, "Dynamic Modeling of New Technology Succession: Projecting the Impact of Macro Events and Micro Behaviors on Software Market Cycles", InterJournal Complex Systems, 1702, 2006; and Mertz, Fellman, Groothuis and Wright, "Multi-Agent Systems Modeling of Technology Succession with Short Cycles", on-line proceedings of the 7th International Conference on Complex Systems, http://necsi.org/events/iccs7/viewpaper.php?id=41
Chapter 19
Towards an evaluation framework for complex social systems
Diane M. McDonald1,2 and Nigel Kay1
Information Resources Directorate1, Computer and Information Sciences2, University of Strathclyde
[email protected], [email protected]
1. Managing and evaluating complex systems
While there is growing realisation that the world in which we live is highly complex, with multiple interdependencies, and irreducibly open to outside influence, how to make these 'systems' more manageable is still a significant outstanding issue. As Bar-Yam (2004) suggests, applying the theoretical principles of Complex Systems may help solve complex problems in this complex world. While Bar-Yam provides examples of forward-thinking organisations which have begun to see the relevance of complex systems principles, for many organisations the language and concepts of complexity science, such as self-organisation and unpredictability, while they make theoretical sense, offer no practical or acceptable method of implementation to those more familiar with definitive facts and classical hierarchical, deterministic approaches to control. Complexity Science explains why designed systems or interventions may not function as anticipated in differing environments, without providing a silver bullet which enables control or engineering of the system to ensure the desired results. One familiar process
which might, if implemented with complex systems in mind, provide the basis of an accessible and understandable framework that enables policy makers and practitioners to better design and manage complex socio-technical systems is that of evaluation. Evaluation approaches, according to Stufflebeam (2001), have been driven by the need to show accountability to funders and policy makers examining excellence and value for money in programmes, and more recently internally within organisations to ensure quality, competitiveness and equity in service delivery. Another often-used broad categorisation is that of requirement analysis, milestone achievement and impact analysis. Evaluation is therefore a collective term covering differing objectives, applications and methods. The UK Evaluation Society (www.evaluation.org.uk) defines evaluation as "[a]n in-depth study which takes place at a discrete point in time, and in which recognised research procedures are used in a systematic and analytically defensible fashion to form a judgement on the value of an intervention". Others, however, take a less time-dependent approach. For example, on Wikipedia evaluation is defined as "the systematic determination of merit, worth, and significance of something or someone". Evaluation is distinct from assessment, which measures and assesses systems - be it learning, research or development - without making any value judgement. Other terms often applied to designed systems are validation, which measures 'fitness for purpose', and verification, which measures the fitness or correctness of the system itself. Within the context of this paper, we take evaluation to mean the measurement and assessment of the state of a system, in order to form a judgement on the 'value' or 'appropriateness' of the actions being taken by the system. How these different types of evaluation are implemented in practice depends on specific context and objectives.
Increasingly, a mix-and-match approach is adopted, with appropriate evaluation tools being selected as required. See Stufflebeam (2001) for programme evaluation techniques and LTNI (1998) for general evaluation methods; House (1978) and Stufflebeam & Webster (1980) compare different methods. Evaluation of Complex Systems is of course not new - educational systems, social initiatives and government interventions are complex social systems where effective evaluation is seen as a key process in measuring success. More recently, there has been an increasing recognition that for evaluation of complex social systems to be effective, the evaluation process must take into account the theoretical understanding of complex systems. For example, Eoyang and Berkas (1998) suggest, "[t]o be effective, however, an evaluation program must match the dynamics of the system to which it is applied." Sproles (2000) emphasises that most systems that are part of a 'test & evaluation' process are actually socio-technical systems, and he therefore argues that qualitative data collection and analysis must play an important role. Practical examples also incorporate complexity theory into the design of evaluation programmes (Flynn Research 2003; Webb & Lettice 2005). Classical evaluation is [a series] of evidence-based value judgements in time. Evaluation determines the merit of a programme or particular artefact or process related to specific stakeholder requirements. It does not necessarily follow that an evaluated system attains its desired goals or capitalises on its inherent creativity. This leads to the research question: can an embedded evaluation system be used as a generalisable framework to improve the success of Complex Social Systems? Achieving this suggests a non-classical approach to evaluation, where the evaluation forms part of a holistic complex feedback and adaptation framework. Such an approach would require a number of issues to be addressed.
For example, in practice, evaluation within any complex
social system tends not to be approached holistically, as individual evaluations are for differing stakeholders; programme-level evaluations tend to be distinct from internal evaluations of specific software or processes. While all stakeholder evaluations may show satisfaction, this does not necessarily enable detection or realisation of system potential. Also, the multiple contexts within many complex social systems mean that evaluations in one context do not necessarily transfer to another. And, importantly, evaluation, like any measurement, changes the system it is evaluating. This paper reports on a preliminary investigation into the requirements and issues surrounding the applicability of an embedded evaluation framework as a practitioner- and policy-maker-friendly framework for managing complex social systems. The case study methodology used is outlined in section 2. An analysis of the characteristics of evaluation found to date is provided in section 3. In section 4, a conceptual model for exploring evaluation as a tool for managing complex social systems is presented and discussed. This highlights concepts, processes and issues that require further detailed examination if a generalisable evaluation framework for improving the success of complex social systems is to be developed. The paper concludes in section 5 by summarising findings and identifying novelty and future research steps.
2. Overview of methodology
To investigate the feasibility of developing an embedded evaluation framework and to develop a conceptual model, a preliminary exploratory study was undertaken. This study examined common themes and differences in evaluation practice across three complex social systems. These case studies were selectively chosen to provide information-rich cases that might inform further detailed research. In particular, cases which were using evaluation in a core, unusual or innovative way were chosen. The cases introduced here - criminal evidence systems and learning communities - are briefly summarised in section 3.1 below. We also draw on some initial anecdotal evidence from innovation systems, which are also being investigated as part of this work; this part of the study is not sufficiently advanced to merit case study status. The case studies themselves were carried out using a semi-structured interview technique, supplemented and triangulated using additional documentation relevant to the cases. The interview questions were developed through a literature survey and synthesis and were designed to explore the specification of needs and identification of issues of a new framework approach, as well as how evaluation was currently used. Respondents were selected from practitioners and policy makers, as this is the target audience for the research. While the work is ongoing, data from two respondents per case study, as well as secondary documentation, is drawn upon to present preliminary observations.
3. Analysis of the characteristics of evaluation in the case studies

3.1 Overview of the cases studied

3.1.1 Criminal evidence system
In the criminal evidence system, the processes, actors and environment involved from the initial crime scene investigation through the evidence analysis to the presentation of
evidence in court proceedings were investigated. The main case used was the Scottish criminal evidence system, which is underpinned by Scots Law, which is based on Roman law, combining features of codified civil law with elements of uncodified common law. As the nature of Scots Law - it is an adversarial system - will dictate many of the system processes, the implications of an inquisitorial-based system were also considered. A number of actors are involved in the various stages: police, scene of crime officers, forensic scientists (defence and prosecution), lawyers (defence and prosecution), Procurator Fiscal, judge, defendant and jury.
3.1.2 Learning communities
Learning communities provided a second case study. These educational systems include learners, tutors or mentors, and development staff covering both educational and technology development. As a focus, the learning communities surrounding the DIDET project (www.didet.ac.uk), which spanned both UK and US universities, and the Transactional Learning Environment project (technologies.law.strath.ac.uk/tle2) were examined. In each of the projects, both the novelty of the learning interventions and the project- or transaction-based learning scenarios meant that innovative assessment and evaluation techniques were required. Data from more traditional learning scenarios were used to supplement and compare findings.
3.2 Micro and macro evaluation
A variation in the 'level' at which evaluation took place was evident across the case studies. Within the criminal evidence system, evaluation was core to the daily activity of the various professionals involved. For example, police officers normally evaluate finds and other facts at the scene to direct what is to be collected, rather than instigating a blanket collection. Similarly, forensic scientists must weigh up the possibilities that the evidence collected could have been generated through competing scenarios, and finally the jury must evaluate the evidence as presented by the various experts. Evaluation is at the core of the criminal evidence system. Similarly, within innovation systems evaluation is part of the day-to-day processes of creating and developing new products or processes. In learning communities, traditionally, assessment rather than evaluation is much more prominent at the individual learner (micro) level. Evaluation tends to take place early in and at the end of learning programmes, to assess the success of the learning intervention and adapt it as required. Increasingly, however, an element of learner evaluation is being introduced as part of self-reflection practice within learning communities. Macro-level evaluation of the system as a whole was also important in both the innovation system and the learning communities. Macro-level evaluation of interventions or programmes was, however, in general independent of the micro-level evaluations, although in qualitative evaluation of learning there was some use of micro-level reflections of both learners and practitioners.
3.3 The role of context
Two significant issues arose from varying contexts within the case studies. Within the innovation system, a sudden increase in the number of patent applications - one of the innovation measures - was observed. However, to observers on the ground, the overall
innovation system did not appear to have dramatically changed. Further investigation revealed that patenting had dramatically increased because another, separate initiative had spare money which it used to help organisations take out patents. The system as a whole was not intrinsically more innovative as a result, although the information available had increased. The problem of differing contexts was also an issue in the learning community case study. While the supporting technology had been highly successful within the learning context, its design had followed an unorthodox approach. Concern regarding how external peers within the technology development community might judge the developments, initially at least, restricted the ability to evaluate the technological tool. Thus, as with complex systems in general, the 'measures' can be influenced by different contexts and uses within or outwith the complex system. While evaluation can and perhaps should be ongoing, due to the dynamic nature of complex social systems any measurement and judgement is only relevant at a specific point in time.

3.4 Complexity
The complexity of the national innovation system gave rise to issues. For example, while evaluation of the various initiatives was always required by the funders and policy makers, a holistic approach was often missing: how different initiatives affected others, and how the lessons learnt from current evaluations could be fed back into other ongoing initiatives, were rarely considered. Complexity was also a significant issue within the criminal evidence system, with much difficulty in identifying single linear cause-and-effect relations. For example, it is not uncommon for there to be several competing scenarios as to how particular forensic evidence came to be present. Similarly, in one of the learning communities, lack of uptake of a particular technology was not only due to lack of student eLiteracy training, as evaluation first indicated. Subsequent evaluations showed that staff eLiteracy issues were also a strong contributing factor. So while evidence was found to substantiate the claim that evaluation did have interdependencies, feedback between micro and macro evaluation was in general lacking. Without such feedback, the potential for emergence is extremely limited.
4. Designing an evaluation system for complex social systems

4.1 Conceptual model for evaluation & management of complex social systems
[Figure 1 (diagram): conceptual model connecting 'evaluation as a success framework', which suggests and leads to 'framework components'; these require 'complex dynamics', 'co-evolution' and 'risk & experimentation', and raise the question 'is evaluation itself a complex system?']
Figure 1: Conceptual model for investigating management of complex social systems using evaluation

From the analysis of the exploratory case studies and synthesis from the literature review, the conceptual model for exploring the design and management of evaluation systems for management of complex social systems, illustrated in Figure 1 above, was developed. The main features of the conceptual model are discussed below.
4.2 Embedded, co-evolving evaluation framework with feedback
The issues surrounding innovation system evaluation highlighted in section 3.4 illustrate the need for embedding the evaluation framework within the complex system itself; programme-end evaluation cannot improve current programmes. Embedding, however, means more than ensuring ongoing evaluation activity. The evaluation framework needs to be able to detect changes and new system potential. The evaluation must also remain relevant and timely: as the system changes through experimentation, the evaluation must also adapt, as the evaluation measures may no longer reasonably represent the desired characteristic, as the patent example of section 3.3 illustrated. For evaluation to be an effective management tool, the results should be fed back to inform general system behaviour and specific interventions. While this means that evaluation will change the system, the converse should also hold - system changes require changes in the evaluation criteria and practices. The evaluation system should co-evolve with its complex social system. Applying complex systems thinking to evaluation also suggests the potential advantages of linking micro- and macro-level evaluation feedback. While such feedback helps build buy-in and a sense that individuals can make a difference, from a complex systems perspective it is such micro-macro linkages that lead to novelty - the emergence of sustainable new patterns of activity (McDonald & Weir 2006). The potential of micro-macro feedback requires further exploration.
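One way to picture an embedded, co-evolving evaluation loop is a toy simulation in which the evaluation criterion is updated by the very observations it judges. All names, numbers and dynamics below are illustrative assumptions, not data from the case studies; the scenario loosely echoes the patent example, where an external initiative inflates a proxy measure without the underlying system becoming more innovative.

```python
def run_programme(months=12, shock_month=6):
    """Toy embedded-evaluation loop (all values illustrative).

    'patents' is the proxy measure; an external initiative inflates it
    from shock_month onward even though underlying innovativeness is flat.
    A fixed-criterion evaluator misreads the jump as sustained success,
    while an embedded evaluator re-baselines its criterion as the
    system (and its measures) change."""
    innovativeness = 10.0            # true (unobserved) system quality
    baseline = 10.0                  # embedded evaluator's adaptive criterion
    fixed_verdicts, adaptive_verdicts = [], []
    for month in range(months):
        external_boost = 8.0 if month >= shock_month else 0.0
        patents = innovativeness + external_boost   # observed proxy measure
        # Fixed evaluation: compare the raw measure to the original target.
        fixed_verdicts.append("improved" if patents > 10.0 else "steady")
        # Embedded evaluation: judge against a co-evolving baseline, then
        # feed the observation back into the criterion itself.
        adaptive_verdicts.append("improved" if patents > baseline * 1.2 else "steady")
        baseline += 0.5 * (patents - baseline)      # criterion co-evolves
    return fixed_verdicts, adaptive_verdicts

fixed, adaptive = run_programme()
print(fixed[6:])      # all "improved": the fixed criterion misreads the shock
print(adaptive[6:])   # flags the transient jump, then re-normalises to "steady"
```

The point of the sketch is the feedback line: because the criterion itself adapts, the embedded evaluator reports the change when it happens and then stops mistaking the inflated measure for continuing improvement, whereas the fixed criterion keeps signalling success long after the system has settled.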
4.3
Evaluation history and critical evaluation point identification
The case studies highlighted the run-up to new funding opportunities or new development phases as critical evaluation points from the practitioner and policy-maker perspective. Such points tend to be snapshots in time and do not necessarily identify developing patterns. For a fuller picture, evaluation needs to be ongoing. Continuous evaluation, however, is both a large overhead and potentially counterproductive. For example, one of the case studies reported evaluation skewing the development: the evaluation schedule drove development rather than the actual system requirements. Additionally, participating in evaluation was viewed as time consuming. The richest feedback was obtained when participants understood how the information being gathered would be fed back to improve the system. While impact-type evaluations aim to provide reassurance of quality and the meeting of objectives, it is where evaluation outcomes differ from expectations that the future direction of the complex system is indicated. It is at these points that either corrective intervention is required, in the case of undesired behaviour, or new potentials may be capitalised upon. Identification of these bifurcation points, however, requires knowledge of existing patterns. The sharing, often informal, of participants' reflective practice on their own experiences was viewed as an effective way to identify emerging
phenomena and patterns. Such a second layer of communal reflection may help minimise personal biases, enabling patterns unseen by one participant to be identified by another with a differing perspective. The use of historical evaluation records for pattern identification and scenario mapping also requires further investigation. Thus, while change processes often dictate evaluation points, a second layer of reflective evaluation potentially offers a more complete picture, enabling critical points to be identified and interventions designed. A framework which supports the feedback of these evaluations in a timely manner is critical. Further work is required to help identify the parameters associated with optimal evaluation and feedback points.
4.4
Risk and experimentation
Complex adaptive systems by nature 'experiment', with various adaptations being selected over time. While such systems can be highly creative, this is not always desirable. Successful management of systems often involves a trade-off between experimentation and the risks of undesirable behaviour. Within the innovation system, a certain amount of risk was viewed as highly desirable, but this needed to be balanced against overall productivity. Within the criminal justice system, with its need to provide detailed forensic evidence, the scope for experimentation was extremely limited. Experiments, such as new forensic techniques, were usually thoroughly tested outside criminal proceedings before they were accepted. Adaptation and variation do take place, however, with new combinations of techniques being used, which must 'compete' against the other side's methods. This suggests the possibility of developing an evaluation framework based on different classes of risk-experimentation trade-off. The classification of a system may also vary depending on conditions. For example, in the criminal investigation case study, it was reported that the normal 'rules of the game' may change in extremely distressing, high-profile cases such as child murders. Suddenly, the realisation that the evidence will be reported to the child's parents greatly increases the need to minimise risk. This suggests a dynamic trade-off between risk and experimentation. The potential of a classification based on the risk-experimentation trade-off, which takes into account system purpose, constraints, contexts and dynamics, requires further investigation.
4.5
Discussion
As Wang et al (2004) observe (in relation to learning environments), the complex nature of such artefacts is a "threat to classically ideal evaluation design where all the variables are well controlled except the [facility under evaluation]". The interactions we have identified here hint at another important point worth considering: is an evaluation system designed with complexity in mind in fact a complex system itself? If so, self-organisation and emergence could occur within it. Such self-organisation and emergence may not follow the desired path, although the coupling created by embedding within the social system itself may provide the guidance required. This is analogous to Ottino's (2004) suggestion of "intelligently guid[ing] systems that design themselves in an intelligent manner". Similarly, the unpredictability we are trying to address may re-appear in the evaluation system itself, but coupling may keep it pointed in the right direction. Detailed research on the true dynamics of complex evaluation systems is required.
5
Summary, novelty and next steps
In this paper, we have presented insights from a preliminary investigation of evaluation in complex social systems which examined correlations and differences between three differing complex social systems. The aim of this study was to develop a conceptual model of evaluation and control of complex socio-technical systems which could then be used for further detailed study, ultimately to produce a policy- and practitioner-relevant evaluation and control framework for complex social systems. The preliminary insights derived were: (i) an embedded evaluation system is required which co-evolves with its complex social system, coupling micro- and macro-level evaluations; (ii) evaluation may be most effective when there is feedback between micro and macro evaluation; (iii) context varies both within and outwith the evaluation system, perturbing the evaluation; and (iv) different classes of evaluation system may be appropriate to deal with the trade-off between purpose, constraints and experimentation. The novelty of this work lies in the setting of a future research agenda for exploring evaluation as a success mechanism for complex systems and the identification of the additional issues which arise when evaluation is applied within a complex systems framework. The next steps are to undertake a detailed investigation based on the conceptual model developed (Figure 1), identify critical evaluation points and explore the complex dynamics of evaluation. This will ultimately lead to a policy-maker- and practitioner-friendly evaluation and management framework for complex social systems.
References
Bar-Yam, Y., 2004, Making Things Work: Solving Complex Problems in a Complex World, NECSI, Knowledge Press.
Eoyang, G.H. and Berkas, T.R., 1998, Evaluating performance in a CAS.
Flynn Research, 2003, Complexity Science: A Conceptual Framework for Making Connections. Flynn Research, Denver.
House, E.R., 1978, Assumptions underlying evaluation models. Educational Researcher, Vol 7, num 3, pp 4-12.
LTDI, 1998, Evaluation Cookbook. Learning Technology Dissemination Initiative. http://www.icbl.hw.ac.uk/ltdi/cookbook/cookbook.pdf (accessed 14/08/2006)
McDonald, D.M. and Weir, G.R.S., 2006, Developing a conceptual model for exploring emergence. ICCS 2006 (Boston).
Ottino, J.M., 2004, Engineering complex systems. Nature, Vol 427, p 389.
Sproles, N., 2000, Complex Systems, Soft Science, and Test & Evaluation, or 'Real men don't collect soft data'. Proceedings of the SETE2000 Conference.
Stufflebeam, D.L., 2001, Evaluation Models. New Directions for Evaluation, num 89.
Stufflebeam, D.L. and Webster, W.J., 1980, An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis, Vol 2, num 3, pp 5-19.
Wang, H.-C., Li, T.-Y. and Chang, C.-Y., 2004, Analyzing Empirical Evaluation of Advanced Learning Environments: Complex Systems and Confounding Factors. IEEE Learning Technology Newsletter, Vol 6, num 4, pp 39-41.
Webb, C. and Lettice, F., 2005, Performance measurement, intangibles, and six complexity science principles. International Conference of Manufacturing Research, Cranfield.
Chapter 20
Operational Synchronization
Kevin Brandt
MITRE Corporation
[email protected]
Complex systems incorporate many elements, links, and actions. OpSync describes adaptive control techniques within complex systems to stimulate coherent synchronization. This approach fuses concepts from complexity theory, network theory, and non-cooperative game theory.
1.0
Coherent Synchronization in Complex Systems
This paper defines coherent synchronization as "the relative and absolute sequencing and adaptive re-sequencing of relevant actions or objects in time and space and their alignment with intent, objectives, or policy in a complex, dynamic environment."
1.1
Emergent Behavior
Complex systems exhibit an array of behaviors. A wide range may emerge from simple interactions of system components. Emergent behavior can be simple, complex, chaotic, or random. Many assume that only desired behaviors will emerge - a fallacy. Most operational concepts recognize the need for synchronization of actions. Given improved information flows and shared situational awareness, some concepts assert that sync always emerges. Advocates cite examples of coherent behavior in nature. These assertions of axiomatic emergent coherent behavior err in their use of inductive arguments, given abundant counter-examples [e.g., Strogatz]. Synchronization is one type of coherent behavior. Coherence may emerge from direct or indirect interactions among components, but external factors may also induce it. Emergent synchronization of coupled oscillators is possible - like fireflies flashing or
clocks ticking in unison - and in special cases, it is certain [Strogatz]. Likewise, external action may yield coherent behavior - as an electric current induces a magnetic field. In complex systems, elements may align some attributes while other properties remain disordered. In these cases, the order is less apparent. Moreover, some elements may remain out of sync. Thus, synchronization is not universal and emergent sync in complex systems is not certain [Manrubia]. Complexity studies conclude, "Individual processes in different parts of a system [must be] coordinated [to enable] the system to display coherent performance." [Manrubia] However, central coordination is neither necessary nor sufficient for coherent behavior. Yet, complex systems with coherent behavior exhibit organized complexity.
1.2
Organized Complexity
In the development of information theory, Charles Bennett identified two components of organized complexity. First, algorithmic complexity describes the minimal model, formula, program, or elements needed to support a result. Second, logical depth captures the processing needed to generate the result given the initial minimal program [Davies, 1992]. Complex system behaviors of great logical depth, like synchronization, may arise only after many cycles through the minimal program and thus appear to "emerge over time." However, the same logical depth derives more quickly by starting with complex formulae containing information that would otherwise have to be reconstructed. Mapping the cumulative interactions of independent agents to cycles of minimal programs suggests that, since emergent sync is not a certain outcome, starting with increased algorithmic complexity might be necessary to generate the behavior.
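The trade-off between running a minimal program for many cycles and starting from a more information-rich formula can be made concrete with a toy example. The sketch below is not from the paper: it treats an elementary cellular automaton (rule 110) as the "minimal program", whose structured output appears only after many update cycles - Bennett's logical depth.

```python
# A minimal program: the 8-entry rule-110 lookup table plus one seed cell.
# Structure appears only after many cycles through it (logical depth).
RULE = 110
TABLE = {tuple(int(b) for b in f"{i:03b}"): (RULE >> i) & 1 for i in range(8)}

def step(cells):
    """One cycle of the minimal program (circular boundary)."""
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 60
cells[40] = 1                      # tiny initial condition
for _ in range(20):                # computational cycles
    cells = step(cells)

# Storing the final row directly would need zero cycles but a far larger
# "program" (higher algorithmic complexity) -- the trade-off in the text.
print(sum(cells), "live cells after 20 cycles")
```

Here the rule table is the whole "minimal model"; the structured row of live cells carries no extra program information, only depth accumulated over cycles.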
1.3
Emergent Synchronization
Emergent synchronization in simple systems evolves after many cycles in a minimal program. For example, Thai fireflies developed an adaptive response to external flashes via genetic selection; the firefly adjusts the timing of its signal to achieve sync [Strogatz]. These synchronized oscillators demonstrate four elements: a coupling agent (signal), an adjustment process (minimal program), a feedback or selection process (fitness), and computational cycles (evolutionary time). Emergent synchronization in complex systems exhibits these elements; all of them pose challenges for control. Static guidance (intent) does not provide a dynamic coupling agent. Since regional actions must synchronize globally distributed elements operating over vastly different time horizons, simple signals may not suffice. The activities of elements are complex, not simple threshold reactions like those evidenced in nature. Hence, a simple adjustment process may not suffice. Selection processes that operate in biological ecosystems are not available (mating behaviors) or desirable (predator-prey) in most control systems. Moreover, the time required for an emergent selection process (biological computation) is not sufficiently responsive. The implications are stark: coherent synchronization will not rapidly emerge and adapt without a foundation, and emergent self-sync does not reliably provide the requisite capability. Studies in other domains support this conclusion with the observation that
"when adaptive speed is warranted, initial organizational structure is critical" [Manrubia] and that "It's a basic principle: Structure always affects function." [Strogatz] Operational Synchronization increases algorithmic complexity to provide sufficient structure.
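The four elements of emergent sync (coupling signal, adjustment process, feedback, and computational cycles) can be illustrated with a minimal Kuramoto-style model of coupled oscillators, the canonical setting for the firefly example. This sketch is not from the paper, and all parameter values are illustrative.

```python
# Sketch of emergent sync in a Kuramoto-style model of coupled oscillators:
# coupling agent = shared phase signal, adjustment = phase update rule,
# feedback = pull toward the mean field, cycles = time steps.
import math, random

random.seed(1)
N, K, DT = 20, 2.0, 0.05                 # oscillators, coupling, time step
freq = [random.gauss(1.0, 0.1) for _ in range(N)]     # natural frequencies
phase = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order(ph):
    """Kuramoto order parameter r in [0, 1]; r near 1 means phase sync."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

r0 = order(phase)
for _ in range(2000):                    # computational cycles
    pull = [sum(math.sin(q - p) for q in phase) / N for p in phase]
    phase = [(p + DT * (w + K * m)) % (2 * math.pi)
             for p, w, m in zip(phase, freq, pull)]
r1 = order(phase)
print(f"order parameter: {r0:.2f} -> {r1:.2f}")      # coherence increases
```

With coupling strength K well above the critical value for this frequency spread, coherence emerges reliably, matching Strogatz's "special cases"; weaken K in the same code and sync fails to emerge, echoing the counter-examples noted above.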
2.0
Operational Synchronization
2.1
Threaded Synchronization
A software experiment developed and demonstrated a feasible approach to adaptive sync. It yielded a structural model embodied in the experimental Synchronization, Adaptation, Coordination, and Assessment (SACA) tool. The structure and performance provide direct evidence of the feasibility of a structured approach to coherent sync. A foundation for adaptive, coherent sync evolved during spiral development. This framework extends beyond time to align a wealth of activities along three axes:
• Vertical (motivation) - cascading objectives or policies
• Horizontal (association) - interrelationships between objectives or objects
• Temporal-spatial (location) - relative scheduling of activities
The vertical axis aligns missions and actions with objectives, end states, policies, and needs. These links correspond to mission threads expanded to include intent, constraints, priorities, limits, and resources. In the information technology domain, they might correspond to information services aligned to business services supporting business processes. Initial branching of these threads leads to a traditional decision-tree structure tied to cascading objectives, goals, and tasks [Keeney]. However, complex relationships between successive and supportive objectives force the use of a network of nodes and links. This graph structure proves more robust and allows multiple links to merge and diverge at successive levels. It eliminates the assumptions of vertical linearity of cascading objectives and horizontal independence between objectives. Finally, it retains lineage and exposes the impacts of actions with complex, multiple inheritance. The horizontal axis links nodes of related activities into clusters of activity nodes. These core clusters enable the dynamic, adaptive use of assets as envisioned by the network-centric warfare concept [Cares], or group composable information technology services to deliver business services that meet functional needs.
Horizontal links provide structure to fuse and align activities that precede, support, reinforce, or follow primary actions. They capture inter-dependencies between objectives at the same level of organization or between subsequent activities. These connections represent physical or logical ties between elements independent of the subsequent scheduling of activities - a precursor to, rather than a result of, the derived schedule. Activities may extend across domains and incorporate all available resources. Moreover, they may incorporate Boolean logic.
In military operations, horizontal links represent a group of aircraft containing strike aircraft, reconnaissance aircraft, unmanned vehicles, electronic warfare assets, and control aircraft. On the ground, similar relationships bind a maneuver force, its screening force, and supporting fires. In the commercial sector, horizontal sets may
include regional flights into a hub that feeds long-haul air routes.
The third axis, location, encompasses space and time. This axis schedules blocks or sets of activities sequenced and emplaced in space-time: action frames. The organizational separation between the previous horizontal axis (associations) and this sequencing is critical - it enables adaptive scheduling and dynamic rescheduling. Diverging from a traditional command-directed approach (a command economy), the third axis envisions - but does not require - the formulation of a marketplace of options that combine and recombine with others in the selection and scheduling process: a free market of "services". This approach expands flexibility beyond a decision-tree structure into a network of rich, dynamic arrays of options. This richness provides adaptive depth and robustness. The experimental synchronization tool, SACA, established the feasibility of this threaded sync process (Figure 1). It developed and used a constrained, sorting genetic algorithm (GA) to construct and select action frames. SACA demonstrated the practicality of structured sync by constructing hundreds of alternative action frames that incorporate activities across all elements of national power and are ALL equally feasible (but not all equally desirable) [Ridder].
Figure 1: Threaded Synchronization
• Threaded synchronization structures a complicated problem in many variables.
• A genetic algorithm (GA) builds mixes of activities and schedules that meet established constraints.
• It keeps the best (non-dominated) solutions: all are feasible solutions that meet identified constraints AND no objective can be improved without making one or more other objectives worse (Pareto optimal).
Each action frame thus represents an option, a branch, a course of action, or a segment of a larger, connected, continuous storyboard. If one computes the number of options for an unstructured set of activity nodes, one quickly reaches a point where the number of options is intractably large. By structuring the activities in objective-goal-action threads and bounding the range of options to feasible and desirable regions, the constrained-sorting genetic algorithm used for scheduling quickly converges on optimal solutions (if any exist). This approach does NOT generate a
single point optimum; it returns the Pareto optimal solutions. One such action frame could encompass an Air Tasking Order. In entertainment, a set might sequence different segments of a film into alternate storylines. In the commercial realm, these blocks could represent a market portfolio of stocks or commercial transportation assets (air, land, or sea). The performance of the software using this approach exceeded initial expectations. The process developed Pareto optimal solution sets for 10,000 activities within 15 minutes. A dynamic re-planning test held most activities constant and forced 300 activities into adaptive rescheduling; the computation converged in 15 seconds. These initial tests of the tool used a 1.5 GHz single-processor Linux workstation. Retested on a 12-node cluster, the runtime for 10,000 actions dropped to under a minute. The implications reach beyond the rapid production of activity schedules to the core processes of course-of-action development and decision analysis. SACA enabled dynamic synchronization of tactical actions. However, threaded synchronization alone falls short of providing the same capability to leaders at the operational and strategic levels. They need a viable process that enables coherent adaptation of the action frames - to sequence and shape subordinate actions and supporting plans and operations. Operational Synchronization (OpSync) evolved from threaded synchronization to meet this need.
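The "keep the non-dominated solutions" step at the heart of a constrained, sorting GA can be sketched in a few lines. This is an illustration, not SACA's implementation: the candidate frames below are invented and scored on just two objectives (cost to minimize, effect to maximize).

```python
# Two invented objectives per candidate action frame:
# (cost to minimize, effect to maximize).

def dominates(a, b):
    """a dominates b: no worse on both objectives and not identical."""
    cost_a, effect_a = a
    cost_b, effect_b = b
    return cost_a <= cost_b and effect_a >= effect_b and a != b

def pareto_front(candidates):
    """Keep every candidate that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

frames = [(10, 5), (8, 5), (8, 7), (12, 9), (15, 9), (9, 2)]
print(pareto_front(frames))   # -> [(8, 7), (12, 9)]
```

Neither surviving frame can improve one objective without worsening the other; the final pick among the non-dominated set is left to the decision maker, as the paper describes.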
2.2
Operational Synchronization
Threaded synchronization produces a wealth of independent action frames. Combining and sequencing action frames on a storyboard produces strategic alternatives. Each path embodies a coherent, feasible campaign plan with branches and junctions. However, simply having more alternatives does not provide adaptation if each path is rigid. Hence, branches in paths may spawn alternative paths; separate routes may merge to share a series of action frames before splitting apart again; and completely separate paths may use unique action frames. However, continuity is required; paths may change direction or terminate but discontinuous leaps between pathways cannot occur. The reason for this constraint ties back to the thread that provides resource, action, and intent traceability. While this framework is new, the results are familiar. OpSync is a manifestation of an idealized war game. For example, in the classic movie "WarGames," [Badham] the WOPR computer simultaneously constructs [in OpSync terms] multiple action frames along different paths to explore and evaluate alternative war scenarios and develop a winning strategy. The alternate strategies represented by these paths need not be concurrent (synchronous in time) except (possibly) at junctions of two or more paths. As an example, one approach might embody mostly economic actions that lead to the goal over years while a diplomatic approach might reach the same goal in days. It should be evident that the concept permits the concurrent pursuit of multiple strategies in a single, coherent framework.
Thus, operational synchronization extends beyond threaded synchronization to align activities concurrently along four axes:
• Vertical (motivation) - mission threads
• Horizontal (association) - mission sets
• Temporal-spatial (time & space) - action frames
• Derivative (alternate strategies) - sequenced action frames
3.0
Conceptual Applications
The SACA experiment developed a conceptual architecture for using the tool in a control system. Details are beyond the scope of this paper, but the collaborative structure and control feedback envisioned hold promise.
3.1
Modulated Self-Synchronization
In a push to empower users, a network of OpSync hubs could produce and transmit a global sync signal. Akin to Global Positioning System signals or cellular networks, distributed users could receive wireless OpSync signals on a personal handset or OpSync watch - a logical extension of today's wristwatches [Clark]. These devices would provide time, location, orientation within current frames, and alignment on derivative paths. Armed with this data, users could conform their actions to an adaptive execution scheme and input local status and intended actions for integration into OpSync action frames. In this vein, modulated self-synchronization is practicable.
3.2
Decision Support
Generally, people make decisions based on either instinctive analysis or structured analysis [Jones]. Instinctive decisions are marked by their rapidity and by the identification and execution of a single satisfactory solution: "It's good enough!" The instinctive process works well when the problems are straightforward and immediate response is critical. This "single track" approach frequently leads to sub-optimal or partial solutions to complex problems, or to even larger blunders [Horgan]. Structured analysis takes longer, but considers more factors and more options. Structured analysis begins with the identification of the problem(s) and the major factors and issues: the decision variables. Next, a divergent process, brainstorming, yields an array of possible solutions. Generally, as many as three "different" courses are considered; in practice, however, the differences between these courses may not be very great. Finally, a convergent process reduces the number to a single preferred solution. A structured process helps the mind cope with complexity. "We settle for partial solutions because our minds simply can't digest or cope with all of the intricacies of complex problems ..." [Jones] OpSync is structured analysis on steroids. The four-axis structure identifies the problems and decision variables being resolved. The genetic algorithm produces massive numbers of options in an unequalled divergent process that then converges from millions of considered options to tens of non-dominated solutions. The final selection of the
specific solution(s) resides with the operational decision maker, who is thus empowered to see multiple options within the context of operational-level decision factors. Liberated from the tyranny of building schedules, decision makers can refocus time and attention on the higher-level decision factors that dominate their thinking and shape the conflict space. By mapping decision attributes into multi-dimensional decisionscapes, the decision maker might see the topography of the complex environment. Using visual or analytic clues, a commander may be able to avoid cusps or boundaries where phase changes abound and chaos reigns [Casti].
4.0
Mathematical Foundations
4.1
Current Constraints
Most synchronization techniques use one of three general approaches.
1. "Post and avoid" strategies dominate efforts to de-conflict actions. For example, operators post actions on a sync matrix (or activity board) aligned within a desired timeframe and then manually check for conflicts. Planners avoid new conflicts as they add more actions. This method is slow, constrained, fragile, and vulnerable.
2. Linear programs find favor in some areas since they reduce process times and produce the "optimum" solution. However, most operational schedules are not linear.
3. "Advanced tools" use non-linear gradient search techniques to optimize non-linear schedules. These search techniques use complex algorithms that posit that solutions lie in a convex set. Operational experience, empirical evidence, and complexity science suggest that solutions may lie in regions with sharp discontinuities and cusp geometries [Casti]. Hence, the basic conditions for the use of gradient search techniques are not satisfied.
4.2
Applying Non-cooperative Game Theory
Operational synchronization combines aspects from three fields of study: network science, non-linear algorithms, and multi-player non-cooperative game theory. Mission threads link objective (end state) nodes with activity nodes in a regular lattice structure (network). Objectives: (a) tie to multiple activities; (b) set constraints and bounds for supporting activities; and (c) allocate resources to supporting activities. Activities can link to and receive resource allocations from multiple objectives. Intermediate layers in the lattice establish hubs and form the backbone of the network. Thus, partitioned graphs provide a mathematical foundation for these mission threads, and networked interactions support the required dynamic adaptation. The non-linear search technique uses a constrained, sorting GA to construct, assess, and evolve feasible solutions. This approach makes no a priori assumptions about the shape of the solution space. Other search techniques may be feasible. Note that GA searches may not produce the same answers each time and may not "discover" the optimal solution (schedule), but in practice the results have been robust and responsive.
Objective functions for the GA stem from the linked objectives (end states) and not directly from the attributes of individual activities. Sets of all possible activity schedules thus constitute alternative strategies in a theoretical multi-player game, with the competing objectives being the "players." Thus, the dominant options found will approach the idealized points predicted by the Nash equilibrium [Kuhn].
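The idea of competing objectives as "players" whose joint best responses define equilibrium points can be made concrete with a toy game. This sketch is not from the paper; it brute-forces the pure-strategy Nash equilibria of a small, invented two-player payoff matrix.

```python
# payoff[i][j] = (payoff to player 1, payoff to player 2); values invented.
payoff = [[(3, 3), (0, 5)],
          [(5, 0), (1, 1)]]             # Prisoner's Dilemma structure

def pure_nash(payoff):
    """All cells where neither player can gain by deviating alone."""
    rows, cols = len(payoff), len(payoff[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            best_row = all(payoff[i][j][0] >= payoff[k][j][0]
                           for k in range(rows))
            best_col = all(payoff[i][j][1] >= payoff[i][l][1]
                           for l in range(cols))
            if best_row and best_col:
                eq.append((i, j))
    return eq

print(pure_nash(payoff))   # -> [(1, 1)]
```

In OpSync terms each "player" is a competing objective and each cell a candidate schedule; the equilibrium cells are the idealized points that the non-dominated options approach.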
In summary, this paper illustrates the limits inherent in relying on emergent self-sync in control systems. It identifies the need for fundamental structure - algorithmic complexity - as a foundation for modulated sync, and documents the feasibility of using operational synchronization techniques to achieve coherent sync. OpSync overlays objectives and actions on a tiered network - a partitioned graph - and uses non-linear optimization techniques - a constrained sorting genetic algorithm - to identify feasible, non-dominated options. Finally, OpSync leverages the Nash equilibrium points defined by competing decision criteria to return a bounded set of non-dominated options. The components of OpSync reflect small advancements in prior art, but it is the assemblage of those components into the whole that establishes this new capability: Operational Synchronization (OpSync).
Bibliography
[1] Badham, J. (director), 1983, Film: "WarGames." MGM Studios (Los Angeles)
[2] Cares, J., 2005, Distributed Networked Operations: The Foundations of Network Centric Warfare, Alidade Press (Newport, RI)
[3] Casti, J., 1994, COMPLEXification: Explaining a Paradoxical World Through the Science of Surprise, HarperCollins (New York)
[4] Clark, A., 2003, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press (New York)
[5] Davies, P., 1992, The Mind of God: The Scientific Basis for a Rational World, Simon & Schuster (New York)
[6] Horgan, J., November 1992, "Profile: Karl R. Popper." Scientific American (New York)
[7] Jones, M., 1998, The Thinker's Toolkit, Three Rivers Press (New York)
[8] Keeney, R., 1992, Value-Focused Thinking: A Path to Creative Decisionmaking, Harvard University Press (Cambridge, MA)
[9] Kuhn, H., & Nasar, S., 2002, The Essential John Nash, Princeton University Press (Princeton, NJ)
[10] Manrubia, S., Mikhailov, A., & Zanette, D., 2004, Emergence of Dynamical Order: Synchronization Phenomena in Complex Systems, World Scientific Lecture Notes in Complex Systems, Volume 2, World Scientific Publishing (Singapore)
[11] Ridder, J., & Handuber, J., 2005, Evolutionary Computational Methods for Synchronization of Effects Based Operations, in Genetic & Evolutionary Computation Conference Proceedings, ACM
[12] Strogatz, S., 2003, SYNC: The Emerging Science of Spontaneous Order, 1st Ed, Hyperion (New York)
Chapter 21
The Complexity of Terrorist Networks
Philip Vos Fellman
Southern New Hampshire University
Complexity science affords a number of novel tools for examining terrorism, particularly network analysis and NK-Boolean fitness landscapes. The following paper explores various aspects of terrorist networks which can be illuminated through applications of non-linear dynamical systems modeling to terrorist network structures. Of particular interest are some of the emergent properties of terrorist networks as typified by the 9-11 hijackers network, properties of centrality, hierarchy and distance, as well as ways in which attempts to disrupt the transmission of information through terrorist networks may be expected to produce greater or lesser levels of fitness in those organizations.
1 Introduction
Open source acquisition of information regarding terrorist networks offers a surprising array of data which social network analysis, dynamic network analysis (DNA) and NK-Boolean fitness landscape modeling can transform into potent tools for mapping covert, dynamic terrorist networks and for providing early-stage tools for the interdiction of terrorist activities.
2 Mapping Terrorist Networks
One of the most useful tools for mapping terrorist organizations has been network analysis. A variety of maps and mapping techniques have emerged in the post-9/11 world. One of the earliest and most influential maps was developed by Valdis Krebs (Krebs, 2001) shortly after 9/11 [1]:
[Figure: Krebs' map of the 9-11 hijackers network. Legend: Flight AA #11 - crashed into WTC North; Flight AA #77 - crashed into Pentagon; Flight UA #93 - crashed in Pennsylvania; Flight UA #175 - crashed into WTC South; other associates of hijackers. Copyright Valdis Krebs.]
This map yields a number of interesting properties. Using measures of centrality, Krebs' work analyzes the dynamics of the network. In this regard, he also illuminates the centrality measures' sensitivity to changes in nodes and links. In terms of its utility as a counter-intelligence tool, the mapping exposes a concentration of links around the pilots, an organizational weakness which could have been used against the hijackers had the mapping been available before, rather than after, the disaster, suggesting the utility of developing these tools as an ongoing mechanism for combating terrorism.
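The kind of centrality analysis Krebs performed can be sketched on a small network. The node names and topology below are hypothetical, not Krebs' data: degree centrality exposes a concentration of links around a hub node, and BFS hop counts measure the distance between elements.

```python
# Hypothetical covert network: a hub ("pilot") plus a coordinator chain.
from collections import deque

edges = [("pilot", "h1"), ("pilot", "h2"), ("pilot", "h3"),
         ("pilot", "coord"), ("coord", "h4"), ("h4", "h5")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def degree_centrality(adj):
    """Fraction of other nodes each node links to directly."""
    n = len(adj) - 1
    return {v: len(nbrs) / n for v, nbrs in adj.items()}

def distances(adj, src):
    """BFS hop counts from src -- distance between network elements."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

cent = degree_centrality(adj)
print(max(cent, key=cent.get))          # -> pilot (the exposed hub)
print(distances(adj, "pilot")["h5"])    # -> 3
```

The hub's high degree is exactly the concentration of links that makes it an organizational weakness, while long hop counts between peripheral members foreshadow the isolation strategies discussed below.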
3 Carley's DyNet Model The most developed version of these tools is probably the Dynamic Network Analysis (DNA) model developed by Carley et al. [2].
[Figure: DyNET, "a desktop tool for reasoning about dynamic networked and cellular organizations": a database of organizational scenarios (characteristics of known or hypothetical network or cellular organizations) feeds a network profile into DyNET, against which attack scenarios are run in order to identify critical individuals and observe dynamics.]
As indicated in the figure above, Models of the 9-11 Network, the Al Qaeda network which bombed the U.S. Embassy in Tanzania and other terrorist networks also show a characteristic emergent structure, known in the vernacular as "the dragon". One element of this structure is the relatively large distance [3] between a number of its covert elements [4], which suggests that some elements of the network, particularly certain key nodes [5], may be relatively easy to remove through a process of over-compartmentation or successive isolation, thus rendering the organization incapable of transmitting commands across various echelons.
4 Symmetries and Redundancies On the other hand, the 9-11 Network also demonstrates a high degree of redundancy as illustrated by the following diagram [6].
[Figure: "The symmetries of terror", a row-by-row alignment of the four hijacked flights (crashed flight, airline, flight number, hijackers, location, pilot/airline of pilot). Copyright © 2002, Mark Strathem, Cranfield School of Management.]
This mapping suggests, in turn, that while the network may be highly distributed [7], the redundancies built into it suggest cohesion at a number of levels as well as a hierarchical organization [8]. A number of systems analysis tools [9] have been developed to deal with the problem of covert networks [10] and incomplete information [11].
5 Isolation and Removal of Network Nodes DNA, for example, provides a number of insights into what happens when leadership nodes are removed from different kinds of networks [12], yielding very different kinds of results depending upon whether the network is cohesive or adhesive [2]. In particular, Carley et al. [4] demonstrate how isolation strategies (also suggested in Fellman and Wright [3]) will yield different results based on the nature of the network involved. This is shown below in the comparison of node isolation impact in standard vs. cellular networks [2].
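The qualitative difference between removing a leadership node from a standard hierarchy and from a cellular network can be sketched with a small connectivity test. The two toy graphs below are invented for illustration and are not Carley's actual networks; the point is only that redundant lateral ties let a cellular structure survive the loss of its most central node.

```python
from collections import deque

def components(graph, removed=frozenset()):
    """Connected components of the graph after deleting the `removed` nodes."""
    seen, comps = set(removed), []
    for start in graph:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    comp.add(nbr)
                    queue.append(nbr)
        comps.append(comp)
    return comps

# Standard hierarchy: a leader over two branch heads, each with one member.
hierarchy = {
    "L": {"h1", "h2"},
    "h1": {"L", "m1"}, "h2": {"L", "m2"},
    "m1": {"h1"}, "m2": {"h2"},
}
# Cellular network: members carry redundant lateral ties (a ring).
cellular = {
    "L": {"m1", "m2"},
    "m1": {"L", "m2", "m3"}, "m2": {"L", "m1", "m4"},
    "m3": {"m1", "m4"}, "m4": {"m2", "m3"},
}
# Removing the leader fragments the hierarchy but not the cellular network.
assert len(components(hierarchy, removed={"L"})) == 2
assert len(components(cellular, removed={"L"})) == 1
```

This is the intuition behind the comparison of node isolation impact in standard versus cellular networks.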
[Figure: comparison of the impact of node isolation in a standard hierarchical network versus a cellular network [2].]
… < E1, B1, C1 > and < E2, B2, C2 > are triples of environment, behaviour and
cognitive system, respectively, such that the behaviours B_i are adequate for the respective environment E_i and realised by the cognitive system C_i. Then the Complexity Monotonicity Thesis states that

E1 ≤c E2 ⇒ B1 ≤c B2  &  B1 ≤c B2 ⇒ C1 ≤c C2

Here ≤c is a partial ordering in complexity, where X ≤c Y indicates that Y is more complex than X. A special case is when the complexity ordering is assumed to be a total ordering, where for every two elements X, Y either X ≤c Y or Y ≤c X (i.e., they are comparable), and when some complexity measure cm is available, assigning degrees of complexity to environments, behaviours and cognitive systems, such that X ≤c Y ⇔ cm(X) ≤ cm(Y), where ≤ is the standard ordering relation on (real or natural) numbers. In this case the Complexity Monotonicity Thesis can be reformulated as

cm(E1) ≤ cm(E2) ⇒ cm(B1) ≤ cm(B2)  &  cm(B1) ≤ cm(B2) ⇒ cm(C1) ≤ cm(C2)

The Complexity Monotonicity Thesis can be used to explain the increase of complexity during evolution in the following manner. Make the following assumption on Addition of Environmental Complexity by Adaptation: "adaptation of a species to an environment adds complexity to this environment". Suppose an initial environment is described by ES0, and the adapted species by BS0. Then this transforms ES0 into a more complex environmental description ES1. Based on ES1, the adapted species will have description BS1. As ES1 is more complex than ES0, by the Complexity Monotonicity Thesis it follows that this BS1 is more complex than BS0: ES0 ≤c ES1 ⇒ BS0 ≤c BS1. Therefore BS1 again adds complexity to the environment, leading to ES2, which is more complex than ES1, et cetera. This argument shows that the increase of complexity during evolution can be related to and explained by two assumptions: the Complexity Monotonicity Thesis, and the Addition of Environmental Complexity by Adaptation assumption.
This paper focuses on the former assumption.
3 Variations in Behaviour and Environment To evaluate the approach put forward, a number of cases of increasing complexity are analysed, starting from very simple stimulus-response behaviour solely depending on stimuli the agent gets as input at a given point in time. This can be described by a very simple temporal structure: direct associations between the input state at one time point and the (behavioural) output state at a next time point. A next class of behaviours, with slightly higher complexity, analysed is delayed response behaviour: behaviour that not only depends on the current stimuli, but also may depend on input of the agent in the past. This pattern of behaviour cannot be described by direct functional associations between one input state and one output state; it increases temporal complexity compared to stimulus-response behaviour. For this case, the description relating input states and output states necessarily needs a reference to inputs received in the past. Viewed from an internal perspective, to describe mental capabilities generating such a behaviour, often it is assumed that it involves a
memory in the form of an internal model of the world state. Elements of this world state model mediate between the agent's input and output states. Other types of behaviour go beyond the types of reactive behaviour sketched above. For example, behaviour that depends in a more indirect manner on the agent's input in the present or in the past. Observed from the outside, this behaviour seems to come from within the agent itself, since no direct relation to current inputs is recognised. It may suggest that the agent is motivated by itself or acts in a goal-directed manner. For a study in goal-directed behaviour and foraging, see, for example, [Hill 2006]. Goal-directed behaviour to search for invisible food is a next case of behaviour analysed. In this case the temporal description of the externally observable behavioural dynamics may become still more complex, as it has to take into account more complex temporal relations to (more) events in the past, such as the positions already visited during a search process. Also the internal dynamics may become more complex. To describe mental capabilities generating such a type of behaviour from an internal perspective, a mental state property goal can be used. A goal may depend on a history of inputs. Finally, a fourth class of behaviour analysed, which also goes beyond reactive behaviour, is learning behaviour (e.g., conditioning). In this case, depending on its history comprising a (possibly large) number of events, the agent's externally observable behaviour is tuned. As this history of events may relate to several time points during the learning process, this again adds temporal complexity to the specifications of the behaviour and of the internal dynamics. To analyse these four different types of behaviour in more detail, four cases of a food supplying environment are considered in which suitable food gathering behaviours are needed.
These cases are chosen in such a way that they correspond to the types of behaviour mentioned above. For example, in case 1 it is expected that stimulus-response behaviour is sufficient to cope with the environment, whilst in cases 2, 3 and 4, respectively, delayed response behaviour, goal-directed behaviour, and learning behaviour is needed. The basic setup is inspired by experimental literature in animal behaviour such as [Tinklepaugh 1932]. The world consists of a number of positions which have distances to each other. The agent can walk over these positions. Time is partitioned in fixed periods (days) of a duration of d time units (hours). Every day the environment generates food at certain positions, but this food may or may not be visible, accessible and persistent at given points in time. The different types of environment with increasing temporal complexity considered are: (1) Food is always visible and accessible. It persists until it is taken. (2) Food is visible at least at one point in time and accessible at least at one later time point. It persists until it is taken. (3) Food either is visible at least at one point in time and accessible at least at one later time point, or it is invisible and accessible the whole day. It persists until it is taken. (4) One of the following cases holds: a) Food is visible at least at one point in time and accessible at least at one later time point. It persists until it is taken. b) Food is invisible and accessible the whole day. It persists until it is taken.
c) Food pieces can disappear, and later new pieces can appear, possibly at different positions. For every position where food appears, there are at least three different pieces in one day. Each piece that is present is visible. Each position will be accessible at least after the second food piece disappeared.
4 Modelling Approach For describing different variations in behaviour of an agent and environment, a formal modelling approach is needed. The simplest type of behaviour, stimulus-response behaviour, can be formalised by a functional input-output association, i.e., a (mathematical) function F : InputStates → OutputStates from the set of possible input states to the set of possible output states. A state at a certain point in time as it is used here is an indication of which of the state properties of the system and its environment are true (hold) at that time point. Note that according to this formalisation, stimulus-response behaviour is deterministic. Behaviour of this type does not depend on earlier processes, nor does it depend on (not observable) internal states. If also non-deterministic behaviour is taken into account, the function in the definition above can be replaced by a relation between input states and output states, which relates each input state to a number of alternatives of behaviour, i.e., R : InputStates × OutputStates. For example, a simple behaviour of an animal that after seeing food at the position p goes to this position on condition that no obstacles are present, can be formalised using a functional association between an input state where it sees food at p and no obstacles, and an output state in which it goes to p. As opposed to stimulus-response behaviour, in less simple cases an agent's behaviour often takes into account previous processes in which it was involved; for example, an agent that observed food in the past at position p may still go to p, although it does not observe it in the present.
Instead of a description as a function or relation from the set of possible input states to the set of possible output states, in more general cases, a more appropriate description of behaviour by an input-output correlation is given in the following definition: a) A trace (or trajectory) is defined as a time-indexed sequence of states, where time points can be expressed, for example, by real or integer values. If these states are input states, such a trace is called an input trace. Similarly for an output trace. An interaction trace is a trace of (combined) states consisting of an input part and an output part. b) An input-output correlation is defined as a binary relation C : Input_traces × Output_traces between the set of possible input traces and the set of possible output traces. c) A behavioural specification S is a set of dynamic properties in the form of temporal statements on interaction traces. d) A given interaction trace T fulfils or satisfies a behavioural specification S if all dynamic properties in S are true for the interaction trace T. e) A behavioural specification S is a specification of an input-output correlation C if and only if for all interaction traces T, input-output correlation C holds for T if and only if T fulfils S.
To express formal specifications for environmental, behavioural and cognitive dynamics for agents, the Temporal Trace Language (TTL, see [Bosse et al. 2006]) is used. This language is a variant of order-sorted predicate logic. In dynamic property
expressions, TTL allows explicit references to time points and traces. If a is a state property, then, for example, state(γ, t, input(agent)) |= a denotes that this state property holds in trace γ at time point t in the input state of the agent. Based on such building blocks, dynamic properties can be formulated. For example, a dynamic property that describes stimulus-response behaviour of an agent that goes to food it observes can be formalised as follows:

∀t ∀x ∀p ∀p' [ state(γ, t, input(agent)) |= observed(at(agent, p)) ∧ observed(at(food(x), p')) ∧ observed(accessible(p')) ⇒ state(γ, t+1, output(agent)) |= performing_action(goto(p')) ]

Using this approach, the four variations in behaviour and environment have been formalised in detail. The results can be found in [Bosse et al. 2008].
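Checking whether a given interaction trace fulfils such a stimulus-response property can be sketched mechanically. The encoding below (traces as lists of input/output atom sets) is a simplified propositional stand-in, an assumption for illustration, not TTL's actual predicate-logic syntax.

```python
# A trace is a list of (input_atoms, output_atoms) pairs indexed by time.
# Atoms are tuples such as ("observed_food_at", "p2").

def satisfies_sr_property(trace):
    """Whenever food is observed at an accessible position at time t,
    the agent must go to that position at time t+1."""
    for t in range(len(trace) - 1):
        inp, _ = trace[t]
        _, out_next = trace[t + 1]
        for atom in inp:
            if atom[0] == "observed_food_at":
                p = atom[1]
                if ("observed_accessible", p) in inp and ("goto", p) not in out_next:
                    return False
    return True

# A toy trace: food observed at accessible position p2, agent goes there next.
trace = [
    ({("observed_food_at", "p2"), ("observed_accessible", "p2")}, set()),
    (set(), {("goto", "p2")}),
    (set(), set()),
]
assert satisfies_sr_property(trace)
```

In the terminology of the previous section, the function plays the role of checking that a trace T fulfils a one-property behavioural specification S.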
5 Formalisation of Temporal Complexity The Complexity Monotonicity Thesis discussed earlier involves environmental, behavioural and cognitive dynamics of living systems. In an earlier section it was shown that based on a given complexity measure cm this thesis can be formalised in the following manner:

cm(E1) ≤ cm(E2) ⇒ cm(B1) ≤ cm(B2)  &  cm(B1) ≤ cm(B2) ⇒ cm(C1) ≤ cm(C2)

where < E1, B1, C1 > and < E2, B2, C2 > are triples of environments, behaviours and cognitive systems, respectively, such that the behaviours B_i are adequate for the respective environment E_i and realised by the cognitive system C_i. What remains is the existence or choice of the complexity measure function cm. To measure degrees of complexity for the three aspects considered, a temporal perspective is chosen: complexity in terms of the temporal relationships describing them. For example, if references have to be made to a larger number of events that happened at different time points in the past, the temporal complexity is higher. The temporal relationships have been formalised in the temporal language TTL based on predicate logic. This translates the question how to measure complexity to the question how to define complexity of syntactical expressions in such a language. In the literature an approach is available to define complexity of expressions in predicate logic in general by defining a function that assigns to every expression a size: [Huth and Ryan 2000]. To measure complexity, this approach was adopted and specialised to the case of the temporal language TTL. Roughly speaking, the complexity (or size) of an expression is (recursively) calculated as the sum of the complexities of its components plus 1 for the composing operator. In more detail it runs as follows. Similarly to standard predicate logic, predicates in TTL are defined as relations on terms. The size of a TTL-term t is a natural number s(t) recursively defined as: (1) s(x) = 1, for all variables x.
(2) s(c) = 1, for all constant symbols c. (3) s(f(t1, ..., tn)) = s(t1) + ... + s(tn) + 1, for all function symbols f. For example, the size of the term observed(not(at(food(x), p))) from the property BP1 (see [Bosse et al. 2008]) is equal to 6.
Furthermore, the size of a TTL-formula φ is a positive natural number s(φ) recursively defined as follows: (1) s(p(t1, ..., tn)) = s(t1) + ... + s(tn) + 1, for all predicate symbols p. (2) s(¬φ) = s((∀x)φ) = s((∃x)φ) = s(φ) + 1, for all TTL-formulae φ and variables x. (3) s(φ & χ) = s(φ ∨ χ) = s(φ ⇒ χ) = s(φ) + s(χ) + 1, for all TTL-formulae φ, χ.
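The recursive size definition translates directly into code. The sketch below assumes a nested-tuple encoding of TTL expressions (an illustrative convention, not the paper's implementation) and reproduces the worked example: the term observed(not(at(food(x), p))) has size 6.

```python
# Terms and formulae as nested tuples: ("f", arg1, ..., argn) for a function,
# predicate or connective application; plain strings for variables/constants.

def size(expr):
    """Size of a TTL expression: 1 for a variable or constant, otherwise
    1 for the composing operator plus the sizes of the components.
    The same rule covers function symbols, predicates, unary operators
    (not, quantifiers) and binary connectives (and, or, implies)."""
    if isinstance(expr, str):
        return 1
    return 1 + sum(size(arg) for arg in expr[1:])

# observed(not(at(food(x), p))): x=1, food(x)=2, at(...)=4, not(...)=5, total 6.
term = ("observed", ("not", ("at", ("food", "x"), "p")))
assert size(term) == 6
```

Summing the sizes of the properties in a behavioural specification then yields the complexity figures reported below.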
In this way, for example, the complexity of behavioural property BP1 amounts to 53, and the complexity of behavioural property BP2 is 32. As a result, the complexity of the complete behavioural specification for the stimulus-response case (which is determined by BP1 & BP2) is 85 (see [Bosse et al. 2008] for the properties). Using this formalisation of a complexity measure, the complexity measures for environmental, internal cognitive, and behavioural dynamics for the considered cases of stimulus-response, delayed response, goal-directed and learning behaviours have been determined. Table 1 provides the results.

Table 1. Temporal complexity of environmental, behavioural and cognitive dynamics.

Case                Environmental dynamics   Behavioural dynamics   Cognitive dynamics
Stimulus-response   262                      85                     85
Delayed response    345                      119                    152
Goal-directed       387                      234                    352
Learning            661                      476                    562
The data given in Table 1 confirm the Complexity Monotonicity Thesis put forward in this paper: the more complex the environmental dynamics, the more complex the types of behaviour an organism needs to deal with the environmental complexity, and the more complex the behaviour, the more complex the internal cognitive dynamics.
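The monotonicity claim can be checked mechanically against the values in Table 1: ordering the four cases from stimulus-response to learning, all three complexity columns must increase together. A minimal sketch:

```python
# Complexity values from Table 1: (environmental, behavioural, cognitive),
# ordered by increasing environmental complexity.
rows = {
    "stimulus-response": (262, 85, 85),
    "delayed response":  (345, 119, 152),
    "goal-directed":     (387, 234, 352),
    "learning":          (661, 476, 562),
}
env, beh, cog = zip(*rows.values())

# cm(E1) <= cm(E2) => cm(B1) <= cm(B2), and cm(B1) <= cm(B2) => cm(C1) <= cm(C2):
# each column is non-decreasing across the four cases.
for col in (env, beh, cog):
    assert all(a <= b for a, b in zip(col, col[1:]))
```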
6 Discussion In this paper, the temporal complexity of environmental, behavioural, and cognitive dynamics, and their mutual dependencies, were explored. As a refinement of Godfrey-Smith (1996)'s Environmental Complexity Thesis, the Complexity Monotonicity Thesis was formulated: for more complex environments, more complex behaviours are needed, and more complex behaviours need more complex internal cognitive dynamics. A number of example scenarios were formalised in a temporal language, and the complexity of the different formalisations was measured. Complexity of environment, behaviour and cognition was taken as temporal complexity of the dynamics of these three aspects, and the formalisation of the measurement of this temporal complexity was based on the complexity of the syntactic expressions used to characterise these dynamics in a predicate logic language, as known from, e.g., [Huth and Ryan 2000]. The outcome of this approach is that the results confirm the Complexity Monotonicity Thesis. In [Godfrey-Smith 1996], in particular in chapters 7 and 8, mathematical models are discussed to support his Environmental Complexity Thesis, following, among others, [Sober 1994]. These models are made at an abstract level, abstracting from the
temporal dimension of the behaviour and the underlying cognitive architectures and processes. Therefore, the more detailed temporal complexity as addressed in this paper is not covered. Based on the model considered, Godfrey-Smith [1996, Ch. 7, p. 216, see also p. 118] concludes that the flexibility to accommodate behaviour to environmental conditions, as offered by cognition, is favoured when the environment shows (i) unpredictability in distal conditions of importance to the organism, and (ii) predictability in the links between (observable) proximal and distal conditions. This conclusion has been confirmed to a large extent by the formal analysis described in this paper. Comparable claims on the evolutionary development of learning capabilities in animals are made by authors such as Stephens [1991]. According to these authors, learning is an adaptation to environmental change. All these are conclusions at a global level, compared to the more detailed types of temporal complexity considered here, where cognitive processes and behaviour extend over time, and their complexity can be measured in a detailed manner as temporal complexity of their dynamics.
Bibliography [1] Bosse, T., Jonker, C.M., Meij, L. van der, Sharpanskykh, A., & Treur, J. (2006). Specification and Verification of Dynamics in Cognitive Agent Models. In: Proceedings of the Sixth Int. Conf. on Intelligent Agent Technology, IAT'06. IEEE Computer Society Press, 247-255. [2] Bosse, T., Sharpanskykh, A., & Treur, J. (2008). On the Complexity Monotonicity Thesis for Environment, Behaviour and Cognition. In: Baldoni, M., Son, T.C., Riemsdijk, M.B. van, and Winikoff, M. (eds.), Proc. of the Fifth Int. Workshop on Declarative Agent Languages and Technologies, DALT'07. Lecture Notes in AI, vol. 4897. Springer Verlag, 2008, pp. 175-192. [3] Darwin, C. (1871). The Descent of Man. John Murray, London. [4] Godfrey-Smith, P. (1996). Complexity and the Function of Mind in Nature. Cambridge University Press. [5] Hill, T.T. (2006). Animal Foraging and the Evolution of Goal-Directed Cognition. Cognitive Science, vol. 30, pp. 3-41. [6] Huth, M. & Ryan, M. (2000). Logic in Computer Science: Modelling and Reasoning about Computer Systems. Cambridge University Press. [7] Sober, E. (1994). The adaptive advantage of learning versus a priori prejustice. In: From a Biological Point of View. Cambridge University Press, Cambridge. [8] Stephens, D. (1991). Change, regularity and value in evolution of animal learning. Behavioral Ecology, vol. 2, pp. 77-89. [9] Tinklepaugh, O.L. (1932). Multiple delayed reaction with chimpanzees and monkeys. Journal of Comparative Psychology, 13, pp. 207-243. [10] Wilson, E.O. (1992). The Diversity of Life. Harvard University Press, Cambridge, Massachusetts.
Chapter 9
Complex Features in Lotka-Volterra Systems with Behavioral Adaptation Claudio Tebaldi¹, Deborah Lacitignola² ¹Department of Mathematics, Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino, Italy ²Department of Mathematics, University of Lecce, Via Prov. Lecce-Arnesano, Lecce, Italy [email protected] [email protected]
1.1. Introduction Lotka-Volterra systems have played a fundamental role in mathematical modelling in many branches of theoretical biology and have proved to describe, at least qualitatively, the essential features of many phenomena; see for example Murray [Murray 2002]. Furthermore, models of this kind have been considered successfully also in quite different and less mathematically formalized contexts: Goodwin's model of economic growth cycles [Goodwin 1967] and urban dynamics [Dendrinos 1992] are only two of a number of examples. Such systems can certainly be defined as complex ones and in fact the aim of modelling was essentially to clarify mechanisms rather than to provide actual precise simulations and predictions. With regard to complex systems, we recall that one of their main features, no matter which specific definition one has in mind, is
adaptation, i.e. the ability to adjust.
Lotka-Volterra systems are a large class of models for interaction among species. Depending on such interactions, competition, cooperation or predator-prey situations can occur, giving rise to further classifications. The dynamics depends on parameters intrinsic to the species, typically growth rate and carrying capacity, and on the coefficients of interaction among the species, which however are often more difficult to specify. Here we focus on competition among species and, differently from the classical case, we consider for them a kind of "learning skill": the ability to compete is proportional to the average number of contacts between species in their past, with a weak exponential delay kernel providing a "fade-out" memory effect. Adaptation in such a form is shown to be a mechanism able to establish the appearance of a variety of behaviors different from equilibria, such as distinct kinds of oscillations and chaotic patterns. Furthermore, even for given parameter values, the system can show striking features of multiplicity of attractors. This kind of "complexity" comes out as collective behavior emerging from the interactions among the species involved.
1.2. The model We consider the general competitive Lotka-Volterra system for n species
dN_i/dt = r_i [1 − N_i/k_i] N_i − Σ_{j≠i} α_ij N_i N_j    (1)

with

α_ij(t) = ∫_{−∞}^{t} N_i(u) N_j(u) K_T(t − u) du,    1 ≤ i, j ≤ n, j ≠ i,    (2)
N_i(t) denotes the density of the i-th species at time t; the positive parameters r_i and k_i stand respectively for the intrinsic growth rate and the carrying capacity of the i-th species. The positive continuous function α_ij represents the interaction coefficient between the j-th and i-th species. The delay kernel K_T is chosen as in [Noonburg 1986],
K_T(t) = e^{−t/T} / T,
as it provides a reasonable effect of short term memory. In this case, the set of integro-differential equations (1)-(2) is equivalent to the following set of ordinary differential equations [Lacitignola & Tebaldi 2005],
dN_i/dt = r_i [1 − c_i N_i] N_i − Σ_{j≠i} α_ij N_i N_j,
T dα_ij/dt = N_i N_j − α_ij,    1 ≤ i, j ≤ n, j ≠ i,    (3)

where c_j = 1/k_j, i, j = 1, ..., n.
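The equivalence rests on the fact that differentiating (2) with the exponential kernel K_T gives the auxiliary equation T dα_ij/dt = N_i N_j − α_ij. A minimal forward-Euler sketch of the resulting ODE system for n = 2 species is given below; the parameter values are purely illustrative and are not taken from the paper.

```python
# Forward-Euler integration of the competitive system with adaptive
# interaction coefficients: dN_i/dt per equation (1) with c_i = 1/k_i,
# and T * da_ij/dt = N_i * N_j - a_ij from the exponential memory kernel.

def step(N, a, r, k, T, dt):
    n = len(N)
    dN = [r[i] * (1 - N[i] / k[i]) * N[i]
          - sum(a[i][j] * N[i] * N[j] for j in range(n) if j != i)
          for i in range(n)]
    da = [[(N[i] * N[j] - a[i][j]) / T if i != j else 0.0
           for j in range(n)] for i in range(n)]
    N = [N[i] + dt * dN[i] for i in range(n)]
    a = [[a[i][j] + dt * da[i][j] for j in range(n)] for i in range(n)]
    return N, a

# Illustrative initial conditions and parameters (not the paper's values).
N = [0.5, 0.6]
a = [[0.0, 0.1], [0.1, 0.0]]
for _ in range(2000):
    N, a = step(N, a, r=[1.0, 1.0], k=[1.0, 1.0], T=5.0, dt=0.01)
# For these mild parameters the densities stay positive and bounded.
assert all(0 < x < 2 for x in N)
```

Larger adaptation times T and differentiated parameters are what produce the oscillatory and chaotic regimes discussed below.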
Having the aim to discuss the role of the interactions, we consider species with the same adaptation rate T and the same carrying capacity k, except for one. Such a model allows one to investigate the connectivity problem [May 1973], strictly related to the role of interactions, while also starting to take into account some kind of species differentiation. The n-species system with r_i = 1, c_i = c, T_i = T, for all i = 1, ..., n has been extensively investigated [Bortone & Tebaldi 1998], [Barone & Tebaldi 2000]: even in the presence of such strong symmetry, i.e. when all the n species are characterized by the same ecological parameters, the system is able to provide patterns in which the species are differentiated. Coexistence can appear as dominance of one species over the others through a variety of forms, i.e. equilibria, periodic oscillations or even strange attractors. In this symmetric case, the existence of a family of invariant subspaces has been shown and a 4-dimensional model introduced, with n as a parameter. Such a reduced model is proven to give a full account of the existence and stability of the equilibria in the complete system. Correspondence between the reduced model and the complete one has been found for a large range of parameter values also in the time dependent regimes, even in the presence of strange attractors. Such striking reduction results, also with multiplicity of attractors, very useful in the study of competition phenomena involving a large number of species, are a consequence of the symmetry properties of the system. It was along the line of clarifying this aspect that we have chosen to differentiate some species on the ground of both the characterizing parameters, carrying capacity and intrinsic growth rate [Lacitignola & Tebaldi 2003].
The analysis of the equilibria in (3) has been completely described according to the size of the ecological advantage or disadvantage of the first species: the case c_1 << c exhibits the richest variety of equilibria, which have been investigated in full detail in [Lacitignola & Tebaldi 2004], also describing the phenomenology after their destabilization. The existence of a certain class of invariant subspaces for system (3) allows, also in this case, the introduction of a 7-dimensional reduced model, where n appears as a parameter: striking reduction properties are therefore still maintained [Lacitignola & Tebaldi 2005].
In this study, we focus on some interesting aspects of time dependent regimes and provide an example of coexistence in the form of complicated alternation between chaotic behavior and periodic behavior, in both cases with multiplicity of attractors.
1.2.1. The Equilibria Investigations on the structure and properties of the equilibria in (3) can be efficiently performed making use of the reduced model. By recalling the symmetry properties of the system, we remark that any solution in this reduced model corresponds in general to (n−1) such solutions in the complete system (3). Choosing the time scale, it is assumed r = 1, observing that the condition r_1 = 1 means equal reproduction rates for all the species, whereas r_1 < 1 or r_1 > 1 indicates that the first species reproduces respectively more slowly or faster than the remaining ones. In the reduced model we have at most five interior fixed points, i.e. with all non-zero components, depending on the parameters r_1, c_1 and c, namely
the fixed points R, S, S*, B and B*, the first three characterized by the level x_1 of the first species and the latter two by the levels b_1 and b_2 of a distinguished species i. As a consequence, in the complete system, we have the three internal equilibria R, S and S*, the (n−1) equilibria B_i, in which the i-th species is at the level b_1 and the remaining species share a common level, and the (n−1) equilibria B*_i, in which the i-th species is at the level b_2,
where 2 ≤ i ≤ n. To characterize the equilibria, we report only the n-tuple of the x_i's, since α_ij = x_i x_j at the equilibrium. We also stress that any critical point of (3) with one or more components x_i = 0 is not considered here since it is unstable for all the values of the parameters. While the structure of the equilibria is essentially the same as in the symmetric case, their features depend both on the first species' level of differentiation and on the ecological conditions of the remaining species, and their stability properties depend on the adaptation parameter T.
1.3. Complex Behavior in the Time Dependent Regimes In this section we focus on the adaptive competition among four species and discuss an interesting example of complex behavior which arises in the time dependent regimes as an effect of adaptation. We consider the following intervals for the relevant parameters: 0.1 ≤ r_1 ≤ 3.5, 0.01 ≤ c_1 ≤ 0.2, 0.2 ≤ c ≤ 0.8, 0 < T …

… f(u, v) for all u ≠ v. Two kinds of continuous triadic game of conflict have proven especially amenable to analysis. The first kind of game, which we call Type I, is one in which strategies are intensities, variance of fighting strength is zero, and the set of all possible outcomes from the triadic interaction has a discrete probability distribution for every conceivable strategy combination (u, v). Let there be K such outcomes in all, let w_i(u, v) be the probability associated with outcome i and let P_i(u, v) be the corresponding payoff to the focal individual. Then
f(u, v) = Σ_{i=1}^{K} w_i(u, v) P_i(u, v)    with    Σ_{i=1}^{K} w_i(u, v) = 1.    (1.1)
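A Type I reward function of the form (1.1) is easy to sketch concretely. The two-outcome game below (win/lose probabilities proportional to intensity, with a linear cost of intensity) is invented purely for illustration and is not one of the paper's models.

```python
def reward(u, v, outcomes):
    """f(u, v) = sum_i w_i(u, v) * P_i(u, v); the w_i must sum to 1."""
    total_w = sum(w(u, v) for w, _ in outcomes)
    assert abs(total_w - 1.0) < 1e-9
    return sum(w(u, v) * P(u, v) for w, P in outcomes)

# Toy game: the focal individual "wins" with probability u / (u + v),
# gaining payoff 1 minus a linear display cost; otherwise it "loses"
# and pays only the cost of its own intensity u.
outcomes = [
    (lambda u, v: u / (u + v), lambda u, v: 1.0 - 0.1 * u),   # win
    (lambda u, v: v / (u + v), lambda u, v: -0.1 * u),        # lose
]
f = reward(0.6, 0.4, outcomes)
```

Tabulating the w_i and P_i row by row, as done for Model A below, is exactly the bookkeeping this function performs.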
The second kind of game, which we call Type II, is one in which strategies are thresholds, variance of strength is non-zero and strength is continuously distributed with probability density function g on [0, 1]; nevertheless, for all (u, v) the sample space [0, 1]^3 of the triad's three strengths (assumed independent) can be decomposed into a finite number K of mutually exclusive events. Let Ω_i(u, v) denote the i-th such event, and let P_i(X, Y, Z) denote the corresponding payoff to the focal individual when its strength is X and the other two strengths in the triad are Y and Z. Then
f(u, v) = Σ_{i=1}^{K} ∫∫∫_{(x,y,z) ∈ Ω_i(u,v)} P_i(x, y, z) g(x) g(y) g(z) dx dy dz.    (1.2)
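An integral of the form (1.2) can be estimated by Monte Carlo sampling. The threshold game below is invented for illustration (it is not one of the paper's models): strengths are i.i.d. uniform on [0, 1] (so g ≡ 1), an individual escalates when its strength exceeds its threshold, and the focal individual gains 1 if it escalates and beats every escalating rival, and loses 0.5 otherwise.

```python
import random

def reward_mc(u, v, trials=100_000, seed=1):
    """Monte Carlo estimate of the focal payoff in a toy Type II game:
    focal threshold u, opponents' threshold v, strengths ~ Uniform[0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, y, z = rng.random(), rng.random(), rng.random()
        if x < u:
            continue                              # focal withdraws: payoff 0
        rivals = [s for s in (y, z) if s >= v]    # opponents that escalate
        total += 1.0 if all(x > s for s in rivals) else -0.5
    return total / trials

f = reward_mc(0.5, 0.5)
```

For u = v = 0.5 the exact value of this toy integral works out to 0.1875, which the estimate approaches as the number of trials grows.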
We provide examples of each kind of game.
2 Victory displays
Victory displays, ranging from sporting laps of honor to military parades, are well known in human societies and have been reported in various other species (1), the best known example being the celebrated "triumph ceremony" of the greylag goose (3). Two models of such victory displays exemplify the Type I game. Bower (1) defined a victory display to be a display performed by the winner of a contest but not the loser. He proposed two explanations for their function. The "advertising" rationale is that victory displays are attempts to communicate victory to other members of a social group that do not pay attention to contests or cannot otherwise identify the winner. The "browbeating" rationale is that
victory displays are attempts to decrease the probability that the loser of a contest will initiate a future contest with the same individual. Our models, distinguished by A for advertising and B for browbeating, explore the logic of these rationales. Both models assume that the members of a triad participate in three pairwise contests, and that more intense victory displays are more costly to an individual but also more effective in terms of either being seen by conspecifics (Model A) or deterring further attack (Model B): at intensity s, the cost of signalling is c(s), and the probability of the desired effect is p(s). In either model, dominating another individual increases fitness by α, and a contest in which neither individual dominates the other increases the fitness of each by bα, where b ≤ 1. In Model A, we assume that a bystander that has seen an individual win will subsequently defer to it with fixed probability λ_i, where i = 0, i = 1 or i = 2 according to whether the observer is an untested individual, a prior loser or a prior winner, respectively, with 0 ≤ λ_2 ≤ λ_0 ≤ λ_1 ≤ 1. Deferring eliminates the cost of a fight, which we denote by c_0. We also allow for a prior loser to defer to an observed loser with probability λ_3, and we allow for a potential "loser effect" (8): an (indirectly) observed loser subsequently loses against the observer with a probability governed by the parameter ℓ, where 0 ≤ ℓ ≤ 1. The reward function is most readily obtained by first recording the payoffs and probabilities associated with each outcome in a table having K rows; then (1.1) determines f. Because K = 36 for Model A, however, only excerpts are shown as Table 1: the full table appears in (5). As presented, the table assumes that displays are obligate.
One could argue, however, that, at least among animals with sufficient cognitive ability, victory displays should be facultative: in a triadic interaction, there is no need to advertise after an individual's final contest, because there is no other individual that can be influenced by the display. Our model is readily adapted to deal with this possibility, as described in (5). For the sake of definiteness, we analyze the game with
c(s) = γθas,    p(s) = ε + (1 − ε)(1 − e^(−θs)),    (1.3)
where θ (> 0) has the dimensions of intensity⁻¹, so that γ (> 0) is a dimensionless measure of the marginal cost of displaying, and 0 ≤ ε ≤ 1. The analysis shows that for any values of the positive parameters c0, γ, l, b, λ0, λ1, λ2 and λ3 (the last six of which cannot exceed 1), there is a unique ESS at which animals display when ε (the baseline probability of observing victors in the absence of a display) lies below a critical value, but otherwise do not display. This critical value is zero if the display cost γ is too large but otherwise positive; it decreases with respect to γ or l, increases with respect to any of the other six parameters, and is higher for facultative than for obligate signallers (except that it is independent of λ3 for facultative signallers). For subcritical values of the baseline probability of observation ε, the intensity of signalling at the ESS decreases with respect to γ or l, increases with respect to any of the other six parameters, and is higher for facultative than for obligate signallers (with the same exception as before). Moreover, it largely does not matter whether the effect of signalling is
interpreted as increasing the probability of being seen or of being deferred to.
Table 1: Model A payoff to a focal individual F whose first and second opponents are O1 and O2, respectively, conditional on participation in the last two of the three contests. Parentheses indicate a contest in which the focal individual is not involved. A bold letter indicates that the individual's opponent deferred. Note that O1 and O2 do not label specific individuals: O1 is whichever individual happens to be the focal individual's first opponent for a given order of interaction; the other individual is O2.
[Table 1 body not legible in this reproduction. Its columns are CASE, WINNERS (1st, 2nd and 3rd contests), PROBABILITY w_i(u, v) and PAYOFF P_i(u); the excerpted rows cover cases 1, 2, 5, 8–10, 29, 33 and 36, and the full 36-row table appears in (5).]
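The shape of the signalling functions in (1.3) can be explored numerically. The sketch below (Python) evaluates c(s) and p(s) and locates the intensity maximising a toy net benefit; the parameter values and the stand-in objective (expected saving of the fight cost c0) are illustrative assumptions, not the paper's full triadic reward function. It also shows the optimum collapsing to zero when the marginal cost γ is large, in line with the critical-value result above.

```python
import math

def cost(s, gamma=0.05, theta=1.0, a=1.0):
    """Linear display cost c(s) = gamma * theta * a * s from (1.3)."""
    return gamma * theta * a * s

def effect(s, eps=0.1, theta=1.0):
    """Effect probability p(s) = eps + (1 - eps)(1 - exp(-theta s)) from (1.3)."""
    return eps + (1.0 - eps) * (1.0 - math.exp(-theta * s))

def net_benefit(s, c0=0.1):
    # Toy objective: expected saving of the fight cost c0 minus display cost.
    return effect(s) * c0 - cost(s)

grid = [i * 0.01 for i in range(1001)]
best = max(grid, key=net_benefit)                     # interior optimum
best_costly = max(grid, key=lambda s: effect(s) * 0.1 - cost(s, gamma=0.2))
```

With the cheap display (γ = 0.05) the optimum is interior; with γ = 0.2 the net benefit is decreasing everywhere and the optimum is s = 0, mirroring the zero critical value for expensive displays.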
In Model B, we assume that contestants subordinate to a current winner with a probability that increases with the intensity of the victory display, and we reinterpret ε as the baseline probability of submission (i.e., the probability that a victor elicits permanent submission from a loser in the absence of a display). As before, we construct a table of payoffs and associated probabilities. Because the order of interaction does not matter in this case, there are fewer possible outcomes; specifically, K = 10 in (1.1). We again find that there is a unique ESS with a critical value of ε, above which winners do not display and below which intensity decreases with ε (5). In this regard, predictions from the models are similar; however, there is also an important difference. In the case of advertising, the intensity of display at the ESS increases with respect to the parameter b, an inverse measure of the reproductive advantage of dominating an opponent compared to simply not subordinating; by contrast, in the case of browbeating, the intensity of display at the ESS decreases with respect to b, as illustrated by Fig. 1. Therefore, all other things being equal, the intensity of advertising victory displays will be highest when there is little difference between dominating an opponent and not subordinating, a set of conditions likely to generate low reproductive skew (as in monogamous species). By contrast, the intensity of browbeating victory displays will be highest when there are greater rewards to dominating an opponent, a set
Figure 1: Comparison of advertising and browbeating ESSs. Evolutionarily stable signalling intensity (scaled with respect to θ to make it dimensionless) is plotted as a function of dominance advantage b. Values of the other parameters (all dimensionless) are c0 = 0.1 for the fixed cost of a contest, l = 0.5 for the loser effect (i.e., the probability that an observed loser again loses is 0.75), γ = 0.05 for the marginal cost of displaying, λi = 0.9 for all i for the probability of deference and ε = 0.1 for the baseline probability of the desired effect (bystander attention to the victor in Model A, submission to the current opponent in Model B) in the absence of a display. For obligate signallers, the advertising ESS is shown dashed; for facultative signallers, it is shown dotted.
of conditions that is likely to generate high reproductive skew. These predictions appear to accord quite well with our current understanding of the taxonomic distribution of victory displays (1; 5).
3
Coalition formation
A model of coalition formation exemplifies the Type II game. We merely sketch this model here; full details are in (6). We assume that each member of a triad knows its own strength but not that of either partner. All three strengths are drawn from the same symmetric Beta distribution on [0, 1] with variance σ². Stronger animals tend to escalate when involved in a fight; weaker animals tend not to escalate. If an animal considers itself too weak to have a chance of being the alpha (dominant) individual in a dominance hierarchy, then it attempts to form a coalition with everyone else: a coalition means a mutual defence pact and an equal share of benefits. Let A denote total group fitness. Then it costs δA (δ ≥ 0) to attempt a coalition; the attempt may not be successful, but if all agree to it, then there are no fights. If there is a dominance hierarchy with three distinct ranks after fighting, then the alpha individual gets αA (where α > ½), the beta individual gets (1 − α)A and the gamma individual gets zero. If there is a three-way coalition, or if the animals fight one another and end up winning and losing a fight apiece, then each gets ⅓A; however, in the second case they also incur a fighting cost. If a coalition of two defeats the third individual, then each member of the pair
Table 2: Payoff to a focal individual F of strength X whose partners are A and B with strengths Y and Z, respectively, with Δ = q{X + Z} − Y and ζ(X, Y, Z) = αp(X − Y)p(X − Z) + ⅓{p(X − Y)p(Z − X)p(Y − Z) + p(X − Z)p(Y − X)p(Z − Y)} + (1 − α){p(X − Y)p(Z − X)p(Z − Y) + p(X − Z)p(Y − X)p(Y − Z)}.
[Table 2 body not legible in this reproduction. Its columns are CASE, COALITION STRUCTURE, EVENT O_i(u, v) and PAYOFF P_i(X, Y, Z); the eight cases run over the coalition structures {F, A, B}; {F, B}, {A}; {F, A}, {B}; {F}, {A, B}; and {F}, {A}, {B}, with events determined by whether X exceeds u and whether Y and Z exceed v. Full details appear in (6).]
obtains ½A while the individual obtains zero; and if the individual defeats the pair, then it obtains αA while each member of the pair obtains ½(1 − α)A. We assume that there is at least potentially a synergistic effect, so that the effective strength of a coalition of two whose individual strengths are S1 and S2 is not simply S1 + S2 but rather q{S1 + S2}, where q need not equal 1. Let p(Δs) denote the probability of winning for a coalition (or individual) whose combined strength exceeds that of its opponent by Δs; p increases sigmoidally with Δs at a rate determined by a parameter r measuring the reliability of strength difference as a predictor of fight outcome. Note that p(Δs) + p(−Δs) = 1 with p(−2) = 0, p(0) = ½ and p(2) = 1. We assume that fighting costs are equally borne by all members of a coalition. Let c(Δs)A be the cost of a fight between coalitions whose effective strengths differ by Δs. Costs are greater for more closely matched opponents; so, from a maximum c0, cost decreases nonlinearly with |Δs| at a rate determined by a parameter k measuring the sensitivity of cost to strength difference. Let u be the coalition threshold for Player 1, the potential mutant: if its strength fails to exceed this value, then it attempts to make a mutual defence pact with each of its conspecifics. Let v be the corresponding threshold for Player 2, who represents the population. Let X be the strength of the u-strategist, and let Y and Z be the strengths of the two v-strategists. We can now describe the set of mutually exclusive events with associated payoffs as in Table 2, and the reward follows from (1.2) with K = 8. For this game, the evolutionarily stable strategy set depends on seven parameters, namely, c0 (maximum fighting cost), q (synergy multiplier), δ (pact cost), α (proportion of additional group fitness to a dominant), r (reliability of strength difference as a predictor of fight outcome), k (sensitivity of cost to
strength difference) and σ² (variance). It is a complicated dependence, but it enables us to calculate, among other things, the probability that two animals will make a pact against the third in an ESS population. Details appear in (6).
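As a concrete illustration, the winning probability p(Δs) and fight cost c(Δs) described above can be given simple functional forms. The Python sketch below uses an assumed power-law sigmoid and an assumed exponential cost decay; (6) does not commit to these exact forms, but the sketch satisfies the stated constraints p(Δs) + p(−Δs) = 1, p(−2) = 0, p(0) = ½, p(2) = 1, and a cost falling from c0 with |Δs| at rate k.

```python
import math

def p_win(ds, r=3.0):
    """Assumed sigmoid for the probability of winning given an effective
    strength advantage ds in [-2, 2]; r tunes reliability.  Satisfies
    p(-2) = 0, p(0) = 1/2, p(2) = 1 and p(ds) + p(-ds) = 1."""
    x = (ds + 2.0) / 4.0              # map [-2, 2] onto [0, 1]
    return x**r / (x**r + (1.0 - x)**r)

def fight_cost(ds, c0=0.2, k=2.0):
    """Assumed cost function: maximal c0 for evenly matched opponents,
    decreasing nonlinearly with |ds| at rate k."""
    return c0 * math.exp(-k * abs(ds))
```

Increasing r makes p_win step-like (the §4 limit r → ∞, fights always won by the stronger side); increasing k makes cost fall off faster with mismatch.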
4
Eavesdropping
As noted in §2, animals can eavesdrop on the outcomes of contests between neighbors and modify their behavior towards observed winners and losers. A model of such eavesdropping (7) further exemplifies the Type II game, in this case with K = 28. The model, which extends the classic Hawk-Dove model of animal conflict to allow for both continuous variation in fighting ability and costs that are greater for more closely matched opponents (as in §3), was motivated by earlier work showing that eavesdropping actually increases the frequency of mutually aggressive contests (2). But that conclusion was predicated on zero variance of strength. To obtain a tractable model with non-zero variance, we had to
Figure 2: The evolutionarily stable aggression threshold under eavesdropping (V*, solid curve) as a function of variance σ² for various values of the dominance advantage parameter b when strength has a symmetric Beta distribution on [0, 1] and the cost of a fight between animals whose strengths differ by Δs is 1 − |Δs|^0.2 for Δs ∈ [−1, 1]. In each case, the corresponding basic threshold (v*, dashed curve) is also shown.
make several simplifying assumptions. In particular, we assumed that fights are always won by the stronger animal (the limit of §3 as the reliability parameter r → ∞). Furthermore, we first determined a basic aggression threshold for animals that do not eavesdrop, and then considered eavesdropping only among animals whose strengths at least equal that basic threshold. Thus the question becomes whether eavesdropping raises the threshold. We found that it always does, suggesting that eavesdropping reduces rather than increases aggressive behavior in Hawk-Dove games. Typical results are shown in Fig. 2, where the parameter b has the same meaning as in §2. Details appear in (7).
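The cost function quoted in the caption of Fig. 2 is easy to state in code; the one-liner below reproduces it and makes the "closely matched opponents pay most" property explicit.

```python
def fight_cost(ds):
    """Cost of a fight between animals whose strengths differ by ds, as in
    the Fig. 2 caption: 1 - |ds|**0.2 for ds in [-1, 1].  The cost is
    maximal (1) for evenly matched opponents and zero at maximal mismatch."""
    return 1 - abs(ds) ** 0.2
```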
5
Conclusion
We have shown how to obtain insights on animal network phenomena by studying them in their simplest possible setting, namely, a triad. Our analysis of victory displays (Type I, strategies as intensities) has confirmed that such behavior can occur either as an advertisement to bystanders or to browbeat a current opponent. Our analyses of coalition formation and eavesdropping (Type II, strategies as thresholds) have helped elucidate the fundamental conditions under which coalitions will form, and indicate for the first time that eavesdropping acts to reduce the frequency of escalated fighting in Hawk-Dove models. We hope that our analyses of these triadic interactions serve as important benchmarks for understanding analogous phenomena in larger networks.
6
Acknowledgments
This research was supported by National Science Foundation award DMS0421827 to MM-G and an NSERC Discovery Grant to TNS.
Bibliography
[1] BOWER, J. L., "The occurrence and function of victory displays within communication networks", Animal Communication Networks (P. McGregor, ed.), Cambridge University Press, Cambridge (2005), pp. 114-126.
[2] JOHNSTONE, R. A., "Eavesdropping and animal conflict", Proceedings of the National Academy of Sciences USA 98 (2001), 9177-9180.
[3] LORENZ, K. Z., "The triumph ceremony of the greylag goose, Anser anser L.", Philosophical Transactions of the Royal Society of London B 251 (1966), 477-478.
[4] MAYNARD SMITH, J., Evolution and the Theory of Games, Cambridge University Press, Cambridge (1982).
[5] MESTERTON-GIBBONS, M., and T. N. SHERRATT, "Victory displays: a game-theoretic analysis", Behavioral Ecology 17 (2006), 597-605.
[6] MESTERTON-GIBBONS, M., and T. N. SHERRATT, "Coalition formation: a game-theoretic analysis", Behavioral Ecology 18 (2007), 277-286.
[7] MESTERTON-GIBBONS, M., and T. N. SHERRATT, "Social eavesdropping: a game-theoretic analysis", Bulletin of Mathematical Biology 69 (2007), 1255-1276.
[8] RUTTE, C., M. TABORSKY, and M. W. G. BRINKHOF, "What sets the odds of winning and losing?", Trends in Ecology and Evolution 21 (2006), 16-21.
Chapter 12
Endogenous Cooperation Network Formation
S. Angus
Department of Economics, Monash University, Melbourne, Australia. [email protected].
This paper employs insights from the Complex Systems literature to develop a computational model of endogenous strategic network formation. Artificial Adaptive Agents (AAAs), implemented as finite state automata, play a modified two-player Iterated Prisoner's Dilemma game with an option to further develop the interaction space as part of their strategy. Several insights result from this relatively minor modification: first, I find that network formation is a necessary condition for cooperation to be sustainable, and that the frequency of interaction and the degree to which edge formation impacts agent mixing are both necessary conditions for cooperative networks. Second, within the FSA-modified IPD framework, a rich ecology of agents and network topologies is observed, with consequent payoff symmetry and network 'purity' seen to be further contributors to robust cooperative networks. Third, the dynamics of the strategic system under network formation show that the initially simple dynamics at small interaction lengths between agents give way to complex, aperiodic dynamics when interaction lengths are increased by a single step.
1
Introduction
The strategic literature has seen a long-standing interest in the nature of cooperation, with many contributions considering the simple but insightful two-player Prisoner's Dilemma (PD) game. Traditionally, such games were analysed under a uniform interaction specification such that agents met equiprobably to play a single (or repeated) two-player game. More recently, however, authors have relaxed this condition, and have analysed strategic games of cooperation and coordination under both non-uniform interaction and non-uniform learning
environments [3, 1]. The topological significance of the interaction space has been stressed by these authors, as it appears to influence the degree to which cooperation can be sustained. In the present work, constraints concerning agent rationality and rigid agent interactions are relaxed within a fundamentally agent-based modelling framework. Moreover, in contrast to one related approach in the literature [5], agents are given strategic abilities to change the interaction space themselves (i.e. to change interaction probabilities) during pair-wise game-play. It is in this sense that a 'network' arises in the model, and hence, such a network is said to be a truly endogenous feature of the modelling framework; a feature which to this author's knowledge has not been previously handled with boundedly rational adaptive agents. The key insights of the present work can be summarised as follows: first, an analytical treatment without network formation reveals that the modification to the standard iterated PD (IPD) framework introduced below does not change the canonical behaviour of the system; second, that when network formation is afforded, stable cooperation networks are observed, but only if both a type-selection and an enhanced 'activity' benefit of the network is present; third, that the extended system under certain interaction lengths is inherently self-defeating, with both cooperation and defection networks transiently observed in a long-run specification; and fourth, that the network formation process displays self-organized criticality and thus appears to drive the complex dynamics observed in the long run.
2
The Model
Let N = {1, . . . , n} be a constant population of agents and denote by i and j two representative members of the population. Initially, members of the population are uniformly paired to play the modified IPD game G described below. When two agents are paired together, they are said to have an interaction. Within an interaction, agents play the IPD for up to a maximum of τ iterations, receiving a payoff equal to the sum of the individual payoffs they receive in each iteration of the IPD. An interaction ends prematurely if either player plays a 'signal', thus unilaterally stopping the interaction. A strategy s for a player describes a complete plan of action for play within an interaction, to be explained presently. In addition to the normal moves of cooperate (C) and defect (D), an agent can also play one of two signal actions, #s and #w respectively. Thus, in any one iteration of the IPD, the action set for an agent is {C, D, #s, #w}. As mentioned above, the playing of a signal by either player leads to the interaction stopping, possibly prior to τ iterations being reached. The playing of a signal can thus serve as an exit move for a player. The interpretation of the two types of signal is as follows. Although initial pairing probabilities between all players are uniform random, agents can influence these interaction probabilities through the use of the signals. Formally, let
some agent i maintain a preference vector

Γ_i = (Γ_i^j)_{j ∈ N/{i}},    (1.1)

where Γ_i^j is the preference status of agent i towards agent j, and p_s > p_0 > p_w are natural numbers denoting the strengthen, untried and weaken preferences respectively. Initially all entries are set to p_0 for all j ∈ N/{i}. A probability vector r_i for each agent is constructed from the preference vector by simple normalisation onto the real line,

r_i^j = Γ_i^j / Σ_{k ∈ N/{i}} Γ_i^k   for j ∈ N/{i},    (1.2)
such that each opponent occupies a finite, nonzero length on the line [0, 1] with arbitrary ordering. Since we study here a model of mutual network/trust formation, preferences can be strengthened only by mutual agreement. Specifically, if agents i and j are paired to play the IPD, then when the interaction ends in iteration t ≤ τ,
Γ_i^j = Γ_j^i = { p_s  if s_t^i = s_t^j = #s,
                { p_w  otherwise,
    (1.3)

where s_t^i denotes the play of agent i in iteration t. That is, in all cases other than mutually coordinated agreement, the two agents will lower their relative likelihood of being paired again (though the playing of #w might cause the interaction to end prematurely with the same result). Payoffs for each iteration of the PD are given by (1.4) below.
         #s       C        D        #w
 #s    (0, 0)   (0, 0)   (0, 0)   (0, 0)
 C     (0, 0)   (3, 3)   (0, 5)   (0, 0)
 D     (0, 0)   (5, 0)   (1, 1)   (0, 0)
 #w    (0, 0)   (0, 0)   (0, 0)   (0, 0)        (1.4)
The playing of signals is costly: the instantaneous cost for that period is the foregone payoff from a successful iteration of the IPD.
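One iteration of the modified game can be sketched directly from (1.4) and (1.3). In the Python fragment below, the payoff table and the mutual-agreement update follow the text; the numeric values chosen for p_s and p_w are illustrative placeholders.

```python
# Per-iteration payoffs from (1.4): any signal play yields (0, 0);
# otherwise the standard PD payoffs apply.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def payoff(a, b):
    """Payoff pair for one iteration; '#s' and '#w' are the two signals."""
    if a.startswith("#") or b.startswith("#"):
        return (0, 0)
    return PD[(a, b)]

def updated_preference(a, b, p_s=4, p_w=1):
    """Mutual-agreement rule (1.3): preferences strengthen only if both
    agents played #s in the final iteration; otherwise they weaken."""
    return p_s if a == b == "#s" else p_w
```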
2.1
Game Play
In a period each agent is addressed once in uniformly random order to undergo m interactions with players drawn from the rest of the population (N/{i}). An agent is paired randomly in accordance with its interaction probability vector r_i, with replacement after each interaction. Preference and probability vectors are updated after every interaction. Thus, it is possible that, having previously interacted with all agents, an agent retains only one preferred agent, whilst all others are non-preferred, causing a high proportion (if not all) of its m interactions to be conducted with its preferred partner. However, it is to be noted that the value of m is only a minimum number of interactions for an agent in one period, since it will also be on the 'receiving end' of other agents' interactions in the same period. In this way, agents who incur an immediate cost of tie strengthening (foregoing iteration payoffs) can gain a long-term benefit through further preferential interactions. At the end of T periods, the population undergoes selection. A fraction B of the population is retained (the 'elites'), whilst the remainder (1 − B) are replaced by new agents as described below. Selection is based on a ranking by total agent payoffs over the whole period. Where two agents have the same total payoff in a period, the older player remains.¹
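The pairing mechanism, normalising a preference vector into interaction probabilities as in (1.2) and then drawing partners with replacement, can be sketched as follows (Python; the opponent labels and the particular values for p_s, p_0 and p_w are hypothetical).

```python
import random

P_S, P_0, P_W = 4, 2, 1    # illustrative natural numbers with p_s > p_0 > p_w

def interaction_probs(prefs):
    """Normalise agent i's preference vector (1.1) into probabilities (1.2);
    prefs maps each opponent j to its preference status."""
    total = sum(prefs.values())
    return {j: g / total for j, g in prefs.items()}

def draw_opponent(prefs, rng):
    """Pair with an opponent in proportion to the normalised preferences."""
    probs = interaction_probs(prefs)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

prefs = {"j1": P_S, "j2": P_0, "j3": P_W}   # hypothetical opponents of agent i
probs = interaction_probs(prefs)
opponent = draw_opponent(prefs, random.Random(0))
```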
2.2
Agent Modeling
Each agent is modeled as a k-state (maximum) FSA. Since signals (#x) have only one public interpretation, each state must include three transition responses: R(C), R(D) and R(#) (or just two in the case of unilateral stopping).² After each period, a fraction B will stay in the population, with the remaining places being filled by new entrants. Here, the process of imitation and innovation/mistake-making is implemented via two foundational processes from the genetic algorithm (GA) literature. Initially, two agents are randomly selected (with replacement) from the elite population. A one-point crossover operator is applied to each agent, and two new agents are formed. The strategy encodings (bit-strings) of these new agents then undergo point mutations at a pre-determined rate (5 bits per 1000). This process (random selection, crossover and mutation) continues until all the remaining spots are filled.
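The replacement process, one-point crossover followed by point mutation at 5 bits per 1000, can be sketched as below (Python; the 8-bit strings are placeholders for the actual strategy encodings).

```python
import random

def one_point_crossover(a, b, rng):
    """One-point crossover on two equal-length strategy bit-strings."""
    cut = rng.randrange(1, len(a))          # cut point strictly inside
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(bits, rate=5 / 1000, rng=None):
    """Flip each bit independently at the stated rate (5 bits per 1000)."""
    rng = rng or random.Random()
    return "".join(b if rng.random() >= rate else "10"[int(b)] for b in bits)

rng = random.Random(42)
child1, child2 = one_point_crossover("11110000", "00001111", rng)
mutated = mutate(child1, rng=rng)
```

Crossover conserves the combined multiset of parental bits across the two children; mutation then injects innovation at the stated rate.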
3
Results & Discussion
3.1
Uniform interactions
To begin, we study a static uniform interaction space to check for any unwanted outcomes due to the modified IPD set-up. In this situation, rather than agents updating their preference vector after each interaction, the preference vector is uniform and unchanged throughout the model. In this way, the effect of the modification to the standard IPD framework can be analysed. Under such a scenario, the action set for each agent reduces to {C, D, #}, since the signal action # has no interaction-space interpretation but still provides a means of prematurely ending the interaction (thus we may drop the subscript). To keep matters simple, we consider a model in which the maximum interaction length τ = 2, which yields a maximum FSA state count of k = 3. Under these conditions, a strategy will be composed simply of a first play, and response plays to C and D.
¹ Following SSA [5].
² To facilitate the computational modeling of this environment, agent strategies were encoded into binary format. See [4] for an analogous description of this method for FSA.
In this setting, no evolutionarily stable strategy will include # as a first play, since the payoff for such a strategy with any other agent is 0.³ This leaves strategies in the form of a triplet, S : {P1, R(C), R(D)}, where P1 ∈ {C, D} and R(·) ∈ {C, D, #} indicates the subsequent play in response to either a C or a D play by the opponent. In all, 18 unique strategies can be constructed. It is instructive to consider whether cooperative strategies might be evolutionarily stable in this scenario. Clearly, a strategy SC : {C, C, C} will yield strictly worse payoffs than the strategy SD : {D, D, D} in a mixed environment of the two. However, it can be shown⁴ that the strategy SA : {C, C, #} is uniquely evolutionarily stable in an environment of SD only. However, SA is itself susceptible to attack by a 'mimic' agent such as SB = {C, D, D}, which itself will yield to the familiar SD. In this way, even with the added facility of being able to end the interaction prematurely, the only evolutionarily stable strategy with respect to the full strategy space is SD. Any intermediate resting place for the community will soon falter and move to this end.
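The claim that defection dominates the 18 triplet strategies can be checked mechanically. The sketch below (Python; an assumed re-implementation of the τ = 2 interaction with the payoffs of (1.4), not the author's code) enumerates all 18 strategies and confirms that no strategy earns more against SD than SD earns against itself.

```python
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def step(a, b):
    # Any signal play yields (0, 0), as in (1.4).
    return (0, 0) if "#" in (a, b) else PD[(a, b)]

def interact(s1, s2):
    """Total payoffs over a tau = 2 interaction between triplet strategies
    (P1, R(C), R(D)); a '#' response ends the interaction with zero payoff."""
    p1, p2 = step(s1[0], s2[0])
    r1 = s1[1] if s2[0] == "C" else s1[2]   # respond to opponent's first play
    r2 = s2[1] if s1[0] == "C" else s2[2]
    q1, q2 = step(r1, r2)
    return p1 + q1, p2 + q2

strategies = [(f, rc, rd) for f in "CD" for rc in "CD#" for rd in "CD#"]
SD = ("D", "D", "D")
best_vs_SD = max(interact(s, SD)[0] for s in strategies)
```

Here best_vs_SD equals interact(SD, SD)[0], so no strategy strictly invades SD, while the exit strategy SA = {C, C, #} caps its loss against SD at a single sucker payoff.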
3.2
Uniform Interactions: Computational Results
Computational experiments were run under uniform mixing as described above as a method of model validation. As predicted, the model showed the clear dominance of SD under uniform mixing. Additionally, the initial 'shake-out' periods (t < 30) gave rise to interesting wave-like strategic jostling. Agents playing cooperation first and replying to D with # were the first to have an early, if short-lived, peak, which is not unexpected, since playing the signal is not the best response to any subsequent play. Thereafter TFT-nice (C) peaked, but was soon overcome by the turn-coat type (which dominates TFT-nice). However, as the stock of C players diminished, 'turn-coat' too yielded to the D-responding strategies (such as TFT-nasty (D)). We may conclude, then, that the presence of the signal play (#) does little to affect strategic outcomes in the standard IPD set-up; defection still reigns supreme in the uniform IPD environment. The impact of network formation decisions by agents was parameterised in the computational experiments as follows,
p_w = (1 − η)²    (1.5)
p_s = (1 + η)²    (1.6)
where η ∈ [0, 1). The choice of expression is somewhat arbitrary; however, the current specification retains symmetry about p_0 = 1 for all values of η and, by
³ The interaction would end after the first iteration, and g(#x | y) = 0 for all x ∈ {s, w} and y ∈ {C, D, #s, #w}.
⁴ Proofs available from the author on request.
taking the squared deviation from 1, the ratio p_s/p_w could be easily varied over a wide range. To determine what conditions are favourable for network formation, a second computational experiment was conducted, this time 'turning up' the interaction-space impact of any signalling play by the agents. Specifically, the network tuning parameter η was varied in the range [0.2, 0.95] together with the minimum interaction parameter m over [2, 20]. It was found that necessary conditions for sustainable network formation were η ≥ 0.8 and m ≥ 10. In terms of the population, these accord with a ratio of p_s to p_w (by (1.5), (1.6)) of around 80 times,⁵ and a minimum fraction of interactions per period of around 10% of the population. Further, the fraction of mutual cooperative plays (of all PD plays) moved in a highly correlated way with degree. It would appear, therefore, that network formation in this model is due to agents who play C first and whose response to C is #s.⁶ A closer look at the dynamics of prevalent strategies under network-forming conditions confirms this conclusion.
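The quoted ratio of around 80 follows directly from (1.5) and (1.6) at η = 0.8:

```python
def p_w(eta):
    """Weaken preference, (1 - eta)**2, from (1.5)."""
    return (1 - eta) ** 2

def p_s(eta):
    """Strengthen preference, (1 + eta)**2, from (1.6)."""
    return (1 + eta) ** 2

ratio = p_s(0.8) / p_w(0.8)   # (1.8 ** 2) / (0.2 ** 2) = 81
```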
Figure 1: Example network dynamics (m = 20, η = 0.8) at the end of periods (a) 10, (b) 12, (c) 13 and (d) 16: network state at end of indicated period; agent ID shown next to each node; agent coloring distinguishes robust cooperative, robust defection, opportunist and 'sucker' types (see text for explanation).
To better understand these dynamics, a series of network snapshots for one representative network-formation trial under the above conditions is shown in Fig. 1. Here, at least four distinct phases are discernible. The first phase, amorphous connected, saw the existence of many sucker types leading to a super-network with high average degree. Second, the segregated connected phase saw the network remain super-connected, but clear segregation began to occur, such that agent-to-agent edges became highly assortative. Third, the segregated disjoint phase was characterised by the sucker type disappearing, leading to a 'shake-out' in the population: the over-supply of opportunist types is rectified, with only those able to integrate with the defective community able to survive. Finally, a homogeneous connected phase ensued: edges became highly dense, approaching a complete component graph due to high payoffs to intra-network community edges. The defective community disappears, with no possibility of infiltration into the cooperative community.
⁵ That is, an agent is 80 times more likely to interact with a preferred agent than with a disliked agent in a given period, based on a two-agent comparison.
⁶ Recall, agents are free to form networks with any kind of behavioural basis.
3.3
Multiple Equilibria & the Long Run
In the previous section, conditions were identified in which stable networks were formed under a parsimonious agent specification (τ = 2, implying k = 3) to enable correlation with established results in the analytic literature. Here, this constraint is relaxed and instead agent interactions of up to four iterations of the IPD game (τ = 4) are considered and their long-run dynamics studied. Recall, by increasing the length of the IPD game, the maximal FSA state count increases markedly: for τ = {3, 4} the maximum state count k = {7, 15}. Previous conditions were retained, with η = 0.8 and m = 20, and each trial allowed to run for 1000 periods. Since a full description of the state is not feasible,⁷ we consider an aggregate description of two fundamental state characteristics: f(C, C), the fraction of plays in a period where mutual cooperation is observed (strategic behaviour); and ⟨d⟩, mean agent degree (network formation). Results are presented for five long-run trials in Fig. 2. Under low interaction length the system moves within 100 steps to one of two stable equilibria: either a stable cooperation network is formed (as was studied in the previous section) or no network arises and a stable defection population sets in. However, as the interaction length increases (and so the associated complexity of behaviour that each agent can display), the dynamics become increasingly erratic, with multiple, apparently stable, equilibria visible in each case, but transient transitions between these equilibria observed. This situation is synonymous with complex system dynamics.
(a) k = 2, ⟨d⟩; (b) k = 2, f(C, C); (c) k = 3, ⟨d⟩; (d) k = 3, f(C, C); (e) k = 4, ⟨d⟩; (f) k = 4, f(C, C)
Figure 2: Long-run system dynamics under different maximum interaction lengths indicating increasing complexity; five trials shown at each value of k (data smoothed over 20 steps).
Surprisingly, such complex dynamics arise in a relatively simple model of network formation. Recall that the longest that any of the agent interactions could be in these studies was just two, three or four iterations of the modified Prisoner's Dilemma. To be very sure that such dynamics are not a consequence of the encoding of the automata themselves, an identical study was run with τ = 4, but setting η = 0 such that all interactions would continue to be of
⁷ Consider that in each time period, a population constitutes n × |s| bits, where |s| is the length of a string needed to represent each agent's strategy, and the network n(n − 1)/2 bits; taken together, this gives rise to 2^(n(n−1)/2 + n|s|) possible states, which even for τ = 2 is astronomically large. (It is possible to reduce this number by conducting automata autopsies, but the problem remains.)
uniform probabilities. However, in all cases, the system moved to a zero-cooperation regime within the first 100 periods and remained there. We conclude that endogeneity of network formation is driving the complex dynamics observed above.
4 Conclusions
In contrast to previous attempts to capture the dynamics of strategic network formation (e.g. [5]), the present model provides a relatively simple foundation, but a powerfully rich behavioural and topological environment within which to study the dynamics of strategic network formation. Analytical and subsequent computational components of the present paper indicate that in this simple modified IPD set-up, cooperation is not sustainable without the additional benefits conferred by the type-selection and type-protection network externalities. Furthermore, even with parsimonious descriptions of boundedly-rational agent strategies, complex dynamics are observed in this model, with multiple and transient stationary locations a feature of the state space.[8] These dynamics increase in complexity with increasing agent 'intelligence'.
Bibliography

[1] Elgazzar, A. S., "A model for the evolution of economic systems in social networks", Physica A: Statistical Mechanics and its Applications 303, 3-4 (2002), 543-551.

[2] Lindgren, Kristian, "Evolutionary phenomena in simple dynamics", Artificial Life II (C. G. Langton, C. Taylor, J. D. Farmer, and S. Rasmussen, eds.), vol. X of Santa Fe Institute Studies in the Science of Complexity, Santa Fe Institute, Addison-Wesley (1992); Proceedings of the Workshop on Artificial Life held February 1990 in Santa Fe, New Mexico.

[3] Masuda, Naoki, and Kazuyuki Aihara, "Spatial prisoner's dilemma optimally played in small-world networks", Physics Letters A 313 (2003), 55-61.

[4] Miller, John H., Carter T. Butts, and David Rode, "Communication and cooperation", Journal of Economic Behavior & Organization 47 (2002), 179-195.

[5] Smucker, M. D., E. A. Stanley, and D. Ashlock, "Analyzing social network structures in the iterated prisoner's dilemma with choice and refusal", Technical Report CS-TR-94-1259, University of Wisconsin-Madison, Department of Computer Sciences, 1210 West Dayton Street, Madison, WI (1994).

[8] Compare Lindgren's classic paper with similar conclusions [2].
Chapter 13
Mathematical model of conflict and cooperation with non-annihilating multi-opponent

Khan Md. Mahbubush Salam
Department of Information Management Science, The University of Electro-Communications, Tokyo, Japan
[email protected]

Kazuyuki Ikko Takahashi
Department of Political Science, Meiji University, Tokyo, Japan
[email protected]
We first introduce our multi-opponent conflict model and consider the associated dynamical system for a finite collection of positions. Opponents have no strategic priority with respect to each other. The conflict interaction among the opponents only produces a certain redistribution of the common area of interests. The limiting distribution of the conflicting areas, as a result of infinite conflict interaction for existence space, is investigated. Next we extend our conflict model and propose a conflict and cooperation model, where some opponents cooperate with each other in the conflict interaction. Here we investigate the evolution of the redistribution of the probabilities with respect to the conflict and cooperation composition, and determine invariant states by means of computer experiment.
1 Introduction
Decades of research on social conflict have contributed to our understanding of a variety of key social and community-based aspects of conflict escalation. However, the field has yet to put forth a formal theoretical model that links these components to the basic underlying mechanisms. This paper presents such models: dynamical-systems models of conflict and cooperation. We propose that it is particularly useful to conceptualize ongoing conflict in these terms. In biology and social science, conflict theory states that a society or organization functions in such a way that each individual participant and its groups struggle to maximize their benefits, which inevitably contributes to social change, such as changes in politics and revolutions. This struggle generates conflict interaction. Usually conflict interaction takes place at the micro level, i.e. in individual interaction, or at the semi-macro level, i.e. in group interaction. These interactions then have an impact at the macro level. Here we would like to highlight the relation between macro-level phenomena and semi-macro-level dynamics. We construct a framework for a conflict and cooperation model by using group dynamics. First we introduce a conflict composition for multiple opponents and consider the associated dynamical system for a finite collection of positions. Opponents have no strategic priority with respect to each other. The conflict interaction among the opponents only produces a certain redistribution of the common area of interests. We have developed this model based on some recent papers by V. Koshmanenko, which describe a conflict model for two non-annihilating opponents. We show how segregation emerges in society by means of conflict among races. Next we extend our conflict model to a conflict and cooperation model, where some opponents cooperate with each other in the conflict interaction. Here we investigate the evolution of the redistribution of the probabilities with respect to the conflict and cooperation composition, and determine invariant states.
2 Mathematical model of conflict with multi-opponent
In some recent papers V. Koshmanenko (2003, 2004) describes a conflict model for two non-annihilating opponent groups through their group dynamics. But we observe that there are many multi-opponent situations in our social phenomena, where the opponents are in conflict with each other. For example, multiple races (e.g., Black, White, Chinese, Hispanic, etc.), multiple religions (e.g., Islam, Christianity, Hinduism, etc.) and different political opinions exist in society, and because of their differences they have conflicts with each other. Therefore it is very important to construct a conflict model for the multi-opponent situation in order to understand realistic conflict situations in society. In order to give the reader a good understanding of our model, we first explain it for the case of four opponents, denoted by A1, A2, A3 and A4, and four positions. We denote by Ω = {ω1, ω2, ω3, ω4} the set of positions which A1, A2, A3 and A4 try to occupy. Hence ω1, ω2, ω3 and ω4 represent different positions in Ω. In a social scientific interpretation, each ωj, j = 1, 2, 3, 4, represents an area of a big city Ω. Let μ0, ν0, γ0 and η0 denote the probability measures on Ω. We define the probability
that the opponents A1, A2, A3 and A4 occupy the position ωj, j = 1, 2, 3, 4, with probabilities μ0(ωj), ν0(ωj), γ0(ωj) and η0(ωj) respectively. As we are dealing with probability measures, and a priori the opponents are assumed to be non-annihilating, it holds that

∑_{j=1}^{4} μ0(ωj) = 1,  ∑_{j=1}^{4} ν0(ωj) = 1,  ∑_{j=1}^{4} γ0(ωj) = 1,  ∑_{j=1}^{4} η0(ωj) = 1.   (1.1)
Since A1, A2, A3 and A4 are incompatible, this generates a conflicting interaction, and we express this mathematically in the form of a conflict composition. Namely, we define the conflict composition in terms of the conditional probability to occupy, for example, ω1 by each of the opponents. Therefore for the opponent A1 this conditional probability should be proportional to the product

μ0({ω1}) × ν0({ω2, ω3, ω4}) × γ0({ω2, ω3, ω4}) × η0({ω2, ω3, ω4}).   (1.2)

We note that this corresponds to the probability for A1 to occupy ω1 and the probability for A2, A3 and A4 to be absent from that position ω1. Similarly for the opponents A2, A3 and A4 we define the corresponding quantities. As a result, we obtain a redistribution of the conflicting areas. We can repeat the above-described procedure an infinite number of times, which generates a trajectory of the conflicting dynamical system. The limiting distribution of the conflicting areas is investigated. The essence of the conflict is that the opponents A1, A2, A3 and A4 cannot simultaneously occupy a questionable position ωj. Given the initial probability distribution

        ( p11^(0)  p12^(0)  p13^(0)  p14^(0) )
        ( p21^(0)  p22^(0)  p23^(0)  p24^(0) )
        ( p31^(0)  p32^(0)  p33^(0)  p34^(0) )   (1.3)
        ( p41^(0)  p42^(0)  p43^(0)  p44^(0) )
the conflict interaction for each opponent for each position is defined as follows:

p11^(1) := (1/z1^(0)) p11^(0) (1 − p21^(0)) (1 − p31^(0)) (1 − p41^(0)),
p12^(1) := (1/z1^(0)) p12^(0) (1 − p22^(0)) (1 − p32^(0)) (1 − p42^(0)),   (1.4)

and so on, where the normalizing coefficient

z1^(0) = ∑_{j=1}^{4} p1j^(0) (1 − p2j^(0)) (1 − p3j^(0)) (1 − p4j^(0)).   (1.5)
Thus after one conflict the probability distributions change in the following way:

        ω1       ω2       ω3       ω4                  ω1       ω2       ω3       ω4
A1 ( p11^(0)  p12^(0)  p13^(0)  p14^(0) )      A1 ( p11^(1)  p12^(1)  p13^(1)  p14^(1) )
A2 ( p21^(0)  p22^(0)  p23^(0)  p24^(0) )  →   A2 ( p21^(1)  p22^(1)  p23^(1)  p24^(1) )
A3 ( p31^(0)  p32^(0)  p33^(0)  p34^(0) )      A3 ( p31^(1)  p32^(1)  p33^(1)  p34^(1) )
A4 ( p41^(0)  p42^(0)  p43^(0)  p44^(0) )      A4 ( p41^(1)  p42^(1)  p43^(1)  p44^(1) )   (1.6)

Thus, by induction, after the kth conflict the probability distributions change in the following way:

        ω1         ω2         ω3         ω4                  ω1       ω2       ω3       ω4
A1 ( p11^(k−1)  p12^(k−1)  p13^(k−1)  p14^(k−1) )      A1 ( p11^(k)  p12^(k)  p13^(k)  p14^(k) )
A2 ( p21^(k−1)  p22^(k−1)  p23^(k−1)  p24^(k−1) )  →   A2 ( p21^(k)  p22^(k)  p23^(k)  p24^(k) )
A3 ( p31^(k−1)  p32^(k−1)  p33^(k−1)  p34^(k−1) )      A3 ( p31^(k)  p32^(k)  p33^(k)  p34^(k) )
A4 ( p41^(k−1)  p42^(k−1)  p43^(k−1)  p44^(k−1) )      A4 ( p41^(k)  p42^(k)  p43^(k)  p44^(k) )   (1.7)

The general formulation of this model for multiple opponents and multiple positions, and its theorem for the limiting distribution, is given in our recent paper Salam and Takahashi (2006). We also investigated this model using empirical data, but because of the page restriction we cannot include that in this paper.
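The redistribution rule of Eqs. (1.4)-(1.5) is straightforward to iterate numerically. The following sketch is an illustrative implementation (not the authors' code), applied to the initial matrix M(0) used in the computer experiments of Section 2.1:

```python
# Conflict dynamics for four non-annihilating opponents over four positions.
# One step: q_ij = p_ij * prod_{l != i} (1 - p_lj), then each opponent's row
# is renormalized so it remains a probability distribution (Eqs. 1.4-1.5).

def conflict_step(P):
    n = len(P)          # number of opponents
    m = len(P[0])       # number of positions
    Q = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            prod = P[i][j]
            for l in range(n):
                if l != i:
                    prod *= 1.0 - P[l][j]   # others absent from position j
            Q[i][j] = prod
        z = sum(Q[i])                       # normalizing coefficient z_i
        Q[i] = [q / z for q in Q[i]]
    return Q

# Initial distribution M(0) from the computer experiment in Section 2.1.
P = [[0.6, 0.1, 0.2, 0.1],
     [0.2, 0.3, 0.1, 0.4],
     [0.1, 0.4, 0.3, 0.2],
     [0.3, 0.1, 0.1, 0.5]]

for _ in range(20):
    P = conflict_step(P)

# Each opponent ends up concentrated on its own city:
# A1 -> w1, A2 -> w2, A3 -> w3, A4 -> w4 (segregation).
print([max(range(4), key=lambda j: P[i][j]) for i in range(4)])  # [0, 1, 2, 3]
```

Iterating this map reproduces the segregation outcome described in Section 2.1: the probability matrix converges to the identity, each race occupying a distinct city.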
2.1 Computer Experimental Results
In our simulation results M(0) is the initial matrix whose row vectors represent the distribution of each race. There are four races, White, Black, Asian and Hispanic, denoted by A1, A2, A3 and A4 respectively. ω1, ω2, ..., represent the districts of a city. All four races move to occupy these districts; thus the conflict appears. M(∞) gives the convergent, or equilibrium, matrix. There are several graphs in each figure. Each graph shows the trajectory corresponding to one element of the matrix. In each graph the x-axis represents the number of conflicts and the y-axis the probability of occupying that position. In result 1, given below, we observe that opponent A1 has the biggest probability in city ω1 and after 9 interactions it occupies this city. Opponent A2 has a bigger probability of occupying cities ω2 and ω4. But in city ω4 opponent A4 has the biggest probability to occupy; since the opponents are non-annihilating, opponent A2 gathers in ω2 and occupies this city after 9 conflict interactions, and opponent A4 occupies the city ω4 after 9 conflict interactions. Opponent A3 also has a bigger probability of occupying cities ω2 and ω3. As the opponents are non-annihilating and A2 occupies ω2, opponent A3 occupies ω3 after 9 conflict interactions. Thus the races segregate, each into one of the cities. This result shows how segregation appears due to conflict.
Result 1:

            ω1   ω2   ω3   ω4                    ω1  ω2  ω3  ω4
M(0) = A1 ( 0.6  0.1  0.2  0.1 )     M(∞) = A1 ( 1   0   0   0 )
       A2 ( 0.2  0.3  0.1  0.4 )            A2 ( 0   1   0   0 )
       A3 ( 0.1  0.4  0.3  0.2 )            A3 ( 0   0   1   0 )
       A4 ( 0.3  0.1  0.1  0.5 )            A4 ( 0   0   0   1 )

[Figure: a 4 × 4 grid of trajectory plots, one per matrix element; in each plot the x-axis is the number of conflicts and the y-axis the probability of occupying that position. All trajectories settle within roughly 9 conflict interactions.]
3 Mathematical Model of Conflict and Cooperation
Suppose that A1 and A2 cooperate with each other in this conflict interaction. We express this mathematically in the form of a conflict and cooperation composition. Namely, we define the conflict and cooperation composition in terms of the conditional probability to occupy, for example, ω1 by each of the opponents. Therefore for the opponents A1 and A2 this conditional probability should be proportional to the product

[μ0({ω1}) + ν0({ω1}) − μ0({ω1}) × ν0({ω1})] × γ0({ω2, ω3, ω4}) × η0({ω2, ω3, ω4}).   (1.8)

We note that this corresponds to the probability for A1 and A2 to occupy ω1 and the probability for A3 and A4 to be absent from that position ω1. For the opponent A3 this conditional probability should be proportional to the product

γ0({ω1}) × μ0({ω2, ω3, ω4}) × ν0({ω2, ω3, ω4}) × η0({ω2, ω3, ω4}).   (1.9)

Similarly for the opponent A4 we define the corresponding quantities. As a result, we obtain a redistribution of the conflicting areas. We can repeat the above-described procedure an infinite number of times, which generates a trajectory of the conflict and cooperation dynamical system. The limiting distribution of the conflicting areas is investigated by computer experiment. Given the initial probability distribution (1.3), the conflict and cooperation composition for each opponent for each position is defined as follows:

p11^(1) := (1/z1^(0)) [p11^(0) + p21^(0) − p11^(0) p21^(0)] (1 − p31^(0)) (1 − p41^(0)) = p21^(1),
p31^(1) := (1/z3^(0)) p31^(0) (1 − p11^(0)) (1 − p21^(0)) (1 − p41^(0)),
p41^(1) := (1/z4^(0)) p41^(0) (1 − p11^(0)) (1 − p21^(0)) (1 − p31^(0)),   (1.10)

and so on, where the zi^(0) are the normalizing coefficients. Thus after one conflict the probability distributions change as in (1.6), but the quantities differ from those of the previous model; and by induction after the kth conflict the probability distributions change as in (1.7), with the quantities likewise different.
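As with the pure conflict model, this composition is easy to iterate. The sketch below is an illustrative implementation (not the authors' code; the cooperating pair is hard-coded as A1 and A2), started from the same initial matrix M(0) as in Section 2.1:

```python
# Conflict-and-cooperation dynamics: opponents 0 and 1 cooperate, so at each
# position their shared occupation term is p0 + p1 - p0*p1 (Eq. 1.10), and
# both receive the same updated row; opponents 2 and 3 follow the ordinary
# conflict rule. Each row is renormalized to remain a distribution.

def coop_step(P):
    m = len(P[0])
    Q = [[0.0] * m for _ in range(4)]
    for j in range(m):
        shared = P[0][j] + P[1][j] - P[0][j] * P[1][j]
        Q[0][j] = shared * (1 - P[2][j]) * (1 - P[3][j])
        Q[1][j] = Q[0][j]                                  # A2 mirrors A1
        Q[2][j] = P[2][j] * (1 - P[0][j]) * (1 - P[1][j]) * (1 - P[3][j])
        Q[3][j] = P[3][j] * (1 - P[0][j]) * (1 - P[1][j]) * (1 - P[2][j])
    for i in range(4):
        z = sum(Q[i])
        Q[i] = [q / z for q in Q[i]]
    return Q

# Same initial distribution M(0) as in Section 2.1.
P = [[0.6, 0.1, 0.2, 0.1],
     [0.2, 0.3, 0.1, 0.4],
     [0.1, 0.4, 0.3, 0.2],
     [0.3, 0.1, 0.1, 0.5]]

for _ in range(300):
    P = coop_step(P)

# The cooperating pair ends up sharing two cities (w1 and w3) with weight
# 0.5 each, while A3 and A4 occupy w2 and w4 alone, as reported in Sec. 3.1.
for row in P:
    print([round(x, 3) for x in row])
```

Because the shared term p0 + p1 − p0·p1 is concave in the pair's weight, the map equalizes the pair's holdings across the positions it wins, which is why the limit puts weight 0.5 on each of the two occupied cities rather than concentrating on one.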
3.1 Computer Experimental Results
In this computer experimental result opponents A1 and A2 cooperate with each other. We observe that in position ω1 opponent A1 has the biggest probability of occupying the position. As opponents A1 and A2 cooperate with each other, both of them occupy this position after 23 interactions. In position ω3 opponent A3 has the biggest probability to occupy, but as A1 and A2 cooperate with each other, they occupy this position after 23 interactions. Since the opponents are non-annihilating, opponents A3 and A4 occupy the positions ω2 and ω4 respectively.
Result 2:

            ω1   ω2   ω3   ω4                    ω1   ω2   ω3   ω4
M(0) = A1 ( 0.6  0.1  0.2  0.1 )     M(∞) = A1 ( 0.5  0    0.5  0 )
       A2 ( 0.2  0.3  0.1  0.4 )            A2 ( 0.5  0    0.5  0 )
       A3 ( 0.1  0.4  0.3  0.2 )            A3 ( 0    1    0    0 )
       A4 ( 0.3  0.1  0.1  0.5 )            A4 ( 0    0    0    1 )

[Figure: a 4 × 4 grid of trajectory plots, one per matrix element; in each plot the x-axis is the number of conflicts and the y-axis the probability of occupying that position. The cooperating opponents A1 and A2 converge to identical rows within roughly 23 conflict interactions.]
4 Conclusion
Social, biological, and environmental problems are often extremely complex, involving many hard-to-pinpoint variables interacting in hard-to-pinpoint ways, and it is often necessary to make rather severe simplifying assumptions in order to handle such problems; our model can be refined by including more parameters so as to cover a broader range of conflict situations. Our conflict model did not have destructive effects. One way to alter this assumption is to make the population mortality rate grow with conflict efforts. We suspect these changes would dampen the dynamics. We observed that in the multi-opponent conflict model each opponent can occupy only one position, but because of cooperation, two opponents who cooperate with each other can occupy two positions from the same initial distribution. We emphasize that our framework differs from the traditional game-theoretical approach. Game theory makes use of the payoff matrix, reflecting the assumption that the set of outcomes is known. The Nash equilibrium, the main solution concept in analytical game theory, cannot make precise predictions about the outcome of repeated games. Nor can it tell us much about the dynamics by which a population of players moves from one equilibrium to another. These limitations have motivated us to use stochastic dynamics in our conflict model. Our framework also differs from Schelling's segregation model in several respects. In particular, Schelling's results are derived from an extremely small population, and his model is limited to only two race-ethnic groups. Unlike Schelling's model, we do not model individuals' choices; here we consider the group's choice.
Bibliography

[1] Jones, A. J., "Game theory: Mathematical models of conflict", Horwood Publishing (1980).

[2] Koshmanenko, V. D., "Theorem on conflicts for a pair of stochastic vectors", Ukrainian Math. Journal 55, No. 4 (2003), 671-678.

[3] Koshmanenko, V. D., "Theorem of conflicts for a pair of probability measures", Math. Met. Oper. Res. 59 (2003), 303-313.

[4] Schelling, T. C., "Models of Segregation", American Economic Review, Papers and Proc. 59 (1969), 488-493.

[5] Salam, K. M. M., and Takahashi, K., "Mathematical model of conflict with non-annihilating multi-opponent", Journal of Interdisciplinary Mathematics, in press (2006).

[6] Salam, K. M. M., and Takahashi, K., "Segregation through conflict", Journal of Theoretical Politics, submitted.
Chapter 14
Simulation of Pedestrian Agent Crowds, with Crisis

M. Lyell, R. Flo*, M. Mejia-Tellez
Intelligent Automation, Inc.
[email protected]
*Air Force Research Laboratory
1.1. Introduction

Multiple application areas have an interest in pedestrian dynamics. These range from the urban design of public areas to evacuation dynamics to effective product placement within a store. In Hoogendoorn et al. [Hoogendoorn 2002], the multiple abstractions utilized in simulations or calculations involving pedestrian agents include (1) cost models for selected route choice, (2) macroscopic pedestrian operations, and (3) microscopic behavior. A variety of mathematical and computational techniques have been used in studying aspects of pedestrian behavior, including regression models, queuing models that describe pedestrian movement from one node to another, macroscopic models that make use of Boltzmann-like equations, and microscopic approaches. Microscopic approaches include social force models and cellular automata models. The 'social force' models can involve ad hoc analogies to physical forces. For example, a floor may be viewed as having a 'repulsive' or 'attractive' force, depending on the amount of previous pedestrian traffic. Cellular automata models are based on pedestrian walking rules that have been gleaned from observations, such as those developed by Blue and Adler [2000]. Entities that constitute 'pedestrians' have undergone some development. Still [2000] includes additional rules on his cellular automata 'agents' in his modeling of crowd flow in stadiums and concourses. The agents in 'El Botellon' [Rowe and Gomez, 2003] have a 'bottle and conversation' tropism which was inspired by the social phenomenon of
crowds of people "wandering the streets in search of a party". This work modeled city squares as a set of nodes in a graph. Agents on a square acquire a probability of moving to another square; this probability lessens if there are other agents or a bar in the agent's current square. We are interested in agents that are more reflective of 'real people' who are pedestrians in an urban area, with goals that reflect the reason for their presence in the city. In the course of their trip to the urban area, a fire (or some other crisis) occurs. With such agents, there is no single original goal location. While some of the pedestrians want to keep as far away from the fire as possible, others might be weighing their personal business needs against their safety needs. Their different considerations and responses should reflect their personalities, beliefs, and logical assessments. We adopt a software agent approach to the modeling of pedestrian agents in crowds. The pedestrian agents that we have developed incorporate cognitive and locomotive abilities, and have personality and emotion. The locomotive abilities are based on a translation of the Blue and Adler [2000] cellular automata rules into a software agent framework. Our model utilizes the OCC appraisal model, which allows for emotional effects in decision making. Note that our emotion list involves a slight extension from those of the OCC list. Our pedestrian software agent design also includes a belief structure in each agent that is consistent with its personality. The Five Factor personality model [Digman, 1990] provides the framework. These affective features are integrated into the agent's cognitive processes. Such a pedestrian software agent is hybrid in the sense that it also has physical locomotion ability. We note that the coupling of psychological models with individual pedestrian agents in order to investigate crowd behavior is relatively new.
In addition to our work, reported upon here and in Lyell and Becker [2005], we note the work of Pelechano and colleagues [2005]. In their work, they couple previous work on behavior representation and performance moderators with a social force model. We also include police officer agents as part of the simulation framework. These officer agents do not incorporate a psychological/emotional framework. Rather, they encapsulate rules that reflect their training and characteristics; however, they do have the locomotive capabilities consistent with the other pedestrian agent types. The pedestrian software agents are hosted in an agent-based development and execution environment. After an initial prototype effort, we are in the process of developing a simulation framework in which to host pedestrian software agents; this will facilitate studies of pedestrian agent crowds. The paper is organized as follows. Section 2 discusses our earlier work and results from the prototype effort. Section 3 discusses our current effort on the simulation framework.
1.2. Early Work: The Prototype Effort

The focus of the prototype effort was three-fold: (1) incorporate both personality and emotion and locomotion frameworks into a pedestrian agent model, (2) conduct validation studies, and (3) conduct initial crowd simulation experiments. We briefly report on results in this section. Further details are found in Lyell and Becker [2005].
1.2.1. Pedestrian Agents, Personality Caricatures, Goals and Locomotion
For the initial effort, we considered three personality types, two of which were caricatures, for use in pedestrian agents. An extremely fearful (neurotic) personality, an excessively open, extremely curious personality, and an agreeable, social personality were utilized. Both the curious and the fearful (neurotic) personalities were designed to be caricatures. Each of the agent personality types was supported by an emotion set and a goal set. The goal set included higher-level goals, such as "seek safety" or "attend to business goal". Not every personality type had the same goal set; the caricature personalities each had a subset of the possible goals. For example, an extremely fearful pedestrian did not have the "seek safety compassionately" goal as part of its possible goal set. Concrete actions that could be taken in support of a selected goal were (a) attempted movement to a new (calculated, specified) location and (b) message sending to another pedestrian agent (or agents). Actual movement to a new location utilized the walking rules from Blue and Adler [2000] that had been re-cast into an agent framework.
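As an illustration of the kind of lattice locomotion involved, the following is a deliberately simplified sketch of bi-directional cellular automaton walking (a single forward-step rule on a one-dimensional ring; the full Blue and Adler rule set also covers lane changes, gap scanning, and conflict resolution):

```python
import random

# One update of a simplified bi-directional pedestrian CA on a ring lattice.
# Cells hold +1 (eastbound), -1 (westbound), or 0 (empty). Each pedestrian
# steps one cell in its direction if that cell is free; otherwise it waits.
# NOTE: this is an illustrative reduction, not the full Blue-Adler model.

def ca_step(lane):
    n = len(lane)
    new = [0] * n
    for i, v in enumerate(lane):
        if v == 0:
            continue
        dest = (i + v) % n
        if lane[dest] == 0 and new[dest] == 0:
            new[dest] = v      # move forward into the free cell
        else:
            new[i] = v         # blocked: stay put this step
    return new

random.seed(1)
lane = [random.choice([1, -1, 0, 0]) for _ in range(50)]
before = sum(1 for v in lane if v != 0)
for _ in range(100):
    lane = ca_step(lane)
# Pedestrians are conserved: the update neither creates nor destroys agents.
assert sum(1 for v in lane if v != 0) == before
```

Even this reduced rule shows the flavor of the verification check described below: with cognition removed, the walking layer alone should reproduce known CA behavior.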
1.2.2. Verification and Validation Efforts

From the CMU Software Engineering Institute's web site [CMU-SEI 2006], verification asks "did you build the product right" (meet specs) and validation asks "did you build the right product". For the verification effort, we "turned off" the cognition in the pedestrian agents and recovered the walking behavior that was found in the Blue and Adler studies [2000]. One aspect, that of spontaneous lane formation of pedestrians moving in the same direction, is shown in Figure 1. This is an example of emergent behavior. Additionally in the verification effort, we investigated separately the behavior of each of the three agent personality types, and found that they exhibited their expected behavior. Figure 2 shows this for the extremely curious agent type. The validation effort in this early work was provided by a 'reasonableness' check on the results. The next sub-section presents results for one of the initial investigations.
Figure 1: Spontaneous lane formation. Westward (red dots) and eastward (blue dots) moving pedestrians separate into lanes. Not only is there lane formation, the entire westward flow has a sharp boundary with the eastward-moving pedestrian flow. [Panels show the random initial placement and the resulting behavior at lower density (20 agents, 500 squares).]
Figure 2: Extremely curious pedestrian agent behavior: "learning about a fire means viewing a fire". The agents that are found at the goal sites (right-hand side) are those that had traveled past the location before the fire had erupted.
1.2.3. Initial Investigations

We investigated several similar scenarios, each involving different pedestrian agent population mixes and different fire locations. Here, we present the result of one
investigation, shown in Figure 3. For each of these initial investigations, the following characteristics held:

• Fearful Agent
  o If it learns of fire, will seek safety
  o Never pro-actively helpful with 'fire exists' messages
  o Will infrequently respond to direct questions from other agents
• Social/Agreeable Agents
  o Most complex emotional range
  o Proactively helpful: send 'fire exists' messages to nearby agents
• Officer: in all cases, moves towards fire, orbits fire, redirects adjacent pedestrians
  o All agent types obey the police officer directive to leave the area
• City Area Description
  o City grid 10 cells high, 50 cells wide
  o Agents enter left at metro
  o Business goals at upper and lower city edge (right)
  o Fire radius 2, fire appears at SimTime 200
[Figure 3 timeline: distinguished time points 354, 414, and 474 for population mixes of 50% A / 50% F, 10% A / 90% F, and 100% F; with 100% Fearful agents, 16 agents never learn of the fire. Observation: agents benefit from the Agreeable agents' messages on the fire location.]
Figure 3: Results for different population mixes of Agreeable and Fearful type pedestrian agents. The axis represents simulation time, and distinguished time points are shown. The results represent multiple runs for each population mix.
1.3. Framework for Simulation of Pedestrian Agent Crowds

1.3.1. Why a Framework?

Among the drawbacks of the initial effort were:
• goal selection (dependent upon the environmental state, the emotional state of the agent, and on the agent's history) was restricted to a single goal,
• caricatures were used for two of the pedestrian agent personality types,
• the developed pedestrian agents were not 'tune-able',
• the urban geometry was too simple,
• it was difficult to simulate 'excursions' on the primary scenario,
• much of the simulation particulars were hard-coded rather than selectable.
A motivation for the development of a simulation framework that allows the study of pedestrian crowds in an urban area with a crisis situation is to enable simulation studies for which the user/analyst does not have to engage in software development. Multiple agent personality types should be provided for use in simulation variations. The urban area design should be configurable. In our current simulation infrastructure development effort, the goal is to support the user/analyst in devising, executing and analyzing simulations in the domain of pedestrian crowd modeling in an urban environment through the use of simulation framework services. In particular, the user/analyst will be able to:

• develop the geometry of the urban area using a template,
• develop realistic pedestrian agent personalities using templates,
• assign resources to the scenario; the resources include police officer agents,
• construct variations of the scenario for different simulation investigations.

The variations may include: (a) utilization of different resources (objects or police officers), (b) utilization of different pedestrian population mixes, (c) different densities of pedestrians in the city area, and (d) variations in the details of the city area (geometry, buildings, etc.). Of course, all of the simulation variations must be within the scope of "pedestrian agent crowds in crisis, with police officers". We are in the process of developing a software framework for simulating pedestrian agent crowds in an urban area. The control functionality for the crisis situation may be provided by the police officer agents and their interactions. The major framework elements are shown graphically in Figure 4. These include the aforementioned templates as well as the simulation application (simulation engine), which is layered over the Cybele agent platform. Note that there is an open source version of Cybele [CYB 2006].
Rule engine support for pedestrian agent emotional change and goal selection rules (developed using the agent builder template) is also provided.
Figure 4: Major elements of the simulation framework for pedestrian agent crowd studies, including the geometry building template and the agent building template layered over the simulation engine.
1.3.2. Agent Builder Templates for the User/Analyst

One of our challenges has been to develop specific pedestrian agent personalities in such a manner that the personalities are 'tune-able', within reason. Guidelines on the extent to which parameters may be varied are provided by the template. The psychological framework for the 'real personality' agents that underlies the agent builder template had to be developed; details are given in Lyell, Kambe-Gelke and Flo [2006]. The agent builder template presents to the user the aspects of an agent's beliefs, its allowable emotion set and range for each emotion, and its initial emotional status. The emotion elicitors are presented in the context of situations that can occur in this simulation domain. The user has the ability to develop rules for goal selection. Rule development is guided by the template. A rule for goal selection involves a situation, with a given context element, and the presence of emotions specified within a range. Lists of allowable situations, contexts, and emotions are presented to the user for potential selection. The variable elements of the template provide the 'tune-able' range for the particular agent personality. The fixed elements have been developed for each of six pedestrian agent personality types that are offered by the agent builder template: (1) Social, (2) Egocentric, (3) Troublemaker, (4) Complainer, (5) Good and (6) Generic. Each of these types is consistent with the Five Factor personality model; each of their emotions and responses is consistent with the OCC model.
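A goal-selection rule of the kind described, i.e. (situation, context, emotions-within-range) → goal, can be sketched as a simple data structure. All names below (situations, emotions, goals) are illustrative placeholders, not the framework's actual vocabulary:

```python
# A goal-selection rule fires when the current situation and context match
# and every emotion it references lies within the rule's specified range.
# First matching rule wins; a default goal covers the no-match case.

def make_rule(situation, context, emotion_ranges, goal):
    return {"situation": situation, "context": context,
            "emotion_ranges": emotion_ranges, "goal": goal}

def select_goal(rules, situation, context, emotions):
    for r in rules:
        if r["situation"] != situation or r["context"] != context:
            continue
        if all(lo <= emotions.get(name, 0.0) <= hi
               for name, (lo, hi) in r["emotion_ranges"].items()):
            return r["goal"]
    return "default-goal"

# Hypothetical rules for a fearful-leaning pedestrian agent.
rules = [
    make_rule("fire-observed", "near-fire", {"fear": (0.7, 1.0)},
              "seek-safety"),
    make_rule("fire-observed", "near-fire", {"fear": (0.0, 0.7)},
              "assess-business-need"),
]

print(select_goal(rules, "fire-observed", "near-fire", {"fear": 0.9}))
```

The template's role, in this picture, is to constrain which situations, contexts, emotion names, and ranges the user may combine, so that a 'tuned' personality stays within its type's psychological envelope.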
References
Blue, V. and Adler, J., 2000, "Cellular Automata Microsimulation of Bi-Directional Pedestrian Flows", Journal of the Transportation Research Board, Vol. 1678, 135-141.
CYB, Cybele Agent Platform, http://www.opencybele.org/, Retrieved May 22, 2006.
CMU-SEI, http://www.sei.cmu.edu/cmmi/presentations/euro-sepg-tutorial/tsldI23.htm, Retrieved May 22, 2006.
Digman, J., 1990, "Personality Structure: Emergence of the Five Factor Model", Ann. Rev. Psychology, 41, 417-440.
Hoogendoorn, S., Bovy, P., Daamen, W., 2002, "Pedestrian Wayfinding and Dynamics Modeling", in Pedestrian and Evacuation Dynamics, Eds. Schreckenberg, M. and Sharma, S.D., Berlin: Springer-Verlag.
Lyell, M. and Becker, M., 2005, "Simulation of Cognitive Pedestrian Agent Crowds in Crisis Situations", in Proceedings of the 9th World Multiconference on Systemics, Cybernetics, and Informatics.
Lyell, M., Kambe Gelke, G. and Flo, R., 2006, "Developing Pedestrian Agents for Crowd Simulations", Proc. BRIMS 2006 Conference, 301-302.
Ortony, A., Clore, G., Collins, A., 1988, The Cognitive Structure of Emotions, Cambridge: Cambridge University Press.
Pelechano, N., O'Brien, K., Silverman, B., Badler, N., 2005, "Crowd Simulation Incorporating Agent Psychological Models, Roles and Communication", CROWDS 05, First International Workshop on Crowd Simulation, Lausanne, Switzerland.
Rowe, J.E. and Gomez, R., 2003, "El Botellon: Modelling the Movement of Crowds in a City", Journal of Complex Systems, Volume 14, Number 4.
Still, G.K., 2000, Crowd Dynamics, PhD Thesis, Mathematics Department, Warwick University.
Chapter 15
Traffic flow in a spatial network model Michael T. Gastner Santa Fe Institute 1399 Hyde Park Road, Santa Fe, NM 87501 [email protected]
A quantity of practical importance in the design of an infrastructure network is the amount of traffic along different parts of the network. Traffic patterns primarily depend on the users' preference for short paths through the network and spatial constraints for building the necessary connections. Here we study the traffic distribution in a spatial network model which takes both of these considerations into account. Assuming users always travel along the shortest path available, the appropriate measure for traffic flow along the links is a generalization of the usual concept of "edge betweenness". We find that for networks with a minimal total maintenance cost, a small number of connections must handle a disproportionate amount of traffic. However, if users can travel more directly between different points in the network, the maximum traffic can be greatly reduced.
1 Introduction
In the last few years there has been a broad interdisciplinary effort in the analysis and modeling of networked systems such as the world wide web, the Internet, and biological, social, and infrastructure networks [16]. A network in its simplest form is a set of nodes or vertices joined together in pairs by lines or edges. In many examples, such as biochemical networks and citation networks, the vertices exist only in an abstract "network space" without a meaningful geometric interpretation. But in many other cases, such as the Internet, transportation or communication networks, vertices have well-defined positions in literal physical space, such as computers in the Internet, airports in airline networks, or cell phones in wireless communication networks.
The spatial structure of these networks is of great importance for a better understanding of the networks' function and topology. Recently, several authors have proposed network models which depend explicitly on geometric space [1, 3, 6, 7, 9, 12, 13, 15, 17, 18, 19, 20, 21, 22]. In all of these models, nearby vertices are more likely to be connected than vertices far apart. However, the importance of geometry manifests itself not only in the tendency to build short edges, but also in the traffic flow on the network: given a choice between different paths connecting two (not necessarily adjacent) vertices in the network, users will generally prefer the shortest path. With few exceptions [2, 4], the literature on spatial networks has rarely analyzed traffic patterns emerging from the various models. To address this issue, this paper takes a closer look at one particular model [10] and analyzes the distribution of traffic along the edges in the network.
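To make the idea of shortest-path traffic concrete, the following sketch counts, for every vertex pair, the edges used by one shortest path between them. It is a simplification of the generalized edge betweenness studied in the paper: each pair's unit of traffic is assigned to the single path returned by Dijkstra's algorithm rather than split among degenerate shortest paths, and the graph and weights are made up for illustration:

```python
import heapq
from collections import defaultdict

def dijkstra(adj, source):
    """Shortest-path tree from `source`; `adj` maps a vertex to a list of
    (neighbor, edge_length) pairs.  Returns distances and predecessors."""
    dist, pred = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], pred[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, pred

def edge_traffic(adj):
    """Units of traffic per edge when one unit travels along a shortest
    path between every unordered vertex pair (assumes a connected graph)."""
    traffic = defaultdict(int)
    nodes = sorted(adj)
    for i, s in enumerate(nodes):
        _, pred = dijkstra(adj, s)
        for t in nodes[i + 1:]:
            u = t
            while u != s:  # walk back along the shortest-path tree
                traffic[frozenset((pred[u], u))] += 1
                u = pred[u]
    return traffic

# Three vertices on a line: each of the two edges carries the traffic of
# two of the three vertex pairs.
line = {"a": [("b", 1.0)], "b": [("a", 1.0), ("c", 1.0)], "c": [("b", 1.0)]}
print(dict(edge_traffic(line)))
```

For large networks one would use Brandes-style accumulation instead of the quadratic pairwise walk shown here; the sketch favors readability over speed.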
2 A model for optimal spatial networks
Suppose we are given the positions of n vertices, e.g. cities or airports, and we are charged with designing a network connecting these vertices together, e.g. with roads or flights. The efficiency of the network, as we will consider it here, depends on two factors. On the one hand, the smaller the sum of the lengths of all edges, the cheaper the network is to construct and maintain. On the other hand, the shorter the distances through the network, the faster the network can perform its intended function (e.g., transportation of passengers between nodes or distribution of mail or cargo). These two objectives generally oppose each other: a network with few and short connections will not provide many direct links between distant points and, consequently, paths through the network will tend to be circuitous, while a network with a large number of direct links is usually expensive to build and operate. The optimal solution lies somewhere between these extremes. Let us define lij to be the shortest geometric distance between two vertices i and j measured along the edges in the network. If there is no path between i and j, we formally set lij = ∞. Introducing the adjacency matrix A with elements Aij = 1 if there is an edge between i and j and Aij = 0 otherwise, we can write the total length of all edges as T = Σ_{i<j} Aij dij, where dij is the Euclidean distance between i and j.

If β is less than 3, increasing α will drive M negative and generate overconstraint. Furthermore, larger α and β mean more complex parts and a more complex product. If the mechanism is to be exactly constrained, then M = 0 and we can solve for α to yield Equation (3):
α = (3 - 3n) / (n(β - 3)) → 3 / (3 - β) as n gets large        (3)
This expression is based on assuming that the mechanism is planar. If it is spatial, like all those in Table 1, then "3" is replaced by "6" and Eq (1) is called the Kutzbach criterion, but everything else stays the same. Table 2 evaluates Equation (3) for both planar and spatial mechanisms.
β     α planar     α spatial
0     1            1
1     1.5          1.2
2     3            1.5

Table 2. Relationship Between Number of Liaisons Per Part (α) and Number of Joint Freedoms (β) for Exactly Constrained Mechanisms (M = 0).

Table 2 shows that α cannot be very large or else the mechanism will be overconstrained. If a planar mechanism has several two degree-of-freedom joints (pin-slot, for example) then a relatively large number of liaisons per part can be tolerated. But this is rare in typical assemblies. Otherwise, the numbers in this table confirm the data in Figure 1. Most assemblies are exactly constrained or have one operating degree of freedom. Thus β = 0 or β = 1, yielding small values for α, consistent with our data. The Chinese Puzzle is an outlier because it is highly over-constrained according to the Kutzbach criterion. It is possible to assemble only because its joints are deliberately made loose. Nonetheless, the overabundance of constraints is the reason why it has only one assembly sequence, that is, why it is a puzzle.
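The large-n limits in Table 2 follow directly from Equation (3) and its spatial (Kutzbach) analogue, in which "3" is replaced by "6". A short numerical check, with the formulas transcribed from the text:

```python
def alpha_exact(n, beta, dof=3):
    """Liaisons per part for an exactly constrained mechanism (M = 0),
    from Equation (3); dof = 3 planar (Gruebler), dof = 6 spatial
    (Kutzbach)."""
    return (dof - dof * n) / (n * (beta - dof))

def alpha_limit(beta, dof=3):
    """Large-n limit dof / (dof - beta) of the expression above."""
    return dof / (dof - beta)

# Reproduce the entries of Table 2 (large-n values):
for beta in (0, 1, 2):
    print(beta, alpha_limit(beta, 3), round(alpha_limit(beta, 6), 2))
```

The exact expression converges quickly: already for a few dozen parts it is within a few percent of the limit, which is why the table quotes the asymptotic values.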
2.1 Degree Correlation
Why do the mechanical assemblies in Table 1 (and, most likely, many other mechanical assemblies) have negative r? Here we offer a number of promising suggestions. First, from a numerical standpoint, negative r goes with the tendency for highly connected (high ⟨k⟩) nodes (sometimes called hubs) to link to weakly connected ones, and vice versa. This is true of all the assemblies studied so far. Many of the nodes attached to high degree nodes in an assembly are degree-one pendants, the presence of which tends to drive r negative. One contributing factor encouraging positive r is many tightly linked clusters. But such configurations are discouraged by the constraint conditions discussed in the previous section. From a formal network point of view, assemblies typically are hierarchical and similar in structure to trees, although they rarely are pure trees. It is straightforward to show that a balanced binary tree has r = -1/3 asymptotically as the tree grows, and that a balanced binary tree with nearest neighbor cross-linking at each hierarchical level has r = -1/5 asymptotically. Such trees are similar to most distributive systems such as water, electricity, and blood. From a functional/physical point of view, highly connected nodes in a mechanical assembly typically play either high load-carrying or load-distributing roles, or provide common alignment foundations for multiple parts, or both. The frames, pivot pins, and pedal arms of the walker, the frame and front fork of the bicycle, and the cylinder block, cylinder head, and crankshaft of the engine perform important load-carrying and alignment functions in their respective assemblies. Such highly connected parts are generally few in most assemblies, and they provide support for a much larger number of low degree parts. Almost without exception these highly connected nodes do not connect directly to each other.
In high power assemblies like engines, there are always interface parts, such as bearings, shims, seals, and gaskets, between these parts to provide load-sharing, smoothing of surfaces, appropriate materials, prevention of leaks, or other services. Such interface parts are necessary and not gratuitous. In addition, because they are often big, so as to be able to withstand large loads, the high-k parts have extensive surfaces and can act as foundations for other parts that would otherwise have no place to hang on. On the engine, such parts include pipes, hoses, wires, pumps and other accessories, and so on. Several of these must be located accurately on the block with respect to the crank, but many need not be. These are the degree-2 and degree-1 nodes that make up the majority in all the assemblies studied. Summarizing, most assemblies have only a few high-k foundational, load-bearing, or common locating parts, and many other parts mate to them while mating with low probability to each other. Thus even if the few high-k parts mated to each other, the assortativity calculation still would be overwhelmed by many (high k - low k) pairs, yielding negative values for r.
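The claim that a balanced binary tree has r approaching -1/3 can be checked directly with the degree-correlation coefficient (Pearson correlation of the degrees at the two ends of each edge); the tree-building convention below is ours:

```python
def degree_assortativity(edges):
    """Newman-style degree correlation r computed from an edge list:
    the Pearson correlation of the endpoint degrees over all edges.
    Negative r means hubs tend to attach to low-degree nodes."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    m = float(len(edges))
    jk = sum(deg[u] * deg[v] for u, v in edges) / m
    mean = sum(0.5 * (deg[u] + deg[v]) for u, v in edges) / m
    sq = sum(0.5 * (deg[u] ** 2 + deg[v] ** 2) for u, v in edges) / m
    return (jk - mean ** 2) / (sq - mean ** 2)

def balanced_binary_tree(depth):
    """Edge list of a balanced binary tree; node 1 is the root and
    node i's parent is i // 2."""
    return [(i // 2, i) for i in range(2, 2 ** (depth + 1))]

r = degree_assortativity(balanced_binary_tree(12))
print(round(r, 3))  # close to -1/3 for a deep tree
```

The dominant edge types are (internal, leaf) and (internal, internal), i.e. mostly (high k - low k) pairs, which is exactly the mechanism described in the text.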
3.1 Functional Motifs of Assemblies
In many technological networks, the motifs that generate function are closed loops. This is certainly true for both mechanical assemblies and electric/electronic circuits. The V-8 engine's main loops are shown in Figure 2. Some loops are contained entirely within the engine while others (air, output power) close only when other parts of the car
or its environment are included. Note that some of them stay within communities identified by the Girvan-Newman algorithm [Girvan and Newman xix] while others extend beyond or link and coordinate different communities. These loops cannot be drawn by inspecting the graph but require domain knowledge. The clustering coefficient of a network is obtained by counting triangles, that is, by enumerating the shortest possible loops. In general, the operative loops of an assembly are longer than three (typically 6 or 8) [xv] and thus do not contribute to the clustering coefficient. In fact, the conventionally defined clustering coefficient reveals nothing about the presence, number, or length of loops longer than three. Software for finding motifs (see footnote 4) would be helpful here, but only some of the motifs thus found would be functionally significant, and domain knowledge would be needed to identify them. Since positive degree correlation is related to more kinematic constraint while negative degree correlation is related to less kinematic constraint, the degree correlation calculation, when applied to mechanical assemblies, can be thought of as a simplified cousin of the Grübler-Kutzbach criterion, because the former simply counts links and considers them of equal strength, whereas the latter uses more information about both links and nodes and can draw a more nuanced conclusion.
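The point that the clustering coefficient sees only triangles, and therefore misses the 6- or 8-link functional loops typical of assemblies, can be illustrated in a few lines (a minimal sketch using adjacency lists):

```python
def clustering_coefficient(adj):
    """Average local clustering: for each node, the fraction of its
    neighbor pairs that are themselves joined, i.e. triangle counting.
    Nodes with fewer than two neighbors are skipped."""
    total, counted = 0.0, 0
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i, a in enumerate(nbrs)
                      for b in nbrs[i + 1:] if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
        counted += 1
    return total / counted if counted else 0.0

# A 6-cycle, like a typical kinematic loop, contains no triangles at all,
# so its clustering coefficient is exactly zero:
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(clustering_coefficient(cycle6))  # 0.0
```

A network built entirely of long functional loops thus looks "unclustered" to this statistic even though it is highly organized.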
Figure 2. V-8 Engine with Five Main Functional Loops Indicated and Named .
4.1 Conclusions and Observations
Different kinds of systems have different operating motifs, and to understand a system is in some sense to know what those motifs are and how to find them. All of the assemblies analyzed here have a number of hubs. These are obviously important parts but they do not perform any functions by themselves. Instead, the identified functional loops are the main motifs, and they seem to include at least one hub, perhaps more. In systems where large amounts of power are involved, hubs often act as absorbers or
4. http://www.weizmann.ac.il/mcb/UriAlon/groupNetworkMotifSW.html
distributors of that power or of static or dynamic mechanical loads. In other systems, the hubs can act as concentrators or distributors of material flow or information flow. Generally, material, mechanical loads, and power/energy/information all flow in closed loops in technological or energetic systems. All the assemblies analyzed display negative degree correlation. This follows from physical principles, the assembly's structure, or engineering design reasoning.
Acknowledgements
This paper benefited substantially from discussions with Professor Christopher Magee and Dr. David Alderson. The author thanks Professors Dan Braha and Carliss Baldwin for stimulating discussions, Prof. Mark Newman for sharing his software for calculating degree correlation, and Dr. Sergei Maslov for sharing his Matlab routines for network rewiring. Many of the calculations and diagrams in this paper were made using UCINET and Netdraw from Analytic Technologies, Harvard, MA.
i M. E. J. Newman, SIAM Review 45, 167 (2003).
ii R. Albert and A.-L. Barabási, Reviews of Modern Physics 74, 47 (2002).
iii C. R. Myers, Physical Review E 68, 046116 (2003).
iv L. Li, D. Alderson, W. Willinger, and J. Doyle, SIGCOMM '04, Portland OR, (2004).
v R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii, and U. Alon, Science 298, 824 (2002).
vi E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, and A.-L. Barabási, Science 297, 1551 (2002).
vii R. Ferrer i Cancho, C. Janssen, and R. Solé, Physical Review E 64, 046119 (2001).
viii A. Bourjault, "Contribution a une Approche Methodologique de l'Assemblage Automatise: Elaboration Automatique des Sequences Operatoires," Thesis to obtain Grade de Docteur des Sciences Physiques at l'Universite de Franche-Comte, Nov. 1984.
ix O. Bjørke, Computer-Aided Tolerancing (ASME Press, New York, 1989).
x J. Phillips, Freedom in Machinery (Cambridge University Press, Cambridge, 1984 (v1); 1989 (v2)), Vol. 1 and 2.
xi T. N. Whitehead, The Design and Use of Instruments and Accurate Mechanism (Dover Press, 1954).
xii D. Blanding, Exact Constraint Design (ASME Press, New York, 2001).
xiii R. Konkar and M. Cutkosky, ASME Journal of Mechanical Design 117, 589 (1995).
xiv G. Shukla and D. Whitney, IEEE Transactions on Automation Science and Engineering 2 (2), 184 (2005).
xv D. Whitney, Mechanical Assemblies: Their Design, Manufacture, and Role in Product Development (Oxford University Press, New York, 2004).
xvi S. P. Borgatti, M. G. Everett, and L. C. Freeman, "Ucinet for Windows: Software for Social Network Analysis," Harvard, MA: Analytic Technologies, 2002.
xvii M. Van Wie, J. Greer, M. Campbell, R. Stone, and K. Wood, ASME DETC, Pittsburgh, DETC01/DTM-21689, (2001). ASME Press, New York, (2001).
xviii L. Amaral, A. Scala, M. Barthélémy, and H. Stanley, PNAS 97, 11149 (2000).
xix M. Girvan and M. E. J. Newman, PNAS 99, 7821 (2002).
Chapter 18
Complex dynamic behavior on transition in a solid combustion model Jun Yu The University of Vermont [email protected] .edu
Laura K. Gross
The University of Akron
[email protected]
Christopher M. Danforth The University of Vermont chris.danfort [email protected]
Through examples in a free-boundary model of solid combustion, this study concerns nonlinear transition behavior of small disturbances of front propagation and temperature as they evolve in time. This includes complex dynamics of period doubling, quadrupling, and six-folding, and it eventually leads to chaotic oscillations. The mathematical problem is interesting as solutions to the linearized equations are unstable when a bifurcation parameter related to the activation energy passes through a critical value. Therefore, it is crucial to account for the cumulative effect of small nonlinearities to obtain a correct description of the evolution over long times. Both asymptotic and numerical solutions are studied. We show that for special parameters our method with some dominant modes captures the formation of coherent structures. Weakly nonlinear analysis for a general case is difficult because of the complex dynamics of the problem, which lead to chaos. We discuss possible methods to improve our prediction of the solutions in the chaotic case.
1 Introduction
We study the nonuniform dynamics of front propagation in solid combustion: a chemical reaction that converts a solid fuel directly into solid products with no intermediate gas phase formation. For example, in self-propagating high-temperature synthesis (SHS), a flame wave advancing through powdered ingredients leaves high-quality ceramic materials or metallic alloys in its wake. (See, for instance, [7].) The propagation results from the interplay between heat generation and heat diffusion in the medium. A balance exists between the two in some parametric regimes, producing a constant burning rate. In other cases, competition between reaction and diffusion results in a wide variety of nonuniform behaviors, some leading to chaos. In studying the nonlinear transition behavior of small disturbances of front propagation and temperature as they evolve in time, we compare quantitatively the results of weakly nonlinear analysis with direct simulations. We also propose techniques for the accurate simulation of chaotic solutions.
2 Mathematical analysis
We use a version of the sharp-interface model of solid combustion introduced by Matkowsky and Sivashinsky [6]. It includes the heat equation on a semi-infinite domain and a nonlinear kinetic condition imposed on the moving boundary. Specifically, we seek the temperature distribution u(x, t) in one spatial dimension and the interface position Γ(t) = {x | x = f(t)} that satisfy the appropriately non-dimensionalized free-boundary problem
∂u/∂t = ∂²u/∂x²,   x > f(t),   t > 0,        (1.1)

V = G(u|Γ),   t > 0,                          (1.2)

∂u/∂x|Γ = -V,   t > 0.                        (1.3)
Here V is the velocity of the rightward-traveling interface, i.e. V = df/dt. In addition, the temperature satisfies the condition u → 0 as x → ∞; that is, the ambient temperature is normalized to zero at infinity. To model solid combustion, we take the Arrhenius function as the kinetics function G in the non-equilibrium interface condition (1.2) [1, 8]. Then, with appropriate nondimensionalization, the velocity of propagation relates to the interface temperature as

V = exp[ (1/ν) (u - 1) / (σ + (1 - σ)u) ]        (1.4)

at the interface Γ. Here ν is inversely proportional to the activation energy of the exothermic chemical reaction that occurs at the interface, and 0 < σ < 1
is the ambient temperature nondimensionalized by the adiabatic temperature of combustion products. (See [3].) The free-boundary problem admits a traveling-wave solution

u(x, t) = exp(-x + t),   f(t) = t.        (1.5)
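A quick consistency check on kinetics of the Arrhenius type: at the traveling-wave interface temperature u = 1 the exponent vanishes, so V = 1, matching the unit front speed of (1.5). The exact algebraic form used in this sketch is an assumption patterned on the text, not a quotation from [5]:

```python
import math

def arrhenius_velocity(u, nu, sigma):
    """Nondimensionalized Arrhenius-type kinetics for the interface
    condition (1.2); this algebraic form is an assumption of the sketch.
    nu is inversely proportional to the activation energy and sigma is
    the nondimensional ambient temperature, 0 < sigma < 1."""
    return math.exp((u - 1.0) / (nu * (sigma + (1.0 - sigma) * u)))

# At u = 1 the exponent is zero, so V = 1 for any nu and sigma,
# consistent with the traveling wave u = exp(-x + t), f(t) = t.
print(arrhenius_velocity(1.0, 1.0 / 3.0, 0.48))  # 1.0
```

Interface temperatures below the adiabatic value (u < 1) give V < 1, i.e. a slower front, which is the qualitative behavior the model needs.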
It is linearly unstable when ν is less than the critical value ν_c = 1/3. (See, for example, [4, 10].) For the weakly nonlinear analysis, let ε² be a small deviation from the neutrally stable value of ν, namely

ε² = ν_c - ν = 1/3 - ν.        (1.6)
We perturb the basic solution (1.5) by ε times the most linearly unstable mode, evaluated at both the neutrally stable parameter value ν = 1/3 and the corresponding neutrally stable eigenvalue, together with complex-conjugate terms. In the velocity expansion, we also include ε times the constant solution to the linearized problem (although we do not mention it explicitly in the sequel). See [5].
The normal-mode perturbation is modulated by a complex-valued, slowly varying amplitude function A(T), where T = ε²t. The amplitude envelope satisfies the solvability condition

dA/dT = χA + βA²Ā,        (1.7)

where χ and β are complex constants. (See [5] for details.) The evolution equation (1.7) has circular limit cycles in the complex-A plane for all values of the kinetic parameter σ in the interval 0 < σ < 1 (i.e. for all physical values of σ). To find A(T), we integrate the ordinary differential equation (1.7) using a fourth-order Runge-Kutta method.
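A fourth-order Runge-Kutta integration of an equation of the form (1.7) can be sketched as follows; the complex constants χ and β used in the example call are illustrative placeholders, not the values derived in [5]:

```python
def rk4_amplitude(A0, chi, beta, T_end, steps):
    """Integrate dA/dT = chi*A + beta*A^2*conj(A) for a complex amplitude
    A(T) with the classical fourth-order Runge-Kutta method."""
    h = T_end / steps
    f = lambda A: chi * A + beta * A * A * A.conjugate()
    A = complex(A0)
    for _ in range(steps):
        k1 = f(A)
        k2 = f(A + 0.5 * h * k1)
        k3 = f(A + 0.5 * h * k2)
        k4 = f(A + h * k3)
        A += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return A

# With Re(chi) > 0 and Re(beta) < 0 the amplitude saturates on a circular
# limit cycle of radius sqrt(-Re(chi)/Re(beta)) in the complex-A plane.
A = rk4_amplitude(0.1, chi=0.5 + 1.0j, beta=-1.0 + 0.2j, T_end=20.0, steps=2000)
print(abs(A))
```

The circular limit cycle follows from d|A|²/dT = 2 Re(χ)|A|² + 2 Re(β)|A|⁴, whose nonzero fixed point is |A|² = -Re(χ)/Re(β).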
3 Results and discussion
To compare quantitatively the asymptotics with numerics, we first consider ε = 0.1. The value of ν remains at the marginally unstable value ν_c - ε², as in equation (1.6), so ν ≈ 0.323. We show in this section that this choice of ε corresponds to a mix of dynamics as σ varies. Subsequently, we comment on the impact on the front behavior of both decreasing and increasing ε. To start, take σ = 0.48 in the kinetics function (1.4). For the remainder of this paper we take the initial condition A(0) = 0.1, unless otherwise indicated. Figure 1 shows the numerical (solid line) and asymptotic (dashed line) values of front speed perturbation as a function of time t in the interval 0 ≤ t ≤ 60. To find the numerical solution, we used the Crank-Nicolson method to solve the problem in a front-attached coordinate frame, reformulating the boundary condition (1.3) for robustness. (See [5] for details.) As for the asymptotic
Figure 1: Velocity perturbation versus time: comparison between numerical (solid line) and asymptotic (dashed line) for Arrhenius kinetics, σ = 0.48, ε = 0.1, A(0) = 0.1 (ν ≈ ν_c - ε² = 1/3 - (0.1)² = 0.323)

solution, the previous section describes the order-ε perturbation to the traveling-wave solution (1.5). In the figure we have additionally included an order-ε² correction. Figure 1 reveals that from t = 0 to about t = 30, the small front speed perturbation is linearly unstable, and its amplitude grows exponentially in time. As this amplitude becomes large, nonlinearity takes effect. At around t = 30, the front speed perturbation has reached steady oscillation. The asymptotic solution accurately captures the period in both the transient behavior for t = 0 to 30 and the long-time behavior after t = 30. The amplitude and phase differ somewhat. This is an example in which the weakly nonlinear approach describes well the marginally unstable large-time behaviors: a single modulated temporal mode captures the dynamics. To identify additional such regimes systematically, we calculate numerically the velocity perturbation data on the time interval 35 < t < 85, throughout the range of physical values of the kinetics parameter σ (i.e. 0 < σ < 1). Figure 2 summarizes the Fourier transformed velocity data. For each σ value and each frequency, the color indicates the corresponding amplitude, with the red end of the spectrum standing for larger numbers than the violet end. For roughly 0.3 < σ < 0.6, the figure shows the dominance of the lowest-order mode, suggesting the appropriateness of the weakly nonlinear analysis in this range. For other values of σ, a single mode cannot be expected to capture the full dynamics of the solution. In particular, when σ is greater than approximately 0.6, solutions have sharp peaks, even sharper than the numerical solution in Figure 1.
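The Crank-Nicolson scheme mentioned above can be illustrated on the plain heat equation (1.1) posed on a fixed interval with zero boundary values; this sketch omits the front-attached coordinate transformation and the moving-boundary conditions handled in [5]:

```python
import math

def crank_nicolson_heat(u, dx, dt, steps):
    """Crank-Nicolson time stepping for u_t = u_xx with zero Dirichlet
    boundaries.  The implicit tridiagonal system is solved with the
    Thomas algorithm (forward elimination, back substitution)."""
    n = len(u)
    r = dt / (2.0 * dx * dx)
    for _ in range(steps):
        # explicit half of the step on the interior points
        rhs = [0.0] * n
        for i in range(1, n - 1):
            rhs[i] = r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
        # implicit half: -r u[i-1] + (1+2r) u[i] - r u[i+1] = rhs[i]
        a, b, c = -r, 1 + 2 * r, -r
        cp = [0.0] * n
        dp = [0.0] * n
        for i in range(1, n - 1):
            m = b - a * cp[i - 1]
            cp[i] = c / m
            dp[i] = (rhs[i] - a * dp[i - 1]) / m
        for i in range(n - 2, 0, -1):   # u[0] and u[n-1] stay 0
            u[i] = dp[i] - cp[i] * u[i + 1]
    return u

n = 101
dx = math.pi / (n - 1)
u = [math.sin(i * dx) for i in range(n)]       # one sine mode on [0, pi]
u = crank_nicolson_heat(u, dx, dt=0.001, steps=1000)
print(u[50])  # ~ exp(-1): the sine mode decays at unit rate
```

The scheme is second-order in both dx and dt and unconditionally stable, which is why it is a natural choice for the stiff long-time integrations needed here.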
Figure 2 shows that when σ is smaller than approximately 0.3, the Fourier spectrum has a complicated character, starting with the emergence of a period-doubling solution for σ ≈ 0.25. Figure 3 gives a closer look at the dominant modes for the case of small σ; notice the bifurcation to a six-folding solution near σ = 0.201. The four numerical solutions in Figures 4 and 5 illustrate the cascade of period-replicating solutions, including doubling (σ = 0.22), quadrupling (σ = 0.21),
and six-folding (σ = 0.20075). Note that Figure 2 reflects the breakdown of the numerical solution for σ less than approximately 0.15.
Figure 2: Amplitudes corresponding to each frequency of the Fourier transformed velocity perturbation data for the Arrhenius kinetics parameter σ in the interval (0, 1), ε = 0.1, A(0) = 0.1, 35 < t < 85 (ν ≈ ν_c - ε² = 1/3 - (0.1)² = 0.323)
Figure 3: Amplitudes corresponding to each frequency of the Fourier transformed velocity perturbation data for the Arrhenius kinetics parameter σ in the interval (0.19, 0.22), ε = 0.1, A(0) = 0.1, 35 < t < 85 (ν ≈ ν_c - ε² = 1/3 - (0.1)² = 0.323)
The cascade of period-replicating solutions for decreasing σ leads to chaos. Figure 6 (corresponding to σ = 0.185) shows the sensitivity of the velocity perturbation to initial conditions. In the figure, note that from t = 0 to approximately t = 25, the small front speed perturbation is linearly unstable, and its amplitude grows exponentially in time, similar to the profile in Figure 1. As the amplitude becomes large, nonlinearity again comes into play. Still, the curves corresponding to two initial conditions (one with A(0) = 0.1 and the other with A(0) = 0.1000001) remain indistinguishable for a long time. However, as time approaches 100 the two profiles begin to diverge, and as time evolves past 120 they disagree wildly.
Figure 4: Velocity perturbations versus time (ε = 0.1, A(0) = 0.1, ν ≈ ν_c - ε² = 1/3 - (0.1)² = 0.323), clockwise from upper left: periodic solution for σ = 0.48 (cf. Figure 1), period doubling (σ = 0.22), period quadrupling (σ = 0.21), period six-folding (σ = 0.20075)
We propose a couple of techniques to improve model predictions in the chaotic case. One, ensemble forecasting, requires the generation of velocity profiles that correspond to slightly different initial conditions. The degree of agreement among curves in the collection (ensemble) demonstrates the level of reliability of predictions. In the spirit of the jet-stream forecasts in [9], additional data can be provided at the points at which the individual members of the ensemble diverge. For example, Figure 6, which shows an "ensemble" of only two curves, gives a preliminary indication of the need for more data at t = 100. Alternatively, we can more accurately represent solid combustion by using statistical methods to "train" the model. Comparisons with experimental data can reveal systematic and predictable error, as in Figure 7 (courtesy of [2]). The figure, which provides an analogy to the problem under consideration, shows that temperature forecasts near the sea surface off the coast of Japan are typically too warm [2]. That is, the actual temperatures minus the predicted temperatures are negative values, represented as yellow, blue, and violet in the figure. In describing combustion, as in describing sea temperatures, one can compensate methodically for such error. As the bifurcation parameter ν approaches ever closer to the neutrally stable value (i.e. as ε decreases), the complex dynamics, including chaos, disappear. For example, when ε = 0.06, the asymptotic and numerical solutions agree closely throughout the physical range of σ (0 < σ < 1). By contrast, when ε grows to 0.12, the σ interval in which one mode dominates strongly has a length of only 0.01. Varying ε quantifies the domain of applicability of the weakly nonlinear analysis and delineates the role of σ in the dynamics. (See [5].) In summary, linear instability provides a mechanism for transition to nonlinear coherent structures.
Weakly nonlinear analysis allows the asymptotic study of the evolution of small disturbances during this transition, providing insight
Figure 5: Phase plots of the four solutions in Figure 4: velocity perturbation v(t) versus dv/dt
Figure 6: Velocity perturbation versus time: numerical solution for σ = 0.185, ε = 0.1, A(0) = 0.1 and A(0) = 0.1000001 (ν ≈ ν_c - ε² = 1/3 - (0.1)² = 0.323)
into nonlinear dynamics, which can be investigated numerically. We also proposed techniques to improve predictions of solution behavior in the chaotic case. The ensemble method may provide accuracy over long time intervals. Also, given experimental data, statistical procedures can be used to train the model.
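The ensemble idea can be sketched generically: advance several slightly perturbed initial states and monitor their spread over time. The logistic map below is only a stand-in chaotic system, not the combustion model:

```python
def ensemble_spread(step, members, n_steps):
    """Advance an ensemble of scalar states with the map `step` and
    record the spread (max minus min) at every iteration."""
    spreads, states = [], list(members)
    for _ in range(n_steps):
        spreads.append(max(states) - min(states))
        states = [step(s) for s in states]
    return spreads

# Chaotic logistic map as a stand-in: five members starting within 4e-9
# of each other decorrelate, so the spread grows by many orders of
# magnitude; the iteration at which it becomes large flags where more
# data would be needed, as in the jet-stream forecasts of [9].
logistic = lambda x: 4.0 * x * (1.0 - x)
spreads = ensemble_spread(logistic, [0.1 + 1e-9 * i for i in range(5)], 60)
print(spreads[0], spreads[-1])
```

For the combustion problem, `step` would be one time step of the front-attached Crank-Nicolson solver rather than a one-dimensional map.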
Bibliography
[1] BRAILOVSKY, I., and G. SIVASHINSKY, "Chaotic dynamics in solid fuel combustion", Physica D 65 (1993), 191-198.
[2] DANFORTH, C. M., E. KALNAY, and T. MIYOSHI, "Estimating and correcting global weather model error", Monthly Weather Review (2006), in press.
Figure 7: Curves of predicted (constant) near sea-surface temperature, along with colored bands of associated error (courtesy of [2])
[3] FRANKEL, M., V. ROYTBURD, and G. SIVASHINSKY, "A sequence of period doubling and chaotic pulsations in a free boundary problem modeling thermal instabilities", SIAM J. Appl. Math 54 (1994), 1101-1112.
[4] GROSS, L. K., "Weakly nonlinear dynamics of interface propagation", Stud. Appl. Math. 108, 4 (2002), 323-350.
[5] GROSS, L. K., and J. YU, "Weakly nonlinear and numerical analyses of dynamics in a solid combustion model", SIAM J. Appl. Math 65, 5 (2005), 1708-1725.
[6] MATKOWSKY, B. J., and G. I. SIVASHINSKY, "Propagation of a pulsating reaction front in solid fuel combustion", SIAM J. Appl. Math. 35 (1978), 465-478.
[7] MERZHANOV, A. G., "SHS processes: combustion theory and practice", Arch. Combustionis 1 (1981), 23-48.
[8] MUNIR, Z. A., and U. ANSELMI-TAMBURINI, "Self-propagating exothermic reactions: the synthesis of high-temperature materials by combustion", Mat. Sci. Rep. 3 (1989), 277-365.
[9] TOTH, Z., and E. KALNAY, "Ensemble forecasting at NMC: The generation of perturbations", Bulletin of the American Meteorological Society 74, 12 (1993), 2317-2330.
[10] YU, J., and L. K. GROSS, "The onset of linear instabilities in a solid combustion model", Stud. Appl. Math. 107, 1 (2001), 81-101.
Chapter 19
Modeling the Structural Dynamics of Industrial Networks
Ian F. Wilkinson
School of Marketing, University of New South Wales, Australia
[email protected]
James B. Wiley
Faculty of Commerce, Victoria University of Wellington, New Zealand
[email protected]
Aizhong Lin
School of Computing Sciences, University of Technology, Sydney
1. Introduction
Market systems consist of locally interacting agents who continuously pursue advantageous opportunities. Since the time of Adam Smith, a fundamental task of economics has been to understand how market systems develop and to explain their operation. During the intervening years, theory largely has stressed comparative statics analysis. Based on the assumptions of rational, utility- or profit-maximizing agents and negative (diminishing returns) feedback processes, traditional economic analysis seeks to describe the (generally) unique state of an economy corresponding to an initial set of assumptions. The analysis is static in the sense that it does not describe the process by which an economy might get from one state to another.
In recent years, an alternative view has developed. Associated with this view are three major insights. One of these is that market processes are characterized by positive feedback as well as the negative returns stressed in classical economics (e.g. Arthur 1994). The second insight is that market systems may be studied in the framework of complex adaptive system theory. "A 'complex system' is a system consisting of many agents that interact with each other in various ways. Such a system is 'adaptive' if these agents change their actions as a result of the events in the process of interaction" (Vriend, 1995, p. 205). Viewed from the perspective of adaptive systems, market interactions depend in a crucial way on local knowledge of the identity of some potential trading partners. "A market, then, is not a central place where a certain good is exchanged, nor is it the aggregate supply and demand of a good. In general, markets emerge as the result of locally interacting individual agents who are themselves actively pursuing those interactions that are the most advantageous ones, i.e., they are self-organized" (Vriend, 1995, p. 205). How self-organized markets emerge in decentralized economies is a question that formal analysis of such systems seeks to answer. The third insight associated with the alternative view is that there are parallels of economic processes with biological evolution. This insight, in turn, suggests that ideas and tools of biological evolution may fruitfully be applied to the study of economics. Among the promising tools are computer-based algorithms that model the evolution of artificial life. If the tools used to model artificial life may be applied to institutions, industries, or entire economies, then their evolution and performance may be studied using computer simulation.
1.1. Objectives
The aim of our work is to apply the above insights to the study of industrial market systems (IMS's). IMS's consist of interrelated organizations involved in creating and delivering products and services to end-users. The present paper describes computer models which are capable of mimicking the evolutionary process of IMS's, drawing in particular on the NK models developed by Stuart Kauffman (1992, 1995). The modeling effort has two interrelated but distinct purposes. The first is to help us better understand the processes that shape the creation and evolution of firms and networks in IMS's. This will provide a base both for predicting, and perhaps influencing, the evolution of industrial marketing systems. Secondly, the models may be used for optimizing purposes, to help us design better performing market structures. The specific objectives of this research are to examine: the processes by which structure evolves in IMS's; the factors driving these processes; and the conditions under which better performing structures may evolve.
2. Background
It is typical of complex adaptive systems in general, and those that mimic life processes in particular, that order emerges in a bottom-up fashion through the interaction of many
dispersed units acting in parallel. No central controlling agent organizes and directs behaviour. "[T]he actions of each unit depend upon the states and actions of a limited number of other units, and the overall direction of the system is determined by competition and coordination among the units subject to structural constraints. The complexity of the system thus tends to arise more from the interactions among the units than from any complexity inherent in the individual units per se" (Tesfatsion 1997, p. 534). IMS's may be described in terms of four sets of interrelated elements or components, i.e., actors, activities, resources, and ideas or schema (Hakansson and Snehota 1995, Welch and Wilkinson 2000). The actors consist of various types of firms that operate in industrial markets. A "type" of firm (such as wholesaler, drop shipper, manufacturer's agent, rack jobber, broker, and so forth) may be described in terms of the activities that it is capable of performing, as well as in terms of its schema or "theories in use" which underlie the actor's actions and reactions (Gell-Mann 1995). In IMS's, business entities seldom perform all of the necessary activities or functions required for a transaction to take place. Rather, they perform some of them. The firm and other firms with complementary specialization collectively perform the requisite activities for transactions to occur. One way to look at the evolution of firms in market systems is to conceive that they evolve to establish competitive niches much as organisms do in natural environments. That is, firms retain, add, or drop activities and functions as part of an on-going process. The outcome of this process is (or is not) a set which gives the firm a competitive advantage. The resulting interdependence, however, makes the "fitness" of any firm's pattern of specialization dependent on the specialization patterns of other firms in the market system.
Each firm of a given type may be further described in terms of its resources (such as inventories, cost structures, wealth, and the relationships it has established with other actors in the market). Patterns of relationships may be complex, and they will themselves evolve. The pattern of relationships that determines firms' interdependencies also establishes the fitness of a specific pattern of activities and functions. Firms with which a firm has relationships have themselves relationships with other firms. These other firms have relationships with yet other firms and so forth, including the possibility of relationships (perhaps of different types) with the original firm. Relationship patterns may shift and become unstable. For example, the merger of two advertising agencies may result in their having to give up one of two clients in the same industry. The forsaken client may hire a new agency that, in turn, must give up a client in the industry. "Avalanches" of changes may follow in a way analogous to Per Bak's (1996) sand piles. Depending on the nature and degree of connectivity and responsiveness among firms and relationships, changes in one part of the network can bring forth "avalanches" of changes of different sizes (Hartvigsen et al. 2000). Each agency that changes clients loses links to the suppliers and customers of the old client and gains links to the suppliers and customers of the new client. Relationships may be of different types. For example, two merging automobile firms may require management consulting services. The consulting company may be allied with, or
even a subsidiary of, an accounting firm, which in turn gains links to the merged auto firm through its relationship with the consulting firm. The consulting firm in turn may gain links to the merged company's public relations agency. Because of the services provided by the consulting and accounting firms, the auto firm may require the services of a computer service bureau. The service bureau in turn may gain links to the auto firm's investment bank. However, the exclusionary restrictions described in the previous paragraph may result in changes in the consulting, accounting, banking, and service bureau industries. A second objective of this research, and the primary objective of the present paper, is to gain understanding of what drives the formation of relationship patterns and to look at the patterns of stable and unstable relationships that may occur.
3. Modeling
A differentiating characteristic of this research is the way in which relationships are viewed. Typically, descriptions of relations among firms make the implicit assumption that it is organizations that "organize" and direct the flows of activities in IMS's. As Resnick (1998) points out, such centralist thinking tends to be common in business: "People seem to have strong attachments to centralized ways of thinking: they assume that a pattern can exist only if someone (or something) creates and orchestrates the pattern. When we see a flock of birds, they generally assume the bird in front is leading the others - when in fact it is not. And when people design new organizational structures, they tend to impose centralized control even when it is not needed." (p. 27). Recent developments in the science of complex adaptive systems show how structure emerges in bottom-up, self-organizing ways. The observed behaviour of the system is the outcome of independent actions of entities that have imperfect understanding of each other's activities and objectives and who interfere with and/or facilitate each other's activities and outcomes. Structure emerges as a property of the system, rather than in a top-down, managed and directed fashion (Holland 1998). From the ongoing processes of interaction, actors' bonds, resource ties, activity links and mutual understandings emerge and evolve, and these constitute the structure of the IMS's. This structure can be more or less stable. Over time, the structure of an IMS evolves and co-evolves as a result of interaction with other IMS's. We make use of recent developments in computer-based simulation techniques to model the evolution of relationships in IMS's. We adopt this approach for two reasons. First, the complex interacting processes that take place in such systems are beyond the scope of traditional analytical techniques.
Second, the actual IMS's we observe in the real world, no matter how diverse they may be, are only samples of those that could arise. They are the outcomes of particular historical circumstances and accidents. As Langton (1996) observes in a biological context: "We trust implicitly that there are lawful regularities at work in the determination of this set [of realized entities], but it is unlikely that we will discover many of these regularities by restricting ourselves only to the set of biological entities that nature
actually provided us with. Rather, such regularities will be found only by exploring the much larger set of possible biological entities" (p. x). So it is with IMS's. Recent developments in the modeling and simulation of complex adaptive systems suggest ways in which we may explore the range of possible networks that might arise in IMS's. In the next section, we describe the models of IMS's we have developed and are developing based on Kauffman's NK models.
3.1. NK Models
Kauffman (1992, 1995) has developed a way of representing a network of interacting elements in terms of a set of N elements (actors, chemicals, firms or whatever), each of whose behaviors is affected by the behaviour of K other elements. The model is a discrete time simulation model; what exists in time t+1 is determined by what existed in time t. Two versions of the NK model are relevant to our research: NK Boolean Models, used to model relationship interaction; and NK Fitness Landscapes with Patch Formation, used to model the emergence of cooperating groups of entities, such as firms. Only the first has so far been implemented in our research, and it is the version described in the next section of the paper. A description of the approach we are taking using NK Fitness Landscapes follows this.
3.2. NK Boolean Models and Relationship Interaction
An important question is how to characterize network structure in terms of NK models. We chose to use relationships between firms as the unit of analysis. The pattern of relationships at time t+1 is determined by the pattern of relationships at time t and Boolean logic rules for transforming one to the other. Note that by choosing relationships we are effectively operating at a second order level compared with traditional industrial network models. Actors are defined by the relationships they have and not, ab initio, as network entities in their own right. A fuller account of the model is to be found in Wilkinson and Easton (1997). The behavior of relationships is modelled in binary terms; they either exist (i.e., are active) and have the value 1, or they do not exist (i.e., are inactive) and take on the value 0. The behavior of a relationship at time t depends on the behavior of K other relationships (possibly including its own behavior) in the previous period. Boolean operators specify the behavior of a relation in period t+1 for each combination of behaviors of the K connected relations in the previous period. The focal relation is called the output or regulated relation, and the K connected relations affecting its behavior are called input relations. Our contention is that Boolean operators may be constructed that have conventional economic/business interpretations such as complementary supply relations, competing relations, temporally connected relations, and so forth. If this is so, then both the type of relationships and the degree of interconnection K can be modelled, and the effect of both dimensions on the character of IMS attractors simulated.
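As a concrete illustration, the synchronous update rule described above can be sketched in a few lines. This is a minimal sketch, not the authors' own C++ implementation; the names (`step`, `inputs`, `rules`) and the encoding of Boolean operators as truth tables over K-tuples of input states are our own assumptions.

```python
def step(state, inputs, rules):
    """Compute the next-period state of all N relationships at once.

    state  : tuple of 0/1 values, one per relationship
    inputs : inputs[i] is the tuple of K relationship indices feeding i
    rules  : rules[i] maps a K-tuple of input states to i's next state
    """
    return tuple(rules[i][tuple(state[j] for j in inputs[i])]
                 for i in range(len(state)))

# Toy example with N=3, K=2: relation 0 is active next period only if
# both of its input relations (1 and 2) are active now (a complementary-
# supply style AND operator); relations 1 and 2 simply copy relation 0.
AND2 = {(a, b): a & b for a in (0, 1) for b in (0, 1)}
COPY = {(a, b): a for a in (0, 1) for b in (0, 1)}
inputs = [(1, 2), (0, 0), (0, 0)]
rules = [AND2, COPY, COPY]
print(step((1, 1, 1), inputs, rules))  # -> (1, 1, 1)
print(step((0, 1, 1), inputs, rules))  # -> (1, 0, 0)
```

Because all relations compute their next state from the same previous-period pattern, this implements the synchronous, parallel-processing assumption stated below.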
Our model is an autonomous Boolean network because there are no inputs from outside the network itself (although these can be added; see below). For the purposes of our analysis, we assume that the Boolean network is a synchronous, parallel processing network in which all elements compute their behavior in the next period at the same time. Our general methodology for examining the behavior of our NK Boolean models follows that of Kauffman: "To analyze the typical behaviour of Boolean networks with N elements, each receiving K inputs, it is necessary to sample at random from the populations of all such networks, examine their behaviors, and accumulate statistics. Numerical simulations to accomplish this therefore construct exemplars of the ensemble entirely at random. Thus the K inputs to each element are first chosen at random and then fixed, and the Boolean function assigned to each element is also chosen at random and then fixed. The resulting network is a specific member of the ensemble of NK networks" (Kauffman 1992, p. 192).
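Kauffman's sampling methodology quoted above can be sketched as follows. The function names are illustrative. Since the state space is finite (2^N states) and the synchronous update is deterministic, iterating from any starting condition must eventually revisit a state; the repeating segment of the trajectory is the attractor.

```python
import random
from itertools import product

def random_network(N, K, rng):
    """Draw one member of the NK ensemble: the K inputs to each element
    are chosen at random and then fixed, and the Boolean function for
    each element is chosen at random and then fixed."""
    inputs = [tuple(rng.randrange(N) for _ in range(K)) for _ in range(N)]
    rules = [{combo: rng.randint(0, 1)
              for combo in product((0, 1), repeat=K)}
             for _ in range(N)]
    return inputs, rules

def find_attractor(state, inputs, rules):
    """Iterate the synchronous update from `state` until a state repeats;
    return the repeating cycle of states (the attractor)."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = tuple(rules[i][tuple(state[j] for j in inputs[i])]
                      for i in range(len(state)))
    return trajectory[seen[state]:]

# One random exemplar of the N=8, K=3 ensemble and one of its attractors.
rng = random.Random(42)
inputs, rules = random_network(8, 3, rng)
cycle = find_attractor((1, 0, 1, 1, 0, 0, 0, 1), inputs, rules)
print("attractor length:", len(cycle))
```

Accumulating statistics over many such random exemplars, as the quotation describes, is then a matter of repeating this draw-and-iterate loop.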
3.3. The role of K
Kauffman has shown that K, the number of other entities to which an entity is linked, is a critical parameter in determining the pattern of behaviour of the network. For low values of K, say 0 or 1, each entity is essentially an isolated unit. Its state does not influence other elements, and so the network consists of "frozen" relationships that do not change or that regularly switch on and off. For high values of K, the interactions are very complex and destabilizing, resulting in chaotic behaviour. For values of K around 3 to 6, self-organizing patterns emerge within the network that are stable with respect to small perturbations (i.e., random changes in the state of individual relationships do not change the state of the system as a whole). As K is increased, the isolated actors become isolated interacting groups. Gradually, as K increases, more of the elements are joined in the network. Attractors are the sequences of relatively stable states to which the system settles for protracted periods and, as Kauffman (1992) observes, attractors "literally are most of what the systems do" (p. 191). In the present context, attractors are interpreted as relatively stable patterns of relationships that may occur in IMS's. The pattern of relationships observed in an actual industrial market might correspond to the patterns on one such attractor, whereas IMS's are likely to have many possible attractors depending on N, K, the mix of Boolean operators involved and the system starting conditions. The patterns of relationship corresponding to other attractors might correspond to patterns observed in other market systems or, possibly, to feasible patterns that have never been observed in IMS's. System behaviour is also affected by a biasing or canalising factor, reflected in a parameter p. This reflects how sensitive an element is to the behavior of the K other elements to which it is connected.
The value of p depends on the character of the Boolean operator and is measured in terms of the proportion of situations in which an entity will take on the most common (modal) value, 0 or 1. The lowest value for p is 0.5, which occurs when an element will take on the value 1 in the next period in 50% of situations and 0 in the other 50%. If it will take on the value 1 in 90% of situations, p is 0.9. Higher K may lead to order
with higher values of p because p acts as a kind of damping function that "prevents" chaos. The p value of networks constructed using different combinations of Boolean operators will be used to aid our analysis of the behaviour of the network.
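The bias parameter p defined above can be computed directly from a Boolean operator's truth table: it is the fraction of the 2^K input combinations that yield the operator's most common output. A minimal sketch (the function name `bias_p` is ours):

```python
from itertools import product

def bias_p(rule, K):
    """Bias p of a Boolean function given as a truth table over K inputs:
    the proportion of input combinations yielding the modal output."""
    outputs = [rule[combo] for combo in product((0, 1), repeat=K)]
    ones = sum(outputs)
    return max(ones, len(outputs) - ones) / len(outputs)

# AND of K=2 inputs outputs 1 in only 1 of 4 cases, so p = 3/4 = 0.75;
# XOR outputs each value in half the cases, so p = 0.5 (least biased).
AND2 = {(a, b): a & b for a in (0, 1) for b in (0, 1)}
XOR2 = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}
print(bias_p(AND2, 2))  # 0.75
print(bias_p(XOR2, 2))  # 0.5
```

Averaging this quantity over the operators assigned in a network gives the network-level p used to aid the analysis of network behaviour.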
4. An Example of an NK Boolean Model of Network Relationships
To keep things simple, we will consider the network in terms of the N=8 possible output relationships between suppliers and distributors, which can either be active (=1) or inactive (=0) in any period. Figure 1 illustrates the network. The system comprises four suppliers (S0 to S3), and the main competitor of each supplier is the one to its left, e.g., the main competitor of S0 is S1; for S3 the main competitor is S0.

[Figure 1: The network of suppliers S0 to S3, showing output and input relations; not legible in this reproduction.]

[Figure 3: A Four Period Attractor for Starting Condition 10110001.]
5. Next Steps
Further developments of the Boolean model involve introducing lagged operators and interactions between different industrial networks. The latter step leads to larger scale industrial and economic organisation. The extension is straightforward. For example, the behaviour of some relations (or actors) in one industrial network can be made to depend on the behavior of relations (or actors) in another network, as well as on those in their own network. Both the number of networks S and the extent of inter-coupling among networks C may be varied and evolve. Finally, an exogenous environment can introduce noise or other exogenous effects directly on some or all parts of the network(s) (Wiley 1999).
5.1. NK Fitness Landscapes and Patches
Another realization of the NK model is in terms of NK fitness landscapes. Here, an entity's behavior is modeled in binary form with 1 = on or active and 0 = off or inactive. The specific fitness or utility of an entity at time t+1 is determined by the entity's state in the period and by the states of K other entities. In terms of IMS's, a firm's (or relationship's) fitness depends
on its own behavior in the period as well as on the behavior of K other firms (e.g., suppliers, complementors, customers and competitors, or connected relationships). An entity changes its binary state from one period to the next (0 to 1, or 1 to 0) if it "expects" the change to improve its fitness. Entities are assumed to change their state based on the assumption that other entities' behavior will remain unchanged in the next period. Fitness values, usually between 0 and 1, are allocated randomly to each possible combination of states that describe the K+1 entities (the actor and K other entities). Here we do not plan to model directly the behavior of relationships, as we did in the foregoing NK Boolean model. Instead, relations among actors emerge as a result of the formation of patches of cooperating actors. Kauffman (1995) introduced the concept of "patches" to refer to groups of actors coordinating their actions to achieve better group fitness. We use the concept of patches in two ways. The first focuses on the development of cooperative relations among actors in a network, and the second focuses on activities as the primary unit of analysis and how groups of activities emerge to define firms.
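The mechanics just described (random fitness values for each combination of an entity's own state and the states of its K connected entities, with myopic state changes that assume others stay put) can be sketched as follows. All names are illustrative, and the synchronous best-response update is one plausible reading of the text, not the authors' exact implementation.

```python
import random
from itertools import product

def make_landscape(N, K, rng):
    """Random fitness table: one value in [0, 1) for every combination of
    an entity's own state and the states of its K connected entities."""
    return [{combo: rng.random()
             for combo in product((0, 1), repeat=K + 1)}
            for _ in range(N)]

def fitness(i, state, neighbours, table):
    """Fitness of entity i given the whole system state."""
    key = (state[i],) + tuple(state[j] for j in neighbours[i])
    return table[i][key]

def adapt(state, neighbours, table):
    """Myopic adaptation: each entity flips its binary state if, assuming
    all other entities stay unchanged, the flip would raise its fitness."""
    nxt = []
    for i in range(len(state)):
        flipped = list(state)
        flipped[i] = 1 - state[i]
        better = (fitness(i, tuple(flipped), neighbours, table)
                  > fitness(i, state, neighbours, table))
        nxt.append(1 - state[i] if better else state[i])
    return tuple(nxt)

# Toy run: N=6 entities, each affected by K=2 others (ring neighbours).
rng = random.Random(1)
N, K = 6, 2
neighbours = [((i + 1) % N, (i + 2) % N) for i in range(N)]
table = make_landscape(N, K, rng)
state = (0,) * N
for _ in range(10):
    state = adapt(state, neighbours, table)
```

Because every entity adapts simultaneously against the previous pattern, individual "improvements" can undercut one another, which is precisely the coevolutionary tension that patch formation (below) is meant to address.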
5.2. Modeling the Emergence of Patches of Cooperating Actors
The first approach follows that of Kauffman, in the sense that patches define sets of cooperating elements. With a patch size of one, entities operate independently in deciding their behavior in the next period. Larger patch sizes correspond to multiple actors cooperating to jointly improve group fitness, and it is assumed that gains and losses are distributed equally among group members - therefore average fitness in the group is the driving force of behavior. Kauffman (1995) shows that patch size in this sense has a dramatic effect on the ability of a system to achieve higher performing structures. We plan to use NK fitness landscape type models to model the patch formation process endogenously and to explore the way network dynamics and evolution are impacted by patch formation. Previous modelling work has tended to focus on patch size as an externally imposed parameter, whereas the formation of patches in IMS's is a central feature of structural change and evolution. Firms and inter-firm alliances in an IMS are examples of patches formed in real networks in a self-organizing way. Modeling the way patches may form and reform over time, and how this shapes network performance, will yield important insights into the deep processes of industrial network evolution. Four potentially fruitful approaches to modelling patch formation have been identified. The first uses payoff rules for patch formation, based on work by Kauffman and McCready. Here, an IMS may begin with individual behavior, i.e., a patch size of 1, or some other starting configuration of interest, and each period actors join and leave patches according to certain rules. Actors try to join a patch (and thereby leave their existing patch) if the average payoff is greater in the other patch than in their existing patch. However, actors may or may not be accepted into a patch.
New patch members are accepted only if the average patch payoff is expected to increase as a result of their joining. These expectations are determined by
examining what the payoffs would have been if the actor had been a member of the patch in the previous period. The evolution of patches can be modelled over time using different starting configurations, different values of K, different patch churning rules, as well as different types of fitness distributions (e.g., normal versus other distributions). The second approach is based on models of iterated prisoner's dilemma games with choice and refusal of partners (IPD/CR) pioneered by researchers at Iowa State University (e.g., Tesfatsion 1997). These models allow actors to form links with other entities and learn from the experience of interacting with them. Depending on the outcomes of interaction using a fixed number of IPD plays, links may be strengthened, extended, undermined, or broken. Over time actors form groups (patches) of interacting actors that are more or less cooperative, or they remain isolated "wallflowers", i.e., patches of size 1. Actors can employ different strategies in their interactions with others, which will affect the outcomes of interaction and the types of groups of interacting actors that form. Strategies can also be modified over time depending on the performance of an actor and its awareness of the performance of others. The third approach models the probability of cooperating with other actors directly and can be viewed as a variant of the IPD/CR approach, except that there is no modelling of interaction in terms of IPD games. This approach is based on the models developed by Hartvigsen et al. (2000). Populations of actors are located on a two-dimensional square toroidal lattice and have a defined number of neighbours with whom they can interact. Each actor has a probability of cooperating, p_i, and those that interact are considered cooperative and open to communication. Those with low p_i interact infrequently, and those with p_i = 0 are pure defectors.
A parameter specifies the amount by which each actor's p_i is changed in response to the interaction experience each period. Interactions in any period are governed by two random numbers generated for each actor that determine if they interact and whether a neighbour cooperates or defects. If the chosen neighbour cooperates, the target actor's p_i is increased by this amount; otherwise it is decreased. By simulating the pattern of interaction over time, interacting groups (patches) emerge. The final approach is suggested by models of firm growth developed by Epstein and Axtell (1996) and Axtell (1999). In these models, actors join firms (patches) to gain economies of scale, and the rewards are divided equally among members of a firm. Actors vary in terms of their work versus leisure trade-offs, and this affects their "cooperation" within a firm. Problems of shirking and free riders result from work-leisure trade-offs, and this leads to the breakup of firms and the formation of other firms. These models have been shown to be capable of mimicking the actual size distribution of firms in an economy.
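A minimal sketch of the third approach, in the spirit of Hartvigsen et al. (2000): actors on a square toroidal lattice adjust their cooperation probability up or down after each interaction. The adjustment step `delta`, the four-neighbour lattice, and the update details are our own simplifying assumptions, not a faithful reproduction of the published model.

```python
import random

def update_round(P, delta, rng):
    """One period of probability adjustment on an n-by-n toroidal grid.
    P[r][c] is actor (r, c)'s probability of cooperating; delta is the
    assumed adjustment step applied after each interaction."""
    n = len(P)
    Q = [row[:] for row in P]
    for r in range(n):
        for c in range(n):
            if rng.random() >= P[r][c]:
                continue                      # actor does not interact
            # pick one of the four lattice neighbours (torus wrap-around)
            dr, dc = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            nb = P[(r + dr) % n][(c + dc) % n]
            if rng.random() < nb:             # neighbour cooperates
                Q[r][c] = min(1.0, P[r][c] + delta)
            else:                             # neighbour defects
                Q[r][c] = max(0.0, P[r][c] - delta)
    return Q

# Start all actors at p_i = 0.5 and let the lattice evolve for 20 periods;
# clusters of high- and low-p_i actors (patches) tend to form over time.
rng = random.Random(7)
P = [[0.5 for _ in range(6)] for _ in range(6)]
for _ in range(20):
    P = update_round(P, 0.1, rng)
```

The two random draws per actor mirror the two random numbers described in the text: one deciding whether the actor interacts at all, the other deciding whether the chosen neighbour cooperates or defects.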
5.3. Modeling Activity Interactions using NK Fitness Landscapes
The standard NK fitness model represents the N elements as scalars (binary values of 1's and 0's). Fitness is computed for the K+1 vector describing the state of the firm and the states of the K other firms that influence it. The firm's state in period t+1 is determined to be the one that has the highest fitness.
The NK fitness model can be adapted to model a firm in which the states are vector valued, rather than scalar valued. For a given set of activities A, A_i is the vector of 0s and 1s indicating whether firm i does or does not perform each activity. Firms can perform different combinations of activities, which are the equivalent of patches, i.e., they can alter the combination of activities performed in the expectation of improving the overall fitness of the vector. The fitness of performing or not performing an activity depends on the K other activities it is connected to that are performed within the firm or by other firms. Patches here are a natural descriptor of a firm in terms of the set of activities included in the vector. Patch sizes vary in terms of the number of activities firms consider in combination. Firms can add and drop activities in their vector (i.e., change patch size) as well as alter the state of each activity in their vector or patch. The approach is similar in some ways to that proposed by McKelvey (1999), in which he uses the activities specified in Porter's value chain as the basis for representing firms in terms of NK models.
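The vector-valued adaptation just described can be sketched as follows. Here a firm is treated as a patch (set) of activities, each activity's fitness contribution depends on the 0/1 states of K linked activities, and the firm greedily toggles whichever of its activities most improves its summed fitness. All names and the greedy one-flip rule are illustrative assumptions, not the authors' specification.

```python
import random
from itertools import product

def make_activity_table(activities, K, rng):
    """Random fitness contribution for each activity, indexed by its own
    0/1 state and the states of the K activities it is linked to."""
    return {a: {combo: rng.random()
                for combo in product((0, 1), repeat=K + 1)}
            for a in activities}

def activity_fitness(a, acts, links, table):
    key = (acts[a],) + tuple(acts[b] for b in links[a])
    return table[a][key]

def firm_fitness(firm, acts, links, table):
    """A firm's fitness: summed contributions of the activities in its
    patch (the set of activities it considers jointly)."""
    return sum(activity_fitness(a, acts, links, table) for a in firm)

def improve_firm(firm, acts, links, table):
    """Toggle the single activity whose flip most raises the firm's
    fitness; leave the activity vector unchanged if no flip helps."""
    best, best_val = None, firm_fitness(firm, acts, links, table)
    for a in firm:
        trial = dict(acts)
        trial[a] = 1 - acts[a]
        val = firm_fitness(firm, trial, links, table)
        if val > best_val:
            best, best_val = a, val
    if best is None:
        return acts
    new = dict(acts)
    new[best] = 1 - new[best]
    return new
```

Adding or dropping an activity from `firm` (changing patch size) fits the same framework: it changes which contributions are summed when the firm evaluates a toggle.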
6. Summary, Managerial Implications, and Conclusions
Traditionally, the study of business and economic systems has focused on the actions of agents as rational actors who interact with each other and their environment to control scarce resources. An alternative view has developed in recent years, which sees such systems as more like biological and ecological systems, i.e., as complex adaptive self-organising systems. This view profoundly changes the underlying metaphor of economic processes, from what might be called an engineering metaphor to a biological one involving stochastic, dynamic processes with myopic, "satisficing" agents. Formal representations of such processes can be difficult, if not impossible, to solve. There is an approach based on evolutionary computer algorithms that avoids the need for analytic solutions of a formal model. Among the promising tools are agent-based computer models of complex adaptive systems and the emerging science of complexity. The relevance of the science of complexity to management is beginning to be appreciated by industry. Examples include the Embracing Complexity conferences run by Ernst and Young, which bring together scientists from various disciplines working on aspects of complex systems behaviour with business people, and the ICCS conferences run by the New England Complex Systems Institute, which attract business people as well as academics. New types of models are being developed to better understand, sensitize managers to, and perhaps control the behaviour of complex intra- and inter-organizational systems. An important insight of the new types of models is that firms operate in systems in which control is distributed. While there may be exceptions, in the main no actor or entity coordinates or directs the behaviour of the network. Instead, each firm participates in, learns from, and responds to the local circumstances it encounters as it tries to achieve its particular objectives.
Coordinated action results from interrelated yet separate operations occurring in parallel. This leads to a different concept of management and strategy: one in which the firm participates in and responds to the system in which it operates rather than tries to control and
direct it (Wilkinson and Young 2002). Firms jointly create both their own destiny and the destiny of others. In this regard, firms act to preserve and create the ability to act through the futures they shape for other firms on whom they depend, as well as for themselves. Together, they co-create the winning games the winning actors play (Kauffman 1996). In the mainstream business academic literature, we are beginning to see the emergence of this type of thinking, as firms come to see themselves as parts of business ecosystems in which cooperative and competitive processes act to shape the dynamics and evolution of the ecosystem (Anderson et al. 1999, Haeckel 1999, Moore 1995, Moore 1996).
References
Anderson, Philip, Meyer, Alan, Eisenhardt, Kathleen, Carley, Kathleen, and Pettigrew, Andrew, 1999, Introduction to the Special Issue: Applications of Complexity Theory to Organization Science, Organization Science, 10 (May-June): 233-236.
Arthur, Brian, 1994, Increasing Returns and Path Dependence in the Economy, University of Michigan Press (Ann Arbor).
Axtell, Robert, 1999, The Emergence of Firms in a Population of Agents, Center on Social and Economic Dynamics, Brookings Institution (Washington, DC).
Bak, Per, 1996, How Nature Works, Springer Verlag (New York).
Easton, G., 1995, Methodology and Industrial Networks, in Business Marketing: An Interaction and Network Perspective, edited by K. Moller and D. T. Wilson, Kluwer (Norwell, Mass).
Easton, Geoff, Wilkinson, Ian F., and Georgieva, Christina, 1997, On the Edge of Chaos: Towards Evolutionary Models of Industrial Networks, in Relationships and Networks in International Markets, edited by Hans Georg Gemunden and Thomas Ritter, Elsevier, 273-294.
Epstein, Joshua M. and Axtell, Robert, 1996, Growing Artificial Societies, MIT Press (Cambridge, MA).
Gadde, Lars-Eric and Mattsson, Lars-Gunnar, 1987, Stability and Change in Network Relations, International Journal of Research in Marketing, 4, 29-41.
Gell-Mann, Murray, 1995, Complex Adaptive Systems, in The Mind, The Brain, and Complex Adaptive Systems, edited by H. J. Morowitz and J. L. Singer, Santa Fe Institute Studies in the Science of Complexity, Addison-Wesley (Reading), 11-24.
Haeckel, Stephan H., 1999, Adaptive Enterprise, Harvard Business School Press (Boston).
Hakansson, Hakan, 1992, Evolution Processes in Industrial Networks, in Industrial Networks: A New View of Reality, edited by B. Axelsson and G. Easton, Routledge (London), 129-143.
Hakansson, Hakan and Snehota, Ivan, 1995, Developing Relationships in Business Networks, Routledge (London).
Hartvigsen, G., Worden, L., and Levin, S. A., 2000, Global Cooperation Achieved Through Small Behavioral Changes Among Strangers, Complexity, 5 (3), 14-19.
Holland, John H., 1998, Emergence, Addison-Wesley (Reading, MA).
Kauffman, S., 1992, Origins of Order: Self-Organisation and Selection in Evolution, Oxford University Press (New York).
Kauffman, S., 1995, At Home in the Universe, Oxford University Press (New York).
Kauffman, Stuart, 1996, Investigations: The Nature of Autonomous Agents and the Worlds They Mutually Create, Santa Fe Institute Working Paper 96-08-072.
Langton, Chris, 1992, Adaptation to the Edge of Chaos, in Artificial Life II: Proceedings Volume in the Santa Fe Institute Studies in the Science of Complexity, edited by C. G. Langton, J. D. Farmer, S. Rasmussen and C. Taylor, Addison-Wesley (Reading).
Langton, Chris, ed., 1996, Artificial Life: An Overview, MIT Press (Cambridge).
Levitan, Bennett, Lobo, Jose, Kauffman, Stuart, and Schuler, Richard, 1999, Optimal Organization Size in a Stochastic Environment with Externalities, Santa Fe Institute Working Paper 99-04-024.
McKelvey, William, 1999, Avoiding Complexity Catastrophe in Coevolutionary Pockets: Strategies for Rugged Landscapes, Organization Science, 10 (May-June), 294-321.
Moore, Geoffrey, 1995, Inside the Tornado, Harper Collins (New York).
Moore, James, 1996, The Death of Competition, John Wiley (Chichester).
Resnick, Mitchel, 1998, Unblocking the Traffic Jam in Corporate Thinking, Complexity, 3 (4), 27-30.
Tesfatsion, L., 1997, How Economists Can Get Alife, in The Economy as an Evolving Complex System II, edited by W. B. Arthur, S. Durlauf and D. A. Lane, Addison-Wesley (Redwood City), 534-564.
Vriend, N., 1995, Self Organization of Markets: An Example of a Computational Approach, Journal of Computational Economics, 8 (3), 205-231.
Welch, Catherine and Wilkinson, Ian F., 2000, From AAR to AARI? Incorporating Idea Linkages into Network Theory, Industrial Marketing and Purchasing Conference, University of Bath, September.
Wiley, James B., 1999, Evolutionary Economics, Artificial Economies, and Market Simulations, Working Paper, University of Western Sydney, School of Marketing, International Business and Asian Studies.
Wilkinson, Ian F. and Easton, G., 1997, Edge of Chaos II: Industrial Network Interpretation of Boolean Functions in NK Models, in Interaction, Relationships and Networks in Business Markets: 13th IMP Conference Vol 2, edited by F. Mazet, R. Salle and J-P Valla, Groupe ESC Lyon, 687-702.
Wilkinson, Ian F. and Young, Louise C., 2002, On Cooperating: Firms, Relations and Networks, Journal of Business Research, 55 (2), 123-132.
Wilkinson, Ian F., Hibbert, Bryn, Easton, Geoff, and Lin, Aizhong, 1999, Boolean NK Program Version 2.0: A C++ Program to Simulate the Behaviour of NK Boolean Networks, School of Marketing, University of New South Wales.
Wilkinson, I. F., Wiley, James B., and Easton, Geoff, 1999, Simulating Industrial Relationships with Evolutionary Models, Proceedings of the 28th European Marketing Academy Annual Conference, Humboldt University, Berlin.
Appendix 1. Supplier Exclusive Dealing Attractors for 108 Starting Conditions
[Tables of attractor bit patterns and percentages, types a-f]
Appendix 2. Mutual Exclusive Dealing Attractors for 40 Starting Conditions
[Tables of attractor bit patterns and percentages, types a-g]
Chapter 20
Can Models Capture the Complexity of the Systems Engineering Process?
Krishna Boppana, Sam Chow, Olivier L. de Weck, Christian LaFon, Spyridon D. Lekkakos, James Lyneis, Matthew Rinaldi, Zhiyong Wang, Paul Wheeler, Marat Zborovskiy
Engineering Systems Division (ESD), Massachusetts Institute of Technology (MIT)
Leonard A. Wojcik
Center for Advanced Aviation System Development (CAASD), The MITRE Corporation
1.1. Introduction
Many large-scale, complex systems engineering (SE) programs have been problematic; a few examples are listed below (Bar-Yam, 2003, and Cullen, 2004), and many others have been late, well over budget, or have failed:
• Hilton/Marriott/American Airlines system for hotel reservations and flights; 1988-1992; $125 million; "scrapped"
• Federal Aviation Administration Advanced Automation System; 1982-1994; $3+ billion; "scrapped"
• Internal Revenue Service tax system modernization; 1989-1997; $4 billion; "scrapped"
• Boston "Big Dig" highway infrastructure project; roughly $15 billion; about $9 to $10 billion over budget and late.
Traditional systems engineering (TSE) defines a step-by-step planned approach to systems engineering, which has proven effective across many systems engineering efforts. However, some systems engineering efforts defy the TSE process, due to various complexity-related factors along a set of characteristics summarized in Figure 1 (White, 2005). In this paper, we compare three modeling approaches in terms of their ability to represent the complexity of the SE process. We apply two of these approaches to the specific case of the Federal Aviation Administration (FAA) Advanced Automation System program.

Figure 1. Enterprise systems engineering environment characterization template (White, 2005), spanning the system, strategic, and stakeholder contexts. Greater SE complexity corresponds to greater distance from the circle's center.
1.2. The Advanced Automation System (AAS) Program as a Case Study to Model the SE Process
The AAS program (1982-1994) was chosen as a case study to compare modeling approaches to the SE process because of AAS program complexity and difficulty, and because extensive information is available about the AAS program in the open literature - we base the model comparison entirely on open literature information. The AAS schedule, at the time of the contract award to IBM (1988), was broken up into five major portions. These were:
• PAMRI - The Peripheral Adapter Module Replacement Item - This was the initial project of AAS and improved the radar and communication systems of the air traffic control system. This was completed on time.
• ISSS - The Initial Sector Suite System - This is the project which was intended as the initial replacement of the controller workstations.
• TAAS - The Terminal Advanced Automation System - The updating of the terminal area departure and approach control.
• TCCC - Tower Control Computer Complex - This was intended as the automation system for the control towers.
• ACCC - Area Control Computer Complex - This was intended as the automation system for enroute aircraft.
PAMRI, consisting of the peripheral hardware updates, was completed on time. The schedule and budget problems were associated with the software development, notably the ISSS development effort. The AAS event and budget timeline, based on open literature sources (Krebs and Snyder, 1988; Scott, 1988; Levin et al., 1992; Del Balzo, 1993; Ebker, 1993; Lewyn, 1993; Barlas, 1996; Beitel et al., 1998), is briefly summarized below:
• 1982: The FAA sets the initial requirements for AAS and seeks contractors.
• 1984: IBM and Hughes are named the finalists to build the prototype. At this point $500 million has been spent developing the bid.
• 1988: The FAA awards the prime contract to IBM, worth $3.6 billion. Hughes protests the award, causing an initial project delay.
• 1989: IBM begins work on the AAS. The software component of the project is estimated to be 2 million lines of code.
• 1990: Requirements are still unclear for ISSS, as indicated by the 500-700 requirements change requests for the year. To help finalize requirements, IBM builds a prototype center in Gaithersburg, Maryland so that controllers can try out the software under development. Despite the fact that requirements were not clear, approximately 1 million lines of code have already been written. Estimates indicate that 150,000 lines of code will need to be rewritten due to the requirements changes and the resulting bugs. To date the cost overrun is $242 million.
• 1992: The FAA announces a 14-month delay to the project completion. FAA and IBM shake up their management.
• 1993 (April): IBM and the FAA freeze the requirements for ISSS.
• 1993: IBM announces that the project will not be ready until after year 2000. IBM starts working on a more methodical, communication-oriented project management philosophy with new managers.
• 1994: The AAS program ceases to exist as originally conceived, leaving its various elements terminated, restructured, or as parts of smaller programs.
1.3. Application of System Dynamics to the AAS Program
System Dynamics (SD) is a methodology for understanding the structures which create dynamic behavior. On a project, those structures consist of (1) work backlogs and the cycling of work among tasks to be done, tasks completed but needing rework (often with significant delays in discovering the rework), and tasks really completed; (2) the feedback control mechanisms by which managers attempt to accomplish the project (adding resources, exerting schedule pressure, and working overtime); and (3) the secondary and tertiary impacts of project control (experience dilution, overcrowding, "haste makes waste", fatigue, etc.) that undermine the recovery strategy. The interplay of these structures, in the face of project problems (e.g., changing requirements,
changing scope, resource bottlenecks, etc.) determines how disruptive these problems become, and ultimately project cost and schedule overrun (Lyneis, 2001). An important part of the AAS program was the ISSS, and we focused on modeling that aspect of AAS with SD. A simple SD project model developed for the MIT course ESD.36 (System and Project Management) was adapted to represent ISSS. We did this by altering budgets, schedule, normal productivity, and normal work quality to achieve the budgeted ISSS program, and then exogenously specifying problems that the project experienced. It is very clear from the background research on AAS that uncertain and unrealistic requirements profoundly affected the program. In fact, GAO reports of the mid to late eighties indicate that requirements knowledge was very poor, to the point where the GAO had little confidence the AAS would be delivered on time. Requirements were not frozen until 1993, which was approximately 48 months after the project started (Del Balzo, 1993). We represented these requirements problems in two ways: (1) a time-varying impact of uncertain requirements on work quality (a 50% reduction for the first 18 months of ISSS) which was significantly reduced when the Gaithersburg facility came on line¹; and (2) a constant "penalty" from unrealistic requirements to project work quality and productivity which persisted throughout the project (a 20% penalty).
[Plot: Work Done (tasks) versus month for the four runs: Budget; Budget + Uncertain + Penalty; Budget + 20% Penalty; Budget + 50% Uncertainty.]
Figure 2. Impact of uncertainty and penalty on ISSS performance.

Figure 2 shows four simulations from the SD model: (1) Budget; (2) 20% Penalty; (3) 50% Uncertainty; and (4) a combination of penalty and uncertainty. The budget simulation finishes in month 58, approximately on schedule. However, with the requirements problems actually experienced by the project (represented by the combination simulation), the project is still not completed by month 120. Because we assume that staffing levels are limited to those actually experienced on the project, any unbudgeted rework generated by the requirements problems translates directly into delayed completion of the project. The other two simulations shown in the figure represent the impacts of uncertain requirements and unrealistic requirements operating alone on the project. Note that with the severe impact of uncertainty early in the project, very little work actually gets completed. However, once Gaithersburg comes online and uncertainty is eliminated, progress accelerates. In contrast, with a requirements penalty, progress occurs sooner but the pace never really accelerates, as the penalty persists throughout the project. With either the penalty or uncertain requirements alone, the project would have finished around month 88.²
We then conducted a series of sensitivity tests to see which aspect of the project problems had the greatest impact on project performance. We varied: sensitivity to penalty (to 10% and 30% from the base case of 20%); sensitivity to uncertainty (to 25% and 75% from the base case of 50%); and sensitivity to Gaithersburg coming online (to month 9 and month 27 from the base case of month 18). The results are shown in Figure 3. Note that the severity of any penalty from unrealistic requirements has the greatest impact on performance. This is largely because the assumed impact persists throughout the project. Even strong uncertainty has little incremental effect as long as it is eliminated early. When Gaithersburg comes online has little impact within the range tested. This suggests (and we emphasize suggest) that the unrealistic requirements had a greater effect on the failure of ISSS than the skipped testing and the late opening of the Gaithersburg testing center.

¹ Gaithersburg coming on line also significantly reduced the time to discover any rework.
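The rework-cycle structure and the two requirements effects described above can be sketched as a minimal discrete-time simulation. This is not the actual ESD.36 model: the task count, staffing rate, and rework-discovery delay below are invented for illustration, and only the 50% uncertainty effect over the first 18 months and the persistent 20% penalty are taken from the text.

```python
# A minimal sketch of a rework cycle, not the actual ESD.36 model: one backlog
# stock, one undiscovered-rework stock, one completed stock. Numerical values
# are illustrative assumptions, not calibrated ISSS data.

def simulate(months=120, tasks=800.0, staff_rate=8.0,
             uncertainty=0.5, uncertainty_until=18, penalty=0.2,
             rework_discovery_delay=6.0):
    work_to_do, undiscovered, done = tasks, 0.0, 0.0
    for t in range(months):
        quality = 1.0 - penalty                      # persistent "penalty" effect
        if t < uncertainty_until:
            quality *= 1.0 - uncertainty             # "uncertainty" until Gaithersburg
        productivity = staff_rate * (1.0 - penalty)
        attempted = min(work_to_do, productivity)
        work_to_do -= attempted
        done += attempted * quality                  # tasks really completed
        undiscovered += attempted * (1.0 - quality)  # flawed work, not yet found
        found = undiscovered / rework_discovery_delay
        undiscovered -= found
        work_to_do += found                          # discovered rework re-enters backlog
    return done

budget_run = simulate(uncertainty=0.0, penalty=0.0)
combined_run = simulate()                            # uncertainty + penalty together
```

In this toy version the budgeted run completes all 800 tasks within the 120 months, while the combined uncertainty-and-penalty run does not, qualitatively matching the gap between the Budget and combination simulations in Figure 2.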
Figure 3. Sensitivity to penalty, uncertainty, and Gaithersburg online: (a) penalty; (b) uncertainty; (c) Gaithersburg online. Each panel compares the Budget run, the varied-parameter runs, and the combined ("Both") run.
² The magnitude of the uncertainty and the penalty are both estimates. We chose these values for the base case simulation as they produced approximately equal impact on project progress.

1.4. Application of HOT to the AAS Program
Highly optimized tolerance (HOT) is a framework for understanding certain aspects of complexity in designed or engineered systems. Carlson and Doyle (1999) originally developed the HOT concept and applied it to forest fire management. They showed how power laws in event size emerge from minimization of expected cost in the face of design tradeoffs. Since then, HOT has been associated with tradeoff analyses in such systems as internet architecture (Alderson, et al., 2004). The HOT approach has been adapted to the SE process (Wojcik, 2004), resulting in an open-loop control model. In the HOT model of the SE process, the function s(t) represents the progress of the SE program and may reach a maximum value of 1 if the program is completed. An incomplete SE program will have a maximum value of s(t) less than 1. The HOT model is based on minimizing total cost across program lifetime, where cost per unit time has three terms. The first term in the cost density function is the pressure to finish the project. Coefficients A and r generate cost pressure to complete the program: the larger the value of A, the more cost pressure to complete the program. The parameter r is a constant representing how pressure builds up over time; the larger the value of r, the more time it takes for pressure to build up. The second term in the cost density function is the impact from random events, rolled up into a stochastic function of time to simulate stakeholder interactions, external factors and other unanticipated events. Two parameters, B (magnitude of random event cost impact) and p (probability per unit time of random events), combine to form a product coefficient Bp. The third term in the cost density function models the inherent technical difficulty of the SE program, parameterized by a coefficient D.
Figure 4. Modeled AAS cost compared to program appropriations (dotted curve). Each solid curve corresponds to a different set of HOT model parameters.

To apply the HOT model to the whole AAS program, we used the project plan and original budget to calibrate the values of A, Bp, D and r as they appeared at the beginning of the AAS program. In particular, Bp is set equal to 0. Then, a revised set of parameters A*, Bp*, D* and r* was estimated from the actual trajectory of the AAS program, using a design of experiment (DOE) based on an orthogonal array for four factors at three levels and three confirmation runs. The results are summarized in Figure 4, which shows realized cumulative cost of the program for various parameter value combinations and a "best fit" to the actual cumulative cost. We found that the best-fit B*p* has non-zero value, A* > A, and D* > D. From these parameter comparisons, combined with post-mortem assessments of the AAS program, our interpretation is that the original AAS plan underestimated both the level of uncertainty of the effort (B*p* > Bp) and the inherent technical difficulty of the AAS program (D* > D). We also suggest that A* > A may indicate a problem with managerial effectiveness in the AAS program; more pressure to complete was applied from outside the program than originally anticipated.
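The cost-minimization idea behind the HOT model can be illustrated with a small sketch. The open literature describes the roles of A, r, Bp and D but not the exact cost density, so the functional forms and all numerical values below are assumptions: schedule pressure grows toward A as (1 - e^(-t/r)), random events contribute an expected cost Bp per unit time while the program is unfinished, and building at a given speed incurs a difficulty cost quadratic in that speed, weighted by D.

```python
import math

# Hedged sketch of a HOT-style cost minimization for an SE program. The
# functional forms and parameter values are illustrative assumptions, chosen
# only to exhibit the tradeoff among the three cost terms described above.

def total_cost(rate, A=1.0, r=10.0, Bp=0.05, D=2.0, horizon=200):
    """Total cost when progress s(t) grows at a constant build rate."""
    s, cost = 0.0, 0.0
    for t in range(horizon):
        if s < 1.0:
            step = min(rate, 1.0 - s)
            s += step
            cost += D * step * step                        # inherent difficulty of building fast
        unfinished = 1.0 - s
        cost += A * unfinished * (1.0 - math.exp(-t / r))  # pressure to finish builds with time
        cost += Bp * unfinished                            # expected cost of random events
    return cost

# Grid search for the cost-minimizing constant build rate.
best_rate = min((k * 0.01 for k in range(1, 101)), key=total_cost)
```

Grid-searching the build rate exhibits the HOT-style tradeoff: very slow builds accumulate pressure and random-event cost, very fast builds pay heavily through D, and the minimum lies in between.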
1.5. Extensions to the Original HOT Model Based on COSYSMO
COSYSMO is a parametric cost model for systems engineering projects derived from a combination of data from past systems engineering efforts and subjective inputs from systems engineering experts (Valerdi, Miller and Thomas, 2004). Data used to generate COSYSMO parameters comes from systems engineering efforts across multiple systems and companies (Center for Software Engineering, 2005). We did not attempt to apply COSYSMO to the AAS SE process; instead we extend the ESE Environment Characterization Template (Figure 1) to include factors in the COSYSMO model. These potentially could be scored subjectively to permit estimation of HOT parameters A, Bp and D, as follows. COSYSMO drivers related to the impact of pressure to complete (parameter A in the HOT model) include productivity, motivation, experience and rework likelihood - these parameters characterize whether there are good people on board who can minimize the impact of schedule pressure. COSYSMO drivers related to uncertainty (parameter Bp in the HOT model) include the requirements environment, time to rediscover rework, system understanding, technical maturity, stakeholder environment, manageability, system reliability, and the nature of the contract. COSYSMO drivers related to cost due to build speed (parameter D in the HOT model) include quality, experience, productivity, process capability, degree of difficulty, scope/scale, system reliability, and the nature of the contract. In our initial attempt to relate COSYSMO drivers to the HOT parameters, we experimented with applying a score of either 0 or 1 to each COSYSMO driver and then adding up the driver scores corresponding to each HOT parameter to generate a complete set of HOT parameters. We did not draw any firm conclusions from these experiments; further work is needed to connect the HOT and COSYSMO models.
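The 0/1 scoring experiment described above amounts to a simple mapping; the sketch below follows the driver lists in the text, while the example scores are invented for illustration, and nothing here is a calibrated COSYSMO-to-HOT translation.

```python
# Illustrative sketch of the 0/1 driver-scoring experiment. The driver-to-
# parameter mapping follows the lists in the text; the example scores are
# hypothetical.

DRIVER_MAP = {
    "A":  ["productivity", "motivation", "experience", "rework likelihood"],
    "Bp": ["requirements environment", "time to rediscover rework",
           "system understanding", "technical maturity",
           "stakeholder environment", "manageability",
           "system reliability", "nature of the contract"],
    "D":  ["quality", "experience", "productivity", "process capability",
           "degree of difficulty", "scope/scale",
           "system reliability", "nature of the contract"],
}

def hot_parameters(scores):
    """Sum the 0/1 scores of the drivers mapped to each HOT parameter."""
    return {p: sum(scores.get(d, 0) for d in drivers)
            for p, drivers in DRIVER_MAP.items()}

# Hypothetical program judged weak on requirements and on difficulty/scale:
scores = {"requirements environment": 1, "technical maturity": 1,
          "degree of difficulty": 1, "scope/scale": 1}
params = hot_parameters(scores)
```

Because some drivers (e.g. experience, system reliability) appear in more than one list, a single score can contribute to several HOT parameters, which is consistent with the overlapping driver lists in the text.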
1.6. Conclusions
SD emphasizes schedule and is validated against significant past experience. SD potentially can show emergent behaviors through interactions and feedback loops. HOT models the whole SE "iron triangle" (cost, schedule, performance) with a relatively simple model, but it is not well validated against experience. HOT has promise as a higher-level model showing emergent behaviors from planning and replanning cycles. COSYSMO is calibrated with expert inputs and past experience and covers the whole triangle, but has less potential for displaying truly emergent behaviors. Retrospective application of SD and HOT to the single example of the AAS program is inconclusive on whether modeling can provide useful insight into complexity and emergence in the SE process, but we see integration of the three approaches as a promising direction for future research into complexity in SE, building on the strengths of all three: SD for relatively detailed process modeling, HOT for coarse, higher-level modeling, and COSYSMO for calibration reference and tie-in of modeling to key factors in previous SE experience. Modeling of both successes and failures in past large-scale engineering programs will provide further insights.
References
Bar-Yam, Y., 2003, "When Systems Engineering Fails--Toward Complex Systems Engineering," International Conference on Systems, Man & Cybernetics, Vol. 2: 2021-2028, IEEE Press, Piscataway, NJ.
Barlas, S., 1996, "Anatomy of a Runaway: What Grounded the AAS," IEEE Software 13(1): 105.
Carlson, J. M. and Doyle, J., 1999, "Highly Optimized Tolerance: A Mechanism for Power Laws in Designed Systems," Physical Review E 60(2): 1412-1427.
Center for Software Engineering, 2005, "COSYSMO" web page, University of Southern California, http://www.valerdi.com/cosysmo/
Cullen, A., 2004, "The Big Money for Big Projects," Harvard Business School Working Knowledge, June 14, 2004, U.S.
Del Balzo, 1993, Statement Before the House Committee on Appropriations, Subcommittee on Transportation, Concerning the Advanced Automation System, April 19.
Ebker, G. W., 1993, Statement of Gerald W. Ebker, IBM Vice President and Chairman and CEO of IBM's Federal Systems Company, Before the Subcommittee on Transportation and Related Agencies, House Appropriations Committee.
Krebs, M. and J. Snyder, 1988, "Hughes Quits Fight with IBM for Air-Traffic-Control Pact," Automotive News: 8.
Levin, R. E., R. D. Wurster, et al., 1992, GAO/RCED-92-264, Air Traffic Control: Advanced Automation System Still Vulnerable to Cost and Schedule Problems, GAO: 5.
Lewyn, M., 1993, "Flying In Place: The FAA's Air-Control Fiasco," Business Week: 90.
Li, A., 1994, Advanced Automation System: Implications of Problems and Recent Changes: Statement of Allen Li, GAO: 10.
Lyneis, J., Cooper, and Els, 2001, "Strategic Management of Complex Projects: A Case Study Using System Dynamics," System Dynamics Review 17(3).
Beitel, Richard C., J., M. P. Dunn, et al., 1998, AV-1998-113, Office of Inspector General Audit Report, Advance (sic) Automation System, DOT: 8.
Scott, W. B., 1988, "Protest Prompts Suspension of $3.6-Billion FAA Contract to IBM," Aviation Week and Space Technology 129(7): 27.
Valerdi, R., Boehm, B. W. & Reifer, D. J., 2003, "COSYSMO: A Constructive Systems Engineering Cost Model Coming of Age," submitted for the INCOSE 2003 Symposium, June 29 - July 3, Washington, DC.
White, B., 2005, "A Complementary Approach to Enterprise Systems Engineering," The MITRE Corporation, presented at the National Defense Industrial Association 8th Annual Systems Engineering Conference, San Diego, CA.
Wojcik, L., 2004, "A Highly-Optimized Tolerance (HOT)-Inspired Model of the Large-Scale Systems Engineering Process," The MITRE Corporation, in Student Papers: Complex Systems Summer School, Santa Fe Institute, Santa Fe, New Mexico, U.S., June 6 - July 2.

The contents of this document reflect the views of the authors and The MITRE Corporation and do not necessarily reflect the views of the FAA or the DOT. Neither the Federal Aviation Administration nor the Department of Transportation makes any warranty or guarantee, expressed or implied, concerning the content or accuracy of these views. Copyright © 2006, The MITRE Corporation and Massachusetts Institute of Technology. All rights reserved.
Chapter 21
Biological Event Modeling for Response Planning
Clement McGowan, Ph.D., Fred Cecere, M.D., Robert Darneille, Nate Laverdure
Noblis, Inc. 3150 Fairview Park South Falls Church, VA 22042 [email protected]
1.0. Introduction
People worldwide continue to fear a naturally occurring or terrorist-initiated biological event. Responsible decision makers have begun to prepare for such a biological event, but critical policy and system questions remain: What are the best courses of action to prepare for and react to such an outbreak? Where should resources be stockpiled? How many hospital resources-

H = -J Σi Σj(i) δ(σi, σj), (1)

where J > 0, the first sum is over all the sites i in the system and the second one is over all the sites j(i) that are nearest neighbors of the site i, the spins σi can take any integer value between 1 and q, and δ(a,b) = 1 if a = b and 0 if a ≠ b. Hence, the local energy associated with a spin having z adjacent sites with the same spin is just -zJ. The probability w of a change in the spin at a site which involves an increase of energy ΔE at temperature T was taken to have the standard form

w = w0 exp(-ΔE/kBT) for ΔE > 0, w = w0 for ΔE ≤ 0. (2)

Before describing the properties of this system, we will discuss briefly the reason why we were interested in it, since this affects the way that we analyze its results. The Potts model can be regarded as a schematic model for a plastic crystal, in which the molecules are often treated as thin rods with their centres fixed on the sites of a lattice [Bermejo et al, 2000]. If the possible positions of these rods are restricted to a finite number q of orientations, then each orientation can be represented by a different value of the spin in the q-spin Potts model, and the Hamiltonian of equation (1) favors all the rods being parallel. These plastic crystals have properties very similar to those of supercooled fluids [Benkhof, 1998], with random molecular orientations corresponding to the liquid state and a state with the orientations of the molecules frozen corresponding to the glassy (or crystalline) state. Hence, the behavior of the Potts model can be expected to shed some light on the properties of supercooled liquids, as we have shown elsewhere [Halpern, 2006].
For our simulations we considered a square lattice of 200 x 200 sites, which was large enough to give reproducible results for different runs, and we chose w0 = 0.5. In order to save CPU time, we only performed extensive calculations for the case of 3 spins, q = 3, for which Tc(∞) = 1.005 J/kB, where kB is the Boltzmann constant [Baxter, 1973]. The simulation techniques used were the same as in our previous paper [Halpern, 2006], apart from starting the anneals from an initial state of all identical spins instead of one with random spins. Once a steady state was reached, a set of five successive simulation runs was performed on the system, with each run proceeding until the spin had changed at least once at 99% of the sites. In order to study the temperature dependence of the structure of the system and the nature of the transitions in it, we examined the fraction of sites cl4 within clusters of identical spins, i.e. with z = 4, and the fraction of sites ch4 at which the first change of spin occurs from within such a cluster. The reason for considering the first change of spin at a site rather than all the changes of spin is that the latter will be swamped by the sites at which the spin can change freely, even if their number is small. The other property that we study is the normalized fraction of sites at which the spin is unchanged, or correlation function, at the end of one run, fr1, and at the end of five runs, fr5. Since for two completely random systems the probability that the spins at any given site are equal is 1/q, this correlation function is related to the actual fraction s of sites at which the spin is unchanged by

fr = (s - 1/q)/(1 - 1/q) = (3s - 1)/2, (3)
which is zero if there is no correlation between the initial and final states of the system and unity if these states are identical. In order to provide an indication of what our system looks like at various temperatures, the results that we show first, in figure 1, are maps of the system, where the three different colors correspond to the three different values of the spin. The color green tends to dominate the figures because the initial state from which the systems were obtained by annealing (or thermalizing) was one with green at all the sites.
Figure 1: Distribution of states for Tc(∞)/T = 0.95 (top), 1.00, and 1.01 (bottom).
From these maps, we can see clearly how the system consists of small clusters at high temperatures, and tends towards a single phase as the temperature is lowered. The big difference between the figures for Tc(∞)/T = 1.00 and for Tc(∞)/T = 1.01 suggests that the latter is close to the phase transition temperature. However, even for Tc(∞)/T = 1.05 we find that there are small pockets of spins that differ from the majority. The presence of these pockets is associated with the fact that for our system there is always a finite probability of a spin changing within a cluster, and any such change increases the probability that an adjacent spin changes.
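The simulation just described can be sketched in miniature. The transition rule follows the w probability given above and the correlation function follows equation (3); the lattice size, sweep count, and temperature below are scaled-down illustrative choices (the paper uses a 200 x 200 lattice annealed from all-identical spins).

```python
import math
import random

# Minimal sketch of the q = 3 Potts simulation described above, on a small
# lattice for speed; all run parameters here are illustrative, not the paper's.

L, q, J, w0 = 20, 3, 1.0, 0.5
rng = random.Random(1)

def same_spin_neighbours(spins, i, j, s):
    """Number of nearest neighbours of site (i, j) with spin value s (periodic)."""
    nbrs = (spins[(i - 1) % L][j], spins[(i + 1) % L][j],
            spins[i][(j - 1) % L], spins[i][(j + 1) % L])
    return sum(1 for n in nbrs if n == s)

def sweep(spins, T):
    """One Monte Carlo sweep: L*L attempted spin changes at temperature T."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        new = rng.randrange(1, q + 1)
        # local energy is -zJ, so the energy change of the proposed change is:
        dE = -J * (same_spin_neighbours(spins, i, j, new)
                   - same_spin_neighbours(spins, i, j, spins[i][j]))
        w = w0 if dE <= 0 else w0 * math.exp(-dE / T)   # transition rule, kB = 1
        if rng.random() < w:
            spins[i][j] = new

def fr(a, b):
    """Equation (3): correlation of two configurations, 0 = random, 1 = identical."""
    s = sum(a[i][j] == b[i][j] for i in range(L) for j in range(L)) / (L * L)
    return (s - 1.0 / q) / (1.0 - 1.0 / q)

start = [[1] * L for _ in range(L)]      # anneal from all-identical spins
spins = [row[:] for row in start]
for _ in range(50):
    sweep(spins, T=0.5)                  # well below Tc(oo) = 1.005 J/kB
fr_cold = fr(start, spins)               # remains large in the frozen phase
```

At this low temperature almost all proposed changes inside the all-identical cluster are rejected, so the correlation with the starting configuration stays high, which is the frozen-state behavior discussed in the text.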
Figure 2: The fraction of sites in clusters cl4 (black squares), the fraction of sites from which the first change of spin is made from within a cluster ch4 (red circles), and their ratio r = ch4/cl4 (blue stars), as functions of the reduced inverse temperature Tc(∞)/T.
We now turn to the possible precursors of a phase transition listed above. Our previous results [Halpern, 2006], together with our subsequent extensions of them, do not show any dramatic slowing down in the rates at which spins change as the temperature is lowered towards and beyond Tc. Accordingly, we now turn to the second possible precursor, and examine whether there is a change in the predominant type of transition. For our system, this corresponds to a change from transitions of spins taking place at sites on the edge of clusters at temperatures well above Tc to transitions from sites within clusters below Tc. In figure 2, we show the fraction of sites within clusters cl4 and the fraction of sites ch4 at which the first change of spin occurs from within such a cluster, as functions of the reduced inverse temperature Tc(∞)/T, and also their ratio r = ch4/cl4. For an ideal single-phase system, all these three quantities would be unity. The difference between ch4 and cl4 is just the fraction of sites initially inside clusters that have changed their environments before the spins on them first change. As the temperature is lowered, so that Tc(∞)/T increases, cl4 increases, as is to be expected from the thermodynamics of the system [Halpern, 2006]. There is also a more rapid increase in ch4, so that the ratio r of the number of sites at which the first transition occurs within clusters to the total fraction of sites within clusters increases. A detailed discussion of the source of these increases is beyond the scope of this paper. However, we do not find any dramatic increase in these quantities for values of Tc(∞)/T up to
1.05, which according to figure 1 should certainly include the phase transition temperature. This result is associated with the small pockets of different spins which are present in our systems at all temperatures, and which reduce considerably the fraction of sites cl4 inside clusters. The changes of spin at sites on the edges of these pockets require less energy than at sites inside clusters, and so the presence of these pockets also reduces appreciably the fraction of sites ch4 at which the first change of spin occurs from inside a cluster. Incidentally, we see from figure 2 that the rate of change with temperature of all these quantities does have a maximum (an inflexion point in the curves) close to Tc, but its position is not easy to measure accurately.
Figure 3: The average correlation function (the reduced fraction of sites at which the spin has not changed) after a single run fr1 (red circles) and after five successive runs fr5 (black squares), and their ratio rf = fr5/fr1 (blue stars), as functions of the reduced inverse temperature Tc(∞)/T.
Finally, we consider the changes in the average values of the correlation function fr between the original values of the spins and their values after n successive runs, where in each run the spins at 99% of the sites have changed at least once. In figure 3, we show, as functions of the reduced inverse temperature Tc(∞)/T, the mean value of the correlation function fr1 for five successive runs and its value fr5 after these five successive runs. The increase in both these quantities as the temperature is lowered is associated with the increase in cl4, since it can be shown that the return of a spin to its original value is more probable for sites inside clusters than for sites outside them. One expects that after five runs the correlation function will be less than after a single run unless most of the system (apart from the pockets of different spins) is in a frozen state, in which case they will tend to be equal. Thus the increase in these correlations and their tendency to equalize could be a precursor of the phase transition. As we see from figure 3, the ratio rf = fr5/fr1 increases rapidly with decreasing T, and is close to unity for Tc(∞)/T ≥ 1.02, i.e. below the critical temperature for the finite system, in accordance with the conclusions about its value that we deduced from figure 1.
3. Discussion
In the results presented in the last section, we found that for the Potts model there are three significant properties of the system that change as the temperature is reduced towards that of the phase transition. Firstly, the size of the clusters of sites having the same spin increases, as shown in figure 1, and there is a corresponding increase in the fraction of sites, cl, inside such clusters, as shown in figure 2. Secondly, the fraction of the sites initially inside a cluster at which the first transition occurs before any of the spins on adjacent sites have changed, the ratio r in figure 2, also increases. Finally, as shown in figure 3, the correlation function (the reduced fraction fr of sites at which a spin returns to its original value rather than to a random value) increases as the temperature is lowered, and becomes independent of the total number of changes of spin (rf = 1) at temperatures below the critical temperature. Of these three precursors, only the last one shows a dramatic increase as T approaches Tc, and so provides a clear indication of the actual phase transition temperature. In order to interpret and appreciate the significance of these precursors, we return to the analogies of the glass transition in supercooled liquids and of the freezing of a liquid and melting of a solid. A change in the spin at a site in our model corresponds to the motion of an atom or of a molecule, and a cluster of sites with identical spins corresponds to a droplet of liquid or to a solid-like region, depending on how long its size and shape remain unchanged. In that case, for the freezing transition the first of our three precursors corresponds to an increase in the fraction of molecules in droplets and in the size of the droplets, which is rather obvious and does not provide a sensitive test of the location of the critical temperature.
The second of our precursors corresponds to an increasing fraction of molecules starting to diffuse via interstitial sites within a droplet, as proposed by Granato [Granato, 1992; Granato and Khonik, 2004], rather than freely between droplets. However, this does not show any dramatic change as the temperature is lowered, a result that corresponds to some recent experimental results on transitions in supercooled liquids as the glass temperature is approached [Huang and Richert, 2006]. By contrast, our third precursor corresponds to molecules that have left their original position (or orientation, as represented by the spin in our model) tending to return to it rather than diffusing freely away. This property is measured by the average value of the correlation function after different time intervals, and their ratio (corresponding to rf) is not affected by the presence of small pockets of unfrozen regions (provided that the first time interval is long enough for the molecules in these regions to diffuse away). Thus, just as for the Potts model, a rapid increase in this ratio is a clear precursor of the transition from a liquid to a frozen state, with the ratio reaching unity when the system freezes. We now turn to the implications of our results for other complex systems, and consider as a typical example a closely coordinated community of individuals. For such a community, the energy J in our model corresponds to the attractiveness (or reduction in unpleasantness) of belonging to the community, and the temperature T (which entered our transition probabilities in the ratio exp(-ΔE/kBT)) to the stimulus to leave it. Just as our system is only in a frozen state for T < Tc, where Tc(∞) = 1.005 J/kB, so for a community, even if it is very difficult to leave it (low T), the community will not be stable (T > Tc) and will eventually disintegrate if there are insufficient attractive forces to bind it together (very low J).
Similarly, in order for a group of individuals to unite to form a stable community (or political party), it is essential that the advantages of belonging to that community outweigh those of being completely free. While these statements are fairly obvious (although not always remembered by dictators), our model shows that the existence of small pockets of dissenters does not necessarily lead to the dissolution of the community. In fact, it follows from our results for fr that such a community can be stable even if most of the members leave it temporarily, provided that most of those that leave return to it rapidly. Our model can easily be extended to allow for different values of J and T at different sites or in different regions, and an examination of the stability of a "frozen" state (fr = 1) under these conditions can shed light on the stability of a community in which the attractiveness of belonging and the temptation to leave are not uniform among all its members.
4. Conclusions
Our results show that the clearest precursor of the phase transition in the 3-state ferromagnetic Potts model is provided by the correlation function between the spins, which increases rapidly as the critical temperature is approached and becomes independent of the number of transitions made by the spins below the critical temperature. This result suggests that in other complex systems the correlation function between the states of the system at different times can also be used as a reliable test of whether the system is approaching a dramatic change in its properties.
References
Baxter, R. J., 1973, "Potts Model at the Critical Temperature", J. Phys. C: Solid State Phys. 6, L445.
Benkhof, S., Kudlik, A., Blochowicz, T. and Rössler, E., 1998, "Two glass transitions in ethanol: a comparative dielectric relaxation study of the supercooled liquid and the plastic crystal", J. Phys.: Condens. Matter 10, 8155.
Bermejo, F. J., Jimenez-Ruiz, M., Criado, A., Cuello, G. J., Cabrillo, C., Trouw, F. R., Fernandez-Perea, R., Löwen, H. and Fischer, H. E., 2000, "Rotational freezing in plastic crystals: a model system for investigating the dynamics of the glass transition", J. Phys.: Condens. Matter 12, A391.
Granato, A. V., 1992, "Interstitialcy model for condensed matter states of face-centered-cubic metals", Phys. Rev. Lett. 68, 974.
Granato, A. V. and Khonik, V. A., 2004, "An Interstitialcy Theory of Structural Relaxation and Related Viscous Flow of Glasses", Phys. Rev. Lett. 93, 155502.
Halpern, V., 2006, "Non-exponential relaxation and fragility in a model system and in supercooled liquids", J. Chem. Phys. 124, 214508.
Huang, W. and Richert, R., 2006, "Dynamics of glass-forming liquids. XI. Fluctuating environments by dielectric spectroscopy", J. Chem. Phys. 124, 164510.
Privman, V. (ed.), 1990, Finite Size Scaling and Numerical Simulation of Statistical Systems, World Scientific (Singapore).
Wu, F. Y., 1982, "The Potts Model", Rev. Mod. Phys. 54, 235.
Chapter 30
Universality away from critical points in a thermostatistical model
C. M. Lapilli, C. Wexler, P. Pfeifer
Department of Physics and Astronomy, University of Missouri-Columbia, Columbia, MO 65211
Nature uses phase transitions as powerful regulators of processes ranging from climate to the alteration of phase behavior of cell membranes to protect cells from cold, building on the fact that thermodynamic properties of a solid, liquid, or gas are sensitive fingerprints of intermolecular interactions. The only known exceptions from this sensitivity are critical points. At a critical point, two phases become indistinguishable and thermodynamic properties exhibit universal behavior: systems with widely different intermolecular interactions behave identically. Here we report a major counterexample. We show that different members of a family of two-dimensional systems, the discrete p-state clock model, with different Hamiltonians describing different microscopic interactions between molecules or spins, may exhibit identical thermodynamic behavior over a wide range of temperatures. The results generate a comprehensive map of the phase diagram of the model and, by virtue of the discrete rotors behaving like continuous rotors, an emergent symmetry, not present in the Hamiltonian. This symmetry, or many-to-one map of intermolecular interactions onto thermodynamic states, demonstrates previously unknown limits for macroscopic distinguishability of different microscopic interactions.
1 Introduction
A far-reaching result in the study of phase transitions is the concept of universality, stating that entire families of systems behave identically in the neighborhood of a critical point, such as the liquid-gas critical point in a fluid, or the Curie point in a ferromagnet, at which two phases become indistinguishable. Near the
critical point, thermodynamic observables, such as magnetization or susceptibility, do not depend on the detailed form of the interactions between individual particles, and the critical exponents, which describe how observables go to zero or infinity at the transition, depend only on the range of interactions, symmetries of the Hamiltonian, and dimensionality of the system. The origin of this universality is that the system exhibits long-range correlated fluctuations near the critical point, which wash out the microscopic details of the interactions [1,2,3,4].
In this paper, we report a different type of strong universality. We present the surprising result that, in a specific family of systems, different members behave identically both near and away from critical points (we refer to this as extended universality) if the temperature and a parameter p, describing the interaction between neighboring molecules, exceed certain values. In this regime, the thermodynamic observables collapse, in the sense that their values are identical for different values of p. No thermodynamic measurements in this regime reveal the details of the microscopic interaction in the Hamiltonian. This demonstrates intrinsic limits to how much information about the microscopic structure of matter can be obtained from macroscopic measurements. As the collapse maps Hamiltonians with different symmetries onto one and the same thermodynamic state, the system exhibits a symmetry not present in the microscopic Hamiltonian. The added symmetry at high temperature is the counterpart of broken symmetry at low temperature. To the best of our knowledge, no such collapse of thermodynamic observables and added symmetry have been observed before. The family under consideration is the p-state clock model, also known as the p-state vector Potts model or Zp model [5], in two dimensions, with Hamiltonian
Hp = -J0 Σ⟨i,j⟩ si · sj = -J0 Σ⟨i,j⟩ cos(θi - θj),    (1.1)
where the spins, si, can make any of p angles θi = 2πni/p (ni = 1, ..., p) with respect to a reference direction; the sum is over nearest neighbors of a square lattice; and the interaction is ferromagnetic, J0 > 0. (In what follows we set J0 = kB = 1.) The number of directions, p, may be thought of as discrete orientations imposed on each spin by an underlying crystallographic lattice. The model interpolates between the binary spin up/down of the Ising model [6] and the continuum of directions in the planar rotor, or XY, model [7,8]. The model is of interest to study how the ferromagnetic phase transition in the Ising model, with spontaneously broken symmetry in the ferromagnetic phase, gives way to the Berezinskii-Kosterlitz-Thouless (BKT) transition, without broken symmetry, in the rotor model. For any p, neighboring spins in low- and high-energy configurations are parallel, si · sj ≈ 1, and antiparallel, si · sj ≈ -1, respectively. This model has been extensively studied since its conception [5]. Elitzur et al. [9] showed that it presents a rich phase diagram with varying critical properties: for p ≤ 4 it belongs to the Ising universality class, with a low-temperature ferromagnetic phase and a high-temperature paramagnetic phase; for p > 4 three phases exist: a low-temperature ordered and a high-temperature disordered phase, like in the Ising model, and a quasi-liquid intermediate phase. Duality transformations [9,10] and RG theory gave much insight into the phases in terms of a closely related, self-dual model, the generalized Villain model [11],
HV = Σ⟨i,j⟩ [1 - cos(θi - θj)] + Σi Σp hp cos(p θi),    (1.2)
where the θi's are now continuous and the hp's are p-fold symmetry-breaking fields, similar to the crystal fields that limit the spins to p directions in the clock model. Jose et al. [12] have shown, via RG analysis, that for p < 4 the fields are relevant and the low-temperature phase is ordered, and that the fields are irrelevant for p > 4. While the p-state clock model is obtained in the limit of an infinite symmetry-breaking field, hp → ∞, some properties of Eq. (1.2) for finite fields are still valid for its discrete counterpart, Eq. (1.1) [12,9]. But (1.1) is no longer self-dual for p > 4, and RG approximations regarding the influence of the discreteness of the angular variables are delicate near p = 6. As a result, the transition points of (1.1) in the three-phase region are not precisely known. The collapse of thermodynamic observables, or extended universality, sets in at a temperature Teu at which the system switches from a discrete-symmetry, p-sensitive state to a continuous-symmetry, p-insensitive state, indistinguishable from p = ∞, as the temperature increases and crosses Teu, for p > 4. For p ≤ 4, there is no collapse and the system retains its discreteness at arbitrarily high temperatures. The collapse (non-collapse) is responsible for the BKT (non-BKT) behavior of the transitions that lie above (below) Teu. In what follows, we focus on the determination of the phase diagram, including the curve Teu(p), the characterization of each phase, and the critical properties of the two transitions present at p > 4.
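Eq. (1.1) is easy to evaluate directly. The sketch below is illustrative only (the helper `clock_energy` is a hypothetical name, not from the paper); it computes the clock-model energy of an integer spin configuration on an L x L torus, counting each nearest-neighbor bond once.

```python
import math

def clock_energy(spins, p, J0=1.0):
    """Energy of Eq. (1.1) for a p-state clock configuration on an L x L
    torus. `spins` holds integers n_i in 0..p-1, with theta_i = 2*pi*n_i/p."""
    L = len(spins)
    E = 0.0
    for i in range(L):
        for j in range(L):
            t = 2 * math.pi * spins[i][j] / p
            # Count each nearest-neighbour bond once: right and down.
            for ni, nj in ((i, (j + 1) % L), ((i + 1) % L, j)):
                tn = 2 * math.pi * spins[ni][nj] / p
                E -= J0 * math.cos(t - tn)
    return E

# A fully aligned configuration has 2*L^2 bonds, each contributing -J0,
# so the ground-state energy is -2*J0*L^2.
L = 4
aligned = [[0] * L for _ in range(L)]
print(clock_energy(aligned, p=6))  # -32.0
```

For p = 2 the angles are 0 and π, so a checkerboard configuration gives +2 J0 L², recovering the Ising limit mentioned in the text.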
2 The Phase Diagram
We performed MC simulations on a square 2D lattice of size N = L × L with periodic boundary conditions. Lattice sizes ranged from L = 8 to 72, and averages for the computed quantities involved sampling of 10^5-10^6 independent configurations, with equilibration runs varying from p × (1,000-5,000) MC steps (an MC step is one attempt to change, on average, every lattice element). Figure 1 shows a summary of our results. The Ising model (p = 2) shows a single second-order phase transition at TIsing = 2/ln(1 + √2) ≈ 2.27, in units of J0/kB. The p = 4 case also shows a single transition (in the Ising universality class) at Tc = TIsing/2 ≈ 1.13. Most interesting is the case p > 4, which exhibits a low-temperature ordered phase (Ising-like), which turns into a phase with quasi-long-range order at T1, and finally disorders at T2. For T > Teu the identity of the original symmetry of the problem is lost, and all systems behave strictly like the planar rotor model (p = ∞), with a BKT transition at TBKT ≈ 0.89 [8].
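The quoted temperatures can be checked with a few lines of arithmetic; this is a numerical sanity check only, using the exact Onsager value and (in the loop) the paper's fitted constant 1.67.

```python
import math

# Exact Onsager critical temperature of the 2D Ising model (J0 = kB = 1).
T_ising = 2.0 / math.log(1.0 + math.sqrt(2.0))
print(round(T_ising, 2))       # 2.27

# p = 4 clock model: a single Ising-class transition at half that value.
print(round(T_ising / 2, 2))   # 1.13

# Lower transition for p > 4, using the fit T1 = 4*pi**2 / (1.67 * p**2),
# which shows how quickly the ordered phase shrinks with p.
for p in (6, 8, 12):
    print(p, round(4 * math.pi ** 2 / (1.67 * p ** 2), 3))
```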
The transition temperatures are determined from our MC simulations as follows (for details see Ref. [13]). For the high-temperature transition T2 we use
[Figure 1 appears here as a plot: the phase diagram, showing the ordered, quasi-liquid, and disordered phases, the line Teu, and data for p up to 32.]
Figure 1: Phase diagram of the p-state clock model. The Ising model, p = 2, exhibits a single second-order phase transition, as does the p = 4 case, which is also in the Ising universality class. For p ≥ 6 a quasi-liquid phase appears, and the transitions at T1 and T2 are second-order. The line Teu separates the phase diagram into a region where the thermodynamic observables do depend on p, below Teu, and a region where their values are p-independent, above Teu (collapse of observables, extended universality). Thus for p ≥ 8, we observe Teu < T2 = TBKT.
Binder's fourth-order cumulants [14] in magnetization, UL ≡ 1 - ⟨m^4⟩/3⟨m^2⟩^2, and energy, VL ≡ 1 - ⟨e^4⟩/3⟨e^2⟩^2. The fixed point of UL is used to determine the critical temperature T2, whereas the latent heat is proportional to lim_{L→∞}[(2/3) - min_T VL] (in all cases min_T VL → 2/3, signaling a second-order phase transition). The transition between the ordered and the quasi-liquid phases, T1, is analyzed via the temperature derivatives of the magnetization, ∂⟨|m|⟩/∂T, and ∂UL/∂T, both of which diverge in the thermodynamic limit [13]. Finite-size scaling (FSS) applied to the location of the minima of these quantities yields T1 = lim_{L→∞} T1(L). We find T1 = 4π²/(γ2 p²), with γ2 ≈ 1.67 ± 0.02, whereas for the Villain model, in the limit of an infinite hp and large p, Jose et al. [12] found T1^{JKKN} ≈ 4π²/(1.7 p²). The ordered phase vanishes rapidly as p → ∞, see Fig. 1.
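The magnetization cumulant can be estimated directly from a list of samples; a minimal sketch, with `binder_cumulant` a hypothetical helper name rather than the authors' code:

```python
def binder_cumulant(samples):
    """Binder's fourth-order cumulant U_L = 1 - <m^4> / (3 <m^2>^2)
    computed from a list of magnetization samples m."""
    n = len(samples)
    m2 = sum(m * m for m in samples) / n
    m4 = sum(m ** 4 for m in samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# Deep in the ordered phase m is essentially constant, so <m^4> = <m^2>^2
# and U_L -> 2/3; for Gaussian (disordered) fluctuations <m^4> = 3<m^2>^2
# and U_L -> 0.
print(binder_cumulant([0.9, 0.9, 0.9, 0.9]))
```

Because UL(T) curves for different L cross near the fixed point, plotting this quantity for several lattice sizes locates T2 without knowing the critical exponents in advance.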
[Figure 2 appears here as a plot with curves for several values of p; its caption is not fully recoverable from the scan.]
3 Summary, Implications, and Open Questions
In our study we obtained valuable evidence from the macroscopic properties of the Zp model, a model that, although completely discrete, shows regimes with continuous-like thermodynamic behavior. We have presented the phase diagram for the Zp model, analyzing its critical properties. The 3-phase regime was observed for p > 4, in agreement with earlier predictions. Of particular interest is the surprising extended universal behavior above some temperature Teu, where the identity of the Zp model is completely lost as all observables become indistinguishable from those of the XY model. In fact, the presence of an "exact" BKT transition at the point T2 for p ≥ 8 is now firmly established as a consequence of the existence of this temperature line Teu, which divides the phase diagram into two regions: with and without a collapse of the thermodynamic observables. This extended universal behavior is not present at T2(p < 8), since T2(p < 8) < Teu. These conclusions were confirmed by studying the critical properties at T2 (indeed, for p < 8 the critical exponents and the helicity do not behave as expected from the BKT RG equations). Our studies raise important questions: If observables below T1 show ferromagnetic ordering with a significant p-dependence, and for T > Teu all information about p is lost, what is the nature of the region T1 < T < Teu? What are the collective excitations that make the system thermodynamically indistinguishable above Teu? Why is the extended universal behavior approached so rapidly (for p > 4), and what is qualitatively different for smaller p, since no temperature exists at which this degeneracy is achieved? A very broad variety of systems, from confined turbulent flows to statistical ecology models, show collapsing probability distribution functions in finite-sized systems, suggesting that scaling is independent of various system attributes such as symmetry, state (equilibrium/non-equilibrium), etc. [16].
We present a stronger result in the sense that all observables become identical for T > Teu : the critical properties' collapse is a consequence of the extended universality. The existence of this collapse of the thermodynamic observables implies that,
experimentally, any observable ⟨O⟩ of the system measured at temperatures above Teu will fail to show any signature of the underlying discreteness, i.e. ⟨O⟩p = ⟨O⟩∞. The corollary is that in the presence of this extended universality, lower-temperature measurements are necessary if a complete characterization of the symmetry of a system is desired, as may be expected in a wide range of experimental situations where XY-like behavior is observed [17]. Experiments in a wide variety of physical systems, from ultra-thin magnetic films to linear polymers adsorbed on a substrate, may show signatures of these effects. It may imply, e.g., that the critical properties at the melting transition of certain adsorbed polymer films may be unaffected by the symmetry of the substrate. Further details about the results presented in this letter and additional properties, plus some ideas on how to address the questions posed above, will be published elsewhere [13]. We would like to thank H. Fertig, G. Vignale, and H. Taub for useful discussions. Acknowledgment is made to the University of Missouri Research Board and Council, and to the Donors of the Petroleum Research Fund, administered by the American Chemical Society, for support of this research.
Bibliography
[1] Landau, L.D., & Lifshitz, E.M., 1966, Statistical Physics, MIT Press (London).
[2] Kadanoff, L.P., et al., 1967, Static Phenomena Near Critical Points: Theory and Experiment, Rev. Mod. Phys. 39, 395; Migdal, A.A., 1976, Phase transitions in gauge and spin lattice systems, Sov. Phys. JETP 42, 743.
[3] Kadanoff, L.P., in Green, M.S. (ed.), 1970, Proceedings of the 1970 Varenna Summer School on Critical Phenomena, Academic Press (New York); Griffiths, R.B., 1970, Dependence of Critical Indices on a Parameter, Phys. Rev. Lett. 24, 1479.
[4] See, e.g., Wigner, E., 1964, Symmetry and conservation laws, Physics Today, March, p. 34.
[5] Potts, R., 1952, Proc. Camb. Phil. Soc. 48, 106.
[6] Onsager, L., 1944, Crystal Statistics. I. A Two-Dimensional Model with an Order-Disorder Transition, Phys. Rev. 65, 117.
[7] Mermin, N.D., & Wagner, H., 1966, Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models, Phys. Rev. Lett. 17, 1133; Hohenberg, P.C., 1967, Existence of long-range order in one and two dimensions, Phys. Rev. 158, 383.
[8] Berezinskii, V.L., 1970, Destruction of long-range order in one-dimensional and two-dimensional systems having a continuous symmetry group I. Classical systems, Sov. Phys. JETP 32, 493; Kosterlitz, J.M., & Thouless, D.J., 1973, Ordering, metastability and phase transitions in two-dimensional systems, J. Phys. C 6, 1181; Kosterlitz, J.M., 1974, The critical properties of the two-dimensional xy model, J. Phys. C 7, 1046; Nelson, D.R., & Kosterlitz, J.M., 1977, Universal jump in the superfluid density of two-dimensional superfluids, Phys. Rev. Lett. 39, 1201.
[9] Elitzur, S., Pearson, R.B., & Shigemitsu, J., 1979, Phase structure of discrete Abelian spin and gauge systems, Phys. Rev. D 19, 3698.
[10] Savit, R., 1980, Duality in field theory and statistical systems, Rev. Mod. Phys. 52, 453.
[11] Villain, J., 1975, Theory of one- and two-dimensional magnets with an easy magnetisation plane. II. The planar, classical, two-dimensional magnet, J. Physique 36, 581.
[12] Jose, J.V., Kadanoff, L.P., Kirkpatrick, S., & Nelson, D.R., 1977, Renormalization, vortices, and symmetry-breaking perturbations in the two-dimensional planar model, Phys. Rev. B 16, 1217.
[13] Lapilli, C.M., Pfeifer, P., & Wexler, C., in preparation.
[14] Binder, K., & Heermann, D.W., 2002, Monte Carlo Simulation in Statistical Physics, Springer (Berlin), 4th ed.
[15] Bramwell, S.T., & Holdsworth, P.C.W., 1993, J. Phys.: Condens. Matter 5, L53.
[16] See, e.g., Aji, V., & Goldenfeld, N., 2001, Fluctuations in Finite Critical and Turbulent Systems, Phys. Rev. Lett. 86, 1007, and references therein.
[17] E.g., Faßbender, S., et al., 2002, Evidence for Kosterlitz-Thouless-type orientational ordering of CF3Br monolayers physisorbed on graphite, Phys. Rev. B 65, 165411; observed a BKT-like transition for CF3Br adsorbed on graphite.
Chapter 31
Quantum Nash Equilibria and Quantum Computing
Philip Vos Fellman, Southern New Hampshire University
Jonathan Vos Post, Computer Futures
In 2004, at the Fifth International Conference on Complex Systems, we drew attention to some remarkable findings by researchers at the Santa Fe Institute (Sato, Farmer and Akiyama, 2001) about hitherto unsuspected complexity in the Nash equilibrium. As we progressed from these findings about heteroclinic Hamiltonians and chaotic transients hidden within the learning patterns of the simple rock-paper-scissors game to some related findings on the theory of quantum computing, one of the arguments we put forward was that, just as in the late 1990s a number of new Nash equilibria were discovered in simple bi-matrix games (Shubik and Quint, 1996; Von Stengel, 1997, 2000; and McLennan and Park, 1999), we would begin to see new Nash equilibria discovered as the result of quantum computation. While actual quantum computers remain rather primitive (Toibman, 2004), and the theory of quantum computation seems to be advancing perhaps a bit more slowly than originally expected, there have, nonetheless, been a number of advances in computation and some more radical advances in an allied field, quantum game theory (Huberman and Hogg, 2004), which are quite significant. In the course of this paper we will review a few of these discoveries and illustrate some of the characteristics of these new "Quantum Nash Equilibria". The full text of this research can be found at http://necsi.org/events/iccs6/viewpaper.php?id=234
1.0 Meyer's Quantum Strategies: Picard Was Right
In 1999, David Meyer, already well known for his work on quantum computation and modeling (Meyer, 1997), published an article in Physical Review Letters entitled "Quantum Strategies", which has since become something of a classic in the field (Meyer, 1999). In this paper, in a well-known, if fictional, setting, Meyer analyzed the results of a peculiar game of coin toss played between Captain Jean-Luc Picard of the Starship Enterprise and his nemesis "Q". In this game, which Meyer explains as a two-person, zero-sum (noncooperative) strategic game, the payoffs to Picard and "Q" (P and Q hereafter) are represented by the following matrix showing the possible outcomes after an initial state (heads or tails) and two flips of the coin (Meyer, 1999):
       NN   NF   FN   FF
  N    -1    1    1   -1
  F     1   -1   -1    1
... The rows and columns are labeled by P's and Q's pure strategies, respectively; F denotes a flipover and N denotes no flipover, and the numbers in the matrix are P's payoffs, 1 indicating a win and -1 indicating a loss (p. 1052). Meyer notes that this game has no deterministic solution and that there is no deterministic Nash equilibrium. However, he also notes (following von Neumann) that since this is a two-person, zero-sum game with a finite number of strategies, there must be a probabilistic Nash equilibrium, which consists of Picard randomly flipping the penny over half of the time and Q randomly alternating between his four possible strategies. The game unfolds in a series of ten moves, all of which are won by Q. Picard suspects Q of cheating. Meyer's analysis proceeds to examine whether this is or is not the case. There follows, then, an analysis, using standard Dirac notation, of the quantum vector space and the series of unitary transformations on the density space which have the effect of taking Picard's moves (now defined not as a stochastic matrix on a probabilistic state, but rather as a convex linear combination of unitary, deterministic transformations on density matrices by conjugation) and transforming them by conjugation (Q's moves). This puts the penny into a simultaneous eigenvalue 1 eigenstate of both F and N (invariant under any mixed strategy); in other words, it causes the penny to "finish" heads up no matter what ("All of the pairs ([pF + (1-p)N], [U(1/√2, 1/√2), U(1/√2, 1/√2)]) are (mixed, quantum) equilibria for PQ penny flipover, with value -1 to P; this is why he loses every game.", p. 1054). A more detailed treatment of this game can be found in InterJournal Complex Systems, 1846, 2006.
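Meyer's trick can be verified numerically. In the sketch below we take Q's quantum move to be the Hadamard transform (consistent with U(1/√2, 1/√2) in Meyer's notation, though that identification is our assumption); whatever P does classically in between, the penny returns to heads and Q wins.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Q's quantum move
F = np.array([[0.0, 1.0], [1.0, 0.0]])                # flipover
N = np.eye(2)                                         # no flipover

heads = np.array([1.0, 0.0])                          # penny starts heads up

for p_move in (N, F):                                 # P's two pure strategies
    final = H @ p_move @ H @ heads                    # Q, then P, then Q
    prob_heads = abs(final[0]) ** 2
    print(round(prob_heads, 6))                       # 1.0 either way: Q wins
```

The intermediate state (|heads⟩ + |tails⟩)/√2 is left unchanged by both of P's classical moves, which is exactly the invariance under any mixed strategy described above.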
2.0 Superpositioning
The PQ coin flip does not, however, explore the properties of quantum entanglement. Landsberg (2004) credits the first complete quantum game to Eisert, Wilkens and Lewenstein (1999), whose game provides for a single, entangled state space for a pair of coins. Here, each player is given a separate quantum coin which can then either be flipped or not flipped. The coins start in the maximally entangled state: H⊗H + T⊗T,
which in a two-point strategy space allows four possible states (Landsberg, 2004):¹

NN = H⊗H + T⊗T
NF = H⊗T + T⊗H
FN = H⊗T - T⊗H
FF = H⊗H - T⊗T

The strategies for this game can then be described in terms of strategy spaces which can be mapped into a series of quaternions. The Nash equilibria which occur in both the finite and (typically) non-finite strategy sets of these games can then be mapped into Hilbert spaces where, indeed, new Nash equilibria do emerge, as shown in the diagram from Cheon and Tsutsui below (Cheon and Tsutsui, 2006). As in the case of the quantum Nash equilibria for the PQ game, unique quantum Nash equilibria are a result of the probability densities arising from the action of self-adjoint quantum operators on the vector matrices which represent the strategic decision spaces of their respective games.
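The four states can be written down and checked in a few lines; the vectors `H` and `T` below are simply the computational basis, an illustrative encoding rather than anything from the papers cited.

```python
import numpy as np

H = np.array([1.0, 0.0])   # "heads" basis vector
T = np.array([0.0, 1.0])   # "tails" basis vector

# The four (unnormalized) joint states listed in the text:
states = {
    "NN": np.kron(H, H) + np.kron(T, T),
    "NF": np.kron(H, T) + np.kron(T, H),
    "FN": np.kron(H, T) - np.kron(T, H),
    "FF": np.kron(H, H) - np.kron(T, T),
}

# They are mutually orthogonal, i.e. perfectly distinguishable in principle.
names = list(states)
for i in range(4):
    for j in range(i + 1, 4):
        print(names[i], names[j], float(states[names[i]] @ states[names[j]]))
```

Up to normalization these are the four Bell states, which is why they span the full two-coin state space.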
Above: Solvable Quantum Nash Equilibria on Hilbert Spaces (Cheon and Tsutsui). Note the positioning of new Nash equilibria on a projective plane from the classical solution. A more detailed description of this solution can be found at http://arxiv.org/PS_cache/quant-ph/pdf/0503/0503233.pdf
3.0 Realizable Quantum Nash Equilibria
Perhaps the most interesting area for the study of quantum Nash equilibria is coordination games. Drawing on the quantum properties of entangled systems, quantum coordination games generate a number of novel Nash equilibria. Iqbal and Weigert (2004) have produced a detailed study of the properties of quantum correlation games, mapping both invertible and discontinuous g-functions and non-invertible and discontinuous g-functions (as well as simpler mappings) arising purely from the quantum coordination game and not reproducible from the classical games.
¹ We have slightly altered Landsberg's and Eisert, Wilkens and Lewenstein's notation to reflect that of Meyer's original game for purposes of clarity.
Of possibly more practical interest is Huberman and Hogg's study (2004) of coordination games, which employs a variant of non-locality familiar from the EPR paradox and Bell's theorem (also treated in detail by Iqbal and Weigert) to allow players to coordinate their behavior across classical barriers of time and space (see schematic below). Once again, the mathematics of entangled coordination are similar to those of the PQ quantum coin toss game and use the same kind of matrix which is fully elaborated in the expression of quantum equilibria in Hilbert space.
[Schematic appears here: entangled photons are created and sent to the decision makers; decisions are coordinated even if made at different times.]
(Above) Huberman and Hogg's entanglement mechanism. In a manner similar to the experimental devices used to test Bell's theorem, two entangled quanta are sent to different players (who may receive and measure them at different times), who then use their measurements to coordinate game behavior. (Huberman and Hogg, 2004)
4.0 Quantum Entanglement and Coordination Games
In a more readily understandable practical sense, the coordination allowed by quantum entanglement creates the possibility of significantly better payoffs than classical equilibria. A quantum-coordinated version of rock-paper-scissors, for example, where two players coordinate against a third, produces a payoff asymptotic to 1/3 rather than 1/9. Moreover, this effect is not achievable through any classical mechanism, since such a mechanism would involve the kind of prearrangement which would then be detectable through heuristics such as pattern matching or event history (Egnor, 2001). This kind of quantum Nash equilibrium is both realizable through existing computational mechanisms and offers significant promise for applications to cryptography as well as to strategy.
4.1 The Minority Game and Quantum Decoherence
Two other areas which we have previously discussed are the Minority Game, developed at the Santa Fe Institute by Challet and Zhang, and the problem of decoherence in quantum computing. J. Doyne Farmer (Farmer, 1999) uses the Minority Game to explain learning trajectories in complex, non-equilibrium strategy spaces as well as to lay the foundation for the examination of complexity in learning the Nash equilibrium in the rock-paper-scissors game (Sato, Akiyama and Farmer, 2001). Adrian Flitney (Flitney and Abbott, 2005; Flitney and Hollenberg, 2005), who has done extensive work in quantum game theory, combines both of these areas in a recent paper examining the effects of quantum decoherence on superior new quantum Nash equilibria.
4.2 Flitney and Abbott's Quantum Minority Game Flitney and Abbott proceed through a brief literature review, explaining the standard protocol for quantizing games, by noting that "if an agent has a choice between two strategies, the selection can be encoded in the classical case by a bit," and that "to translate this into the quantum realm the bit is altered to a qubit, with the computational basis states |0⟩ and |1⟩ representing the original classical strategies" (p. 3). They then proceed to lay out the quantum Minority Game, essentially following the methodology used by Eisert, Wilkens and Lewenstein for the quantum prisoner's dilemma, specifying that (p. 3): The initial game state consists of one qubit for each player, prepared in an entangled GHZ state by an entangling operator Ĵ acting on |00…0⟩. Pure quantum strategies are local unitary operators acting on a player's qubit. After all players have executed their moves the game state undergoes a positive operator valued measurement and the payoffs are determined from the classical payoff matrix. In the Eisert protocol this is achieved by applying Ĵ† to the game state and then making a measurement in the computational basis. That is, the state prior to the measurement in the N-player case can be computed by:
|ψ₀⟩ = |00…0⟩,
|ψ₁⟩ = Ĵ|ψ₀⟩,
|ψ₂⟩ = (M̂₁ ⊗ M̂₂ ⊗ … ⊗ M̂_N)|ψ₁⟩,
|ψ_f⟩ = Ĵ†|ψ₂⟩,
where |ψ₀⟩ is the initial state of the N qubits, and M̂_k, k = 1, …, N, is a unitary operator representing the move of player k. The classical pure strategies are represented by the identity and the flip operator. The entangling operator Ĵ commutes with any direct product of classical moves, so the classical game is simply reproduced if all players select a classical move.
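As a concrete illustration, the protocol above can be checked numerically. The sketch below is a toy under stated assumptions (two players, maximal entanglement γ = π/2, Ĵ = exp(iγ/2 · X⊗X) as in the Eisert scheme; the function names are ours, not from the paper): it confirms that classical moves commute with Ĵ, so purely classical play yields a deterministic computational-basis outcome.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])  # the "flip" operator (classical move 1)

def entangler(n, gamma=np.pi / 2):
    """J = exp(i*gamma/2 * X^(x)n) -- maximally entangling for gamma = pi/2."""
    Xn = X
    for _ in range(n - 1):
        Xn = np.kron(Xn, X)
    d = 2 ** n
    return np.cos(gamma / 2) * np.eye(d) + 1j * np.sin(gamma / 2) * Xn

def play(moves):
    """Apply J, the players' local moves, then J^dagger, to |00...0>."""
    n = len(moves)
    J = entangler(n)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0                       # |psi_0> = |00...0>
    M = moves[0]
    for m in moves[1:]:
        M = np.kron(M, m)
    psi_f = J.conj().T @ M @ J @ psi   # |psi_f> = J^dag (M1 x ... x MN) J |psi_0>
    return np.abs(psi_f) ** 2          # computational-basis measurement probabilities

probs = play([I, X])                   # player 1 plays 0, player 2 plays 1
print(np.argmax(probs))                # index 1 = |01>, the classical outcome
```

Because X⊗…⊗X commutes with every direct product of I and X, Ĵ† cancels Ĵ whenever all moves are classical, which is exactly the reproduction property stated above.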
4.3 Decoherence Flitney and Hollenberg explain the choice of density matrix notation for decoherence and the phenomena which they are modeling, explaining dephasing in terms of exponential decay over time of the off-diagonal elements of the density matrix, and dissipation by way of amplitude damping. Decoherence is then represented and generalized as follows: The NE that arises from selecting π/(4N) and n = 0 may serve as a focal point for the players and be selected in preference to the other equilibria. However, if the players select NE strategies corresponding to different values of n the result may not be a NE. For example, in the four player MG, if the players select n_A, n_B, n_C, and n_D respectively, the resulting payoff depends on (n_A + n_B + n_C + n_D). If the value is zero, all players receive the quantum NE payoff of 1/4; if it is one or three, the expected payoff is reduced to the classical NE value of 1/8, while if it is two, the expected payoff vanishes. As a result, if all the players choose a random value of n the expected payoff is the same as that for the classical game (1/8) where all the players selected 0 or 1 with equal probability. Analogous results hold for the quantum MG with larger numbers of players. When N is odd the situation is changed. The Pareto optimal situation would be for (N − 1)/2 players to select one alternative and the remainder to select the other. In this way the number of players that receive a reward is maximized. In the entangled quantum game there is no way to achieve this with a symmetric strategy profile. Indeed, all quantum strategies reduce to classical ones and the players can achieve no improvement in their expected payoffs. The NE payoff for the N even quantum game is precisely that of the N − 1 player classical game where each player selects 0 or 1 with equal probability.
The effect of the entanglement and the appropriate choice of strategy is to eliminate some of the least desired final states, those with equal numbers of zeros and ones. The difference in behaviour between odd and even N arises since, although in both cases the players can arrange for the final state to consist of a superposition with only even (or only odd) numbers of zeros, only in the case when N is even is this an advantage to the players.
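The payoff arithmetic quoted above is easy to verify by brute force. In this sketch (our own enumeration, assuming the usual minority-game payoff of 1 for each player on the strictly smaller side and 0 otherwise), random classical play in the four-player game gives 1/8 per player, while eliminating the balanced 2-2 outcomes — the effect attributed to the entangled strategy — raises the expectation to 1/4.

```python
from itertools import product
from fractions import Fraction

def payoffs(bits):
    """Minority game: players on the strictly smaller side get 1, others 0."""
    ones = sum(bits)
    zeros = len(bits) - ones
    if ones == zeros:
        return [0] * len(bits)            # balanced split: nobody is rewarded
    minority = 1 if ones < zeros else 0
    return [1 if b == minority else 0 for b in bits]

N = 4
states = list(product([0, 1], repeat=N))

# classical game: every joint choice equally likely
classical = Fraction(sum(sum(payoffs(s)) for s in states), len(states) * N)

# entangled NE: states with equal numbers of 0s and 1s are eliminated,
# leaving a uniform superposition over states with an odd number of ones
odd = [s for s in states if sum(s) % 2 == 1]
quantum = Fraction(sum(sum(payoffs(s)) for s in odd), len(odd) * N)

print(classical, quantum)                 # prints: 1/8 1/4
```

Every surviving odd-split state has exactly one player in the minority, which is why conditioning on unequal splits doubles the expected payoff for N = 4.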
5.0 The Many Games Interpretation of Quantum Worlds So, after a rather roundabout journey from the bridge of the Enterprise, we now have a number of quantum games with quantum Nash equilibria which are uniquely distinguishable from the classical games and classical equilibria (Iqbal and Weigert; Cheon and Tsutsui; Flitney and Abbott; Flitney and Hollenberg; Landsburg),
but we also have an interesting question with respect to quantum computing, which is what happens under conditions of decoherence. Not unexpectedly, the general result of decoherence is to reduce the quantum Nash equilibrium to the classical Nash equilibrium; however, this does not happen in a uniform fashion. As Flitney and Hollenberg explain: The addition of decoherence by dephasing (or measurement) to the four player quantum MG results in a gradual diminution of the NE payoff, ultimately to the classical value of 1/8 when the decoherence probability p is maximized, as indicated in figure 6. However, the strategy given by Eq. (12) remains a NE for all p < 1. This is in contrast with the results of Johnson for the three player "El Farol bar problem" and of Ozdemir et al. for various two player games in the Eisert scheme, who showed that the quantum optimization did not survive above a certain noise threshold in the quantum games they considered. Bit, phase, and bit-phase flip errors result in a more rapid relaxation of the expected payoff to the classical value, as does depolarization, with similar behaviour for these error types for p < 0.5.
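Dephasing of the kind Flitney and Hollenberg model can be illustrated on a single qubit. The toy below is not their multi-qubit protocol; it is the standard phase-flip channel, shown here as an assumed stand-in. It demonstrates how the off-diagonal coherence of a density matrix shrinks with the decoherence probability p while the populations are untouched; iterating the channel gives the exponential decay of off-diagonal elements described earlier.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def phase_flip(rho, p):
    """Phase-flip (dephasing) channel: rho -> (1-p) rho + p Z rho Z."""
    return (1 - p) * rho + p * (Z @ rho @ Z)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> has maximal coherence
rho = np.outer(plus, plus)                  # density matrix |+><+|

for p in (0.0, 0.25, 0.5):
    out = phase_flip(rho, p)
    # populations stay at 1/2; the coherence rho_01 scales by (1 - 2p)
    print(p, out[0, 0].real, out[0, 1].real)
```

Applying the channel repeatedly multiplies the off-diagonal term by (1 − 2p) each step, i.e., it decays exponentially in the number of applications, while the diagonal entries never change.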
6.0 Conclusion Quantum computing does, indeed, give rise to new Nash equilibria, which belong to several different classes. Classical or apparently classical games assume new dimensions, generating a new strategy continuum, and new optima within and tangential to the strategy spaces as a function of quantum mechanics. A number of quantum games can also be mathematically distinguished from their classical counterparts and have Nash equilibria different from those arising in the classical games. The introduction of decoherence, both as a theoretical measure and, perhaps more importantly, as a performance measure of quantum information processing systems, illustrates the ways in which quantum Nash equilibria are subject to conditions of "noise" and system performance limitations. The decay of higher optimality quantum Nash equilibria to classical equilibria is itself a complex and nonlinear process following different dynamics for different species of errors. Finally, non-locality of the EPR type, bearing an as yet incompletely understood relationship to Bell's Theorem, offers a way in which quantum communication can be introduced into a variety of game theoretic settings, including both strategy and cryptography, in ways which profoundly modify attainable Nash equilibria. While the field has been slow to develop and most of the foundational research has come from a relatively small number of advances, the insights offered by these advances are profound and suggest that quantum computing will radically impact the fields of decision-making and communications in the near future.
Bibliography
[1] Benjamin, S.C., and Hayden, P.M. (2001) "Multiplayer Quantum Games", Physical Review A, 64, 030301(R).
[2] Cheon, T. and Tsutsui, I. (2006) "Classical and Quantum Contents of Solvable Game Theory on Hilbert Space", Physics Letters A, 346. http://arxiv.org/PS_cache/quant-ph/pdf/0503/0503233.pdf
[3] Eisert, J., Wilkens, M., and Lewenstein, M. (1999) "Quantum Games and Quantum Strategies", Physical Review Letters, 83, 3077.
[4] Flitney, A. and Hollenberg, L.C.L. (2005) "Multiplayer Quantum Minority Game with Decoherence", quant-ph/0510108. http://aflitney.customer.netspace.net.au/minority_game_qic.pdf
[5] Flitney, A. and Abbott, D. (2005) "Quantum Games with Decoherence", Journal of Physics A, 38, 449-59.
[6] Huberman, B. and Hogg, T. (2004) "Quantum Solution of Coordination Problems", Quantum Information Processing, Vol. 2, No. 6, December, 2004.
[7] Iqbal, A. and Weigert, S. (2004) "Quantum Correlation Games", Journal of Physics A, 37, 5873, May, 2004.
[8] Landsburg, S.E. (2004) "Quantum Game Theory", Notices of the American Mathematical Society, Vol. 51, No. 4, April, 2004, pp. 394-399.
[9] Landsburg, S.E. (2006) "Nash Equilibria in Quantum Games", Rochester Center for Economic Research, Working Paper No. 524, February, 2006.
[10] McLennan, A. and Park, I. (1999) "Generic 4x4 Two Person Games Have at Most 15 Nash Equilibria", Games and Economic Behavior, 26-1 (January, 1999), 111-130.
[11] Meyer, David A. (1999) "Quantum Strategies", Physical Review Letters, 82, 1052-1055.
[12] Quint, T. and Shubik, M. (1997) "A Bound on the Number of Nash Equilibria in a Coordination Game", Cowles Foundation, Yale University.
[13] Toibman, H. (2005) "A Painless Survey of Quantum Computation". http://www.sci.brooklyn.cuny.edu/~metis/papers/htoibman1.pdf
[14] Von Stengel, Bernhard (2000) "Improved Equilibrium Computation for Extensive Two-Person Games", First World Congress of the Game Theory Society, Bilbao, Spain, 2000.
[15] Sato, Y., Akiyama, E., and Farmer, J. D. (2001) "Chaos in Learning a Simple Two Person Game", Santa Fe Institute Working Papers, 01-09-049.
Part III: Applications
Chapter 1
Teaching emergence and evolution simultaneously through simulated breeding of artificial swarm behaviors
Hiroki Sayama
Department of Bioengineering
Binghamton University, State University of New York
P.O. Box 6000, Binghamton, NY 13902-6000
[email protected]
We developed a simple interactive simulation tool that applies the simulated breeding method to evolve populations of Reynolds' Boids system. A user manually evolves swarm behaviors of artificial agents by repeatedly selecting his/her preferred behavior as a parent of the next generation. We used this tool as part of the teaching materials of the course "Mathematical Modeling and Simulation" offered to engineering-major junior students in the Department of Human Communication at the University of Electro-Communications, Japan, during the Spring semester 2005. Students actively engaged in the simulated breeding processes in the classes and voluntarily evolved a rich variety of swarm behaviors that were not initially anticipated.
1
Introduction
Emergence and evolution are the two most important concepts that account for how complex systems may become organized. They are deeply intertwined with each other at a wide range of scales in real complex systems. Typical examples of such connections include evolution of swarm behaviors of insects and animals [1] and formation of large-scale patterns in spatially extended evolutionary systems [2]. These two concepts are often treated, however, as somewhat distant subjects in typical educational settings of complex systems related programs, being taught using different examples, models and/or tools. Sometimes they are even considered as antagonistic concepts, as seen in the "natural selection vs. self-organization" controversy in evolutionary biology. There is a need in complex systems education for a tool with which students can acquire a more intuitive and integrated understanding of these concepts and their linkages.
To meet the above need, we developed a simple interactive simulation tool, BoidsSB, which applies the simulated breeding method [3] to evolve populations of Reynolds' Boids system [4]. In BoidsSB, each set of parameter settings that describe local interaction rules among individual agents in a swarm is considered as a higher-level individual. A user manually evolves swarm behaviors of artificial agents by repeatedly selecting his/her preferred behavior as a parent of the next generation. While some earlier work implemented Boids with interactivity [5], our system is unique in that it dynamically evolves parameter settings that determine the characteristics of swarm behaviors. In this article we introduce the model and briefly report our preliminary findings about a wide variety of dynamics of the model and its potential educational effects indicated by the responses of participating students who played with it.
2
Model
Our model BoidsSB simulates a swarm behavior of artificial agents in a continuous two-dimensional square space using local interaction rules similar to those of Reynolds' Boids system [4]. Each individual agent in BoidsSB, represented by a directionless circle, perceives relative positions and velocities of other agents within its local perception range and changes its velocity in discrete time steps according to the following rules:
• If there are no local agents within its perception range, steer randomly (Straying).
• Otherwise:
  - Steer to move toward the average position of local agents (Cohesion).
  - Steer towards the average velocity of local agents (Alignment).
  - Steer to avoid collision with local agents (Separation).
  - Steer randomly with a given probability (Whim).
• Approximate its speed to its normal speed (Pace keeping).
The size of the space is assumed to be 600 x 600. Parameters used in simulation are listed in Table 1. This system can produce a natural-looking swarm behavior if those parameters are appropriately set (Fig. 1).
Table 1: Parameters used in each simulation run.

Name  Min  Max   Meaning
N     1    500   Number of agents
R     0    300   Radius of perception range
Vn    0    20    Normal speed
Vm    0    40    Maximum speed
c1    0    1     Strength of the cohesive force
c2    0    1     Strength of the aligning force
c3    0    100   Strength of the separating force
c4    0    1     Strength of the pace keeping force
c5    0    0.5   Probability of the random steering
Figure 1: Example of swarm behavior that looks like fish schooling. Parameter values used are (N, R, Vn, Vm, c1, c2, c3, c4, c5) = (200, 30, 3, 5, 0.06, 0.5, 10, 0.45, 0.05).
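The interaction rules can be sketched in code. The following is our own approximation, not Sayama's Java implementation — the exact force combination and noise ranges are assumptions of this sketch — with defaults taken from the Figure 1 parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(pos, vel, R=30.0, vn=3.0, vm=5.0,
         c1=0.06, c2=0.5, c3=10.0, c4=0.45, c5=0.05, size=600.0):
    """One discrete-time BoidsSB-style update for all agents."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        diff = pos - pos[i]
        dist = np.hypot(diff[:, 0], diff[:, 1])
        near = (dist > 0) & (dist < R)
        if not near.any():
            # Straying: no local agents within the perception range
            new_vel[i] += rng.uniform(-0.5, 0.5, 2)
        else:
            # Cohesion: steer toward the average position of local agents
            new_vel[i] += c1 * (pos[near].mean(axis=0) - pos[i])
            # Alignment: steer toward the average velocity of local agents
            new_vel[i] += c2 * (vel[near].mean(axis=0) - vel[i])
            # Separation: steer away from neighbours, strongly when close
            new_vel[i] -= c3 * (diff[near] / dist[near][:, None] ** 2).sum(axis=0)
            # Whim: random steering with probability c5
            if rng.random() < c5:
                new_vel[i] += rng.uniform(-0.5, 0.5, 2)
        # Pace keeping: relax speed toward vn, never exceeding vm
        speed = max(np.hypot(*new_vel[i]), 1e-9)
        new_vel[i] *= min(speed + c4 * (vn - speed), vm) / speed
    return (pos + new_vel) % size, new_vel

pos = rng.uniform(0, 600, (50, 2))
vel = rng.uniform(-1, 1, (50, 2))
pos, vel = step(pos, vel)
```

The wrap-around of positions at the space boundary and the exact clamping of speeds are design choices of this sketch; the qualitative behavior (flocking when c1 and c2 dominate, dispersal when c3 dominates) matches the rule descriptions above.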
BoidsSB was developed using Java 2 SDK Standard Edition 1.4.2. It runs as a stand-alone application on any computer platform equipped with Java 2 Runtime Environment. Its source code is freely available from the ICCS website¹.
In BoidsSB, the simulated breeding method [3] is used to better engage students in the simulated phenomena, where students interact with the system and actively participate in the evolutionary process by subjectively selecting their preferred swarm behaviors. Each set of parameter values is considered as a higher-level individual subject to selection and mutation. The simulated evolutionary process is an interactive search within a nine-dimensional parameter space.
Figure 2 shows a screen shot of BoidsSB. The number of candidate populations used for simulated breeding is fixed to six due to the limitations in computational capacity for simulation and display space for visualization. A student can left-click on any of the frames to select his/her preferred swarm behavior. The parameter settings used in a frame can be output by right-clicking on it, which is helpful for students to consider how the observed swarm behavior depends on the parameter values. Once one of the six populations is selected, a new generation of six offspring is produced by randomly mutating (i.e., adding random noise to) each parameter value in the selected parent, among which one inherits exactly the same parameter settings from the parent with no mutation.
¹ http://necsi.org/community/wiki/index.php/ICCS06/191
Figure 2: Screen shot of BoidsSB. Swarm behaviors of six different populations are simultaneously simulated and demonstrated in six frames. A user can click on one of the frames to select a population that will be a parent for the next six populations.
To enhance the speed of exploration, the mutation rate is set rather high, to 50%, i.e., noise is added to about half the parameters in every reproduction. This selection and reproduction cycle continues indefinitely until the student manually quits the application.
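The selection-and-reproduction cycle amounts to a tiny interactive genetic algorithm. A hedged sketch follows — the noise distribution and clipping behavior are our assumptions, not BoidsSB's actual code (which is available from the ICCS website):

```python
import random

# (name, min, max) for the nine evolvable parameters, following Table 1
PARAMS = [("N", 1, 500), ("R", 0, 300), ("Vn", 0, 20), ("Vm", 0, 40),
          ("c1", 0, 1), ("c2", 0, 1), ("c3", 0, 100),
          ("c4", 0, 1), ("c5", 0, 0.5)]

def mutate(parent, rate=0.5):
    """Add random noise to roughly half the parameters, clipped to range."""
    child = dict(parent)
    for name, lo, hi in PARAMS:
        if random.random() < rate:
            child[name] += random.gauss(0, 0.1 * (hi - lo))
            child[name] = min(max(child[name], lo), hi)
    return child

def next_generation(parent, size=6):
    """One exact copy of the selected parent plus size-1 mutated offspring."""
    return [dict(parent)] + [mutate(parent) for _ in range(size - 1)]

parent = {name: (lo + hi) / 2 for name, lo, hi in PARAMS}
offspring = next_generation(parent)
```

In the real tool the user's click supplies the selection step, and N would be kept an integer; this sketch only shows the reproduction half of the loop.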
3
Implementation in classes
We used BoidsSB as part of the teaching materials of the course "Mathematical Modeling and Simulation" offered to engineering-major junior students in the Department of Human Communication at the University of Electro-Communications, Japan, during the Spring semester 2005. This course aimed to introduce various examples of self-organization to students and help them understand these phenomena experientially and constructively by actively playing with simulations and modifying simulator codes. It also aimed to help students acquire fundamental skills of object-oriented programming in Java. Class meetings were held in a computer lab once a week for fourteen weeks. Each class was 90 minutes, starting with a brief explanation about the topical models and simulator codes followed by supervised lab work. The actual class schedule is shown in Table 2. BoidsSB was used as one of the materials for the Week 10 "Evolution and Adaptation" classes, where it was discussed how dynamic emergent patterns (swarm behaviors) could evolve using explicit fitness criteria, either hard-coded or interactively selected. By this time students were already acquainted with the concept of swarm behaviors of Boids and other aggregate systems.
Table 2: Class schedule of the course "Mathematical Modeling and Simulation" that was offered to juniors in the Department of Human Communication at UEC during Spring 2005. The proposed BoidsSB was used as a teaching material for Week 10: Evolution and Adaptation.

Week  Subject                        Models                                       Examples
1     Course introduction
2     Intro to Java Programming
3     Intro to Java Programming
4     Spatial Pattern Formation (1)  Cellular automata                            Droplet formation and phase transition in its dynamics
5     Spatial Pattern Formation (2)  Cellular automata                            Majority rule, Turing pattern formation, dynamic clusters in host-parasite systems, spiral formation
6     Networks (1)                   Network growth models                        Random network, random and preferential network growth
7     Networks (2)                   Network growth models                        Tree growth, simple self-replication, cell division
8     Collective behaviors (1)       Individual-based models in discrete space    Random walk, diffusion-limited aggregation, garbage collection by ants
9     Collective behaviors (2)       Individual-based models in continuous space  Swarm behaviors of insects, Boids, traffic jams
10    Evolution and Adaptation (1)   Evolutionary models with explicit fitness    Evolution of swarm behaviors by genetic algorithm and simulated breeding
11    Evolution and Adaptation (2)   Evolutionary models with implicit fitness    Spontaneous evolution in closed systems (Boids ecosystem, Evoloop)
12    Team Projects
13    Team Projects
14    Final Presentations
(start-gp right-pen (/ CHAR-WIDTH 2))                 ; base of L
     (--> (start-gp lift-pen :right (/ CHAR-WIDTH 2)) ; invisible
          (->output txt-out))))                       ; use endpoint as output point
(at line-in
    (--> (start-gp base-line CHAR-WIDTH)              ; propagate base line
         (->output line-out))))

The at command allows us to initiate growing points from a network's input. The --> symbol is an abbreviation for the connect command, which allows the termination point of the first growing point started in the expression to serve as the initial point of the growing point(s) that follows. The ->output command causes a point to behave as the output of a network. The expressions CHAR-HEIGHT and CHAR-WIDTH are constants that determine the dimensions of a character in terms of neighbourhood hops. Figure 2 illustrates the connection between the implementation and the emergent pattern caused by the L network.
The ==> symbol is an abbreviation for cascade. The name of the network is LIFE; it has two inputs (txt-in and line-in) and two outputs (txt-out and line-out). In order to start the program, we will need to supply two points
Figure 3: The evolution of a self-organizing text pattern. Colours represent materials that have been deposited at each point (here, there are only three materials: none, ink, line).
of the domain that will behave as these two input locations. The expression starting with ==> says to use the two inputs to the LIFE network as inputs to the L network (defined previously), then use its outputs as inputs to the txt-director network, whose outputs are then used as inputs to the I network, and so on, until the outputs of the E network are supplied as the outputs of the overall LIFE network. The purpose of the txt-director network is to restore the conditions necessary to draw the next character; it will be explained in more detail shortly. Note that the cascade combinator allows us to build even bigger networks by combining word networks in the same way that letters are combined to make words.
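Although GPL itself is a Scheme-like language, the role of the cascade combinator is easy to model in ordinary code. In this hypothetical sketch (all names are illustrative, not GPL's API), a network is a function from a tuple of input points to a tuple of output points, and cascade chains each network's outputs into the next network's inputs:

```python
from functools import reduce

def cascade(*networks):
    """Compose networks so each one's outputs feed the next one's inputs."""
    def composed(inputs):
        return reduce(lambda points, net: net(points), networks, inputs)
    return composed

# toy stand-ins: a "network" just appends its name to each point's history
def make_network(name):
    return lambda points: tuple(history + (name,) for history in points)

L, txt_director, I_net = (make_network(n) for n in ("L", "txt-director", "I"))

LIFE_prefix = cascade(L, txt_director, I_net)
outs = LIFE_prefix(((), ()))   # two inputs: txt-in and line-in
print(outs)                    # each output carries the chain L -> txt-director -> I
```

The point of the abstraction is the same as in the text: once letter networks share a common input/output interface, word networks are just cascades of letter networks, and bigger networks compose in exactly the same way.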
3.4
Propagating direction information
Given a process of constructing individual characters, the problem of making the process scale to producing arbitrarily long strings is reducible to replenishing the source of dir-pheromone after each character is drawn. In other words, we desire a kind of inductive property on our text strings: if the current character has sufficient information to determine left/right and up/down, then the subsequent character will also. Then, if we ensure that our initial conditions provide enough information for the first character to be formed properly, then the remainder of our text will be also. The earliest attempt at generating self-organising text with GPL did not attempt to establish this inductive property: it used a one-time initial secretion of dir-pheromone over a range far enough to cover the distance of all of the characters to be drawn. The initial attempts at achieving this inductive property,
described in [3], embedded replenishing secretions of dir-pheromone within the definitions of the character networks, usually defined to occur from somewhere near the centre of the character. This had two problems: the definition of a character was now more complex, and the replenishing secretion interfered with the construction of the current character. In our current approach, we defined the txt-director network, to be cascaded with character networks, whose sole function is to secrete dir-pheromone far enough to provide guidance to one character. We then cascade an instance of the txt-director network in between successive character networks. This takes care of the replenishment of dir-pheromone, but not of its interference with the current character.
Figure 4: The synchronizing mechanism. Top row: the synchronizing growing point within the 'L' network. Bottom row: the role of the txt-director network (shown with a dotted line) between consecutive characters. The circles show the extents of homing pheromone (smaller) and dir-pheromone (larger).
Our solution to this second problem was to use a pair of growing points to implement a synchronization mechanism that ensured that a character's network would be completely drawn before the subsequent network started. Figure 4 illustrates the mechanism in action. Specifically, the point to become the output of the character network secretes a homing pheromone. The final segment of the character's pattern is then sequenced (using connect) with a growing point that seeks out the homing pheromone. Upon finding the source of the homing pheromone, the growing point terminates and yields the output point of the network. Since this network is cascaded onto a txt-director network, the overall effect is that the secretion of dir-pheromone for the subsequent character does not take place until the current character has been completely drawn.
4
Discussion
We have presented an improvement to a GPL implementation of self-organising text. Our implementation is modular, in that both the production of characters
as well as the maintenance of the necessary signals have been captured by GPL networks. This gives us the power to combine them freely and robustly, in much the same way that a digital circuit designer may work with logic gates over transistors. The potential for interference between networks will always be present because there are a finite number of pheromones and an unbounded number of uses of them in generating a pattern that has unbounded extent. In this implementation, we resolved this interference by synchronization, but there are alternative approaches, which we intend to explore. Qualitatively, we have shown that non-trivial patterns can be engineered to emerge in a complex system from relatively simple interactions between the system's elements. We believe that the techniques we have used can be generalised to solve other types of pattern formation problems. At the very least, it is clear that with the appropriate abstractions, we can apply traditional engineering techniques to controlling complex systems.
Bibliography
[1] ABELSON, Harold, Don ALLEN, Daniel COORE, Chris HANSON, George HOMSY, Thomas F. KNIGHT Jr., Radhika NAGPAL, Erik RAUCH, Gerald Jay SUSSMAN, and Ron WEISS, "Amorphous computing", CACM 43, 5 (2000), 74-82.
[2] BEAL, Jacob, "Programming an amorphous computational medium", Unconventional Programming Paradigms (J.-P. BANATRE, P. FRADET, J.-L. GIAVITTO, and O. MICHEL, eds.), (2004), 121-136.
[3] COORE, Daniel, Botanical Computing: A Developmental Approach to Generating Interconnect Topologies on an Amorphous Computer, PhD thesis, MIT (January 1999).
[4] COORE, Daniel, "Abstractions for directing self-organising patterns", Unconventional Programming Paradigms (J.-P. BANATRE, P. FRADET, J.-L. GIAVITTO, and O. MICHEL, eds.), (2004), 110-120.
[5] COORE, Daniel, and Radhika NAGPAL, "Implementing reaction-diffusion on an amorphous computer", 1998 MIT Student Workshop on High-Performance Computing in Science and Engineering (1998), 6-1 - 6-2.
[6] KONDACS, Attila, "Biologically-inspired self-assembly of two-dimensional shapes using global-to-local compilation", International Joint Conference on Artificial Intelligence (2003).
[7] NAGPAL, Radhika, Programmable Self-Assembly: Constructing Global Shape using Biologically-inspired Local Interactions and Origami Mathematics, PhD thesis, MIT (June 2001).
Chapter 6
SELF-LEARNING INTELLIGENT AGENTS FOR DYNAMIC TRAFFIC ROUTING ON TRANSPORTATION NETWORKS
Adel Sadek
Dept. of Civil and Environmental Engineering and Dept. of Computer Science
University of Vermont
[email protected]
Nagi Basha
Dept. of Computer Science
University of Vermont
[email protected]
Abstract
Intelligent Transportation Systems (ITS) are designed to take advantage of recent advances in communications, electronics, and Information Technology in improving the efficiency and safety of transportation systems. Among the several ITS applications is the notion of Dynamic Traffic Routing (DTR), which involves generating "optimal" routing recommendations to drivers with the aim of maximizing network utilization. In this paper, we demonstrate the feasibility of using a self-learning intelligent agent to solve the DTR problem to achieve traffic user equilibrium in a transportation network. The core idea is to deploy an agent to a simulation model of a highway. The agent then
learns by itself by interacting with the simulation model. Once the agent reaches a satisfactory level of performance, it can then be deployed to the real world, where it would continue to learn how to refine its control policies over time. To test this concept in this paper, the Cell Transmission Model (CTM) developed by Carlos Daganzo of the University of California at Berkeley is used to simulate a simple highway with two main alternative routes. With the model developed, a Reinforcement Learning Agent (RLA) is developed to learn how to best dynamically route traffic, so as to maximize the utilization of existing capacity. Preliminary results obtained from our experiments are promising. RL, being an adaptive online learning technique, appears to have great potential for controlling a stochastic dynamic system such as a transportation system. Furthermore, the approach is highly scalable and applicable to a variety of networks and roadways.
1.0 Introduction
In recent years, there has been a concerted effort aimed at taking advantage of the advances in communications, electronics, and Information Technology in order to improve the efficiency and safety of transportation systems. Within the transportation community, this effort is generally referred to as the Intelligent Transportation Systems (ITS) program. Among the primary ITS applications is the notion of Dynamic Traffic Routing (DTR), which involves routing traffic in real-time so as to maximize the utilization of existing capacity. The solution to the DTR problem involves determining the time-varying traffic splits at the different diversion points of the transportation network. These splits could then be communicated to drivers via Dynamic Message Signs or in-vehicle display devices. Existing approaches to solving the DTR problem have their limitations. This paper proposes a solution for highway dynamic traffic routing based on a self-learning intelligent agent. The core idea is to deploy an agent to a simulation model of a highway. The agent will then learn by itself through interacting with the simulation model. Once the agent reaches a satisfactory level of performance, it could then be deployed to the real world, where it would continue to learn how to refine its control policies over time. The advantages of such an approach are quite obvious given the fact that real-world transportation systems are stochastic and ever-changing, and hence are in need of on-line, adaptive agents for their management and control.
1.1 Reinforcement Learning
Among the different paradigms of soft computing and intelligent agents, Reinforcement Learning (RL) appears to be particularly suited to address a number of the challenges of the on-line DTR problem. RL involves learning what to do and how to map situations to actions to maximize a numerical reward signal (Kaelbling, 1996; Kretchmar, 2000; Abdulhai and Kattan, 2003; Russell and Norvig, 2003). A Reinforcement Learning Agent (RLA) must discover on its own which actions to take to get the most reward. The RLA learns this by trial and error. The agent learns from its mistakes and comes up with a policy based on its experience to maximize the attained reward (Sutton and Barto, 2000). The field of applying RL to transportation management and control applications is still in its infancy. A very small number of studies could be identified
from the literature. A Q-learning algorithm (which is a specific implementation of reinforcement learning) is introduced in Abdulhai et al. (2003) to study the effect of deploying a learning agent using Q-learning to control an isolated traffic signal in real-time on a two-dimensional road network. Abdulhai and Pringle (2003) extended this work to study the application of Q-learning in a multi-agent context to manage a linear system of traffic signals. The advantage of having a multi-agent control system is to achieve robustness by distributing the control rather than centralizing it, even in the event of communication problems. Finally, Choy et al. (2003) develop an intelligent agent architecture for coordinated signal control and use RL to cope with the changing dynamics of the complex traffic processes within the network.
2.0 Purpose and Scope
The main purpose of this study is to demonstrate the feasibility of using RL to provide on-line dynamic route guidance for motorists, through a set of experiments showing how an RL-based agent can provide reasonable guidance for a simple network with two main routes. Figure 1 shows the simple network used in this study. It should be noted that this network is largely similar to the test network used by Wang et al. (2003) in evaluating predictive feedback control strategies for freeway network traffic.
Figure 1: Network Topology

The network has three origins, O1, O2, and O3, and three destinations, D1, D2, and D3. Each origin generates a steady flow of traffic. Traffic disappears when it reaches any of the three destinations. The length in miles of each link is indicated on the graph. For example, L0 has a length of 2 miles. The capacity of all links is 4000 veh/h, except for L0, which has a capacity of 8000 veh/h. All links have two lanes, except L0, which has four lanes. As can be seen from Figure 1, there are two alternate routes connecting origin O1 to destination D1. The primary route (route A) has a total length of 6.50 miles, whereas the secondary route (route B) is 8.50 miles long and is therefore 2.0 miles longer than route A. The intelligent RL agent is deployed at junction J1. The goal of the agent is to determine an appropriate diversion rate at J1 so as to achieve traffic user equilibrium between the two routes connecting zones O1 and D1 (i.e. so that travel times along routes A and B
are as close to each other as possible), taking into consideration the current state of the system.
3.0 Methodology

3.1 Cell Transmission Model
In this study, we selected the Cell Transmission Model (CTM) to build the simulation model with which the RL agent would interact to learn for itself the best routing strategies. The CTM was developed by Daganzo to provide a simple representation of traffic flow capable of capturing transient phenomena such as the propagation and dissipation of queues (Daganzo, 1994; 1995). The model is macroscopic in nature, and works by dividing each link of the roadway network into smaller, discrete, homogeneous cells. Each cell is sized so that a simulated vehicle traverses the cell in a single time step at free-flow traffic conditions. The state of the system at time t is given by the number of vehicles contained in each cell, n_i(t). Daganzo showed that, by using an appropriate model of the relationship between flow and density, the cell transmission model can be used to approximate the kinematic wave model of Lighthill and Whitham (1955). For this study, a C++ implementation of the CTM was developed and used to simulate the test network.

3.2. The Intelligent Agent
As previously mentioned, RL was the paradigm chosen to develop the intelligent, learning agent used for dynamic traffic routing. Specifically, the learning algorithm implemented in the agent is based on the SARSA algorithm, an on-policy Temporal Difference (TD) implementation of reinforcement learning (Sutton and Barto, 2000). SARSA is a temporal difference algorithm because, like Monte Carlo methods, it can learn directly from experience without requiring a model of the dynamics of the environment. Like Dynamic Programming (Bertsekas, 2000), SARSA updates its estimates of the desirability of state-action pairs based on earlier estimates; i.e., SARSA bootstraps.
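To make the CTM mechanics of Section 3.1 concrete, the sketch below shows the basic sending/receiving update for a single link in Python (the authors' actual implementation was in C++ and is not reproduced here). The function name, parameter values, and the simplified treatment of Daganzo's backward-wave term are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of one Cell Transmission Model time step for a single
# link, following the sending/receiving flow rule of Daganzo (1994).
# All numbers below are illustrative, not the paper's calibration.

def ctm_step(n, q_max, n_max, w_over_v=1.0):
    """Advance cell occupancies n[i] by one time step.

    n        : vehicles currently in each cell (upstream to downstream)
    q_max    : max flow (vehicles per time step) between adjacent cells
    n_max    : max vehicles a cell can hold (jam density * cell length)
    w_over_v : ratio of backward wave speed to free-flow speed
    """
    y = [0.0] * len(n)  # y[i] = flow entering cell i from cell i - 1
    for i in range(1, len(n)):
        sending = min(n[i - 1], q_max)                      # upstream supply
        receiving = min(q_max, w_over_v * (n_max - n[i]))   # downstream room
        y[i] = min(sending, receiving)
    # Outflow of each cell is the inflow of the next; the last cell
    # discharges freely at capacity.
    out = y[1:] + [min(n[-1], q_max)]
    return [n[i] + y[i] - out[i] for i in range(len(n))]

# One step on a road with a queue of 10 vehicles in the first cell
state = ctm_step([10.0, 0.0, 0.0], q_max=4.0, n_max=20.0)
```

With these numbers, four vehicles (the per-step capacity) move from the first cell into the second, illustrating how a queue discharges cell by cell.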
For a complex, unpredictable, and stochastic system like a transportation network, SARSA seemed well suited to adapting to an ever-changing environment. The implementation of the SARSA algorithm is quite simple. Each state-action pair (s, a) is assigned an estimate of the desirability of being in state s and taking action a; this desirability is represented by a function Q(s, a). The idea of SARSA is to keep updating the estimates of Q(s, a) based on earlier estimates, for all possible states and all possible actions that can be taken in each state. Equation (1) shows how the Q(s, a) values are updated:

Q(s_t, a_t) <- Q(s_t, a_t) + α [r_{t+1} + γ Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)]    (1)

where α is the step-size parameter, or learning rate, and γ is the discount factor. According to Equation (1), at time t the system was in state s_t and the agent decided to take action a_t. This moved the system to state s_{t+1} and yielded a reward of r_{t+1}. Equation (1) is thus used to find new estimates of the Q-values for a new iteration as a function of the
values from the previous iteration. The algorithm typically goes through several iterations until it converges to the optimal values of the Q-estimates.
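A minimal tabular implementation of the SARSA update of Equation (1) might look as follows. The learning rate, discount factor, and epsilon-greedy exploration scheme are illustrative assumptions; the paper does not report the values it used.

```python
# Sketch of tabular SARSA for the routing agent. Parameter values and
# the epsilon-greedy exploration rule are assumptions for illustration.
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)], initialized to zero
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration
ACTIONS = list(range(6))               # the six diversion actions of Table 2

def choose_action(state):
    """Epsilon-greedy action selection (exploration scheme assumed)."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)                      # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])       # exploit

def sarsa_update(s, a, r, s_next, a_next):
    """One application of Equation (1)."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])
```

Because the update uses the action actually selected in the next state (a_next), rather than the best possible action, SARSA is on-policy, in contrast to Q-learning.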
3.3. Experiment Setup
As previously mentioned, the objective of the experiment presented in this paper is to have an agent that is capable of recognizing the state of the system and deciding upon a diversion rate at junction J1. If this diversion rate is followed, the system should eventually move to a state of equilibrium where taking either of the two routes results in the same travel time. For example, if route A (L1) is totally blocked because of an accident, the agent should guide all motorists to take route B (L2).
3.3.1 State, Action, Reward Definition
State representation is quite important for the agent to learn properly. Ideally, using the CTM, each state would be represented by the number of vehicles in each cell. Computationally, however, this representation is infeasible for an RL agent, since it would make the state space extremely large. In this experiment, the state of the system is instead represented by the difference in instantaneous travel time between taking the short route through L1 (route A) and the longer route through L2 (route B). Based on some empirical experiments, the state space was discretized into a finite number of states. Table 1 shows how the state was discretized based upon the difference (dif) in instantaneous travel time between the longer route, route B, and the shorter route, route A.

Time difference (dif) in minutes    State code
dif > 0                               0
0 > dif > -2                         -1
-2 > dif > -7                        -2
-7 > dif > -10                       -3
-10 > dif > -15                      -4
-15 > dif > -20                      -5
-20 > dif > -30                      -6
-30 > dif > -40                      -7
-40 > dif > -60                      -8
-60 > dif                            -9

Table 1: State Space
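The discretization of Table 1 amounts to a simple threshold lookup. The sketch below uses the table's boundaries; how values falling exactly on a cut point are assigned is our assumption, since the table's inequalities leave the boundaries themselves ambiguous.

```python
# Map the instantaneous travel-time difference (route B minus route A,
# in minutes) to the state codes of Table 1. Treatment of values lying
# exactly on a boundary is an assumption.
THRESHOLDS = [0, -2, -7, -10, -15, -20, -30, -40, -60]

def state_code(dif):
    code = 0
    for t in THRESHOLDS:
        if dif <= t:
            code -= 1   # fall through to the next, more congested state
        else:
            break
    return code
```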
As can be seen, state 0, for example, refers to the case where both routes are running at free-flow speed. In this case, the difference in travel time between routes B and A is around +2 minutes, since route B is 2.0 miles longer than route A. On the other hand, state -9 refers to the case where route A (the shorter route) is extremely congested (totally closed) while the longer route is doing fine. In our experiments, the instantaneous travel time is determined from speed sensor readings along the two routes. For the actions, the diversion rate is ideally a real number between 0 and 100%, which implies an infinite set of actions. In this experiment, the set of actions was reduced to only six. Table 2 lists the six actions used in this experiment.

Action code    Action meaning
0              Divert 100% of the flow to L1 & 0% to L2
1              Divert 80% of the flow to L1 & 20% to L2
2              Divert 60% of the flow to L1 & 40% to L2
3              Divert 40% of the flow to L1 & 60% to L2
4              Divert 20% of the flow to L1 & 80% to L2
5              Divert 0% of the flow to L1 & 100% to L2

Table 2: Action Set
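Each action code in Table 2 is a fixed 20% step in the split of flow at J1, so the whole action set reduces to a one-line mapping (the helper name below is ours, not the paper's):

```python
def diversion_split(action_code):
    """Return (fraction of flow to L1, fraction to L2) for codes 0-5 of Table 2."""
    frac_l1 = (5 - action_code) / 5.0   # 1.0, 0.8, 0.6, 0.4, 0.2, 0.0
    return frac_l1, 1.0 - frac_l1
```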
For the SARSA algorithm, all the values of Q(s,a) are initialized to zero. As the agent experiments with different actions for the states it encounters, the environment responds with a reward, which in our case is equal to the negative of the difference in instantaneous travel time between the two routes. In other words, the goal of the agent is to take the proper action to ensure that the instantaneous difference in travel time between the two routes does not exceed 2 minutes, i.e. to reach the state of equilibrium. In the experiment, the authors simulated around 90 hours of system operation. Every five and a half hours, an accident is introduced on link L1. The accident lasts for half an hour and reduces the capacity of L1 to half its original value, on the assumption that one of the two lanes on L1 is blocked by the accident. The purpose of introducing the accident repeatedly is to make sure that the agent encounters many different states and remains in each state long enough to learn the best action for it.
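The reward described above can be sketched as follows. Whether the paper uses the signed difference or its absolute value is not stated; the absolute value used here is an assumption consistent with the stated goal of keeping the two routes within 2 minutes of each other.

```python
def reward(tt_route_a_min, tt_route_b_min):
    """Reward for the agent: negative travel-time imbalance, in minutes.

    Absolute value is an assumption; the paper says only "the negative of
    the difference in instantaneous travel time between the two routes".
    """
    return -abs(tt_route_b_min - tt_route_a_min)
```

The reward is maximal (zero) exactly when the two routes take equal time, so maximizing cumulative reward drives the agent toward user equilibrium.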
4.0 Results and Discussions
The results of the experiment show that the agent managed to learn the right action for every state it encountered. For example, for the state of complete congestion on link L1 (state -9), the agent learned that the proper action is to divert 100% of the traffic to the longer route (action 5). On the other hand, for the free-flow state (state 0), the agent learned that the best action is to divert all the traffic to the shorter path (action 0). The agent also learned the correct actions for the intermediate states. For example, when the travel time on the longer route was 10 minutes less than on the shorter route, the agent learned to divert only 20% of the traffic to the shorter route and 80% to the longer one. Once the situation improved and the difference was only 7 minutes instead of 10, the agent adjusted the diversion rate to 40% to the shorter route and 60% to the longer one. Figure 2 shows the convergence to the right action for state -9. Time is represented in units of 10 seconds. Notice that the system entered state -9 after almost six hours of simulation, 30 minutes after the accident. The first time the agent encountered state -9, it chose action 0 (diverting all the traffic to the shorter route), which is actually the worst decision that can be made in this situation. As time passed, the agent switched to action 1, which is still not the proper action to take. By the end of the simulation, the agent had converged to the right action: diverting all traffic to the long route, which then exhibits the shorter travel time.
Figure 2: Convergence for State -9